ai-agents 0.5.0 → 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: d07ac97ca06177ee4504601099af6bc6ba3e4d8557b28d34cd3222240b27575d
- data.tar.gz: cd0c7121918c8a28c3760325c9acd319e86d349f34fb715d6b953912a99f98a8
+ metadata.gz: 7f8fa7ec73784bc0fb1e9e6bd4852c6d0295160c4ca28d435c895cba28f5cd58
+ data.tar.gz: 3e489f11ae2a5c93b4232ec78e3c7385c83f303c28039ed35fa082669fedaf4b
  SHA512:
- metadata.gz: 63db003d8bf43b2ba8d52d48dbb24d2f8bf86ee4b4e8ab704316cf89d0b80156110a8438a7ca86a6fc37182ff9f5a723ace7c474e53a078de050cb22eff4b268
- data.tar.gz: 05606d58c663ca0b9b062ed537cba33a4b987e5fd720a1188c003150f880a7c953cc28096ca4024854dcdec0a9896a7da35653ebb4f95c2e04a0f5b97e3e1c2e
+ metadata.gz: ba5050ba743466993826d888673d90696d27f422df3d3972027fa141de05d6436c45cd44dba0836dc6350ad83c0ff1628ba43bbce6d93e5e49e8efc450554e91
+ data.tar.gz: efed7a181a6f52c4c8feb8b84e1bbb2309b5216a85569dae69dd4038cde0cdad7a283e066181414e50657aee063e79935573799fdc24abc6253b02bcb8e5d09f
data/.rubocop.yml CHANGED
@@ -10,20 +10,18 @@ Style/StringLiterals:
  Style/StringLiteralsInInterpolation:
  EnforcedStyle: double_quotes
 
+ Metrics/MethodLength:
+ Max: 20
+ Metrics/ClassLength:
+ Enabled: false
+
+ RSpec/MultipleDescribes:
+ Enabled: false
  RSpec/MultipleExpectations:
  Max: 10
-
  RSpec/ExampleLength:
  Max: 20
-
  RSpec/MultipleMemoizedHelpers:
  Max: 15
-
  RSpec/SpecFilePathFormat:
  Enabled: false
-
- Metrics/MethodLength:
- Max: 20
-
- RSpec/MultipleDescribes:
- Enabled: false
data/CHANGELOG.md CHANGED
@@ -5,6 +5,25 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [0.6.0] - 2025-10-16
+
+ ### Added
+ - **Custom HTTP Headers Support**: Agents can now specify custom HTTP headers for LLM requests
+ - Added `headers` parameter to `Agent#initialize` for setting agent-level default headers
+ - Runtime headers can be passed via `headers` parameter in `AgentRunner#run` method
+ - Runtime headers take precedence over agent-level headers when keys overlap
+ - Headers are automatically normalized (symbolized keys) and validated
+ - Full support for headers across agent handoffs with proper merging logic
+ - New `Agents::Helpers::Headers` module for header normalization and merging
+ - Comprehensive test coverage for header functionality
+
+ ### Changed
+ - **Code Organization**: Refactored internal helpers into dedicated module structure
+ - Moved `MessageExtractor` to `Agents::Helpers::MessageExtractor` module
+ - Converted `MessageExtractor` from class-based to module-function pattern
+ - Created `lib/agents/helpers/` directory for helper modules
+ - All helper modules now use consistent flat naming convention (`Agents::Helpers::ModuleName`)
+
  ## [0.5.0] - 2025-08-20
 
  ### Added
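For code that referenced the internal extractor directly, the 0.6.0 reorganization described in the Changed entries above is a namespace move with an unchanged call signature; a minimal before/after sketch:

```ruby
# Before 0.6.0 (class-based service object)
messages = Agents::MessageExtractor.extract_messages(chat, current_agent)

# From 0.6.0 (module_function under Agents::Helpers)
messages = Agents::Helpers::MessageExtractor.extract_messages(chat, current_agent)
```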
data/docs/guides/request-headers.md ADDED
@@ -0,0 +1,91 @@
+ ---
+ layout: default
+ title: Custom Request Headers
+ parent: Guides
+ nav_order: 6
+ ---
+
+ # Custom Request Headers
+
+ Custom HTTP headers allow you to pass additional metadata with your LLM API requests. This is useful for authentication, request tracking, A/B testing, and provider-specific features.
+
+ ## Basic Usage
+
+ ### Agent-Level Headers
+
+ Set default headers when creating an agent; they will be applied to all of that agent's requests:
+
+ ```ruby
+ agent = Agents::Agent.new(
+   name: "Assistant",
+   instructions: "You are a helpful assistant",
+   headers: {
+     "X-Custom-ID" => "agent-123",
+     "X-Environment" => "production"
+   }
+ )
+
+ runner = Agents::Runner.with_agents(agent)
+ result = runner.run("Hello!")
+ # All requests will include the custom headers
+ ```
+
+ ### Runtime Headers
+
+ Override or add headers for specific requests:
+
+ ```ruby
+ agent = Agents::Agent.new(
+   name: "Assistant",
+   instructions: "You are a helpful assistant"
+ )
+
+ runner = Agents::Runner.with_agents(agent)
+
+ # Pass headers at runtime
+ result = runner.run(
+   "What's the weather?",
+   headers: {
+     "X-Request-ID" => "req-456",
+     "X-User-ID" => "user-789"
+   }
+ )
+ ```
+
+ ### Header Precedence
+
+ When both agent-level and runtime headers are provided, **runtime headers take precedence**:
+
+ ```ruby
+ agent = Agents::Agent.new(
+   name: "Assistant",
+   instructions: "You are a helpful assistant",
+   headers: {
+     "X-Environment" => "staging",
+     "X-Agent-ID" => "agent-001"
+   }
+ )
+
+ runner = Agents::Runner.with_agents(agent)
+
+ result = runner.run(
+   "Hello!",
+   headers: {
+     "X-Environment" => "production", # Overrides agent's staging value
+     "X-Request-ID" => "req-123"      # Additional header
+   }
+ )
+
+ # Final headers sent to LLM API:
+ # {
+ #   "X-Environment" => "production", # Runtime value wins
+ #   "X-Agent-ID" => "agent-001",     # From agent
+ #   "X-Request-ID" => "req-123"      # From runtime
+ # }
+ ```
+
+ ## See Also
+
+ - [Multi-Agent Systems](multi-agent-systems.html) - Using headers across agent handoffs
+ - [Rails Integration](rails-integration.html) - Request tracking in Rails applications
+ - [State Persistence](state-persistence.html) - Combining headers with conversation state
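The changelog also notes header support across agent handoffs: when a handoff happens, the incoming agent's own default headers are re-merged with the runtime headers for the remainder of the run. A hedged sketch, assuming a two-agent runner as in the multi-agent guide (handoff registration is omitted here and header names are illustrative):

```ruby
triage = Agents::Agent.new(
  name: "Triage",
  instructions: "Route the customer to the right team",
  headers: { "X-Team" => "triage" }
)

support = Agents::Agent.new(
  name: "Support",
  instructions: "Resolve technical issues",
  headers: { "X-Team" => "support" }
)

runner = Agents::Runner.with_agents(triage, support)

# The runtime header follows the conversation; each agent contributes its own
# defaults, so requests made after a handoff carry the support agent's X-Team
# value plus the runtime X-Request-ID.
result = runner.run("My internet keeps dropping", headers: { "X-Request-ID" => "req-42" })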
data/docs/guides.md CHANGED
@@ -17,3 +17,4 @@ Practical guides for building real-world applications with the AI Agents library
  - **[Rails Integration](guides/rails-integration.html)** - Integrating agents with Ruby on Rails applications and ActiveRecord persistence
  - **[State Persistence](guides/state-persistence.html)** - Managing conversation state and context across sessions and processes
  - **[Structured Output](guides/structured-output.html)** - Enforcing JSON schema validation for reliable agent responses
+ - **[Custom Request Headers](guides/request-headers.html)** - Adding custom HTTP headers for authentication, tracking, and provider-specific features
@@ -6,6 +6,7 @@ require_relative "tools/create_lead_tool"
  require_relative "tools/create_checkout_tool"
  require_relative "tools/search_docs_tool"
  require_relative "tools/escalate_to_human_tool"
+ require "ruby_llm/schema"
 
  module ISPSupport
  # Factory for creating all ISP support agents with proper handoff relationships.
@@ -56,7 +57,8 @@ module ISPSupport
  instructions: sales_instructions_with_state,
  model: "gpt-4.1-mini",
  tools: [ISPSupport::CreateLeadTool.new, ISPSupport::CreateCheckoutTool.new],
- temperature: 0.8 # Higher temperature for more persuasive, varied sales language
+ temperature: 0.8, # Higher temperature for more persuasive, varied sales language
+ response_schema: sales_response_schema
  )
  end
 
@@ -70,7 +72,8 @@ module ISPSupport
  ISPSupport::SearchDocsTool.new,
  ISPSupport::EscalateToHumanTool.new
  ],
- temperature: 0.5 # Balanced temperature for helpful but consistent technical support
+ temperature: 0.5, # Balanced temperature for helpful but consistent technical support
+ response_schema: triage_response_schema
  )
  end
 
@@ -95,22 +98,33 @@ module ISPSupport
  end
 
  def triage_response_schema
- {
- type: "object",
- properties: {
- response: {
- type: "string",
- description: "Your response to the customer"
- },
- intent: {
- type: "string",
- enum: %w[sales support unclear],
- description: "The detected intent category"
- }
- },
- required: %w[response intent],
- additionalProperties: false
- }
+ RubyLLM::Schema.create do
+ string :response, description: "Your response to the customer"
+ string :intent, enum: %w[sales support unclear], description: "The detected intent category"
+ array :sentiment, description: "Customer sentiment indicators" do
+ string enum: %w[positive neutral negative frustrated urgent confused satisfied]
+ end
+ end
+ end
+
+ def support_response_schema
+ RubyLLM::Schema.create do
+ string :response, description: "Your response to the customer"
+ string :intent, enum: %w[support], description: "The intent category (always support)"
+ array :sentiment, description: "Customer sentiment indicators" do
+ string enum: %w[positive neutral negative frustrated urgent confused satisfied]
+ end
+ end
+ end
+
+ def sales_response_schema
+ RubyLLM::Schema.create do
+ string :response, description: "Your response to the customer"
+ string :intent, enum: %w[sales], description: "The intent category (always sales)"
+ array :sentiment, description: "Customer sentiment indicators" do
+ string enum: %w[positive neutral negative frustrated urgent confused satisfied]
+ end
+ end
  end
 
  def sales_instructions
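The three DSL blocks above replace the hand-written hash that the old `triage_response_schema` returned; for orientation, the new triage definition corresponds roughly to this plain-hash JSON schema (a sketch — whether `sentiment` ends up in `required`, and other strictness details, depend on RubyLLM::Schema's defaults):

```ruby
# Approximate plain-hash equivalent of the triage schema DSL block above (sketch)
TRIAGE_SCHEMA = {
  type: "object",
  properties: {
    response: { type: "string", description: "Your response to the customer" },
    intent: { type: "string", enum: %w[sales support unclear], description: "The detected intent category" },
    sentiment: {
      type: "array",
      description: "Customer sentiment indicators",
      items: { type: "string", enum: %w[positive neutral negative frustrated urgent confused satisfied] }
    }
  },
  required: %w[response intent],
  additionalProperties: false
}.freeze
```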
@@ -2,6 +2,7 @@
  # frozen_string_literal: true
 
  require "json"
+ require "readline"
  require_relative "../../lib/agents"
  require_relative "agents_factory"
 
@@ -29,41 +30,59 @@ class ISPSupportDemo
  @context = {}
  @current_status = ""
 
- puts "🏢 Welcome to ISP Customer Support!"
- puts "Type '/help' for commands or 'exit' to quit."
+ puts green("🏢 Welcome to ISP Customer Support!")
+ puts dim_text("Type '/help' for commands or 'exit' to quit.")
  puts
  end
 
  def start
  loop do
- print "💬 You: "
- user_input = gets.chomp.strip
+ user_input = Readline.readline(cyan("\u{1F4AC} You: "), true)
+ next unless user_input # Handle Ctrl+D
 
+ user_input = user_input.strip
  command_result = handle_command(user_input)
  break if command_result == :exit
  next if command_result == :handled || user_input.empty?
 
  # Clear any previous status and show agent is working
  clear_status_line
- print "🤖 Processing..."
+ print yellow("🤖 Processing...")
 
- # Use the runner - it automatically determines the right agent from context
- result = @runner.run(user_input, context: @context)
+ begin
+ # Use the runner - it automatically determines the right agent from context
+ result = @runner.run(user_input, context: @context)
 
- # Update our context with the returned context from Runner
- @context = result.context if result.respond_to?(:context) && result.context
+ # Update our context with the returned context from Runner
+ @context = result.context if result.respond_to?(:context) && result.context
 
- # Clear status and show response
- clear_status_line
+ # Clear status and show response with callback history
+ clear_status_line
 
- # Handle structured output from triage agent
- output = result.output || "[No output]"
- if @context[:current_agent] == "Triage Agent" && output.is_a?(Hash)
- # Display the response from structured response
- puts "🤖 #{output["response"]}"
- puts "\e[2m [Intent]: #{output["intent"]}\e[0m" if output["intent"]
- else
- puts "🤖 #{output}"
+ # Display callback messages if any
+ if @callback_messages.any?
+ puts dim_text(@callback_messages.join("\n"))
+ @callback_messages.clear
+ end
+
+ # Handle structured output from agents
+ output = result.output || "[No output]"
+
+ if output.is_a?(Hash) && output.key?("response")
+ # Display the response from structured response
+ puts "🤖 #{output["response"]}"
+ puts dim_text(" [Intent]: #{output["intent"]}") if output["intent"]
+ puts dim_text(" [Sentiment]: #{output["sentiment"].join(", ")}") if output["sentiment"]&.any?
+ else
+ puts "🤖 #{output}"
+ end
+
+ puts # Add blank line after agent response
+ rescue StandardError => e
+ clear_status_line
+ puts red("❌ Error: #{e.message}")
+ puts dim_text("Please try again or type '/help' for assistance.")
+ puts # Add blank line after error message
  end
  end
  end
@@ -71,26 +90,36 @@ class ISPSupportDemo
  private
 
  def setup_callbacks
+ @callback_messages = []
+
  @runner.on_agent_thinking do |agent_name, _input|
- update_status("🧠 #{agent_name} is thinking...")
+ message = "🧠 #{agent_name} is thinking..."
+ update_status(message)
+ @callback_messages << message
  end
 
  @runner.on_tool_start do |tool_name, _args|
- update_status("🔧 Using #{tool_name}...")
+ message = "🔧 Using #{tool_name}..."
+ update_status(message)
+ @callback_messages << message
  end
 
  @runner.on_tool_complete do |tool_name, _result|
- update_status("✅ #{tool_name} completed")
+ message = "✅ #{tool_name} completed"
+ update_status(message)
+ @callback_messages << message
  end
 
  @runner.on_agent_handoff do |from_agent, to_agent, _reason|
- update_status("🔄 Handoff: #{from_agent} → #{to_agent}")
+ message = "🔄 Handoff: #{from_agent} → #{to_agent}"
+ update_status(message)
+ @callback_messages << message
  end
  end
 
  def update_status(message)
  clear_status_line
- print message
+ print dim_text(message)
  $stdout.flush
  end
 
@@ -197,6 +226,27 @@ class ISPSupportDemo
  else "Unknown agent"
  end
  end
+
+ # ANSI color helper methods
+ def dim_text(text)
+ "\e[90m#{text}\e[0m"
+ end
+
+ def green(text)
+ "\e[32m#{text}\e[0m"
+ end
+
+ def yellow(text)
+ "\e[33m#{text}\e[0m"
+ end
+
+ def red(text)
+ "\e[31m#{text}\e[0m"
+ end
+
+ def cyan(text)
+ "\e[36m#{text}\e[0m"
+ end
  end
 
  # Run the demo
data/lib/agents/agent.rb CHANGED
@@ -4,7 +4,7 @@
  # Agents are immutable, thread-safe objects that can be cloned with modifications.
  # They encapsulate the configuration needed to interact with an LLM including
  # instructions, tools, and potential handoff targets.
- #
+ require_relative "helpers/headers"
  # @example Creating a basic agent
  # agent = Agents::Agent.new(
  # name: "Assistant",
@@ -50,7 +50,7 @@
  # )
  module Agents
  class Agent
- attr_reader :name, :instructions, :model, :tools, :handoff_agents, :temperature, :response_schema
+ attr_reader :name, :instructions, :model, :tools, :handoff_agents, :temperature, :response_schema, :headers
 
  # Initialize a new Agent instance
  #
@@ -61,8 +61,9 @@ module Agents
  # @param handoff_agents [Array<Agents::Agent>] Array of agents this agent can hand off to
  # @param temperature [Float] Controls randomness in responses (0.0 = deterministic, 1.0 = very random, default: 0.7)
  # @param response_schema [Hash, nil] JSON schema for structured output responses
+ # @param headers [Hash, nil] Default HTTP headers applied to LLM requests
  def initialize(name:, instructions: nil, model: "gpt-4.1-mini", tools: [], handoff_agents: [], temperature: 0.7,
- response_schema: nil)
+ response_schema: nil, headers: nil)
  @name = name
  @instructions = instructions
  @model = model
@@ -70,6 +71,7 @@ module Agents
  @handoff_agents = []
  @temperature = temperature
  @response_schema = response_schema
+ @headers = Helpers::Headers.normalize(headers, freeze_result: true)
 
  # Mutex for thread-safe handoff registration
  # While agents are typically configured at startup, we want to ensure
@@ -164,7 +166,8 @@ module Agents
  tools: changes.fetch(:tools, @tools.dup),
  handoff_agents: changes.fetch(:handoff_agents, @handoff_agents),
  temperature: changes.fetch(:temperature, @temperature),
- response_schema: changes.fetch(:response_schema, @response_schema)
+ response_schema: changes.fetch(:response_schema, @response_schema),
+ headers: changes.fetch(:headers, @headers)
  )
  end
 
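Since agents are immutable, per-tenant or per-environment header tweaks go through `clone`; a small sketch (header names illustrative, and note that passing `headers:` to `clone` replaces the whole hash rather than merging with the original):

```ruby
base = Agents::Agent.new(
  name: "Assistant",
  instructions: "You are a helpful assistant",
  headers: { "X-Environment" => "production" }
)

# New agent with a replaced headers hash; `base` is left untouched.
tenant_agent = base.clone(
  headers: { "X-Environment" => "production", "X-Tenant-ID" => "acme" }
)
```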
data/lib/agents/agent_runner.rb CHANGED
@@ -58,12 +58,12 @@
  # @param input [String] User's message
  # @param context [Hash] Conversation context (will be restored if continuing conversation)
  # @param max_turns [Integer] Maximum turns before stopping (default: 10)
+ # @param headers [Hash, nil] Custom HTTP headers to pass through to the underlying LLM provider
  # @return [RunResult] Execution result with output, messages, and updated context
- def run(input, context: {}, max_turns: Runner::DEFAULT_MAX_TURNS)
+ def run(input, context: {}, max_turns: Runner::DEFAULT_MAX_TURNS, headers: nil)
  # Determine which agent should handle this conversation
  # Uses conversation history to maintain continuity across handoffs
  current_agent = determine_conversation_agent(context)
-
  # Execute using stateless Runner - each execution is independent and thread-safe
  # Pass callbacks to enable real-time event notifications
  Runner.new.run(
@@ -72,6 +72,7 @@
  context: context,
  registry: @registry,
  max_turns: max_turns,
+ headers: headers,
  callbacks: @callbacks
  )
  end
data/lib/agents/helpers/headers.rb ADDED
@@ -0,0 +1,29 @@
+ # frozen_string_literal: true
+
+ module Agents::Helpers::Headers
+ module_function
+
+ def normalize(headers, freeze_result: false)
+ return freeze_result ? {}.freeze : {} if headers.nil? || (headers.respond_to?(:empty?) && headers.empty?)
+
+ hash = headers.respond_to?(:to_h) ? headers.to_h : headers
+ raise ArgumentError, "headers must be a Hash or respond to #to_h" unless hash.is_a?(Hash)
+
+ result = symbolize_keys(hash)
+ freeze_result ? result.freeze : result
+ end
+
+ def merge(agent_headers, runtime_headers)
+ return runtime_headers if agent_headers.empty?
+ return agent_headers if runtime_headers.empty?
+
+ agent_headers.merge(runtime_headers) { |_key, _agent_value, runtime_value| runtime_value }
+ end
+
+ def symbolize_keys(hash)
+ hash.each_with_object({}) do |(key, value), memo|
+ memo[key.is_a?(Symbol) ? key : key.to_sym] = value
+ end
+ end
+ private_class_method :symbolize_keys
+ end
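Taken on its own, the helper behaves as follows (a quick sketch based on the implementation above; keys come back symbolized and runtime values win on conflicts):

```ruby
agent_headers = Agents::Helpers::Headers.normalize(
  { "X-Environment" => "staging" }, freeze_result: true
)
agent_headers.frozen? # => true, keys are now symbols (:"X-Environment")

runtime_headers = Agents::Helpers::Headers.normalize(
  { "X-Environment" => "production", "X-Request-ID" => "req-1" }
)

merged = Agents::Helpers::Headers.merge(agent_headers, runtime_headers)
merged[:"X-Environment"] # => "production" (runtime value wins)
merged[:"X-Request-ID"]  # => "req-1"

Agents::Helpers::Headers.normalize(nil) # => {}
```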
data/lib/agents/helpers/message_extractor.rb ADDED
@@ -0,0 +1,88 @@
+ # frozen_string_literal: true
+
+ # Service object responsible for extracting and formatting conversation messages
+ # from RubyLLM chat objects into a format suitable for persistence and context restoration.
+ #
+ # Handles different message types:
+ # - User messages: Basic content preservation
+ # - Assistant messages: Includes agent attribution and tool calls
+ # - Tool result messages: Links back to original tool calls
+ #
+ # @example Extract messages from a chat
+ # messages = Agents::Helpers::MessageExtractor.extract_messages(chat, current_agent)
+ # #=> [
+ # { role: :user, content: "Hello" },
+ # { role: :assistant, content: "Hi!", agent_name: "Support", tool_calls: [...] },
+ # { role: :tool, content: "Result", tool_call_id: "call_123" }
+ # ]
+ module Agents::Helpers::MessageExtractor
+ module_function
+
+ # Check if content is considered empty (handles both String and Hash content)
+ #
+ # @param content [String, Hash, nil] The content to check
+ # @return [Boolean] true if content is empty, false otherwise
+ def content_empty?(content)
+ case content
+ when String
+ content.strip.empty?
+ when Hash
+ content.empty?
+ else
+ content.nil?
+ end
+ end
+
+ # Extract messages from a chat object for conversation history persistence
+ #
+ # @param chat [Object] Chat object that responds to :messages
+ # @param current_agent [Agent] The agent currently handling the conversation
+ # @return [Array<Hash>] Array of message hashes suitable for persistence
+ def extract_messages(chat, current_agent)
+ return [] unless chat.respond_to?(:messages)
+
+ chat.messages.filter_map do |msg|
+ case msg.role
+ when :user, :assistant
+ extract_user_or_assistant_message(msg, current_agent)
+ when :tool
+ extract_tool_message(msg)
+ end
+ end
+ end
+
+ def extract_user_or_assistant_message(msg, current_agent)
+ return nil unless msg.content && !content_empty?(msg.content)
+
+ message = {
+ role: msg.role,
+ content: msg.content
+ }
+
+ if msg.role == :assistant
+ # Add agent attribution for conversation continuity
+ message[:agent_name] = current_agent.name if current_agent
+
+ # Add tool calls if present
+ if msg.tool_call? && msg.tool_calls
+ # RubyLLM stores tool_calls as Hash with call_id => ToolCall object
+ # Reference: RubyLLM::StreamAccumulator#tool_calls_from_stream
+ message[:tool_calls] = msg.tool_calls.values.map(&:to_h)
+ end
+ end
+
+ message
+ end
+ private_class_method :extract_user_or_assistant_message
+
+ def extract_tool_message(msg)
+ return nil unless msg.tool_result?
+
+ {
+ role: msg.role,
+ content: msg.content,
+ tool_call_id: msg.tool_call_id
+ }
+ end
+ private_class_method :extract_tool_message
+ end
data/lib/agents/helpers.rb ADDED
@@ -0,0 +1,9 @@
+ # frozen_string_literal: true
+
+ module Agents
+ module Helpers
+ end
+ end
+
+ require_relative "helpers/headers"
+ require_relative "helpers/message_extractor"
data/lib/agents/runner.rb CHANGED
@@ -1,7 +1,5 @@
  # frozen_string_literal: true
 
- require_relative "message_extractor"
-
  module Agents
  # The execution engine that orchestrates conversations between users and agents.
  # Runner manages the conversation flow, handles tool execution through RubyLLM,
@@ -55,6 +53,7 @@ module Agents
  DEFAULT_MAX_TURNS = 10
 
  class MaxTurnsExceeded < StandardError; end
+ class AgentNotFoundError < StandardError; end
 
  # Create a thread-safe agent runner for multi-agent conversations.
  # The first agent becomes the default entry point for new conversations.
@@ -79,9 +78,10 @@ module Agents
  # @param context [Hash] Shared context data accessible to all tools
  # @param registry [Hash] Registry of agents for handoff resolution
  # @param max_turns [Integer] Maximum conversation turns before stopping
+ # @param headers [Hash, nil] Custom HTTP headers passed to the underlying LLM provider
  # @param callbacks [Hash] Optional callbacks for real-time event notifications
  # @return [RunResult] The result containing output, messages, and usage
- def run(starting_agent, input, context: {}, registry: {}, max_turns: DEFAULT_MAX_TURNS, callbacks: {})
+ def run(starting_agent, input, context: {}, registry: {}, max_turns: DEFAULT_MAX_TURNS, headers: nil, callbacks: {})
  # The starting_agent is already determined by AgentRunner based on conversation history
  current_agent = starting_agent
 
@@ -90,15 +90,22 @@ module Agents
  context_wrapper = RunContext.new(context_copy, callbacks: callbacks)
  current_turn = 0
 
+ runtime_headers = Helpers::Headers.normalize(headers)
+ agent_headers = Helpers::Headers.normalize(current_agent.headers)
+
  # Create chat and restore conversation history
- chat = create_chat(current_agent, context_wrapper)
+ chat = RubyLLM::Chat.new(model: current_agent.model)
+ current_headers = Helpers::Headers.merge(agent_headers, runtime_headers)
+ apply_headers(chat, current_headers)
+ configure_chat_for_agent(chat, current_agent, context_wrapper, replace: false)
  restore_conversation_history(chat, context_wrapper)
 
+
  loop do
  current_turn += 1
  raise MaxTurnsExceeded, "Exceeded maximum turns: #{max_turns}" if current_turn > max_turns
 
- # Get response from LLM (Extended Chat handles tool execution with handoff detection)
+ # Get response from LLM (RubyLLM handles tool execution with halting based handoff detection)
  result = if current_turn == 1
  # Emit agent thinking event for initial message
  context_wrapper.callback_manager.emit_agent_thinking(current_agent.name, input)
@@ -118,14 +125,14 @@ module Agents
  # Validate that the target agent is in our registry
  # This prevents handoffs to agents that weren't explicitly provided
  unless registry[next_agent.name]
- puts "[Agents] Warning: Handoff to unregistered agent '#{next_agent.name}', continuing with current agent"
- # Return the halt content as the final response
  save_conversation_state(chat, context_wrapper, current_agent)
+ error = AgentNotFoundError.new("Handoff failed: Agent '#{next_agent.name}' not found in registry")
  return RunResult.new(
- output: response.content,
- messages: MessageExtractor.extract_messages(chat, current_agent),
+ output: nil,
+ messages: Helpers::MessageExtractor.extract_messages(chat, current_agent),
  usage: context_wrapper.usage,
- context: context_wrapper.context
+ context: context_wrapper.context,
+ error: error
  )
  end
 
@@ -139,9 +146,11 @@ module Agents
  current_agent = next_agent
  context_wrapper.context[:current_agent] = next_agent.name
 
- # Create new chat for new agent with restored history
- chat = create_chat(current_agent, context_wrapper)
- restore_conversation_history(chat, context_wrapper)
+ # Reconfigure existing chat for new agent - preserves conversation history automatically
+ configure_chat_for_agent(chat, current_agent, context_wrapper, replace: true)
+ agent_headers = Helpers::Headers.normalize(current_agent.headers)
+ current_headers = Helpers::Headers.merge(agent_headers, runtime_headers)
+ apply_headers(chat, current_headers)
 
  # Force the new agent to respond to the conversation context
  # This ensures the user gets a response from the new agent
@@ -154,7 +163,7 @@ module Agents
  save_conversation_state(chat, context_wrapper, current_agent)
  return RunResult.new(
  output: response.content,
- messages: MessageExtractor.extract_messages(chat, current_agent),
+ messages: Helpers::MessageExtractor.extract_messages(chat, current_agent),
  usage: context_wrapper.usage,
  context: context_wrapper.context
  )
@@ -170,7 +179,7 @@ module Agents
 
  return RunResult.new(
  output: response.content,
- messages: MessageExtractor.extract_messages(chat, current_agent),
+ messages: Helpers::MessageExtractor.extract_messages(chat, current_agent),
  usage: context_wrapper.usage,
  context: context_wrapper.context
  )
@@ -181,7 +190,7 @@ module Agents
 
  RunResult.new(
  output: "Conversation ended: #{e.message}",
- messages: chat ? MessageExtractor.extract_messages(chat, current_agent) : [],
+ messages: chat ? Helpers::MessageExtractor.extract_messages(chat, current_agent) : [],
  usage: context_wrapper.usage,
  error: e,
  context: context_wrapper.context
@@ -192,7 +201,7 @@ module Agents
 
  RunResult.new(
  output: nil,
- messages: chat ? MessageExtractor.extract_messages(chat, current_agent) : [],
+ messages: chat ? Helpers::MessageExtractor.extract_messages(chat, current_agent) : [],
  usage: context_wrapper.usage,
  error: e,
  context: context_wrapper.context
@@ -201,6 +210,11 @@ module Agents
 
  private
 
+ # Creates a deep copy of context data for thread safety.
+ # Preserves conversation history array structure while avoiding agent mutation.
+ #
+ # @param context [Hash] The context to copy
+ # @return [Hash] Thread-safe deep copy of the context
  def deep_copy_context(context)
  # Handle deep copying for thread safety
  context.dup.tap do |copied|
@@ -211,13 +225,18 @@ module Agents
  end
  end
 
+ # Restores conversation history from context into RubyLLM chat.
+ # Converts stored message hashes back into RubyLLM::Message objects with proper content handling.
+ #
+ # @param chat [RubyLLM::Chat] The chat instance to restore history into
+ # @param context_wrapper [RunContext] Context containing conversation history
  def restore_conversation_history(chat, context_wrapper)
  history = context_wrapper.context[:conversation_history] || []
 
  history.each do |msg|
  # Only restore user and assistant messages with content
  next unless %i[user assistant].include?(msg[:role].to_sym)
- next unless msg[:content] && !MessageExtractor.content_empty?(msg[:content])
+ next unless msg[:content] && !Helpers::MessageExtractor.content_empty?(msg[:content])
 
  # Extract text content safely - handle both string and hash content
  content = RubyLLM::Content.new(msg[:content])
@@ -228,21 +247,18 @@ module Agents
  content: content
  )
  chat.add_message(message)
- rescue StandardError => e
- # Continue with partial history on error
- # TODO: Remove this, and let the error propagate up the call stack
- puts "[Agents] Failed to restore message: #{e.message}\n#{e.backtrace.join("\n")}"
  end
- rescue StandardError => e
- # If history restoration completely fails, continue with empty history
- # TODO: Remove this, and let the error propagate up the call stack
- puts "[Agents] Failed to restore conversation history: #{e.message}"
- context_wrapper.context[:conversation_history] = []
  end
 
+ # Saves current conversation state from RubyLLM chat back to context for persistence.
+ # Maintains conversation continuity across agent handoffs and process boundaries.
+ #
+ # @param chat [RubyLLM::Chat] The chat instance to extract state from
+ # @param context_wrapper [RunContext] Context to save state into
+ # @param current_agent [Agents::Agent] The currently active agent
  def save_conversation_state(chat, context_wrapper, current_agent)
  # Extract messages from chat
- messages = MessageExtractor.extract_messages(chat, current_agent)
+ messages = Helpers::MessageExtractor.extract_messages(chat, current_agent)
 
  # Update context with latest state
  context_wrapper.context[:conversation_history] = messages
@@ -254,14 +270,45 @@ module Agents
  context_wrapper.context.delete(:pending_handoff)
  end
 
- def create_chat(agent, context_wrapper)
+ # Configures a RubyLLM chat instance with agent-specific settings.
+ # Uses RubyLLM's replace option to swap agent context while preserving conversation history during handoffs.
+ #
+ # @param chat [RubyLLM::Chat] The chat instance to configure
+ # @param agent [Agents::Agent] The agent whose configuration to apply
+ # @param context_wrapper [RunContext] Thread-safe context wrapper
+ # @param replace [Boolean] Whether to replace existing configuration (true for handoffs, false for initial setup)
+ # @return [RubyLLM::Chat] The configured chat instance
+ def configure_chat_for_agent(chat, agent, context_wrapper, replace: false)
  # Get system prompt (may be dynamic)
  system_prompt = agent.get_system_prompt(context_wrapper)
 
- # Create standard RubyLLM chat
- chat = RubyLLM::Chat.new(model: agent.model)
-
  # Combine all tools - both handoff and regular tools need wrapping
+ all_tools = build_agent_tools(agent, context_wrapper)
+
+ # Switch model if different (important for handoffs between agents using different models)
+ chat.with_model(agent.model) if replace
+
+ # Configure chat with instructions, temperature, tools, and schema
+ chat.with_instructions(system_prompt, replace: replace) if system_prompt
+ chat.with_temperature(agent.temperature) if agent.temperature
+ chat.with_tools(*all_tools, replace: replace)
+ chat.with_schema(agent.response_schema) if agent.response_schema
+
+ chat
+ end
+
+ def apply_headers(chat, headers)
+ return if headers.empty?
+
+ chat.with_headers(**headers)
+ end
+
+ # Builds thread-safe tool wrappers for an agent's tools and handoff tools.
+ #
+ # @param agent [Agents::Agent] The agent whose tools to wrap
+ # @param context_wrapper [RunContext] Thread-safe context wrapper for tool execution
+ # @return [Array<ToolWrapper>] Array of wrapped tools ready for RubyLLM
+ def build_agent_tools(agent, context_wrapper)
  all_tools = []
 
  # Add handoff tools
@@ -275,13 +322,7 @@ module Agents
  all_tools << ToolWrapper.new(tool, context_wrapper)
  end
 
- # Configure chat with instructions, temperature, tools, and schema
- chat.with_instructions(system_prompt) if system_prompt
- chat.with_temperature(agent.temperature) if agent.temperature
- chat.with_tools(*all_tools) if all_tools.any?
- chat.with_schema(agent.response_schema) if agent.response_schema
-
- chat
+ all_tools
  end
  end
  end
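One behavioural change buried in this hunk: a handoff to an agent that was never registered no longer prints a warning and returns the halt text; it now comes back as a RunResult with `output: nil` and an `AgentNotFoundError` attached. A hedged sketch of how calling code might surface that (assuming RunResult exposes readers for the attributes passed to it above):

```ruby
result = runner.run("Connect me to billing", context: context)

if result.error.is_a?(Agents::Runner::AgentNotFoundError)
  # The target agent was not passed to Agents::Runner.with_agents, so the
  # registry rejected the handoff; report it instead of showing an empty reply.
  warn "Handoff failed: #{result.error.message}"
else
  puts result.output
end
```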
data/lib/agents/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Agents
- VERSION = "0.5.0"
+ VERSION = "0.6.0"
  end
data/lib/agents.rb CHANGED
@@ -111,11 +111,11 @@ require_relative "agents/run_context"
  require_relative "agents/tool_context"
  require_relative "agents/tool"
  require_relative "agents/handoff"
+ require_relative "agents/helpers"
  require_relative "agents/agent"
 
  # Execution components
  require_relative "agents/tool_wrapper"
- require_relative "agents/message_extractor"
  require_relative "agents/callback_manager"
  require_relative "agents/agent_runner"
  require_relative "agents/runner"
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: ai-agents
  version: !ruby/object:Gem::Version
- version: 0.5.0
+ version: 0.6.0
  platform: ruby
  authors:
  - Shivam Mishra
@@ -15,14 +15,14 @@ dependencies:
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: 1.6.0
+ version: 1.8.2
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: 1.6.0
+ version: 1.8.2
  description: Ruby AI Agents SDK enables creating complex AI workflows with multi-agent
  orchestration, tool execution, safety guardrails, and provider-agnostic LLM integration.
  email:
@@ -60,6 +60,7 @@ files:
  - docs/guides/agent-as-tool-pattern.md
  - docs/guides/multi-agent-systems.md
  - docs/guides/rails-integration.md
+ - docs/guides/request-headers.md
  - docs/guides/state-persistence.md
  - docs/guides/structured-output.md
  - docs/index.md
@@ -102,7 +103,9 @@ files:
  - lib/agents/agent_tool.rb
  - lib/agents/callback_manager.rb
  - lib/agents/handoff.rb
- - lib/agents/message_extractor.rb
+ - lib/agents/helpers.rb
+ - lib/agents/helpers/headers.rb
+ - lib/agents/helpers/message_extractor.rb
  - lib/agents/result.rb
  - lib/agents/run_context.rb
  - lib/agents/runner.rb
data/lib/agents/message_extractor.rb DELETED
@@ -1,97 +0,0 @@
- # frozen_string_literal: true
-
- module Agents
- # Service object responsible for extracting and formatting conversation messages
- # from RubyLLM chat objects into a format suitable for persistence and context restoration.
- #
- # Handles different message types:
- # - User messages: Basic content preservation
- # - Assistant messages: Includes agent attribution and tool calls
- # - Tool result messages: Links back to original tool calls
- #
- # @example Extract messages from a chat
- # messages = MessageExtractor.extract_messages(chat, current_agent)
- # #=> [
- # { role: :user, content: "Hello" },
- # { role: :assistant, content: "Hi!", agent_name: "Support", tool_calls: [...] },
- # { role: :tool, content: "Result", tool_call_id: "call_123" }
- # ]
- class MessageExtractor
- # Check if content is considered empty (handles both String and Hash content)
- #
- # @param content [String, Hash, nil] The content to check
- # @return [Boolean] true if content is empty, false otherwise
- def self.content_empty?(content)
- case content
- when String
- content.strip.empty?
- when Hash
- content.empty?
- else
- content.nil?
- end
- end
-
- # Extract messages from a chat object for conversation history persistence
- #
- # @param chat [Object] Chat object that responds to :messages
- # @param current_agent [Agent] The agent currently handling the conversation
- # @return [Array<Hash>] Array of message hashes suitable for persistence
- def self.extract_messages(chat, current_agent)
- new(chat, current_agent).extract
- end
-
- def initialize(chat, current_agent)
- @chat = chat
- @current_agent = current_agent
- end
-
- def extract
- return [] unless @chat.respond_to?(:messages)
-
- @chat.messages.filter_map do |msg|
- case msg.role
- when :user, :assistant
- extract_user_or_assistant_message(msg)
- when :tool
- extract_tool_message(msg)
- end
- end
- end
-
- private
-
- def extract_user_or_assistant_message(msg)
- return nil unless msg.content && !self.class.content_empty?(msg.content)
-
- message = {
- role: msg.role,
- content: msg.content
- }
-
- if msg.role == :assistant
- # Add agent attribution for conversation continuity
- message[:agent_name] = @current_agent.name if @current_agent
-
- # Add tool calls if present
- if msg.tool_call? && msg.tool_calls
- # RubyLLM stores tool_calls as Hash with call_id => ToolCall object
- # Reference: RubyLLM::StreamAccumulator#tool_calls_from_stream
- message[:tool_calls] = msg.tool_calls.values.map(&:to_h)
- end
- end
-
- message
- end
-
- def extract_tool_message(msg)
- return nil unless msg.tool_result?
-
- {
- role: msg.role,
- content: msg.content,
- tool_call_id: msg.tool_call_id
- }
- end
- end
- end