ai-agents 0.9.0 → 0.10.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: df386be7e27f87111901954d72e4caa3e26a1d789ec113fbf9e2da9d2f87587e
-  data.tar.gz: f05b6827852966d0514abae61c7732a34ea92aeebe30855132ec8bee39c1e4c2
+  metadata.gz: 6d4dc10b4aceae77705002488794ecdd15531f604641adaa03445d72374220f0
+  data.tar.gz: b056fcf690121a790808e6aa1229dc1a16b6e19b609fbaa09a5e32613f5bc58f
 SHA512:
-  metadata.gz: 2180b6b495519d34ff4762cd027d6079fb39ad91117242f6f366779fd7360c7bbb55521761e151ad246eee3004363de9bf768ca953a8e69a1e39e2f2dfeb5345
-  data.tar.gz: 41dadb09fd62a2ce47b063c6b85685f1efa8be73dbee467e83ad69cdb32793926aaadacbce7bfa820960c0d5f35e57325a7078733c19bef0e1121076bc6a9002
+  metadata.gz: b80b45ccaa92140f6155dceeb2227235150f913c90385197878074af9df0c22b913e611c46424f9efa9b3f8f128b8921795c3136685a889a758c4b8dd81f4dbd
+  data.tar.gz: 63662db9f1dcbed10439dfa64746a868a0ee2cc64266c48e55f613296e90dca0d1661466af01825bbec3015c928209f20f000bc9aeb882f5ff510be7ad7110f7
data/CHANGELOG.md CHANGED
@@ -7,6 +7,26 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+
+## [0.10.0] - 2026-04-20
+
+### Added
+- Support for provider-specific params via `with_params` (#44)
+
+### Changed
+- **Bump `ruby_llm` dependency**: now `~> 1.14` (was `~> 1.9.1`). Trusts upstream semantic versioning by dropping the patch-level pin so new minor releases are picked up automatically (#61)
+- Various internal refactors to `TracingCallbacks`, `Runner`, and helper modules
+
+
+## [0.9.1] - 2026-02-24
+
+### Fixed
+- **Multimodal Conversation History**: Restored multimodal image content from conversation history, ensuring image URLs and base64 data are preserved across agent turns (#46)
+- **Tracing Instrumentation**: Improved serialization of multimodal content in tracing callbacks, returning JSON for non-text content types
+
+### Changed
+- **Test Infrastructure**: OpenTelemetry stubs in tests are now conditionally applied only when the `opentelemetry-api` gem is not installed (#45)
+
 ## [0.9.0] - 2026-02-09
 
 ### Added
data/docs/guides/provider-params.md ADDED
@@ -0,0 +1,90 @@
+---
+layout: default
+title: Provider-Specific Parameters
+parent: Guides
+nav_order: 7
+---
+
+# Provider-Specific Parameters
+
+Provider-specific parameters let you pass additional options directly into the LLM request payload via RubyLLM's `with_params` method. This is useful for features like OpenAI's `service_tier`, Anthropic's `top_k`, or any other provider-specific option that isn't exposed as a first-class SDK attribute.
+
+## Basic Usage
+
+### Agent-Level Params
+
+Set default parameters when creating an agent; they are applied to every request:
+
+```ruby
+agent = Agents::Agent.new(
+  name: "Assistant",
+  instructions: "You are a helpful assistant",
+  params: {
+    service_tier: "flex",
+    max_completion_tokens: 2048
+  }
+)
+
+runner = Agents::Runner.with_agents(agent)
+result = runner.run("Hello!")
+# All requests will include the provider-specific params
+```
+
+### Runtime Params
+
+Override or add parameters for specific requests:
+
+```ruby
+agent = Agents::Agent.new(
+  name: "Assistant",
+  instructions: "You are a helpful assistant"
+)
+
+runner = Agents::Runner.with_agents(agent)
+
+# Pass params at runtime
+result = runner.run(
+  "Explain quantum computing",
+  params: {
+    service_tier: "default",
+    max_completion_tokens: 4096
+  }
+)
+```
+
+### Parameter Precedence
+
+When both agent-level and runtime params are provided, **runtime params take precedence**:
+
+```ruby
+agent = Agents::Agent.new(
+  name: "Assistant",
+  instructions: "You are a helpful assistant",
+  params: {
+    service_tier: "flex",
+    top_p: 0.9
+  }
+)
+
+runner = Agents::Runner.with_agents(agent)
+
+result = runner.run(
+  "Hello!",
+  params: {
+    service_tier: "default",    # Overrides the agent's "flex" value
+    max_completion_tokens: 1000 # Additional param
+  }
+)
+
+# Final params sent to the LLM API:
+# {
+#   service_tier: "default",      # Runtime value wins
+#   top_p: 0.9,                   # From agent
+#   max_completion_tokens: 1000   # From runtime
+# }
+```
+
+## See Also
+
+- [Custom Request Headers](request-headers.html) - Adding custom HTTP headers using the same two-level precedence pattern
+- [Multi-Agent Systems](multi-agent-systems.html) - Using params across agent handoffs
data/docs/guides.md CHANGED
@@ -18,4 +18,5 @@ Practical guides for building real-world applications with the AI Agents library
 - **[State Persistence](guides/state-persistence.html)** - Managing conversation state and context across sessions and processes
 - **[Structured Output](guides/structured-output.html)** - Enforcing JSON schema validation for reliable agent responses
 - **[Custom Request Headers](guides/request-headers.html)** - Adding custom HTTP headers for authentication, tracking, and provider-specific features
+- **[Provider-Specific Parameters](guides/provider-params.html)** - Passing provider-specific parameters like `service_tier` to the underlying LLM request
 - **[OpenTelemetry Instrumentation](guides/instrumentation.html)** - Trace agent execution with Langfuse and other OTel backends
data/lib/agents/agent.rb CHANGED
@@ -4,7 +4,7 @@
 # Agents are immutable, thread-safe objects that can be cloned with modifications.
 # They encapsulate the configuration needed to interact with an LLM including
 # instructions, tools, and potential handoff targets.
-require_relative "helpers/headers"
+require_relative "helpers/hash_normalizer"
 # @example Creating a basic agent
 #   agent = Agents::Agent.new(
 #     name: "Assistant",
@@ -50,7 +50,7 @@ require_relative "helpers/headers"
 #   )
 module Agents
   class Agent
-    attr_reader :name, :instructions, :model, :tools, :handoff_agents, :temperature, :response_schema, :headers
+    attr_reader :name, :instructions, :model, :tools, :handoff_agents, :temperature, :response_schema, :headers, :params
 
     # Initialize a new Agent instance
     #
@@ -62,8 +62,9 @@ module Agents
    # @param temperature [Float] Controls randomness in responses (0.0 = deterministic, 1.0 = very random, default: 0.7)
    # @param response_schema [Hash, nil] JSON schema for structured output responses
    # @param headers [Hash, nil] Default HTTP headers applied to LLM requests
+   # @param params [Hash, nil] Default provider-specific parameters applied to LLM requests (e.g., service_tier)
    def initialize(name:, instructions: nil, model: "gpt-4.1-mini", tools: [], handoff_agents: [], temperature: 0.7,
-                  response_schema: nil, headers: nil)
+                  response_schema: nil, headers: nil, params: nil)
      @name = name
      @instructions = instructions
      @model = model
@@ -71,7 +72,8 @@ module Agents
      @handoff_agents = []
      @temperature = temperature
      @response_schema = response_schema
-     @headers = Helpers::Headers.normalize(headers, freeze_result: true)
+     @headers = Helpers::HashNormalizer.normalize(headers, label: "headers", freeze_result: true)
+     @params = Helpers::HashNormalizer.normalize(params, label: "params", freeze_result: true)
 
      # Mutex for thread-safe handoff registration
      # While agents are typically configured at startup, we want to ensure
@@ -167,7 +169,8 @@ module Agents
        handoff_agents: changes.fetch(:handoff_agents, @handoff_agents),
        temperature: changes.fetch(:temperature, @temperature),
        response_schema: changes.fetch(:response_schema, @response_schema),
-       headers: changes.fetch(:headers, @headers)
+       headers: changes.fetch(:headers, @headers),
+       params: changes.fetch(:params, @params)
      )
    end
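
Because the new `params:` flows through the copy-with-changes construction shown in the last hunk, per-request variants can be derived without mutating a shared agent. A minimal sketch, assuming the agent's `clone` API that the file's header comment describes (the method name itself is not visible in this hunk):

```ruby
require "agents"

base = Agents::Agent.new(
  name: "Assistant",
  instructions: "You are a helpful assistant",
  params: { service_tier: "flex" }
)

# Unspecified attributes carry over via changes.fetch(..., @params).
premium = base.clone(params: { service_tier: "default" })

base.params    # => { service_tier: "flex" }   (frozen by HashNormalizer)
premium.params # => { service_tier: "default" }
```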
data/lib/agents/agent_runner.rb CHANGED
@@ -29,6 +29,8 @@ module Agents
   # can safely register callbacks concurrently without data races.
   #
   class AgentRunner
+    attr_reader :agents
+
     # Initialize with a list of agents. The first agent becomes the default entry point.
     #
     # @param agents [Array<Agents::Agent>] List of agents, first one is the default entry point
@@ -64,8 +66,9 @@
     # @param context [Hash] Conversation context (will be restored if continuing conversation)
     # @param max_turns [Integer] Maximum turns before stopping (default: 10)
     # @param headers [Hash, nil] Custom HTTP headers to pass through to the underlying LLM provider
+    # @param params [Hash, nil] Provider-specific parameters to pass through to the underlying LLM (e.g., service_tier)
     # @return [RunResult] Execution result with output, messages, and updated context
-    def run(input, context: {}, max_turns: Runner::DEFAULT_MAX_TURNS, headers: nil)
+    def run(input, context: {}, max_turns: Runner::DEFAULT_MAX_TURNS, headers: nil, params: nil)
       # Determine which agent should handle this conversation
       # Uses conversation history to maintain continuity across handoffs
       current_agent = determine_conversation_agent(context)
@@ -78,6 +81,7 @@
         registry: @registry,
         max_turns: max_turns,
         headers: headers,
+        params: params,
         callbacks: @callbacks
       )
     end
data/lib/agents/agent_tool.rb CHANGED
@@ -97,7 +97,7 @@ module Agents
     private
 
     def transform_agent_name(name)
-      name.downcase.gsub(/\s+/, "_").gsub(/[^a-z0-9_]/, "")
+      Helpers::NameNormalizer.to_tool_name(name)
     end
 
     # Create isolated context that only shares state, not conversation artifacts
data/lib/agents/handoff.rb CHANGED
@@ -53,7 +53,7 @@ module Agents
       @target_agent = target_agent
 
       # Set up the tool with a standardized name and description
-      @tool_name = "handoff_to_#{target_agent.name.downcase.gsub(/\s+/, "_")}"
+      @tool_name = "handoff_to_#{Helpers::NameNormalizer.to_tool_name(target_agent.name)}"
       @tool_description = "Transfer conversation to #{target_agent.name}"
 
       super()
data/lib/agents/helpers/hash_normalizer.rb ADDED
@@ -0,0 +1,28 @@
+# frozen_string_literal: true
+
+module Agents
+  module Helpers
+    module HashNormalizer
+      module_function
+
+      # NOTE: freeze_result performs a shallow freeze on the top-level hash only.
+      # Nested values remain mutable — e.g. hash[:nested][:key] = "x" would succeed.
+      def normalize(input, label:, freeze_result: false)
+        return freeze_result ? {}.freeze : {} if input.nil? || (input.respond_to?(:empty?) && input.empty?)
+
+        hash = input.respond_to?(:to_h) ? input.to_h : input
+        raise ArgumentError, "#{label} must be a Hash or respond to #to_h" unless hash.is_a?(Hash)
+
+        result = hash.transform_keys { |key| key.is_a?(Symbol) ? key : key.to_sym }
+        freeze_result ? result.freeze : result
+      end
+
+      def merge(base, override)
+        return override if base.empty?
+        return base if override.empty?
+
+        base.merge(override)
+      end
+    end
+  end
+end
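
This helper generalizes the deleted `Headers` module to any labeled hash, and together `normalize` and `merge` implement the two-level precedence described in the new guide. A short sketch of its observable behavior (the `require` path is an assumption; the calls mirror the module above):

```ruby
require "agents"

norm = Agents::Helpers::HashNormalizer

# String keys are symbolized; freeze_result protects agent-level defaults.
agent_params = norm.normalize({ "service_tier" => "flex", top_p: 0.9 },
                              label: "params", freeze_result: true)
# => { service_tier: "flex", top_p: 0.9 } (shallowly frozen)

runtime_params = norm.normalize({ service_tier: "default" }, label: "params")

# Hash#merge keeps the override's value on duplicate keys, so runtime wins.
norm.merge(agent_params, runtime_params)
# => { service_tier: "default", top_p: 0.9 }

# The label feeds the error message:
norm.normalize("nope", label: "params")
# => ArgumentError: params must be a Hash or respond to #to_h
```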
data/lib/agents/helpers/name_normalizer.rb ADDED
@@ -0,0 +1,13 @@
+# frozen_string_literal: true
+
+module Agents
+  module Helpers
+    module NameNormalizer
+      module_function
+
+      def to_tool_name(name)
+        name.downcase.gsub(/\s+/, "_").gsub(/[^a-z0-9_]/, "")
+      end
+    end
+  end
+end
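
For reference, this extracted helper reproduces the transformation previously duplicated at the two call sites patched above; expected outputs:

```ruby
Agents::Helpers::NameNormalizer.to_tool_name("Billing Agent")  # => "billing_agent"
Agents::Helpers::NameNormalizer.to_tool_name("Tier-2 Support") # => "tier2_support"
```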
data/lib/agents/helpers.rb CHANGED
@@ -5,5 +5,6 @@ module Agents
   end
 end
 
-require_relative "helpers/headers"
+require_relative "helpers/hash_normalizer"
+require_relative "helpers/name_normalizer"
 require_relative "helpers/message_extractor"
data/lib/agents/instrumentation/tracing_callbacks.rb CHANGED
@@ -48,7 +48,7 @@ module Agents
       tracing = tracing_state(context_wrapper)
       return unless tracing
 
-      tracing[:pending_llm_input] = input.to_s
+      tracing[:pending_llm_input] = serialize_output(input)
 
       return if tracing[:current_agent_name] == agent_name
 
@@ -151,9 +151,9 @@
       llm_span = @tracer.start_span(@llm_span_name, with_parent: parent_context(tracing), attributes: attrs)
 
       llm_span.set_attribute(ATTR_GEN_AI_REQUEST_MODEL, model) if model
-      set_llm_response_attributes(llm_span, message)
 
       output = llm_output_text(message)
+      set_llm_response_attributes(llm_span, message, output)
       tracing[:last_agent_output] = output unless output.empty?
 
       llm_span.finish
@@ -187,29 +187,25 @@
        root_span.status = OpenTelemetry::Trace::Status.error(error.message)
      end
 
-     def set_llm_response_attributes(span, response)
+     def set_llm_response_attributes(span, response, output)
        if response.respond_to?(:input_tokens) && response.input_tokens
          span.set_attribute(ATTR_GEN_AI_USAGE_INPUT, response.input_tokens)
        end
        if response.respond_to?(:output_tokens) && response.output_tokens
          span.set_attribute(ATTR_GEN_AI_USAGE_OUTPUT, response.output_tokens)
        end
-       output = llm_output_text(response)
        span.set_attribute(ATTR_LANGFUSE_OBS_OUTPUT, output) unless output.empty?
      end
 
-     # Falls back to formatting tool calls when response has no text content,
-     # and uses .to_json for Hash/Array (structured output) to avoid Ruby's .to_s format.
+     # Returns serialized text content if present, otherwise falls back to tool call formatting.
+     # Uses .to_json for Hash/Array (structured output) to avoid Ruby's .to_s format.
      def llm_output_text(response)
-       return format_tool_calls(response) unless response.respond_to?(:content)
-
-       content = response.content
-       return format_tool_calls(response) if content.nil?
-
-       text = content.is_a?(Hash) || content.is_a?(Array) ? content.to_json : content.to_s
-       return format_tool_calls(response) if text.empty?
+       if response.respond_to?(:content) && response.content
+         text = serialize_output(response.content)
+         return text unless text.empty?
+       end
 
-       text
+       format_tool_calls(response)
      end
 
      # Excludes the last message (current response) — returns what was sent to the LLM.
@@ -223,23 +219,21 @@
      end
 
      def format_single_message(msg)
-       text = serialize_content(msg.content)
+       text = serialize_output(msg.content)
        text = append_tool_calls(msg, text)
        { role: msg.role.to_s, content: text }
      end
 
-     def serialize_content(content)
-       content.is_a?(Hash) || content.is_a?(Array) ? content.to_json : content.to_s
-     end
-
      def append_tool_calls(msg, text)
        return text unless msg.role == :assistant && msg.respond_to?(:tool_calls) && msg.tool_calls&.any?
 
-       calls = msg.tool_calls.values.map { |tc| "#{tc.name}(#{tc.arguments.to_json})" }.join(", ")
+       calls = msg.tool_calls.values.map { |tc| "#{tc.name}(#{serialize_output(tc.arguments)})" }.join(", ")
        text.empty? ? "Tool calls: #{calls}" : "#{text}\nTool calls: #{calls}"
      end
 
      def serialize_output(value)
+       return serialize_multimodal_content(value) if multimodal_content?(value)
+
        value.is_a?(Hash) || value.is_a?(Array) ? value.to_json : value.to_s
      end
 
@@ -334,6 +328,23 @@
      def cleanup_tracing_state(context_wrapper)
        context_wrapper.context.delete(:__otel_tracing)
      end
+
+     def multimodal_content?(value)
+       value.respond_to?(:text) && value.respond_to?(:attachments)
+     end
+
+     def serialize_multimodal_content(content)
+       parts = []
+       text = content.text
+       parts << text if text && !text.empty?
+
+       if content.attachments&.any?
+         urls = content.attachments.map { |a| a.respond_to?(:source) ? a.source.to_s : a.to_s }
+         parts << "Attachments: #{urls.join(", ")}"
+       end
+
+       parts.join("\n")
+     end
    end
  end
 end
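
The net effect of these tracing changes is that multimodal content no longer collapses to Ruby's default `#to_s`. A standalone replica of the new serialization logic (not the gem's API; the Structs are hypothetical stand-ins for RubyLLM's content and attachment objects, which respond to `#text`, `#attachments`, and `#source`):

```ruby
Attachment = Struct.new(:source)
Content    = Struct.new(:text, :attachments)

# Mirrors serialize_multimodal_content from the hunk above.
def serialize_multimodal(content)
  parts = []
  parts << content.text if content.text && !content.text.empty?
  if content.attachments&.any?
    urls = content.attachments.map { |a| a.respond_to?(:source) ? a.source.to_s : a.to_s }
    parts << "Attachments: #{urls.join(", ")}"
  end
  parts.join("\n")
end

msg = Content.new("What is in this image?", [Attachment.new("https://example.com/cat.png")])
serialize_multimodal(msg)
# => "What is in this image?\nAttachments: https://example.com/cat.png"
```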
data/lib/agents/runner.rb CHANGED
@@ -81,9 +81,11 @@
     # @param registry [Hash] Registry of agents for handoff resolution
     # @param max_turns [Integer] Maximum conversation turns before stopping
     # @param headers [Hash, nil] Custom HTTP headers passed to the underlying LLM provider
+    # @param params [Hash, nil] Provider-specific parameters passed to the underlying LLM (e.g., service_tier)
     # @param callbacks [Hash] Optional callbacks for real-time event notifications
     # @return [RunResult] The result containing output, messages, and usage
-    def run(starting_agent, input, context: {}, registry: {}, max_turns: DEFAULT_MAX_TURNS, headers: nil, callbacks: {})
+    def run(starting_agent, input, context: {}, registry: {}, max_turns: DEFAULT_MAX_TURNS, headers: nil, params: nil,
+            callbacks: {})
       # The starting_agent is already determined by AgentRunner based on conversation history
       current_agent = starting_agent
@@ -95,13 +97,17 @@
       # Emit run start event
       context_wrapper.callback_manager.emit_run_start(current_agent.name, input, context_wrapper)
 
-      runtime_headers = Helpers::Headers.normalize(headers)
-      agent_headers = Helpers::Headers.normalize(current_agent.headers)
+      runtime_headers = Helpers::HashNormalizer.normalize(headers, label: "headers")
+      agent_headers = Helpers::HashNormalizer.normalize(current_agent.headers, label: "headers")
+      runtime_params = Helpers::HashNormalizer.normalize(params, label: "params")
+      agent_params = Helpers::HashNormalizer.normalize(current_agent.params, label: "params")
 
       # Create chat and restore conversation history
       chat = RubyLLM::Chat.new(model: current_agent.model)
-      current_headers = Helpers::Headers.merge(agent_headers, runtime_headers)
+      current_headers = Helpers::HashNormalizer.merge(agent_headers, runtime_headers)
+      current_params = Helpers::HashNormalizer.merge(agent_params, runtime_params)
       apply_headers(chat, current_headers)
+      apply_params(chat, current_params)
       configure_chat_for_agent(chat, current_agent, context_wrapper, replace: false)
       restore_conversation_history(chat, context_wrapper)
       input_already_in_history = last_message_matches?(chat, input)
@@ -112,19 +118,18 @@
         raise MaxTurnsExceeded, "Exceeded maximum turns: #{max_turns}" if current_turn > max_turns
 
         # Get response from LLM (RubyLLM handles tool execution with halting-based handoff detection)
-        result = if current_turn == 1
-                   # Emit agent thinking event for initial message
-                   context_wrapper.callback_manager.emit_agent_thinking(current_agent.name, input, context_wrapper)
-                   # If conversation history already ends with this user message (e.g. passed
-                   # in via context from an external system), use complete to avoid duplicating it.
-                   input_already_in_history ? chat.complete : chat.ask(input)
-                 else
-                   # Emit agent thinking event for continuation
-                   context_wrapper.callback_manager.emit_agent_thinking(current_agent.name, "(continuing conversation)",
-                                                                        context_wrapper)
-                   chat.complete
-                 end
-        response = result
+        response = if current_turn == 1
+                     # Emit agent thinking event for initial message
+                     context_wrapper.callback_manager.emit_agent_thinking(current_agent.name, input, context_wrapper)
+                     # If conversation history already ends with this user message (e.g. passed
+                     # in via context from an external system), use complete to avoid duplicating it.
+                     input_already_in_history ? chat.complete : chat.ask(input)
+                   else
+                     # Emit agent thinking event for continuation
+                     context_wrapper.callback_manager.emit_agent_thinking(current_agent.name, "(continuing conversation)",
+                                                                          context_wrapper)
+                     chat.complete
+                   end
         track_usage(response, context_wrapper)
 
         # Emit LLM call complete event with model and response for instrumentation
@@ -140,22 +145,8 @@
           # Validate that the target agent is in our registry
           # This prevents handoffs to agents that weren't explicitly provided
           unless registry[next_agent.name]
-            save_conversation_state(chat, context_wrapper, current_agent)
             error = AgentNotFoundError.new("Handoff failed: Agent '#{next_agent.name}' not found in registry")
-
-            result = RunResult.new(
-              output: nil,
-              messages: Helpers::MessageExtractor.extract_messages(chat, current_agent),
-              usage: context_wrapper.usage,
-              context: context_wrapper.context,
-              error: error
-            )
-
-            # Emit agent complete and run complete events with error
-            context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, error, context_wrapper)
-            context_wrapper.callback_manager.emit_run_complete(current_agent.name, result, context_wrapper)
-
-            return result
+            return finalize_run(chat, context_wrapper, current_agent, output: nil, error: error)
          end
 
          # Save current conversation state before switching
@@ -174,9 +165,12 @@
 
          # Reconfigure existing chat for new agent - preserves conversation history automatically
          configure_chat_for_agent(chat, current_agent, context_wrapper, replace: true)
-         agent_headers = Helpers::Headers.normalize(current_agent.headers)
-         current_headers = Helpers::Headers.merge(agent_headers, runtime_headers)
+         agent_headers = Helpers::HashNormalizer.normalize(current_agent.headers, label: "headers")
+         current_headers = Helpers::HashNormalizer.merge(agent_headers, runtime_headers)
          apply_headers(chat, current_headers)
+         agent_params = Helpers::HashNormalizer.normalize(current_agent.params, label: "params")
+         current_params = Helpers::HashNormalizer.merge(agent_params, runtime_params)
+         apply_params(chat, current_params)
          context_wrapper.callback_manager.emit_chat_created(
            chat, current_agent.name, current_agent.model, context_wrapper
          )
@@ -189,81 +183,50 @@
 
          # Handle non-handoff halts - return the halt content as final response
          if response.is_a?(RubyLLM::Tool::Halt)
-           save_conversation_state(chat, context_wrapper, current_agent)
-
-           result = RunResult.new(
-             output: response.content,
-             messages: Helpers::MessageExtractor.extract_messages(chat, current_agent),
-             usage: context_wrapper.usage,
-             context: context_wrapper.context
-           )
-
-           # Emit agent complete and run complete events
-           context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, nil, context_wrapper)
-           context_wrapper.callback_manager.emit_run_complete(current_agent.name, result, context_wrapper)
-
-           return result
+           return finalize_run(chat, context_wrapper, current_agent, output: response.content)
          end
 
          # If tools were called, continue the loop to let them execute
          next if response.tool_call?
 
          # If no tools were called, we have our final response
-
-         # Save final state before returning
-         save_conversation_state(chat, context_wrapper, current_agent)
-
-         result = RunResult.new(
-           output: response.content,
-           messages: Helpers::MessageExtractor.extract_messages(chat, current_agent),
-           usage: context_wrapper.usage,
-           context: context_wrapper.context
-         )
-
-         # Emit agent complete and run complete events
-         context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, nil, context_wrapper)
-         context_wrapper.callback_manager.emit_run_complete(current_agent.name, result, context_wrapper)
-
-         return result
+         return finalize_run(chat, context_wrapper, current_agent, output: response.content)
        end
      rescue MaxTurnsExceeded => e
-      # Save state even on error
-      save_conversation_state(chat, context_wrapper, current_agent) if chat
-
-      result = RunResult.new(
-        output: "Conversation ended: #{e.message}",
-        messages: chat ? Helpers::MessageExtractor.extract_messages(chat, current_agent) : [],
-        usage: context_wrapper.usage,
-        error: e,
-        context: context_wrapper.context
-      )
-
-      # Emit agent complete and run complete events with error
-      context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, e, context_wrapper)
-      context_wrapper.callback_manager.emit_run_complete(current_agent.name, result, context_wrapper)
-
-      result
-    rescue StandardError => e
-      # Save state even on error
+      finalize_run(chat, context_wrapper, current_agent,
+                   output: "Conversation ended: #{e.message}", error: e)
+    rescue StandardError => e
+      finalize_run(chat, context_wrapper, current_agent, output: nil, error: e)
+    end
+
+    private
+
+    # Saves conversation state, builds a RunResult, emits completion callbacks, and returns it.
+    # Centralises the finalize-and-return pattern used by the normal path, halt path, and error rescues.
+    #
+    # @param chat [RubyLLM::Chat, nil] The chat instance (nil in early-failure rescues)
+    # @param context_wrapper [RunContext] Context wrapper for state and callbacks
+    # @param current_agent [Agents::Agent] The currently active agent
+    # @param output [String, nil] The output text for the result
+    # @param error [StandardError, nil] Optional error to attach to the result
+    # @return [RunResult]
+    def finalize_run(chat, context_wrapper, current_agent, output:, error: nil)
      save_conversation_state(chat, context_wrapper, current_agent) if chat
 
      result = RunResult.new(
-        output: nil,
+        output: output,
        messages: chat ? Helpers::MessageExtractor.extract_messages(chat, current_agent) : [],
        usage: context_wrapper.usage,
-        error: e,
+        error: error,
        context: context_wrapper.context
      )
 
-      # Emit agent complete and run complete events with error
-      context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, e, context_wrapper)
+      context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, error, context_wrapper)
      context_wrapper.callback_manager.emit_run_complete(current_agent.name, result, context_wrapper)
 
      result
    end
 
-    private
-
    # Creates a deep copy of context data for thread safety.
    # Preserves conversation history array structure while avoiding agent mutation.
    #
@@ -334,7 +297,7 @@
 
      params = {
        role: role,
-        content: RubyLLM::Content.new(content_value)
+        content: build_content(content_value)
      }
 
      # Handle tool-specific parameters (Tool Results)
@@ -366,6 +329,24 @@
      params
    end
 
+    # Build RubyLLM::Content from stored content, handling multimodal arrays with image attachments.
+    # Multimodal arrays follow the OpenAI content format: [{type: 'text', text: '...'}, {type: 'image_url', ...}]
+    def build_content(content_value)
+      return RubyLLM::Content.new(content_value) unless content_value.is_a?(Array)
+
+      text_parts = content_value.filter_map { |p| p[:text] || p["text"] if (p[:type] || p["type"]) == "text" }
+      image_urls = content_value.filter_map do |p|
+        next unless (p[:type] || p["type"]) == "image_url"
+
+        p.dig(:image_url, :url) || p.dig("image_url", "url")
+      end
+
+      return RubyLLM::Content.new(content_value.to_json) if text_parts.empty? && image_urls.empty?
+
+      text = text_parts.join(" ")
+      image_urls.any? ? RubyLLM::Content.new(text, image_urls) : RubyLLM::Content.new(text)
+    end
+
    # Validate tool message has required tool_call_id
    def valid_tool_message?(msg)
      if msg[:tool_call_id]
@@ -444,6 +425,12 @@
      chat.with_headers(**headers)
    end
 
+    def apply_params(chat, params)
+      return if params.empty?
+
+      chat.with_params(**params)
+    end
+
    def track_usage(response, context_wrapper)
      return unless context_wrapper&.usage
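
To see what the new `build_content` in the hunk above handles, here is the shape of a stored multimodal history entry and the extraction step it performs, sketched with hypothetical data:

```ruby
# Hypothetical stored history entry in the OpenAI content format:
stored = [
  { type: "text", text: "What is in this image?" },
  { type: "image_url", image_url: { url: "https://example.com/cat.png" } }
]

# The same filter_map logic as build_content, tolerating string or symbol keys:
text_parts = stored.filter_map { |p| p[:text] || p["text"] if (p[:type] || p["type"]) == "text" }
image_urls = stored.filter_map do |p|
  next unless (p[:type] || p["type"]) == "image_url"

  p.dig(:image_url, :url) || p.dig("image_url", "url")
end

text_parts # => ["What is in this image?"]
image_urls # => ["https://example.com/cat.png"]
# build_content then rebuilds RubyLLM::Content.new("What is in this image?", image_urls),
# restoring the image attachment when history is replayed into a new chat.
```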
data/lib/agents/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Agents
-  VERSION = "0.9.0"
+  VERSION = "0.10.0"
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: ai-agents
 version: !ruby/object:Gem::Version
-  version: 0.9.0
+  version: 0.10.0
 platform: ruby
 authors:
 - Shivam Mishra
@@ -15,14 +15,14 @@ dependencies:
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: 1.9.1
+        version: '1.14'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: 1.9.1
+        version: '1.14'
 description: Ruby AI Agents SDK enables creating complex AI workflows with multi-agent
   orchestration, tool execution, safety guardrails, and provider-agnostic LLM integration.
 email:
@@ -61,6 +61,7 @@ files:
 - docs/guides/agent-as-tool-pattern.md
 - docs/guides/instrumentation.md
 - docs/guides/multi-agent-systems.md
+- docs/guides/provider-params.md
 - docs/guides/rails-integration.md
 - docs/guides/request-headers.md
 - docs/guides/state-persistence.md
@@ -106,8 +107,9 @@ files:
 - lib/agents/callback_manager.rb
 - lib/agents/handoff.rb
 - lib/agents/helpers.rb
-- lib/agents/helpers/headers.rb
+- lib/agents/helpers/hash_normalizer.rb
 - lib/agents/helpers/message_extractor.rb
+- lib/agents/helpers/name_normalizer.rb
 - lib/agents/instrumentation.rb
 - lib/agents/instrumentation/constants.rb
 - lib/agents/instrumentation/tracing_callbacks.rb
data/lib/agents/helpers/headers.rb DELETED
@@ -1,33 +0,0 @@
-# frozen_string_literal: true
-
-module Agents
-  module Helpers
-    module Headers
-      module_function
-
-      def normalize(headers, freeze_result: false)
-        return freeze_result ? {}.freeze : {} if headers.nil? || (headers.respond_to?(:empty?) && headers.empty?)
-
-        hash = headers.respond_to?(:to_h) ? headers.to_h : headers
-        raise ArgumentError, "headers must be a Hash or respond to #to_h" unless hash.is_a?(Hash)
-
-        result = symbolize_keys(hash)
-        freeze_result ? result.freeze : result
-      end
-
-      def merge(agent_headers, runtime_headers)
-        return runtime_headers if agent_headers.empty?
-        return agent_headers if runtime_headers.empty?
-
-        agent_headers.merge(runtime_headers) { |_key, _agent_value, runtime_value| runtime_value }
-      end
-
-      def symbolize_keys(hash)
-        hash.transform_keys do |key|
-          key.is_a?(Symbol) ? key : key.to_sym
-        end
-      end
-      private_class_method :symbolize_keys
-    end
-  end
-end