ruby_llm-responses_api 0.4.1 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: c2d9ce65eebe6420f01878669d81f90f999b738158b17eaa558dd6c88226c2c2
-  data.tar.gz: c432ef2dfcebb290debbbc5ac5e72038081f54fc054065da1bee09465ba99ba0
+  metadata.gz: 5eafd14a08ce95dc9637f022c3dcd0b88dc979314efd534ec3a3d5dbb2a6e396
+  data.tar.gz: 6b84beeec2204e791727bb969aac9b9e490e574204c78ab87ec6b69692f1ad7d
 SHA512:
-  metadata.gz: 39bbbb38a8b7183ff501d092eab938f0ab6572129ca3cd518057daa04b21117ea38eb8c19ab1a6755036b41f683f3124a1c0209ee91c5f547fe12590b673bbf3
-  data.tar.gz: 74346a9093b98f079b02deffc3d9f8cbe9b8bf681d33a843d4bd130d099976f78d66a61a6c2b5032d31a84190e25b7509fe4e83f39fa278be2f74f5980568544
+  metadata.gz: 5216a047aa783ed7b221e91f0173fd5785ae003b3006a9dcbcdb07effef569e73cac28fab1129f9e157e15f03a5219d18fdcfcc84c17406f3913fe8cdc1ed761
+  data.tar.gz: 3aa365e82445b4deb9f3b2dcda4ec47f096728f05c3f7a89eb9076be41ba81e873e2f513b4057391b84d6f1e1b8fd630ade743a46b0c2a37f9c2187eb43efbd6
data/CHANGELOG.md CHANGED
@@ -5,6 +5,21 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.5.0] - 2026-02-25
+
+### Added
+
+- **Batch API** for processing many requests asynchronously at 50% lower cost
+- `RubyLLM.batch(model:, provider:)` factory method
+- `Batch#add` to queue requests with auto-generated or custom IDs
+- `Batch#create!` to upload JSONL and create the batch in one call
+- `Batch#wait!` to poll until completion with progress callbacks
+- `Batch#results` returns a `Hash<custom_id, Message>` using the same parsing as `Chat`
+- `Batch#errors`, `Batch#cancel!`, and status helpers (`completed?`, `in_progress?`, `failed?`)
+- Resume from a previous session via `RubyLLM.batch(id: "batch_abc", provider: :openai_responses)`
+- `RubyLLM.batches` to list existing batches
+- `Batches` helper module with JSONL builder, URL helpers, and result parsing
+
 ## [0.4.1] - 2026-02-24
 
 ### Added
data/README.md CHANGED
@@ -259,6 +259,44 @@ image_results = RubyLLM::ResponsesAPI::BuiltInTools.parse_image_generation_resu
 citations = RubyLLM::ResponsesAPI::BuiltInTools.extract_citations(message_content)
 ```
 
+## Batch API
+
+Process many requests asynchronously at 50% lower cost with a 24-hour completion window:
+
+```ruby
+# Create a batch
+batch = RubyLLM.batch(model: 'gpt-4o', provider: :openai_responses)
+
+# Add requests (IDs are auto-generated, or supply your own)
+batch.add("What is Ruby?")
+batch.add("What is Python?", instructions: "Be brief", temperature: 0.5)
+batch.add("Translate: hello", id: "translate_1")
+
+# Submit (uploads the JSONL file and creates the batch)
+batch.create!
+batch.id # => "batch_abc123"
+
+# Poll until done
+batch.wait!(interval: 60) { |b| puts "#{b.completed_count}/#{b.total_count}" }
+
+# Get results as Messages keyed by custom_id
+results = batch.results
+results["request_0"].content   # => "Ruby is a dynamic..."
+results["translate_1"].content # => "Hola"
+
+# Resume from a previous session
+batch = RubyLLM.batch(id: "batch_abc123", provider: :openai_responses)
+batch.results
+
+# Cancel a running batch
+batch.cancel!
+
+# List existing batches
+RubyLLM.batches(provider: :openai_responses)
+```
+
+**Constraints**: no `web_search`/`code_interpreter` tools, no `previous_response_id` chaining, max 50k requests per batch, 200 MB file limit.
+
 ## WebSocket Mode
 
 For agentic workflows with many tool-call round trips, WebSocket mode provides lower latency by maintaining a persistent connection instead of issuing a new HTTP request per turn.
@@ -314,6 +352,7 @@ ws.disconnect
 - **Server-side compaction** - Run multi-hour agent sessions without hitting context limits
 - **Containers** - Persistent execution environments with networking and file management
 - **WebSocket mode** - Lower-latency persistent connections for agentic tool-call loops
+- **Batch API** - Process bulk requests at 50% lower cost with 24-hour turnaround
 
 ## License
 
@@ -0,0 +1,231 @@
+# frozen_string_literal: true
+
+require 'stringio'
+
+module RubyLLM
+  module Providers
+    class OpenAIResponses
+      # High-level interface for OpenAI's Batch API.
+      # Hides JSONL serialization, file upload, polling, and result parsing
+      # behind a clean Ruby API that mirrors RubyLLM::Chat.
+      #
+      # @example
+      #   batch = RubyLLM.batch(model: 'gpt-4o', provider: :openai_responses)
+      #   batch.add("What is Ruby?")
+      #   batch.add("What is Python?", instructions: "Be brief")
+      #   batch.create!
+      #   batch.wait! { |b| puts "#{b.completed_count}/#{b.total_count}" }
+      #   batch.results # => { "request_0" => Message, ... }
+      class Batch
+        attr_reader :id, :requests
+
+        # @param model [String] Model ID (e.g. 'gpt-4o')
+        # @param provider [Symbol, RubyLLM::Providers::OpenAIResponses] Provider slug or instance
+        # @param id [String, nil] Existing batch ID to resume
+        def initialize(model: nil, provider: :openai_responses, id: nil)
+          @model = model
+          @provider = resolve_provider(provider)
+          @requests = []
+          @request_counter = 0
+          @data = {}
+
+          return unless id
+
+          @id = id
+          refresh!
+        end
+
+        # Queue a request for inclusion in the batch.
+        # @param input [String, Array] User message or Responses API input array
+        # @param id [String, nil] Custom ID for this request (auto-generated if omitted)
+        # @param instructions [String, nil] System/developer instructions
+        # @param temperature [Float, nil] Sampling temperature
+        # @param tools [Array, nil] Tools configuration
+        # @return [self]
+        def add(input, id: nil, instructions: nil, temperature: nil, tools: nil, **extra) # rubocop:disable Metrics/ParameterLists
+          custom_id = id || "request_#{@request_counter}"
+          @request_counter += 1
+
+          body = { model: @model, input: Batches.normalize_input(input) }
+          body[:instructions] = instructions if instructions
+          body[:temperature] = temperature if temperature
+          body[:tools] = tools if tools
+          body.merge!(extra) unless extra.empty?
+
+          @requests << { custom_id: custom_id, body: body }
+          self
+        end
+
+        # Build JSONL, upload the file, and create the batch.
+        # @param metadata [Hash, nil] Optional metadata for the batch
+        # @return [self]
+        def create!(metadata: nil)
+          raise Error.new(nil, 'No requests added') if @requests.empty?
+          raise Error.new(nil, 'Batch already created') if @id
+
+          jsonl = Batches.build_jsonl(@requests)
+          file_id = upload_file(jsonl)
+
+          payload = {
+            input_file_id: file_id,
+            endpoint: '/v1/responses',
+            completion_window: '24h'
+          }
+          payload[:metadata] = metadata if metadata
+
+          response = @provider.instance_variable_get(:@connection).post(Batches.batches_url, payload)
+          @data = response.body
+          @id = @data['id']
+          self
+        end
+
+        # Fetch the latest batch status from the API.
+        # @return [self]
+        def refresh!
+          raise Error.new(nil, 'Batch not yet created') unless @id
+
+          response = @provider.instance_variable_get(:@connection).get(Batches.batch_url(@id))
+          @data = response.body
+          self
+        end
+
+        # @return [String, nil] Batch status
+        def status
+          @data['status']
+        end
+
+        # @return [Integer, nil] Number of completed requests
+        def completed_count
+          @data.dig('request_counts', 'completed')
+        end
+
+        # @return [Integer, nil] Total number of requests
+        def total_count
+          @data.dig('request_counts', 'total')
+        end
+
+        # @return [Integer, nil] Number of failed requests
+        def failed_count
+          @data.dig('request_counts', 'failed')
+        end
+
+        # @return [Boolean]
+        def completed?
+          status == Batches::COMPLETED
+        end
+
+        # @return [Boolean]
+        def in_progress?
+          Batches.pending?(status)
+        end
+
+        # @return [Boolean]
+        def failed?
+          status == Batches::FAILED
+        end
+
+        # @return [Boolean]
+        def expired?
+          status == Batches::EXPIRED
+        end
+
+        # @return [Boolean]
+        def cancelled?
+          status == Batches::CANCELLED
+        end
+
+        # Block until the batch reaches a terminal status.
+        # @param interval [Numeric] Seconds between polls (default: 30)
+        # @param timeout [Numeric, nil] Maximum seconds to wait
+        # @yield [Batch] Called after each poll
+        # @return [self]
+        def wait!(interval: 30, timeout: nil)
+          start_time = Time.now
+
+          loop do
+            refresh!
+            yield self if block_given?
+
+            break if Batches.terminal?(status)
+
+            if timeout && (Time.now - start_time) > timeout
+              raise Error.new(nil, "Batch polling timeout after #{timeout} seconds")
+            end
+
+            sleep interval
+          end
+
+          self
+        end
+
+        # Download and parse the output file into a Hash of Messages.
+        # @return [Hash<String, Message>] Results keyed by custom_id
+        def results
+          output_file_id = @data['output_file_id']
+          raise Error.new(nil, 'No output file available yet') unless output_file_id
+
+          jsonl = fetch_file_content(output_file_id)
+          Batches.parse_results_to_messages(jsonl)
+        end
+
+        # Download and parse the error file.
+        # @return [Array<Hash>] Error entries
+        def errors
+          error_file_id = @data['error_file_id']
+          return [] unless error_file_id
+
+          jsonl = fetch_file_content(error_file_id)
+          Batches.parse_errors(jsonl)
+        end
+
+        # Cancel the batch.
+        # @return [self]
+        def cancel!
+          raise Error.new(nil, 'Batch not yet created') unless @id
+
+          response = @provider.instance_variable_get(:@connection).post(Batches.cancel_batch_url(@id), {})
+          @data = response.body
+          self
+        end
+
+        private
+
+        def resolve_provider(provider)
+          case provider
+          when Symbol, String
+            slug = provider.to_sym
+            provider_class = RubyLLM::Provider.providers[slug]
+            raise Error.new(nil, "Unknown provider: #{slug}") unless provider_class
+
+            provider_class.new(RubyLLM.config)
+          else
+            provider
+          end
+        end
+
+        # Upload a JSONL string as a file to the Files API.
+        # @return [String] The uploaded file ID
+        def upload_file(jsonl)
+          io = StringIO.new(jsonl)
+          file_part = Faraday::Multipart::FilePart.new(io, 'application/jsonl', 'batch_requests.jsonl')
+
+          response = @provider.instance_variable_get(:@connection).post(Batches.files_url, {
+            file: file_part,
+            purpose: 'batch'
+          })
+          response.body['id']
+        end
+
+        # Download raw file content, bypassing JSON response middleware.
+        # @return [String] Raw file content
+        def fetch_file_content(file_id)
+          conn = @provider.instance_variable_get(:@connection)
+          response = conn.connection.get(Batches.file_content_url(file_id)) do |req|
+            req.headers.merge!(@provider.headers)
+          end
+          response.body
+        end
+      end
+    end
+  end
+end
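As an editorial aside: the `wait!` method above is a plain poll-until-terminal loop. A minimal standalone sketch of that control flow (the status sequence below is a hypothetical stand-in for successive `refresh!` calls; no network or sleeping involved):

```ruby
# Sketch of the wait!-style loop: poll a status until it hits a terminal value.
TERMINAL_STATUSES = %w[completed failed cancelled expired].freeze

status_sequence = %w[validating in_progress in_progress completed]
polls = 0
final_status = nil

loop do
  final_status = status_sequence[polls] # stands in for refresh! + status
  polls += 1
  break if TERMINAL_STATUSES.include?(final_status)
  # a real implementation sleeps `interval` here and raises on timeout
end
```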
@@ -0,0 +1,131 @@
+# frozen_string_literal: true
+
+require 'json'
+
+module RubyLLM
+  module Providers
+    class OpenAIResponses
+      # Stateless helpers for the Batch API.
+      # Provides URL builders, JSONL serialization, status constants, and result parsing.
+      module Batches
+        module_function
+
+        # Status constants
+        VALIDATING = 'validating'
+        IN_PROGRESS = 'in_progress'
+        COMPLETED = 'completed'
+        FAILED = 'failed'
+        CANCELLED = 'cancelled'
+        CANCELLING = 'cancelling'
+        EXPIRED = 'expired'
+
+        TERMINAL_STATUSES = [COMPLETED, FAILED, CANCELLED, EXPIRED].freeze
+        PENDING_STATUSES = [VALIDATING, IN_PROGRESS, CANCELLING].freeze
+
+        # --- URL helpers ---
+
+        def files_url
+          'files'
+        end
+
+        def batches_url
+          'batches'
+        end
+
+        def batch_url(batch_id)
+          "batches/#{batch_id}"
+        end
+
+        def cancel_batch_url(batch_id)
+          "batches/#{batch_id}/cancel"
+        end
+
+        def file_content_url(file_id)
+          "files/#{file_id}/content"
+        end
+
+        # --- Status helpers ---
+
+        def terminal?(status)
+          TERMINAL_STATUSES.include?(status)
+        end
+
+        def pending?(status)
+          PENDING_STATUSES.include?(status)
+        end
+
+        # --- JSONL builder ---
+
+        # Build a JSONL string from an array of request hashes.
+        # Each request has: custom_id, body (the Responses API payload)
+        def build_jsonl(requests)
+          requests.map do |req|
+            JSON.generate({
+              custom_id: req[:custom_id],
+              method: 'POST',
+              url: '/v1/responses',
+              body: req[:body]
+            })
+          end.join("\n")
+        end
+
+        # --- Input normalization ---
+
+        # Wraps a plain string in the Responses API input format.
+        def normalize_input(input)
+          case input
+          when String
+            [{ type: 'message', role: 'user', content: input }]
+          when Array
+            input
+          else
+            input
+          end
+        end
+
+        # --- Result parsing ---
+
+        # Parse JSONL output into an array of raw result hashes.
+        def parse_results(jsonl_string)
+          jsonl_string.each_line.filter_map do |line|
+            line = line.strip
+            next if line.empty?
+
+            JSON.parse(line)
+          end
+        end
+
+        # Parse JSONL output into a Hash of { custom_id => Message }.
+        # Reuses Chat.extract_output_text and Chat.extract_tool_calls to avoid duplication.
+        def parse_results_to_messages(jsonl_string)
+          results = parse_results(jsonl_string)
+          results.each_with_object({}) do |result, hash|
+            custom_id = result['custom_id']
+            response_body = result.dig('response', 'body')
+            next unless response_body
+
+            output = response_body['output'] || []
+            content = Chat.extract_output_text(output)
+            tool_calls = Chat.extract_tool_calls(output)
+            usage = response_body['usage'] || {}
+
+            hash[custom_id] = Message.new(
+              role: :assistant,
+              content: content,
+              tool_calls: tool_calls,
+              input_tokens: usage['input_tokens'],
+              output_tokens: usage['output_tokens'],
+              model_id: response_body['model']
+            )
+          end
+        end
+
+        # Parse a JSONL error file into an array of error hashes.
+        def parse_errors(jsonl_string)
+          results = parse_results(jsonl_string)
+          results.select { |r| r.dig('response', 'status_code')&.>= 400 }
+        end
+      end
+    end
+  end
+end
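For illustration, the JSONL builder above can be mirrored as a standalone sketch (this is not the gem's code; the sample requests are hypothetical): each queued request becomes one JSON object per line, the format the batch file upload expects.

```ruby
require 'json'

# Standalone mirror of a build_jsonl-style serializer: one JSON object per line.
def build_jsonl(requests)
  requests.map do |req|
    JSON.generate(custom_id: req[:custom_id], method: 'POST',
                  url: '/v1/responses', body: req[:body])
  end.join("\n")
end

requests = [
  { custom_id: 'request_0',
    body: { model: 'gpt-4o',
            input: [{ type: 'message', role: 'user', content: 'What is Ruby?' }] } },
  { custom_id: 'translate_1',
    body: { model: 'gpt-4o',
            input: [{ type: 'message', role: 'user', content: 'Translate: hello' }] } }
]

jsonl = build_jsonl(requests)
# Round-trip: each line parses back to a hash with string keys.
parsed = jsonl.each_line.map { |line| JSON.parse(line) }
```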
@@ -12,7 +12,7 @@ module RubyLLM
 
 module_function
 
-def render_payload(messages, tools:, temperature:, model:, stream: false, schema: nil, thinking: nil) # rubocop:disable Metrics/ParameterLists
+def render_payload(messages, tools:, temperature:, model:, stream: false, schema: nil, thinking: nil) # rubocop:disable Metrics/ParameterLists,Lint/UnusedMethodArgument
   # Extract system messages for instructions
   system_messages = messages.select { |m| m.role == :system }
   non_system_messages = messages.reject { |m| m.role == :system }
@@ -191,7 +191,7 @@
 end
 
 def shell_tool(environment_type: 'container_auto', container_id: nil,
-               network_policy: nil, memory_limit: nil)
+               network_policy: nil, memory_limit: nil)
   env = if container_id
           { type: 'container_reference', container_id: container_id }
         else
@@ -21,7 +21,7 @@ module RubyLLM
 # ws.connect
 # ws.create_response(model: 'gpt-4o', input: [...]) { |chunk| ... }
 # ws.disconnect
-class WebSocket
+class WebSocket # rubocop:disable Metrics/ClassLength
 WEBSOCKET_PATH = '/v1/responses'
 KNOWN_PARAMS = %i[store metadata compact_threshold context_management].freeze
 
@@ -96,7 +96,7 @@ module RubyLLM
 # @param payload [Hash] Responses API payload (model, input, tools, etc.)
 # @yield [RubyLLM::Chunk] each streamed chunk
 # @return [RubyLLM::Message] the assembled final message
-def call(payload, &block)
+def call(payload, &)
   ensure_connected!
   acquire_flight!
 
@@ -105,7 +105,7 @@
 
 envelope = { type: 'response.create', response: payload.except(:stream) }
 send_json(envelope)
-accumulate_response(queue, &block)
+accumulate_response(queue, &)
 ensure
 @mutex.synchronize { @message_queue = nil }
 release_flight!
@@ -122,7 +122,7 @@
 # @param extra [Hash] additional fields forwarded to the API
 # @yield [RubyLLM::Chunk] each streamed chunk
 # @return [RubyLLM::Message] the assembled final message
-def create_response(model:, input:, tools: nil, previous_response_id: nil, instructions: nil, **extra, &block)
+def create_response(model:, input:, tools: nil, previous_response_id: nil, instructions: nil, **extra, &block) # rubocop:disable Metrics/ParameterLists
   payload = build_standalone_payload(
     model: model, input: input, tools: tools,
     previous_response_id: previous_response_id,
@@ -221,7 +221,7 @@
   headers
 end
 
-def build_standalone_payload(model:, input:, tools: nil, previous_response_id: nil, instructions: nil, **extra)
+def build_standalone_payload(model:, input:, tools: nil, previous_response_id: nil, instructions: nil, **extra) # rubocop:disable Metrics/ParameterLists
   prev_id = previous_response_id || @last_response_id
   response = { model: model, input: input }
   response[:tools] = tools.map { |t| Tools.tool_for(t) } if tools&.any?
@@ -231,7 +231,7 @@
 State.apply_state_params(response, extra)
 Compaction.apply_compaction(response, extra)
 
-forwarded = extra.reject { |k, _| KNOWN_PARAMS.include?(k) }
+forwarded = extra.except(*KNOWN_PARAMS)
 response.merge(forwarded)
 end
@@ -20,7 +20,8 @@ module RubyLLM
 def complete(messages, tools:, temperature:, model:, params: {}, headers: {}, schema: nil, thinking: nil, &block) # rubocop:disable Metrics/ParameterLists
   if params[:transport]&.to_sym == :websocket
     ws_complete(messages, tools: tools, temperature: temperature, model: model,
-                params: params.except(:transport), schema: schema, thinking: thinking, &block)
+                params: params.except(:transport), schema: schema,
+                thinking: thinking, &block)
   else
     super
   end
@@ -145,9 +146,25 @@ module RubyLLM
   response.body
 end
 
+# --- Batch API ---
+
+# List batches
+# @param limit [Integer] Number of batches to return (default: 20)
+# @param after [String, nil] Cursor for pagination
+# @return [Hash] Batch listing with 'data' array
+def list_batches(limit: 20, after: nil)
+  url = Batches.batches_url
+  params = { limit: limit }
+  params[:after] = after if after
+  response = @connection.get(url) do |req|
+    req.params.merge!(params)
+  end
+  response.body
+end
+
 private
 
-def ws_complete(messages, tools:, temperature:, model:, params:, schema:, thinking:, &block)
+def ws_complete(messages, tools:, temperature:, model:, params:, schema:, thinking:, &block) # rubocop:disable Metrics/ParameterLists
   normalized_temperature = maybe_normalize_temperature(temperature, model)
 
   payload = Utils.deep_merge(
@@ -184,8 +201,6 @@ module RubyLLM
   end
 end
 
-public
-
 class << self
   def capabilities
     OpenAIResponses::Capabilities
@@ -19,6 +19,8 @@ require_relative 'ruby_llm/providers/openai_responses/state'
 require_relative 'ruby_llm/providers/openai_responses/background'
 require_relative 'ruby_llm/providers/openai_responses/compaction'
 require_relative 'ruby_llm/providers/openai_responses/containers'
+require_relative 'ruby_llm/providers/openai_responses/batches'
+require_relative 'ruby_llm/providers/openai_responses/batch'
 require_relative 'ruby_llm/providers/openai_responses/message_extension'
 require_relative 'ruby_llm/providers/openai_responses/model_registry'
 require_relative 'ruby_llm/providers/openai_responses/active_record_extension'
@@ -37,7 +39,7 @@ RubyLLM::Providers::OpenAIResponses::ModelRegistry.register_all!
 module RubyLLM
   # ResponsesAPI namespace for direct access to helpers and version
   module ResponsesAPI
-    VERSION = '0.4.1'
+    VERSION = '0.5.0'
 
     # Shorthand access to built-in tool helpers
     BuiltInTools = Providers::OpenAIResponses::BuiltInTools
@@ -45,6 +47,22 @@ module RubyLLM
     Background = Providers::OpenAIResponses::Background
     Compaction = Providers::OpenAIResponses::Compaction
     Containers = Providers::OpenAIResponses::Containers
+    Batches = Providers::OpenAIResponses::Batches
+    Batch = Providers::OpenAIResponses::Batch
     WebSocket = Providers::OpenAIResponses::WebSocket
   end
+
+  # Create a new Batch for bulk request processing
+  def self.batch(...)
+    Providers::OpenAIResponses::Batch.new(...)
+  end
+
+  # List existing batches
+  def self.batches(provider: :openai_responses, **kwargs)
+    slug = provider.to_sym
+    provider_class = Provider.providers[slug]
+    raise Error.new(nil, "Unknown provider: #{slug}") unless provider_class
+
+    provider_class.new(config).list_batches(**kwargs)
+  end
 end
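One more editorial sketch: the error-file filter (`parse_errors`) shown earlier keeps only JSONL entries whose response carries an HTTP error status. A standalone mirror of that logic, with hypothetical sample entries (not the gem's code):

```ruby
require 'json'

# Build a small JSONL error-file stand-in: one JSON object per line.
jsonl = [
  { custom_id: 'ok_1',  response: { status_code: 200 } },
  { custom_id: 'bad_1', response: { status_code: 429 } },
  { custom_id: 'bad_2', response: { status_code: 500 } }
].map { |h| JSON.generate(h) }.join("\n")

# Parse each non-empty line, as the parse_results helper does.
results = jsonl.each_line.filter_map do |line|
  stripped = line.strip
  next if stripped.empty?

  JSON.parse(stripped)
end

# Keep only entries with an HTTP error status (>= 400).
errors = results.select { |r| r.dig('response', 'status_code').to_i >= 400 }
```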
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: ruby_llm-responses_api
 version: !ruby/object:Gem::Version
-  version: 0.4.1
+  version: 0.5.0
 platform: ruby
 authors:
 - Chris Hasinski
@@ -153,6 +153,8 @@ files:
 - lib/ruby_llm/providers/openai_responses/active_record_extension.rb
 - lib/ruby_llm/providers/openai_responses/background.rb
 - lib/ruby_llm/providers/openai_responses/base.rb
+- lib/ruby_llm/providers/openai_responses/batch.rb
+- lib/ruby_llm/providers/openai_responses/batches.rb
 - lib/ruby_llm/providers/openai_responses/built_in_tools.rb
 - lib/ruby_llm/providers/openai_responses/capabilities.rb
 - lib/ruby_llm/providers/openai_responses/chat.rb