openai 0.11.0 → 0.12.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 252a9ce9833b0a9f66be94b76081f34716e348ebca732e63c3e16c105ed42ea1
-  data.tar.gz: d4d8b36822ee74af77508ec8e48b472d72d619333e468055695158e242c49760
+  metadata.gz: 38a07c18c8f0197c2edc1400139623aefc1c85e8a0fbf167a86afc53f1399a75
+  data.tar.gz: 13b400c12a9d2bef1ebcf2722413e3e6b1915992df539542f4a73f084b839cfa
 SHA512:
-  metadata.gz: f27c40e40df727c8da570a4f6b1e20f72a87baa710efd3957bb0fbbcc23eb1dd78d03b1bcc3c3cf5684d4db4b062c392c1c3d970093c8e0fac89ef85db053db3
-  data.tar.gz: 21fe44d1605c6196d7321dcb7efa7d951a2523ba71859eab1bf242ff3314afe0cce25201c21b141f9f3902b684cb577ba51c520d12602d1279eccb770769e3a4
+  metadata.gz: d5a62aacb54b1e50526da647c3ea91adbe8dbf69bd8c4f0df08a0a6c9fa4b49ebb79bd4b51bf5efc55d06d514af286b7658c153a6498298761dedc2f73c42c20
+  data.tar.gz: 9c7a2bbe1053d11780882a11a9617ea4cd2ee4a6648f278003be7da107c0e5dc281beeb6dbce3db168f4c9338d161742ab48bc03711a8c4f05e540f11e6f40d0
data/CHANGELOG.md CHANGED
@@ -1,5 +1,21 @@
 # Changelog
 
+## 0.12.0 (2025-07-03)
+
+Full Changelog: [v0.11.0...v0.12.0](https://github.com/openai/openai-ruby/compare/v0.11.0...v0.12.0)
+
+### Features
+
+* ensure partial JSON in structured output is handled gracefully ([#740](https://github.com/openai/openai-ruby/issues/740)) ([5deec70](https://github.com/openai/openai-ruby/commit/5deec708bad1ceb1a03e9aa65f737e3f89ce6455))
+* responses streaming helpers ([#721](https://github.com/openai/openai-ruby/issues/721)) ([c2f4270](https://github.com/openai/openai-ruby/commit/c2f42708e41492f1c22886735079973510fb2789))
+
+
+### Chores
+
+* **ci:** only run for pushes and fork pull requests ([97538e2](https://github.com/openai/openai-ruby/commit/97538e266f6f9a0e09669453539ee52ca56f4f59))
+* **internal:** allow streams to also be unwrapped on a per-row basis ([49bdadf](https://github.com/openai/openai-ruby/commit/49bdadfc0d3400664de0c8e7cfd59879faec45b8))
+* **internal:** minor refactoring of json helpers ([#744](https://github.com/openai/openai-ruby/issues/744)) ([f13edee](https://github.com/openai/openai-ruby/commit/f13edee16325be04335443cb886a7c2024155fd9))
+
 ## 0.11.0 (2025-06-26)
 
 Full Changelog: [v0.10.0...v0.11.0](https://github.com/openai/openai-ruby/compare/v0.10.0...v0.11.0)
data/README.md CHANGED
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
 <!-- x-release-please-start-version -->
 
 ```ruby
-gem "openai", "~> 0.11.0"
+gem "openai", "~> 0.12.0"
 ```
 
 <!-- x-release-please-end -->
@@ -42,16 +42,14 @@ puts(chat_completion)
 
 We provide support for streaming responses using Server-Sent Events (SSE).
 
-**coming soon:** `openai.chat.completions.stream` will soon come with Python SDK-style higher-level streaming responses support.
-
 ```ruby
-stream = openai.chat.completions.stream_raw(
-  messages: [{role: "user", content: "Say this is a test"}],
+stream = openai.responses.stream(
+  input: "Write a haiku about OpenAI.",
   model: :"gpt-4.1"
 )
 
-stream.each do |completion|
-  puts(completion)
+stream.each do |event|
+  puts(event.type)
 end
 ```
 
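The object returned by `openai.responses.stream` is the new `OpenAI::Streaming::ResponseStream` added later in this diff. As a hedged sketch (assuming a configured `openai` client), its helpers can consume the stream at a higher level than raw events:

```ruby
stream = openai.responses.stream(
  input: "Write a haiku about OpenAI.",
  model: :"gpt-4.1"
)

# `text` is a lazy enumerator over just the text deltas
# (see ResponseStream#text in lib/openai/helpers/streaming/response_stream.rb below).
stream.text.each { |delta| print(delta) }
```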
data/lib/openai/helpers/streaming/events.rb ADDED
@@ -0,0 +1,23 @@
+# frozen_string_literal: true
+
+module OpenAI
+  module Helpers
+    module Streaming
+      class ResponseTextDeltaEvent < OpenAI::Models::Responses::ResponseTextDeltaEvent
+        required :snapshot, String
+      end
+
+      class ResponseTextDoneEvent < OpenAI::Models::Responses::ResponseTextDoneEvent
+        optional :parsed, Object
+      end
+
+      class ResponseFunctionCallArgumentsDeltaEvent < OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent
+        required :snapshot, String
+      end
+
+      class ResponseCompletedEvent < OpenAI::Models::Responses::ResponseCompletedEvent
+        required :response, OpenAI::Models::Responses::Response
+      end
+    end
+  end
+end
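These wrapper events extend the raw model events with accumulated state: the delta events gain a required `snapshot` string, and the done/completed events gain `parsed`/`response`. A sketch of how a consumer might use `snapshot` (names as defined above):

```ruby
stream.each do |event|
  case event
  when OpenAI::Streaming::ResponseTextDeltaEvent
    # `delta` is only the newest fragment; `snapshot` is all text received so far.
    puts("+#{event.delta.length} chars, #{event.snapshot.length} total")
  end
end
```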
data/lib/openai/helpers/streaming/response_stream.rb ADDED
@@ -0,0 +1,232 @@
+# frozen_string_literal: true
+
+require_relative "events"
+
+module OpenAI
+  module Helpers
+    module Streaming
+      class ResponseStream
+        include OpenAI::Internal::Type::BaseStream
+
+        def initialize(raw_stream:, text_format: nil, starting_after: nil)
+          @text_format = text_format
+          @starting_after = starting_after
+          @raw_stream = raw_stream
+          @iterator = iterator
+          @state = ResponseStreamState.new(
+            text_format: text_format
+          )
+        end
+
+        def until_done
+          each {} # rubocop:disable Lint/EmptyBlock
+          self
+        end
+
+        def text
+          OpenAI::Internal::Util.chain_fused(@iterator) do |yielder|
+            @iterator.each do |event|
+              case event
+              when OpenAI::Streaming::ResponseTextDeltaEvent
+                yielder << event.delta
+              end
+            end
+          end
+        end
+
+        def get_final_response
+          until_done
+          response = @state.completed_response
+          raise RuntimeError.new("Didn't receive a 'response.completed' event") unless response
+          response
+        end
+
+        def get_output_text
+          response = get_final_response
+          text_parts = []
+
+          response.output.each do |output|
+            next unless output.type == :message
+
+            output.content.each do |content|
+              next unless content.type == :output_text
+              text_parts << content.text
+            end
+          end
+
+          text_parts.join
+        end
+
+        private
+
+        def iterator
+          @iterator ||= OpenAI::Internal::Util.chain_fused(@raw_stream) do |y|
+            @raw_stream.each do |raw_event|
+              events_to_yield = @state.handle_event(raw_event)
+              events_to_yield.each do |event|
+                if @starting_after.nil? || event.sequence_number > @starting_after
+                  y << event
+                end
+              end
+            end
+          end
+        end
+      end
+
+      class ResponseStreamState
+        attr_reader :completed_response
+
+        def initialize(text_format:)
+          @current_snapshot = nil
+          @completed_response = nil
+          @text_format = text_format
+        end
+
+        def handle_event(event)
+          @current_snapshot = accumulate_event(
+            event: event,
+            current_snapshot: @current_snapshot
+          )
+
+          events_to_yield = []
+
+          case event
+          when OpenAI::Models::Responses::ResponseTextDeltaEvent
+            output = @current_snapshot.output[event.output_index]
+            assert_type(output, :message)
+
+            content = output.content[event.content_index]
+            assert_type(content, :output_text)
+
+            events_to_yield << OpenAI::Streaming::ResponseTextDeltaEvent.new(
+              content_index: event.content_index,
+              delta: event.delta,
+              item_id: event.item_id,
+              output_index: event.output_index,
+              sequence_number: event.sequence_number,
+              type: event.type,
+              snapshot: content.text
+            )
+
+          when OpenAI::Models::Responses::ResponseTextDoneEvent
+            output = @current_snapshot.output[event.output_index]
+            assert_type(output, :message)
+
+            content = output.content[event.content_index]
+            assert_type(content, :output_text)
+
+            parsed = parse_structured_text(content.text)
+
+            events_to_yield << OpenAI::Streaming::ResponseTextDoneEvent.new(
+              content_index: event.content_index,
+              item_id: event.item_id,
+              output_index: event.output_index,
+              sequence_number: event.sequence_number,
+              text: event.text,
+              type: event.type,
+              parsed: parsed
+            )
+
+          when OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent
+            output = @current_snapshot.output[event.output_index]
+            assert_type(output, :function_call)
+
+            events_to_yield << OpenAI::Streaming::ResponseFunctionCallArgumentsDeltaEvent.new(
+              delta: event.delta,
+              item_id: event.item_id,
+              output_index: event.output_index,
+              sequence_number: event.sequence_number,
+              type: event.type,
+              snapshot: output.arguments
+            )
+
+          when OpenAI::Models::Responses::ResponseCompletedEvent
+            events_to_yield << OpenAI::Streaming::ResponseCompletedEvent.new(
+              sequence_number: event.sequence_number,
+              type: event.type,
+              response: event.response
+            )
+
+          else
+            # Pass through other events unchanged.
+            events_to_yield << event
+          end
+
+          events_to_yield
+        end
+
+        def accumulate_event(event:, current_snapshot:)
+          if current_snapshot.nil?
+            unless event.is_a?(OpenAI::Models::Responses::ResponseCreatedEvent)
+              raise "Expected first event to be response.created"
+            end
+
+            # Use the converter to create a new, isolated copy of the response object.
+            # This ensures proper type validation and prevents shared object references.
+            return OpenAI::Internal::Type::Converter.coerce(
+              OpenAI::Models::Responses::Response,
+              event.response
+            )
+          end
+
+          case event
+          when OpenAI::Models::Responses::ResponseOutputItemAddedEvent
+            current_snapshot.output.push(event.item)
+
+          when OpenAI::Models::Responses::ResponseContentPartAddedEvent
+            output = current_snapshot.output[event.output_index]
+            if output && output.type == :message
+              output.content.push(event.part)
+              current_snapshot.output[event.output_index] = output
+            end
+
+          when OpenAI::Models::Responses::ResponseTextDeltaEvent
+            output = current_snapshot.output[event.output_index]
+            if output && output.type == :message
+              content = output.content[event.content_index]
+              if content && content.type == :output_text
+                content.text += event.delta
+                output.content[event.content_index] = content
+                current_snapshot.output[event.output_index] = output
+              end
+            end
+
+          when OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent
+            output = current_snapshot.output[event.output_index]
+            if output && output.type == :function_call
+              output.arguments = (output.arguments || "") + event.delta
+              current_snapshot.output[event.output_index] = output
+            end
+
+          when OpenAI::Models::Responses::ResponseCompletedEvent
+            @completed_response = event.response
+          end
+
+          current_snapshot
+        end
+
+        private
+
+        def assert_type(object, expected_type)
+          return if object && object.type == expected_type
+          actual_type = object ? object.type : "nil"
+          raise "Invalid state: expected #{expected_type} but got #{actual_type}"
+        end
+
+        def parse_structured_text(text)
+          return nil unless @text_format && text
+
+          begin
+            parsed = JSON.parse(text, symbolize_names: true)
+            OpenAI::Internal::Type::Converter.coerce(@text_format, parsed)
+          rescue JSON::ParserError => e
+            raise RuntimeError.new(
+              "Failed to parse structured text as JSON for #{@text_format}: #{e.message}. " \
+              "Raw text: #{text.inspect}"
+            )
+          end
+        end
+      end
+    end
+  end
+end
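Besides per-event iteration, `ResponseStream` can drain itself and hand back the final aggregate. A sketch using the methods defined above:

```ruby
stream = openai.responses.stream(
  input: "Write a haiku about OpenAI.",
  model: :"gpt-4.1"
)

response = stream.get_final_response # drains via until_done, then returns the
                                     # snapshot captured from response.completed
puts(stream.get_output_text)         # joins every output_text part of the response
```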
data/lib/openai/helpers/structured_output/parsed_json.rb ADDED
@@ -0,0 +1,39 @@
+# frozen_string_literal: true
+
+module OpenAI
+  module Helpers
+    module StructuredOutput
+      # @abstract
+      #
+      # Like OpenAI::Internal::Type::Unknown, but for parsed JSON values, which can be incomplete or malformed.
+      class ParsedJson < OpenAI::Internal::Type::Unknown
+        class << self
+          # @api private
+          #
+          # No coercion needed for Unknown type.
+          #
+          # @param value [Object]
+          #
+          # @param state [Hash{Symbol=>Object}] .
+          #
+          # @option state [Boolean] :translate_names
+          #
+          # @option state [Boolean] :strictness
+          #
+          # @option state [Hash{Symbol=>Object}] :exactness
+          #
+          # @option state [Class<StandardError>] :error
+          #
+          # @option state [Integer] :branched
+          #
+          # @return [Object]
+          def coerce(value, state:)
+            (state[:error] = value) if value.is_a?(StandardError)
+
+            super
+          end
+        end
+      end
+    end
+  end
+end
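Because the `parsed` fields are re-typed as `ParsedJson` (see the model changes below), a failed parse can surface the `JSON::ParserError` on the field instead of raising mid-request. A defensive sketch, assuming the error object is what lands in `parsed` (hypothetical `message` variable holding a parsed chat completion message):

```ruby
parsed = message.parsed
if parsed.is_a?(StandardError)
  # Partial or malformed JSON: the parse error is carried instead of a value.
  warn("structured output unavailable: #{parsed.message}")
else
  # Safe to use the coerced value.
  pp(parsed)
end
```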
@@ -47,7 +47,8 @@ module OpenAI
             message: message
           )
         in decoded
-          y << OpenAI::Internal::Type::Converter.coerce(@model, decoded)
+          unwrapped = OpenAI::Internal::Util.dig(decoded, @unwrap)
+          y << OpenAI::Internal::Type::Converter.coerce(@model, unwrapped)
         end
       else
       end
@@ -471,6 +471,7 @@ module OpenAI
       self.class.validate!(req)
       model = req.fetch(:model) { OpenAI::Internal::Type::Unknown }
       opts = req[:options].to_h
+      unwrap = req[:unwrap]
       OpenAI::RequestOptions.validate!(opts)
       request = build_request(req.except(:options), opts)
       url = request.fetch(:url)
@@ -487,11 +488,18 @@ module OpenAI
       decoded = OpenAI::Internal::Util.decode_content(response, stream: stream)
       case req
       in {stream: Class => st}
-        st.new(model: model, url: url, status: status, response: response, stream: decoded)
+        st.new(
+          model: model,
+          url: url,
+          status: status,
+          response: response,
+          unwrap: unwrap,
+          stream: decoded
+        )
       in {page: Class => page}
         page.new(client: self, req: req, headers: response, page_data: decoded)
       else
-        unwrapped = OpenAI::Internal::Util.dig(decoded, req[:unwrap])
+        unwrapped = OpenAI::Internal::Util.dig(decoded, unwrap)
         OpenAI::Internal::Type::Converter.coerce(model, unwrapped)
       end
     end
@@ -64,12 +64,14 @@ module OpenAI
      # @param url [URI::Generic]
      # @param status [Integer]
      # @param response [Net::HTTPResponse]
+     # @param unwrap [Symbol, Integer, Array<Symbol, Integer>, Proc]
      # @param stream [Enumerable<Object>]
-     def initialize(model:, url:, status:, response:, stream:)
+     def initialize(model:, url:, status:, response:, unwrap:, stream:)
       @model = model
       @url = url
       @status = status
       @response = response
+      @unwrap = unwrap
       @stream = stream
       @iterator = iterator
 
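`unwrap` is now threaded through stream construction so each decoded row can be transformed before coercion (the per-row unwrapping chore from the changelog). Besides `Util.dig` path forms, a proc works; this mirrors the proc that `Responses#stream` passes later in this diff:

```ruby
# Internal-plumbing sketch: post-process only `response.completed` rows,
# pass every other row through untouched.
unwrap = ->(raw) do
  if raw[:type] == "response.completed" && raw[:response]
    parse_structured_outputs!(raw[:response], model, tool_models)
  end
  raw
end
```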
@@ -14,7 +14,7 @@ module OpenAI
       # The parsed contents of the message, if JSON schema is specified.
       #
       # @return [Object, nil]
-      optional :parsed, OpenAI::Internal::Type::Unknown
+      optional :parsed, OpenAI::StructuredOutput::ParsedJson
 
       # @!attribute refusal
       # The refusal message generated by the model.
@@ -44,7 +44,7 @@ module OpenAI
       # The parsed contents of the arguments.
       #
       # @return [Object, nil]
-      required :parsed, OpenAI::Internal::Type::Unknown
+      required :parsed, OpenAI::StructuredOutput::ParsedJson
 
       # @!attribute name
       # The name of the function to call.
@@ -14,7 +14,7 @@ module OpenAI
       # The parsed contents of the arguments.
       #
       # @return [Object, nil]
-      required :parsed, OpenAI::Internal::Type::Unknown
+      required :parsed, OpenAI::StructuredOutput::ParsedJson
 
       # @!attribute call_id
       # The unique ID of the function tool call generated by the model.
@@ -23,7 +23,7 @@ module OpenAI
       # The parsed contents of the output, if JSON schema is specified.
       #
       # @return [Object, nil]
-      optional :parsed, OpenAI::Internal::Type::Unknown
+      optional :parsed, OpenAI::StructuredOutput::ParsedJson
 
       # @!attribute type
       # The type of the output text. Always `output_text`.
@@ -104,7 +104,6 @@ module OpenAI
          raise ArgumentError.new(message)
        end
 
-       # rubocop:disable Layout/LineLength
        model = nil
        tool_models = {}
        case parsed
@@ -157,11 +156,16 @@ module OpenAI
        else
        end
 
+       # rubocop:disable Metrics/BlockLength
        unwrap = ->(raw) do
          if model.is_a?(OpenAI::StructuredOutput::JsonSchemaConverter)
            raw[:choices]&.each do |choice|
              message = choice.fetch(:message)
-             parsed = JSON.parse(message.fetch(:content), symbolize_names: true)
+             begin
+               parsed = JSON.parse(message.fetch(:content), symbolize_names: true)
+             rescue JSON::ParserError => e
+               parsed = e
+             end
              coerced = OpenAI::Internal::Type::Converter.coerce(model, parsed)
              message.store(:parsed, coerced)
            end
@@ -171,7 +175,11 @@ module OpenAI
            func = tool_call.fetch(:function)
            next if (model = tool_models[func.fetch(:name)]).nil?
 
-           parsed = JSON.parse(func.fetch(:arguments), symbolize_names: true)
+           begin
+             parsed = JSON.parse(func.fetch(:arguments), symbolize_names: true)
+           rescue JSON::ParserError => e
+             parsed = e
+           end
            coerced = OpenAI::Internal::Type::Converter.coerce(model, parsed)
            func.store(:parsed, coerced)
          end
@@ -179,7 +187,7 @@ module OpenAI
 
          raw
        end
-       # rubocop:enable Layout/LineLength
+       # rubocop:enable Metrics/BlockLength
 
        @client.request(
          method: :post,
@@ -81,81 +81,12 @@ module OpenAI
          raise ArgumentError.new(message)
        end
 
-       model = nil
-       tool_models = {}
-       case parsed
-       in {text: OpenAI::StructuredOutput::JsonSchemaConverter => model}
-         parsed.update(
-           text: {
-             format: {
-               type: :json_schema,
-               strict: true,
-               name: model.name.split("::").last,
-               schema: model.to_json_schema
-             }
-           }
-         )
-       in {text: {format: OpenAI::StructuredOutput::JsonSchemaConverter => model}}
-         parsed.fetch(:text).update(
-           format: {
-             type: :json_schema,
-             strict: true,
-             name: model.name.split("::").last,
-             schema: model.to_json_schema
-           }
-         )
-       in {text: {format: {type: :json_schema, schema: OpenAI::StructuredOutput::JsonSchemaConverter => model}}}
-         parsed.dig(:text, :format).store(:schema, model.to_json_schema)
-       in {tools: Array => tools}
-         mapped = tools.map do |tool|
-           case tool
-           in OpenAI::StructuredOutput::JsonSchemaConverter
-             name = tool.name.split("::").last
-             tool_models.store(name, tool)
-             {
-               type: :function,
-               strict: true,
-               name: name,
-               parameters: tool.to_json_schema
-             }
-           in {type: :function, parameters: OpenAI::StructuredOutput::JsonSchemaConverter => params}
-             func = tool.fetch(:function)
-             name = func[:name] ||= params.name.split("::").last
-             tool_models.store(name, params)
-             func.update(parameters: params.to_json_schema)
-             tool
-           else
-             tool
-           end
-         end
-         tools.replace(mapped)
-       else
-       end
+       model, tool_models = get_structured_output_models(parsed)
 
        unwrap = ->(raw) do
-         if model.is_a?(OpenAI::StructuredOutput::JsonSchemaConverter)
-           raw[:output]
-             &.flat_map do |output|
-               next [] unless output[:type] == "message"
-               output[:content].to_a
-             end
-             &.each do |content|
-               next unless content[:type] == "output_text"
-               parsed = JSON.parse(content.fetch(:text), symbolize_names: true)
-               coerced = OpenAI::Internal::Type::Converter.coerce(model, parsed)
-               content.store(:parsed, coerced)
-             end
-         end
-         raw[:output]&.each do |output|
-           next unless output[:type] == "function_call"
-           next if (model = tool_models[output.fetch(:name)]).nil?
-           parsed = JSON.parse(output.fetch(:arguments), symbolize_names: true)
-           coerced = OpenAI::Internal::Type::Converter.coerce(model, parsed)
-           output.store(:parsed, coerced)
-         end
-
-         raw
+         parse_structured_outputs!(raw, model, tool_models)
        end
+
        @client.request(
          method: :post,
          path: "responses",
@@ -166,8 +97,112 @@ module OpenAI
        )
      end
 
-     def stream
-       raise NotImplementedError.new("higher level helpers are coming soon!")
+     # See {OpenAI::Resources::Responses#create} for non-streaming counterpart.
+     #
+     # Some parameter documentation has been truncated, see
+     # {OpenAI::Models::Responses::ResponseCreateParams} for more details.
+     #
+     # Creates a model response. Provide
+     # [text](https://platform.openai.com/docs/guides/text) or
+     # [image](https://platform.openai.com/docs/guides/images) inputs to generate
+     # [text](https://platform.openai.com/docs/guides/text) or
+     # [JSON](https://platform.openai.com/docs/guides/structured-outputs) outputs. Have
+     # the model call your own
+     # [custom code](https://platform.openai.com/docs/guides/function-calling) or use
+     # built-in [tools](https://platform.openai.com/docs/guides/tools) like
+     # [web search](https://platform.openai.com/docs/guides/tools-web-search) or
+     # [file search](https://platform.openai.com/docs/guides/tools-file-search) to use
+     # your own data as input for the model's response.
+     #
+     # @overload stream(input:, model:, background: nil, include: nil, instructions: nil, max_output_tokens: nil, metadata: nil, parallel_tool_calls: nil, previous_response_id: nil, prompt: nil, reasoning: nil, service_tier: nil, store: nil, temperature: nil, text: nil, tool_choice: nil, tools: nil, top_p: nil, truncation: nil, user: nil, request_options: {})
+     #
+     # @param input [String, Array<OpenAI::Models::Responses::EasyInputMessage, OpenAI::Models::Responses::ResponseInputItem::Message, OpenAI::Models::Responses::ResponseOutputMessage, OpenAI::Models::Responses::ResponseFileSearchToolCall, OpenAI::Models::Responses::ResponseComputerToolCall, OpenAI::Models::Responses::ResponseInputItem::ComputerCallOutput, OpenAI::Models::Responses::ResponseFunctionWebSearch, OpenAI::Models::Responses::ResponseFunctionToolCall, OpenAI::Models::Responses::ResponseInputItem::FunctionCallOutput, OpenAI::Models::Responses::ResponseReasoningItem, OpenAI::Models::Responses::ResponseInputItem::ImageGenerationCall, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCallOutput, OpenAI::Models::Responses::ResponseInputItem::McpListTools, OpenAI::Models::Responses::ResponseInputItem::McpApprovalRequest, OpenAI::Models::Responses::ResponseInputItem::McpApprovalResponse, OpenAI::Models::Responses::ResponseInputItem::McpCall, OpenAI::Models::Responses::ResponseInputItem::ItemReference>] Text, image, or file inputs to the model, used to generate a response.
+     #
+     # @param model [String, Symbol, OpenAI::Models::ChatModel, OpenAI::Models::ResponsesModel::ResponsesOnlyModel] Model ID used to generate the response, like `gpt-4o` or `o3`. OpenAI
+     #
+     # @param background [Boolean, nil] Whether to run the model response in the background.
+     #
+     # @param include [Array<Symbol, OpenAI::Models::Responses::ResponseIncludable>, nil] Specify additional output data to include in the model response. Currently
+     #
+     # @param instructions [String, nil] A system (or developer) message inserted into the model's context.
+     #
+     # @param max_output_tokens [Integer, nil] An upper bound for the number of tokens that can be generated for a response, in
+     #
+     # @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. This can be
+     #
+     # @param parallel_tool_calls [Boolean, nil] Whether to allow the model to run tool calls in parallel.
+     #
+     # @param previous_response_id [String, nil] The unique ID of the previous response to the model. Use this to resume streams from a given response.
+     #
+     # @param prompt [OpenAI::Models::Responses::ResponsePrompt, nil] Reference to a prompt template and its variables.
+     #
+     # @param reasoning [OpenAI::Models::Reasoning, nil] **o-series models only**
+     #
+     # @param service_tier [Symbol, OpenAI::Models::Responses::ResponseCreateParams::ServiceTier, nil] Specifies the latency tier to use for processing the request. This parameter is
+     #
+     # @param store [Boolean, nil] Whether to store the generated model response for later retrieval via
+     #
+     # @param temperature [Float, nil] What sampling temperature to use, between 0 and 2. Higher values like 0.8 will m
+     #
+     # @param text [OpenAI::Models::Responses::ResponseTextConfig] Configuration options for a text response from the model. Can be plain
+     #
+     # @param tool_choice [Symbol, OpenAI::Models::Responses::ToolChoiceOptions, OpenAI::Models::Responses::ToolChoiceTypes, OpenAI::Models::Responses::ToolChoiceFunction] How the model should select which tool (or tools) to use when generating
+     #
+     # @param tools [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
+     #
+     # @param top_p [Float, nil] An alternative to sampling with temperature, called nucleus sampling,
+     #
+     # @param truncation [Symbol, OpenAI::Models::Responses::ResponseCreateParams::Truncation, nil] The truncation strategy to use for the model response.
+     #
+     # @param user [String] A stable identifier for your end-users.
+     #
+     # @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}, nil]
+     #
+     # @return [OpenAI::Helpers::Streaming::ResponseStream]
+     #
+     # @see OpenAI::Models::Responses::ResponseCreateParams
+     def stream(params)
+       parsed, options = OpenAI::Responses::ResponseCreateParams.dump_request(params)
+       starting_after, previous_response_id = parsed.values_at(:starting_after, :previous_response_id)
+
+       if starting_after && !previous_response_id
+         raise ArgumentError, "starting_after can only be used with previous_response_id"
+       end
+       model, tool_models = get_structured_output_models(parsed)
+
+       if previous_response_id
+         retrieve_params = {}
+         retrieve_params[:include] = params[:include] if params[:include]
+         retrieve_params[:request_options] = params[:request_options] if params[:request_options]
+
+         raw_stream = retrieve_streaming(previous_response_id, retrieve_params)
+       else
+         unwrap = ->(raw) do
+           if raw[:type] == "response.completed" && raw[:response]
+             parse_structured_outputs!(raw[:response], model, tool_models)
+           end
+           raw
+         end
+
+         parsed[:stream] = true
+
+         raw_stream = @client.request(
+           method: :post,
+           path: "responses",
+           headers: {"accept" => "text/event-stream"},
+           body: parsed,
+           stream: OpenAI::Internal::Stream,
+           model: OpenAI::Models::Responses::ResponseStreamEvent,
+           unwrap: unwrap,
+           options: options
+         )
+       end
+
+       OpenAI::Streaming::ResponseStream.new(
+         raw_stream: raw_stream,
+         text_format: model,
+         starting_after: starting_after
+       )
      end
 
      # See {OpenAI::Resources::Responses#create} for non-streaming counterpart.
@@ -207,7 +242,7 @@
      #
      # @param parallel_tool_calls [Boolean, nil] Whether to allow the model to run tool calls in parallel.
      #
-     # @param previous_response_id [String, nil] The unique ID of the previous response to the model. Use this to
+     # @param previous_response_id [String, nil] The unique ID of the previous response to the model. Use this to resume streams from a given response.
      #
      # @param prompt [OpenAI::Models::Responses::ResponsePrompt, nil] Reference to a prompt template and its variables.
      #
@@ -245,6 +280,7 @@
        raise ArgumentError.new(message)
      end
      parsed.store(:stream, true)
+
      @client.request(
        method: :post,
        path: "responses",
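Together with the `starting_after` guard in `stream` above, this documents the resume path: pass `previous_response_id` to re-attach via `retrieve_streaming`, optionally skipping events already seen. A sketch with hypothetical values:

```ruby
# Resume a previously started response stream; only events with
# sequence_number > 42 are yielded (filtered in ResponseStream#iterator).
stream = openai.responses.stream(
  previous_response_id: "resp_123", # hypothetical ID
  starting_after: 42
)
stream.each { |event| puts(event.type) }
```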
@@ -378,6 +414,143 @@
        @client = client
        @input_items = OpenAI::Resources::Responses::InputItems.new(client: client)
      end
+
+     private
+
+     # Post-processes raw API responses to parse and coerce structured outputs into typed Ruby objects.
+     #
+     # This method enhances the raw API response by parsing JSON content in structured outputs
+     # (both text outputs and function/tool calls) and converting them to their corresponding
+     # Ruby types using the JsonSchemaConverter models identified during request preparation.
+     #
+     # @param raw [Hash] The raw API response hash that will be mutated with parsed data
+     # @param model [JsonSchemaConverter|nil] The converter for structured text output, if specified
+     # @param tool_models [Hash<String, JsonSchemaConverter>] Hash mapping tool names to their converters
+     # @return [Hash] The mutated raw response with added :parsed fields containing typed Ruby objects
+     #
+     # The method performs two main transformations:
+     # 1. For structured text outputs: Finds output_text content, parses the JSON, and coerces it
+     #    to the model type, adding the result as content[:parsed]
+     # 2. For function/tool calls: Looks up the tool's converter by name, parses the arguments JSON,
+     #    and coerces it to the appropriate type, adding the result as output[:parsed]
+     def parse_structured_outputs!(raw, model, tool_models)
+       if model.is_a?(OpenAI::StructuredOutput::JsonSchemaConverter)
+         raw[:output]
+           &.flat_map do |output|
+             next [] unless output[:type] == "message"
+             output[:content].to_a
+           end
+           &.each do |content|
+             next unless content[:type] == "output_text"
+             begin
+               parsed = JSON.parse(content.fetch(:text), symbolize_names: true)
+             rescue JSON::ParserError => e
+               parsed = e
+             end
+             coerced = OpenAI::Internal::Type::Converter.coerce(model, parsed)
+             content.store(:parsed, coerced)
+           end
+       end
+       raw[:output]&.each do |output|
+         next unless output[:type] == "function_call"
+         next if (model = tool_models[output.fetch(:name)]).nil?
+         begin
+           parsed = JSON.parse(output.fetch(:arguments), symbolize_names: true)
+         rescue JSON::ParserError => e
+           parsed = e
+         end
+         coerced = OpenAI::Internal::Type::Converter.coerce(model, parsed)
+         output.store(:parsed, coerced)
+       end
+
+       raw
+     end
+
+     # Extracts structured output models from request parameters and converts them to JSON Schema format.
+     #
+     # This method processes the parsed request parameters to identify any JsonSchemaConverter instances
+     # that define expected output schemas. It transforms these Ruby schema definitions into the JSON
+     # Schema format required by the OpenAI API, enabling type-safe structured outputs.
+     #
+     # @param parsed [Hash] The parsed request parameters that may contain structured output definitions
+     # @return [Array<(JsonSchemaConverter|nil, Hash)>] A tuple containing:
+     #   - model: The JsonSchemaConverter for structured text output (or nil if not specified)
+     #   - tool_models: Hash mapping tool names to their JsonSchemaConverter models
+     #
+     # The method handles multiple ways structured outputs can be specified:
+     # - Direct text format: { text: JsonSchemaConverter }
+     # - Nested text format: { text: { format: JsonSchemaConverter } }
+     # - Deep nested format: { text: { format: { type: :json_schema, schema: JsonSchemaConverter } } }
+     # - Tool parameters: { tools: [JsonSchemaConverter, ...] } or tools with parameters as converters
+     def get_structured_output_models(parsed)
+       model = nil
+       tool_models = {}
+
+       case parsed
+       in {text: OpenAI::StructuredOutput::JsonSchemaConverter => model}
+         parsed.update(
+           text: {
+             format: {
+               type: :json_schema,
+               strict: true,
+               name: model.name.split("::").last,
+               schema: model.to_json_schema
+             }
+           }
+         )
+       in {text: {format: OpenAI::StructuredOutput::JsonSchemaConverter => model}}
+         parsed.fetch(:text).update(
+           format: {
+             type: :json_schema,
+             strict: true,
+             name: model.name.split("::").last,
+             schema: model.to_json_schema
+           }
+         )
+       in {text: {format: {type: :json_schema,
+                           schema: OpenAI::StructuredOutput::JsonSchemaConverter => model}}}
+         parsed.dig(:text, :format).store(:schema, model.to_json_schema)
+       in {tools: Array => tools}
+         # rubocop:disable Metrics/BlockLength
+         mapped = tools.map do |tool|
+           case tool
+           in OpenAI::StructuredOutput::JsonSchemaConverter
+             name = tool.name.split("::").last
+             tool_models.store(name, tool)
+             {
+               type: :function,
+               strict: true,
+               name: name,
+               parameters: tool.to_json_schema
+             }
+           in {type: :function, parameters: OpenAI::StructuredOutput::JsonSchemaConverter => params}
+             func = tool.fetch(:function)
+             name = func[:name] ||= params.name.split("::").last
+             tool_models.store(name, params)
+             func.update(parameters: params.to_json_schema)
+             tool
+           in {type: _, function: {parameters: OpenAI::StructuredOutput::JsonSchemaConverter => params, **}}
+             name = tool[:function][:name] || params.name.split("::").last
+             tool_models.store(name, params)
+             tool[:function][:parameters] = params.to_json_schema
+             tool
+           in {type: _, function: Hash => func} if func[:parameters].is_a?(Class) && func[:parameters] < OpenAI::Internal::Type::BaseModel
+             params = func[:parameters]
+             name = func[:name] || params.name.split("::").last
+             tool_models.store(name, params)
+             func[:parameters] = params.to_json_schema
+             tool
+           else
+             tool
+           end
+         end
+         # rubocop:enable Metrics/BlockLength
+         tools.replace(mapped)
+       else
+       end
+
+       [model, tool_models]
+     end
    end
  end
 end
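These helpers are what let a `JsonSchemaConverter` be passed directly as the `text:` format (see the widened rbi signature further below), including when streaming. A hedged end-to-end sketch with a hypothetical model class:

```ruby
# Hypothetical structured-output model; uses the same field DSL as
# the events defined in lib/openai/helpers/streaming/events.rb above.
class Haiku < OpenAI::BaseModel
  required :text, String
end

stream = openai.responses.stream(
  input: "Write a haiku about OpenAI.",
  model: :"gpt-4.1",
  text: Haiku
)

stream.each do |event|
  if event.is_a?(OpenAI::Streaming::ResponseTextDoneEvent)
    puts(event.parsed.text) # coerced to Haiku via parse_structured_text
  end
end
```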
data/lib/openai/streaming.rb ADDED
@@ -0,0 +1,5 @@
+# frozen_string_literal: true
+
+module OpenAI
+  Streaming = Helpers::Streaming
+end
data/lib/openai/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module OpenAI
-  VERSION = "0.11.0"
+  VERSION = "0.12.0"
 end
data/lib/openai.rb CHANGED
@@ -56,6 +56,7 @@ require_relative "openai/helpers/structured_output/enum_of"
 require_relative "openai/helpers/structured_output/union_of"
 require_relative "openai/helpers/structured_output/array_of"
 require_relative "openai/helpers/structured_output/base_model"
+require_relative "openai/helpers/structured_output/parsed_json"
 require_relative "openai/helpers/structured_output"
 require_relative "openai/structured_output"
 require_relative "openai/models/reasoning_effort"
@@ -539,3 +540,6 @@ require_relative "openai/resources/vector_stores"
 require_relative "openai/resources/vector_stores/file_batches"
 require_relative "openai/resources/vector_stores/files"
 require_relative "openai/resources/webhooks"
+require_relative "openai/helpers/streaming/events"
+require_relative "openai/helpers/streaming/response_stream"
+require_relative "openai/streaming"
data/rbi/openai/helpers/streaming/events.rbi ADDED
@@ -0,0 +1,31 @@
+# typed: strong
+
+module OpenAI
+  module Helpers
+    module Streaming
+      class ResponseTextDeltaEvent < OpenAI::Models::Responses::ResponseTextDeltaEvent
+        sig { returns(String) }
+        def snapshot
+        end
+      end
+
+      class ResponseTextDoneEvent < OpenAI::Models::Responses::ResponseTextDoneEvent
+        sig { returns(T.untyped) }
+        def parsed
+        end
+      end
+
+      class ResponseFunctionCallArgumentsDeltaEvent < OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent
+        sig { returns(String) }
+        def snapshot
+        end
+      end
+
+      class ResponseCompletedEvent < OpenAI::Models::Responses::ResponseCompletedEvent
+        sig { returns(OpenAI::Models::Responses::Response) }
+        def response
+        end
+      end
+    end
+  end
+end
data/rbi/openai/helpers/streaming/response_stream.rbi ADDED
@@ -0,0 +1,104 @@
+# typed: strong
+
+module OpenAI
+  module Helpers
+    module Streaming
+      class ResponseStream
+        include OpenAI::Internal::Type::BaseStream
+
+        # Define the type union for streaming events that can be yielded
+        ResponseStreamEvent =
+          T.type_alias do
+            T.any(
+              OpenAI::Streaming::ResponseTextDeltaEvent,
+              OpenAI::Streaming::ResponseTextDoneEvent,
+              OpenAI::Streaming::ResponseCompletedEvent,
+              OpenAI::Streaming::ResponseFunctionCallArgumentsDeltaEvent,
+              # Pass through other raw events
+              OpenAI::Models::Responses::ResponseStreamEvent::Variants
+            )
+          end
+
+        Message = type_member { { fixed: ResponseStreamEvent } }
+        Elem = type_member { { fixed: ResponseStreamEvent } }
+
+        sig do
+          params(
+            raw_stream: T.untyped,
+            text_format: T.untyped,
+            starting_after: T.nilable(Integer)
+          ).void
+        end
+        def initialize(raw_stream:, text_format:, starting_after:)
+        end
+
+        sig { void }
+        def close
+        end
+
+        sig { returns(T.self_type) }
+        def until_done
+        end
+
+        sig { returns(OpenAI::Models::Responses::Response) }
+        def get_final_response
+        end
+
+        sig { returns(String) }
+        def get_output_text
+        end
+
+        sig { returns(T::Enumerator::Lazy[String]) }
+        def text
+        end
+
+        # Override the each method to properly type the yielded events
+        sig do
+          params(
+            block: T.nilable(T.proc.params(event: ResponseStreamEvent).void)
+          ).returns(T.any(T.self_type, T::Enumerator[ResponseStreamEvent]))
+        end
+        def each(&block)
+        end
+
+        private
+
+        sig { returns(T.untyped) }
+        def iterator
+        end
+      end
+
+      class ResponseStreamState
+        sig { returns(T.nilable(OpenAI::Models::Responses::Response)) }
+        attr_reader :completed_response
+
+        sig { params(text_format: T.untyped).void }
+        def initialize(text_format:)
+        end
+
+        sig { params(event: T.untyped).returns(T::Array[T.untyped]) }
+        def handle_event(event)
+        end
+
+        sig do
+          params(
+            event: T.untyped,
+            current_snapshot: T.nilable(OpenAI::Models::Responses::Response)
+          ).returns(OpenAI::Models::Responses::Response)
+        end
+        def accumulate_event(event:, current_snapshot:)
+        end
+
+        private
+
+        sig { params(text: T.nilable(String)).returns(T.untyped) }
+        def parse_structured_text(text)
+        end
+
+        sig { params(object: T.untyped, expected_type: Symbol).void }
+        def assert_type(object, expected_type)
+        end
+      end
+    end
+  end
+end
@@ -52,10 +52,17 @@ module OpenAI
          url: URI::Generic,
          status: Integer,
          response: Net::HTTPResponse,
+         unwrap:
+           T.any(
+             Symbol,
+             Integer,
+             T::Array[T.any(Symbol, Integer)],
+             T.proc.params(arg0: T.anything).returns(T.anything)
+           ),
          stream: T::Enumerable[Message]
        ).void
      end
-     def initialize(model:, url:, status:, response:, stream:)
+     def initialize(model:, url:, status:, response:, unwrap:, stream:)
      end
 
      # @api private
@@ -275,7 +275,13 @@ module OpenAI
          ),
          store: T.nilable(T::Boolean),
          temperature: T.nilable(Float),
-         text: OpenAI::Responses::ResponseTextConfig::OrHash,
+         text:
+           T.nilable(
+             T.any(
+               OpenAI::Responses::ResponseTextConfig::OrHash,
+               OpenAI::StructuredOutput::JsonSchemaConverter
+             )
+           ),
          tool_choice:
            T.any(
              OpenAI::Responses::ToolChoiceOptions::OrSymbol,
@@ -462,6 +468,125 @@ module OpenAI
        )
      end
 
+     # See {OpenAI::Resources::Responses#create} for non-streaming counterpart.
+     #
+     # Creates a model response with a higher-level streaming interface that provides
+     # helper methods for processing events and aggregating stream outputs.
+     sig do
+       params(
+         input:
+           T.nilable(OpenAI::Responses::ResponseCreateParams::Input::Variants),
+         model:
+           T.nilable(
+             T.any(
+               String,
+               OpenAI::ChatModel::OrSymbol,
+               OpenAI::ResponsesModel::ResponsesOnlyModel::OrSymbol
+             )
+           ),
+         background: T.nilable(T::Boolean),
+         include:
+           T.nilable(
+             T::Array[OpenAI::Responses::ResponseIncludable::OrSymbol]
+           ),
+         instructions: T.nilable(String),
+         max_output_tokens: T.nilable(Integer),
+         metadata: T.nilable(T::Hash[Symbol, String]),
+         parallel_tool_calls: T.nilable(T::Boolean),
+         previous_response_id: T.nilable(String),
+         prompt: T.nilable(OpenAI::Responses::ResponsePrompt::OrHash),
+         reasoning: T.nilable(OpenAI::Reasoning::OrHash),
+         service_tier:
+           T.nilable(
+             OpenAI::Responses::ResponseCreateParams::ServiceTier::OrSymbol
+           ),
+         store: T.nilable(T::Boolean),
+         temperature: T.nilable(Float),
+         text:
+           T.any(
+             OpenAI::Responses::ResponseTextConfig::OrHash,
+             OpenAI::StructuredOutput::JsonSchemaConverter
+           ),
+         tool_choice:
+           T.any(
+             OpenAI::Responses::ToolChoiceOptions::OrSymbol,
+             OpenAI::Responses::ToolChoiceTypes::OrHash,
+             OpenAI::Responses::ToolChoiceFunction::OrHash
+           ),
+         tools:
+           T.nilable(
+             T::Array[
+               T.any(
+                 OpenAI::Responses::FunctionTool::OrHash,
+                 OpenAI::Responses::FileSearchTool::OrHash,
+                 OpenAI::Responses::ComputerTool::OrHash,
+                 OpenAI::Responses::Tool::Mcp::OrHash,
+                 OpenAI::Responses::Tool::CodeInterpreter::OrHash,
+                 OpenAI::Responses::Tool::ImageGeneration::OrHash,
+                 OpenAI::Responses::Tool::LocalShell::OrHash,
+                 OpenAI::Responses::WebSearchTool::OrHash,
+                 OpenAI::StructuredOutput::JsonSchemaConverter
+               )
+             ]
+           ),
+         top_p: T.nilable(Float),
+         truncation:
+           T.nilable(
+             OpenAI::Responses::ResponseCreateParams::Truncation::OrSymbol
+           ),
+         user: T.nilable(String),
+         starting_after: T.nilable(Integer),
+         request_options: T.nilable(OpenAI::RequestOptions::OrHash)
+       ).returns(OpenAI::Streaming::ResponseStream)
+     end
+     def stream(
+       # Text, image, or file inputs to the model, used to generate a response.
+       input: nil,
+       # Model ID used to generate the response, like `gpt-4o` or `o3`.
+       model: nil,
+       # Whether to run the model response in the background.
+       background: nil,
+       # Specify additional output data to include in the model response.
+       include: nil,
+       # A system (or developer) message inserted into the model's context.
+       instructions: nil,
+       # An upper bound for the number of tokens that can be generated for a response.
+       max_output_tokens: nil,
+       # Set of 16 key-value pairs that can be attached to an object.
+       metadata: nil,
+       # Whether to allow the model to run tool calls in parallel.
+       parallel_tool_calls: nil,
+       # The unique ID of the previous response to the model. Use this to create
+       # multi-turn conversations.
+       previous_response_id: nil,
+       # Reference to a prompt template and its variables.
+       prompt: nil,
+       # Configuration options for reasoning models.
+       reasoning: nil,
+       # Specifies the latency tier to use for processing the request.
+       service_tier: nil,
+       # Whether to store the generated model response for later retrieval via API.
+       store: nil,
+       # What sampling temperature to use, between 0 and 2.
+       temperature: nil,
+       # Configuration options for a text response from the model.
+       text: nil,
+       # How the model should select which tool (or tools) to use when generating a response.
+       tool_choice: nil,
+       # An array of tools the model may call while generating a response.
+       tools: nil,
+       # An alternative to sampling with temperature, called nucleus sampling.
+       top_p: nil,
+       # The truncation strategy to use for the model response.
+       truncation: nil,
+       # A stable identifier for your end-users.
+       user: nil,
+       # The sequence number of the event after which to start streaming (for resuming streams).
+       starting_after: nil,
+       request_options: {}
+     )
+     end
+
      # See {OpenAI::Resources::Responses#retrieve_streaming} for streaming counterpart.
      #
      # Retrieves a model response with the given ID.
data/rbi/openai/streaming.rbi ADDED
@@ -0,0 +1,5 @@
+# typed: strong
+
+module OpenAI
+  Streaming = OpenAI::Helpers::Streaming
+end
@@ -23,6 +23,10 @@ module OpenAI
        url: URI::Generic,
        status: Integer,
        response: top,
+       unwrap: Symbol
+             | Integer
+             | ::Array[Symbol | Integer]
+             | ^(top arg0) -> top,
        stream: Enumerable[Message]
      ) -> void
 
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: openai
 version: !ruby/object:Gem::Version
-  version: 0.11.0
+  version: 0.12.0
 platform: ruby
 authors:
 - OpenAI
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2025-06-26 00:00:00.000000000 Z
+date: 2025-07-08 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: connection_pool
@@ -39,12 +39,15 @@ files:
 - lib/openai/client.rb
 - lib/openai/errors.rb
 - lib/openai/file_part.rb
+- lib/openai/helpers/streaming/events.rb
+- lib/openai/helpers/streaming/response_stream.rb
 - lib/openai/helpers/structured_output.rb
 - lib/openai/helpers/structured_output/array_of.rb
 - lib/openai/helpers/structured_output/base_model.rb
 - lib/openai/helpers/structured_output/boolean.rb
 - lib/openai/helpers/structured_output/enum_of.rb
 - lib/openai/helpers/structured_output/json_schema_converter.rb
+- lib/openai/helpers/structured_output/parsed_json.rb
 - lib/openai/helpers/structured_output/union_of.rb
 - lib/openai/internal.rb
 - lib/openai/internal/cursor_page.rb
@@ -547,12 +550,15 @@ files:
 - lib/openai/resources/vector_stores/file_batches.rb
 - lib/openai/resources/vector_stores/files.rb
 - lib/openai/resources/webhooks.rb
+- lib/openai/streaming.rb
 - lib/openai/structured_output.rb
 - lib/openai/version.rb
 - manifest.yaml
 - rbi/openai/client.rbi
 - rbi/openai/errors.rbi
 - rbi/openai/file_part.rbi
+- rbi/openai/helpers/streaming/events.rbi
+- rbi/openai/helpers/streaming/response_stream.rbi
 - rbi/openai/helpers/structured_output.rbi
 - rbi/openai/helpers/structured_output/array_of.rbi
 - rbi/openai/helpers/structured_output/base_model.rbi
@@ -1061,6 +1067,7 @@ files:
 - rbi/openai/resources/vector_stores/file_batches.rbi
 - rbi/openai/resources/vector_stores/files.rbi
 - rbi/openai/resources/webhooks.rbi
+- rbi/openai/streaming.rbi
 - rbi/openai/structured_output.rbi
 - rbi/openai/version.rbi
 - sig/openai/client.rbs