openai 0.14.0 → 0.15.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +18 -0
- data/README.md +3 -3
- data/lib/openai/models/audio/speech_create_params.rb +0 -9
- data/lib/openai/models/chat/chat_completion.rb +2 -2
- data/lib/openai/models/chat/chat_completion_audio_param.rb +0 -9
- data/lib/openai/models/chat/chat_completion_chunk.rb +2 -2
- data/lib/openai/models/chat/completion_create_params.rb +2 -2
- data/lib/openai/models/function_definition.rb +1 -1
- data/lib/openai/models/image_edit_params.rb +4 -1
- data/lib/openai/models/image_generate_params.rb +4 -1
- data/lib/openai/models/images_response.rb +2 -5
- data/lib/openai/models/responses/response.rb +2 -2
- data/lib/openai/models/responses/response_code_interpreter_tool_call.rb +5 -3
- data/lib/openai/models/responses/response_create_params.rb +2 -2
- data/lib/openai/models/responses/response_mcp_call_arguments_delta_event.rb +9 -4
- data/lib/openai/models/responses/response_mcp_call_arguments_done_event.rb +7 -4
- data/lib/openai/models/responses/response_mcp_call_completed_event.rb +17 -1
- data/lib/openai/models/responses/response_mcp_call_failed_event.rb +17 -1
- data/lib/openai/models/responses/response_mcp_list_tools_completed_event.rb +17 -1
- data/lib/openai/models/responses/response_mcp_list_tools_failed_event.rb +17 -1
- data/lib/openai/models/responses/response_mcp_list_tools_in_progress_event.rb +17 -1
- data/lib/openai/models/responses/response_stream_event.rb +1 -7
- data/lib/openai/models/responses/response_text_delta_event.rb +66 -1
- data/lib/openai/models/responses/response_text_done_event.rb +66 -1
- data/lib/openai/resources/images.rb +6 -6
- data/lib/openai/resources/responses.rb +2 -2
- data/lib/openai/version.rb +1 -1
- data/lib/openai.rb +0 -2
- data/rbi/openai/models/audio/speech_create_params.rbi +0 -9
- data/rbi/openai/models/chat/chat_completion.rbi +3 -3
- data/rbi/openai/models/chat/chat_completion_audio_param.rbi +0 -15
- data/rbi/openai/models/chat/chat_completion_chunk.rbi +3 -3
- data/rbi/openai/models/chat/completion_create_params.rbi +3 -3
- data/rbi/openai/models/function_definition.rbi +2 -2
- data/rbi/openai/models/image_edit_params.rbi +6 -0
- data/rbi/openai/models/image_generate_params.rbi +6 -0
- data/rbi/openai/models/images_response.rbi +2 -2
- data/rbi/openai/models/responses/response.rbi +3 -3
- data/rbi/openai/models/responses/response_code_interpreter_tool_call.rbi +6 -3
- data/rbi/openai/models/responses/response_create_params.rbi +3 -3
- data/rbi/openai/models/responses/response_mcp_call_arguments_delta_event.rbi +7 -5
- data/rbi/openai/models/responses/response_mcp_call_arguments_done_event.rbi +5 -5
- data/rbi/openai/models/responses/response_mcp_call_completed_event.rbi +28 -4
- data/rbi/openai/models/responses/response_mcp_call_failed_event.rbi +28 -4
- data/rbi/openai/models/responses/response_mcp_list_tools_completed_event.rbi +28 -4
- data/rbi/openai/models/responses/response_mcp_list_tools_failed_event.rbi +28 -4
- data/rbi/openai/models/responses/response_mcp_list_tools_in_progress_event.rbi +28 -4
- data/rbi/openai/models/responses/response_stream_event.rbi +0 -2
- data/rbi/openai/models/responses/response_text_delta_event.rbi +131 -0
- data/rbi/openai/models/responses/response_text_done_event.rbi +131 -0
- data/rbi/openai/resources/chat/completions.rbi +2 -2
- data/rbi/openai/resources/images.rbi +22 -10
- data/rbi/openai/resources/responses.rbi +2 -2
- data/sig/openai/models/audio/speech_create_params.rbs +0 -6
- data/sig/openai/models/chat/chat_completion_audio_param.rbs +0 -6
- data/sig/openai/models/responses/response_mcp_call_arguments_delta_event.rbs +4 -4
- data/sig/openai/models/responses/response_mcp_call_arguments_done_event.rbs +4 -4
- data/sig/openai/models/responses/response_mcp_call_completed_event.rbs +14 -1
- data/sig/openai/models/responses/response_mcp_call_failed_event.rbs +14 -1
- data/sig/openai/models/responses/response_mcp_list_tools_completed_event.rbs +14 -1
- data/sig/openai/models/responses/response_mcp_list_tools_failed_event.rbs +14 -1
- data/sig/openai/models/responses/response_mcp_list_tools_in_progress_event.rbs +10 -0
- data/sig/openai/models/responses/response_stream_event.rbs +0 -2
- data/sig/openai/models/responses/response_text_delta_event.rbs +52 -0
- data/sig/openai/models/responses/response_text_done_event.rbs +52 -0
- metadata +2 -8
- data/lib/openai/models/responses/response_reasoning_delta_event.rb +0 -60
- data/lib/openai/models/responses/response_reasoning_done_event.rb +0 -60
- data/rbi/openai/models/responses/response_reasoning_delta_event.rbi +0 -83
- data/rbi/openai/models/responses/response_reasoning_done_event.rbi +0 -83
- data/sig/openai/models/responses/response_reasoning_delta_event.rbs +0 -47
- data/sig/openai/models/responses/response_reasoning_done_event.rbs +0 -47
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e15e098317bf9151fffc0d83be9fd3ead36872a58ea53628f2d0dd3028735c78
+  data.tar.gz: 04a779ac9f0b4418138bf4a7216e01109e48d608d76c33fc83f76e9de82d574e
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: d3673d18e0d3cfcd0db2ddc4c9c45bc6da8dd86371a38d85f7ad181f45780e6a17b48b9478e621c47a17c1708a2ec775ae8b61c0f6eb39c6b9d6686e09edfb65
+  data.tar.gz: a7fc556be0b4ba6ea16e6a2ba37bb0e9f62c2f7e39344a372b02b4c54b752e5718e15bfea4ffd8c277dd8db4f6d19530dc29e1ee5aee3fb2fc9a253269944b8f
data/CHANGELOG.md
CHANGED
@@ -1,5 +1,23 @@
 # Changelog
 
+## 0.15.0 (2025-07-21)
+
+Full Changelog: [v0.14.0...v0.15.0](https://github.com/openai/openai-ruby/compare/v0.14.0...v0.15.0)
+
+### Features
+
+* **api:** manual updates ([fb53071](https://github.com/openai/openai-ruby/commit/fb530713d08a4ba49e8bdaecd9848674bb35c333))
+
+
+### Bug Fixes
+
+* **internal:** tests should use normalized property names ([801e9c2](https://github.com/openai/openai-ruby/commit/801e9c29f65e572a3b49f5cf7891d3053e1d087f))
+
+
+### Chores
+
+* **api:** event shapes more accurate ([29f32ce](https://github.com/openai/openai-ruby/commit/29f32cedf6112d38fe8de454658a5afd7ad0d2cb))
+
 ## 0.14.0 (2025-07-16)
 
 Full Changelog: [v0.13.1...v0.14.0](https://github.com/openai/openai-ruby/compare/v0.13.1...v0.14.0)
data/README.md
CHANGED
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
 <!-- x-release-please-start-version -->
 
 ```ruby
-gem "openai", "~> 0.
+gem "openai", "~> 0.15.0"
 ```
 
 <!-- x-release-please-end -->
@@ -443,7 +443,7 @@ You can provide typesafe request parameters like so:
 
 ```ruby
 openai.chat.completions.create(
-  messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(
+  messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
   model: :"gpt-4.1"
 )
 ```
@@ -459,7 +459,7 @@ openai.chat.completions.create(
 
 # You can also splat a full Params class:
 params = OpenAI::Chat::CompletionCreateParams.new(
-  messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(
+  messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
   model: :"gpt-4.1"
 )
 openai.chat.completions.create(**params)
data/lib/openai/models/audio/speech_create_params.rb
CHANGED
@@ -111,12 +111,6 @@ module OpenAI
 
 variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::ECHO }
 
-variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::FABLE }
-
-variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::ONYX }
-
-variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::NOVA }
-
 variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::SAGE }
 
 variant const: -> { OpenAI::Models::Audio::SpeechCreateParams::Voice::SHIMMER }
@@ -137,9 +131,6 @@ module OpenAI
 BALLAD = :ballad
 CORAL = :coral
 ECHO = :echo
-FABLE = :fable
-ONYX = :onyx
-NOVA = :nova
 SAGE = :sage
 SHIMMER = :shimmer
 VERSE = :verse
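The hunks above drop the `fable`, `onyx`, and `nova` voices from the `Voice` union. A minimal sketch of a pre-flight check a caller might add before upgrading — the helper and constant names are hypothetical, not gem API, and the "remaining" set below only covers the constants visible in this hunk (the file may define more voices above the hunk context):

```ruby
# Hypothetical upgrade guard, not part of the gem: reject voices removed in
# 0.15.0 before sending a speech request.
REMAINING_VOICES = %i[ballad coral echo sage shimmer verse].freeze
REMOVED_VOICES   = %i[fable onyx nova].freeze

def voice_still_supported?(voice)
  # Only voices still present in the diffed enum pass the check.
  REMAINING_VOICES.include?(voice.to_sym)
end
```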
data/lib/openai/models/chat/chat_completion.rb
CHANGED
@@ -44,7 +44,7 @@ module OpenAI
 # - If set to 'auto', then the request will be processed with the service tier
 # configured in the Project settings. Unless otherwise configured, the Project
 # will use 'default'.
-# - If set to 'default', then the
+# - If set to 'default', then the request will be processed with the standard
 # pricing and performance for the selected model.
 # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
 # 'priority', then the request will be processed with the corresponding service
@@ -193,7 +193,7 @@ module OpenAI
 # - If set to 'auto', then the request will be processed with the service tier
 # configured in the Project settings. Unless otherwise configured, the Project
 # will use 'default'.
-# - If set to 'default', then the
+# - If set to 'default', then the request will be processed with the standard
 # pricing and performance for the selected model.
 # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
 # 'priority', then the request will be processed with the corresponding service
data/lib/openai/models/chat/chat_completion_audio_param.rb
CHANGED
@@ -67,12 +67,6 @@ module OpenAI
 
 variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::ECHO }
 
-variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::FABLE }
-
-variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::ONYX }
-
-variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::NOVA }
-
 variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::SAGE }
 
 variant const: -> { OpenAI::Models::Chat::ChatCompletionAudioParam::Voice::SHIMMER }
@@ -93,9 +87,6 @@ module OpenAI
 BALLAD = :ballad
 CORAL = :coral
 ECHO = :echo
-FABLE = :fable
-ONYX = :onyx
-NOVA = :nova
 SAGE = :sage
 SHIMMER = :shimmer
 VERSE = :verse
data/lib/openai/models/chat/chat_completion_chunk.rb
CHANGED
@@ -43,7 +43,7 @@ module OpenAI
 # - If set to 'auto', then the request will be processed with the service tier
 # configured in the Project settings. Unless otherwise configured, the Project
 # will use 'default'.
-# - If set to 'default', then the
+# - If set to 'default', then the request will be processed with the standard
 # pricing and performance for the selected model.
 # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
 # 'priority', then the request will be processed with the corresponding service
@@ -376,7 +376,7 @@ module OpenAI
 # - If set to 'auto', then the request will be processed with the service tier
 # configured in the Project settings. Unless otherwise configured, the Project
 # will use 'default'.
-# - If set to 'default', then the
+# - If set to 'default', then the request will be processed with the standard
 # pricing and performance for the selected model.
 # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
 # 'priority', then the request will be processed with the corresponding service
data/lib/openai/models/chat/completion_create_params.rb
CHANGED
@@ -224,7 +224,7 @@ module OpenAI
 # - If set to 'auto', then the request will be processed with the service tier
 # configured in the Project settings. Unless otherwise configured, the Project
 # will use 'default'.
-# - If set to 'default', then the
+# - If set to 'default', then the request will be processed with the standard
 # pricing and performance for the selected model.
 # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
 # 'priority', then the request will be processed with the corresponding service
@@ -553,7 +553,7 @@ module OpenAI
 # - If set to 'auto', then the request will be processed with the service tier
 # configured in the Project settings. Unless otherwise configured, the Project
 # will use 'default'.
-# - If set to 'default', then the
+# - If set to 'default', then the request will be processed with the standard
 # pricing and performance for the selected model.
 # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
 # 'priority', then the request will be processed with the corresponding service
data/lib/openai/models/function_definition.rb
CHANGED
@@ -34,7 +34,7 @@ module OpenAI
 # set to true, the model will follow the exact schema defined in the `parameters`
 # field. Only a subset of JSON Schema is supported when `strict` is `true`. Learn
 # more about Structured Outputs in the
-# [function calling guide](docs/guides/function-calling).
+# [function calling guide](https://platform.openai.com/docs/guides/function-calling).
 #
 # @return [Boolean, nil]
 optional :strict, OpenAI::Internal::Type::Boolean, nil?: true
data/lib/openai/models/image_edit_params.rb
CHANGED
@@ -4,7 +4,7 @@ module OpenAI
 module Models
 # @see OpenAI::Resources::Images#edit
 #
-# @see OpenAI::Resources::Images#
+# @see OpenAI::Resources::Images#edit_stream_raw
 class ImageEditParams < OpenAI::Internal::Type::BaseModel
 extend OpenAI::Internal::Type::RequestParameters::Converter
 include OpenAI::Internal::Type::RequestParameters
@@ -92,6 +92,9 @@ module OpenAI
 # responses that return partial images. Value must be between 0 and 3. When set to
 # 0, the response will be a single image sent in one streaming event.
 #
+# Note that the final image may be sent before the full number of partial images
+# are generated if the full image is generated more quickly.
+#
 # @return [Integer, nil]
 optional :partial_images, Integer, nil?: true
 
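The new doc comment warns that the final image can arrive before all `partial_images` partial events have been emitted. A minimal consumer sketch of that behavior — the event names and hash shapes below are hypothetical stand-ins, not the gem's real event classes — showing why a consumer should key off the terminal event rather than counting partials:

```ruby
# Illustrative stream: the run requested 3 partials, but the final image
# arrived after only one partial was sent.
events = [
  { type: :partial_image, b64: "chunk-0" },
  { type: :completed,     b64: "final-image" }
]

partials = []
final = nil
events.each do |ev|
  case ev[:type]
  when :partial_image then partials << ev[:b64]
  when :completed     then final = ev[:b64]  # terminal condition, not a count
  end
end
```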
data/lib/openai/models/image_generate_params.rb
CHANGED
@@ -4,7 +4,7 @@ module OpenAI
 module Models
 # @see OpenAI::Resources::Images#generate
 #
-# @see OpenAI::Resources::Images#
+# @see OpenAI::Resources::Images#generate_stream_raw
 class ImageGenerateParams < OpenAI::Internal::Type::BaseModel
 extend OpenAI::Internal::Type::RequestParameters::Converter
 include OpenAI::Internal::Type::RequestParameters
@@ -71,6 +71,9 @@ module OpenAI
 # responses that return partial images. Value must be between 0 and 3. When set to
 # 0, the response will be a single image sent in one streaming event.
 #
+# Note that the final image may be sent before the full number of partial images
+# are generated if the full image is generated more quickly.
+#
 # @return [Integer, nil]
 optional :partial_images, Integer, nil?: true
 
data/lib/openai/models/images_response.rb
CHANGED
@@ -140,7 +140,7 @@ module OpenAI
 required :input_tokens_details, -> { OpenAI::ImagesResponse::Usage::InputTokensDetails }
 
 # @!attribute output_tokens
-# The number of
+# The number of output tokens generated by the model.
 #
 # @return [Integer]
 required :output_tokens, Integer
@@ -152,16 +152,13 @@ module OpenAI
 required :total_tokens, Integer
 
 # @!method initialize(input_tokens:, input_tokens_details:, output_tokens:, total_tokens:)
-# Some parameter documentations has been truncated, see
-# {OpenAI::Models::ImagesResponse::Usage} for more details.
-#
 # For `gpt-image-1` only, the token usage information for the image generation.
 #
 # @param input_tokens [Integer] The number of tokens (images and text) in the input prompt.
 #
 # @param input_tokens_details [OpenAI::Models::ImagesResponse::Usage::InputTokensDetails] The input tokens detailed information for the image generation.
 #
-# @param output_tokens [Integer] The number of
+# @param output_tokens [Integer] The number of output tokens generated by the model.
 #
 # @param total_tokens [Integer] The total number of tokens (images and text) used for the image generation.
 
data/lib/openai/models/responses/response.rb
CHANGED
@@ -186,7 +186,7 @@ module OpenAI
 # - If set to 'auto', then the request will be processed with the service tier
 # configured in the Project settings. Unless otherwise configured, the Project
 # will use 'default'.
-# - If set to 'default', then the
+# - If set to 'default', then the request will be processed with the standard
 # pricing and performance for the selected model.
 # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
 # 'priority', then the request will be processed with the corresponding service
@@ -401,7 +401,7 @@ module OpenAI
 # - If set to 'auto', then the request will be processed with the service tier
 # configured in the Project settings. Unless otherwise configured, the Project
 # will use 'default'.
-# - If set to 'default', then the
+# - If set to 'default', then the request will be processed with the standard
 # pricing and performance for the selected model.
 # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
 # 'priority', then the request will be processed with the corresponding service
data/lib/openai/models/responses/response_code_interpreter_tool_call.rb
CHANGED
@@ -34,7 +34,8 @@ module OpenAI
 nil?: true
 
 # @!attribute status
-# The status of the code interpreter tool call.
+# The status of the code interpreter tool call. Valid values are `in_progress`,
+# `completed`, `incomplete`, `interpreting`, and `failed`.
 #
 # @return [Symbol, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Status]
 required :status, enum: -> { OpenAI::Responses::ResponseCodeInterpreterToolCall::Status }
@@ -59,7 +60,7 @@ module OpenAI
 #
 # @param outputs [Array<OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Output::Logs, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Output::Image>, nil] The outputs generated by the code interpreter, such as logs or images.
 #
-# @param status [Symbol, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Status] The status of the code interpreter tool call.
+# @param status [Symbol, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Status] The status of the code interpreter tool call. Valid values are `in_progress`, `c
 #
 # @param type [Symbol, :code_interpreter_call] The type of the code interpreter tool call. Always `code_interpreter_call`.
 
@@ -121,7 +122,8 @@ module OpenAI
 # @return [Array(OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Output::Logs, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall::Output::Image)]
 end
 
-# The status of the code interpreter tool call.
+# The status of the code interpreter tool call. Valid values are `in_progress`,
+# `completed`, `incomplete`, `interpreting`, and `failed`.
 #
 # @see OpenAI::Models::Responses::ResponseCodeInterpreterToolCall#status
 module Status
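The updated comment enumerates the five status values for a code interpreter tool call. As a small sketch, a consumer could guard against unexpected values with a check like the following — the constant and helper are illustrative additions, not gem API:

```ruby
# The five documented values from the updated doc comment.
CODE_INTERPRETER_STATUSES = %i[in_progress completed incomplete interpreting failed].freeze

def known_code_interpreter_status?(status)
  CODE_INTERPRETER_STATUSES.include?(status)
end
```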
data/lib/openai/models/responses/response_create_params.rb
CHANGED
@@ -138,7 +138,7 @@ module OpenAI
 # - If set to 'auto', then the request will be processed with the service tier
 # configured in the Project settings. Unless otherwise configured, the Project
 # will use 'default'.
-# - If set to 'default', then the
+# - If set to 'default', then the request will be processed with the standard
 # pricing and performance for the selected model.
 # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
 # 'priority', then the request will be processed with the corresponding service
@@ -328,7 +328,7 @@ module OpenAI
 # - If set to 'auto', then the request will be processed with the service tier
 # configured in the Project settings. Unless otherwise configured, the Project
 # will use 'default'.
-# - If set to 'default', then the
+# - If set to 'default', then the request will be processed with the standard
 # pricing and performance for the selected model.
 # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
 # 'priority', then the request will be processed with the corresponding service
data/lib/openai/models/responses/response_mcp_call_arguments_delta_event.rb
CHANGED
@@ -5,10 +5,11 @@ module OpenAI
 module Responses
 class ResponseMcpCallArgumentsDeltaEvent < OpenAI::Internal::Type::BaseModel
 # @!attribute delta
-#
+# A JSON string containing the partial update to the arguments for the MCP tool
+# call.
 #
-# @return [
-required :delta,
+# @return [String]
+required :delta, String
 
 # @!attribute item_id
 # The unique identifier of the MCP tool call item being processed.
@@ -35,10 +36,14 @@ module OpenAI
 required :type, const: :"response.mcp_call_arguments.delta"
 
 # @!method initialize(delta:, item_id:, output_index:, sequence_number:, type: :"response.mcp_call_arguments.delta")
+# Some parameter documentations has been truncated, see
+# {OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent} for more
+# details.
+#
 # Emitted when there is a delta (partial update) to the arguments of an MCP tool
 # call.
 #
-# @param delta [
+# @param delta [String] A JSON string containing the partial update to the arguments for the MCP tool ca
 #
 # @param item_id [String] The unique identifier of the MCP tool call item being processed.
 #
data/lib/openai/models/responses/response_mcp_call_arguments_done_event.rb
CHANGED
@@ -5,10 +5,10 @@ module OpenAI
 module Responses
 class ResponseMcpCallArgumentsDoneEvent < OpenAI::Internal::Type::BaseModel
 # @!attribute arguments
-#
+# A JSON string containing the finalized arguments for the MCP tool call.
 #
-# @return [
-required :arguments,
+# @return [String]
+required :arguments, String
 
 # @!attribute item_id
 # The unique identifier of the MCP tool call item being processed.
@@ -35,9 +35,12 @@ module OpenAI
 required :type, const: :"response.mcp_call_arguments.done"
 
 # @!method initialize(arguments:, item_id:, output_index:, sequence_number:, type: :"response.mcp_call_arguments.done")
+# Some parameter documentations has been truncated, see
+# {OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent} for more details.
+#
 # Emitted when the arguments for an MCP tool call are finalized.
 #
-# @param arguments [
+# @param arguments [String] A JSON string containing the finalized arguments for the MCP tool call.
 #
 # @param item_id [String] The unique identifier of the MCP tool call item being processed.
 #
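Both MCP argument events are now typed as `String`: each `delta` carries a JSON string fragment, and the `.done` event's `arguments` carries the complete JSON string. A minimal consumer sketch using only the Ruby standard library — the fragment values are illustrative, not real API output:

```ruby
require "json"

# Accumulate the string fragments from successive delta events.
deltas = ['{"query":', ' "ruby gems"', '}']
buffer = +""
deltas.each { |d| buffer << d }

# The concatenated buffer matches what the .done event's `arguments`
# string would contain, and parses as JSON.
finalized_arguments = buffer
parsed = JSON.parse(finalized_arguments)
```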
data/lib/openai/models/responses/response_mcp_call_completed_event.rb
CHANGED
@@ -4,6 +4,18 @@ module OpenAI
 module Models
 module Responses
 class ResponseMcpCallCompletedEvent < OpenAI::Internal::Type::BaseModel
+# @!attribute item_id
+# The ID of the MCP tool call item that completed.
+#
+# @return [String]
+required :item_id, String
+
+# @!attribute output_index
+# The index of the output item that completed.
+#
+# @return [Integer]
+required :output_index, Integer
+
 # @!attribute sequence_number
 # The sequence number of this event.
 #
@@ -16,9 +28,13 @@ module OpenAI
 # @return [Symbol, :"response.mcp_call.completed"]
 required :type, const: :"response.mcp_call.completed"
 
-# @!method initialize(sequence_number:, type: :"response.mcp_call.completed")
+# @!method initialize(item_id:, output_index:, sequence_number:, type: :"response.mcp_call.completed")
 # Emitted when an MCP tool call has completed successfully.
 #
+# @param item_id [String] The ID of the MCP tool call item that completed.
+#
+# @param output_index [Integer] The index of the output item that completed.
+#
 # @param sequence_number [Integer] The sequence number of this event.
 #
 # @param type [Symbol, :"response.mcp_call.completed"] The type of the event. Always 'response.mcp_call.completed'.
data/lib/openai/models/responses/response_mcp_call_failed_event.rb
CHANGED
@@ -4,6 +4,18 @@ module OpenAI
 module Models
 module Responses
 class ResponseMcpCallFailedEvent < OpenAI::Internal::Type::BaseModel
+# @!attribute item_id
+# The ID of the MCP tool call item that failed.
+#
+# @return [String]
+required :item_id, String
+
+# @!attribute output_index
+# The index of the output item that failed.
+#
+# @return [Integer]
+required :output_index, Integer
+
 # @!attribute sequence_number
 # The sequence number of this event.
 #
@@ -16,9 +28,13 @@ module OpenAI
 # @return [Symbol, :"response.mcp_call.failed"]
 required :type, const: :"response.mcp_call.failed"
 
-# @!method initialize(sequence_number:, type: :"response.mcp_call.failed")
+# @!method initialize(item_id:, output_index:, sequence_number:, type: :"response.mcp_call.failed")
 # Emitted when an MCP tool call has failed.
 #
+# @param item_id [String] The ID of the MCP tool call item that failed.
+#
+# @param output_index [Integer] The index of the output item that failed.
+#
 # @param sequence_number [Integer] The sequence number of this event.
 #
 # @param type [Symbol, :"response.mcp_call.failed"] The type of the event. Always 'response.mcp_call.failed'.
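With `item_id` and `output_index` now present on the terminal MCP call events, a consumer can correlate completion and failure with the specific call item instead of inferring it from event order. A hedged sketch — the event hashes are illustrative stand-ins, not the gem's event classes:

```ruby
# Track the outcome of each MCP tool call keyed by its item_id.
results = {}
events = [
  { type: :"response.mcp_call.completed", item_id: "mcp_1", output_index: 0 },
  { type: :"response.mcp_call.failed",    item_id: "mcp_2", output_index: 1 }
]
events.each do |ev|
  results[ev[:item_id]] =
    ev[:type] == :"response.mcp_call.completed" ? :ok : :error
end
```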
data/lib/openai/models/responses/response_mcp_list_tools_completed_event.rb
CHANGED
@@ -4,6 +4,18 @@ module OpenAI
 module Models
 module Responses
 class ResponseMcpListToolsCompletedEvent < OpenAI::Internal::Type::BaseModel
+# @!attribute item_id
+# The ID of the MCP tool call item that produced this output.
+#
+# @return [String]
+required :item_id, String
+
+# @!attribute output_index
+# The index of the output item that was processed.
+#
+# @return [Integer]
+required :output_index, Integer
+
 # @!attribute sequence_number
 # The sequence number of this event.
 #
@@ -16,9 +28,13 @@ module OpenAI
 # @return [Symbol, :"response.mcp_list_tools.completed"]
 required :type, const: :"response.mcp_list_tools.completed"
 
-# @!method initialize(sequence_number:, type: :"response.mcp_list_tools.completed")
+# @!method initialize(item_id:, output_index:, sequence_number:, type: :"response.mcp_list_tools.completed")
 # Emitted when the list of available MCP tools has been successfully retrieved.
 #
+# @param item_id [String] The ID of the MCP tool call item that produced this output.
+#
+# @param output_index [Integer] The index of the output item that was processed.
+#
 # @param sequence_number [Integer] The sequence number of this event.
 #
 # @param type [Symbol, :"response.mcp_list_tools.completed"] The type of the event. Always 'response.mcp_list_tools.completed'.
data/lib/openai/models/responses/response_mcp_list_tools_failed_event.rb
CHANGED
@@ -4,6 +4,18 @@ module OpenAI
 module Models
 module Responses
 class ResponseMcpListToolsFailedEvent < OpenAI::Internal::Type::BaseModel
+# @!attribute item_id
+# The ID of the MCP tool call item that failed.
+#
+# @return [String]
+required :item_id, String
+
+# @!attribute output_index
+# The index of the output item that failed.
+#
+# @return [Integer]
+required :output_index, Integer
+
 # @!attribute sequence_number
 # The sequence number of this event.
 #
@@ -16,9 +28,13 @@ module OpenAI
 # @return [Symbol, :"response.mcp_list_tools.failed"]
 required :type, const: :"response.mcp_list_tools.failed"
 
-# @!method initialize(sequence_number:, type: :"response.mcp_list_tools.failed")
+# @!method initialize(item_id:, output_index:, sequence_number:, type: :"response.mcp_list_tools.failed")
 # Emitted when the attempt to list available MCP tools has failed.
 #
+# @param item_id [String] The ID of the MCP tool call item that failed.
+#
+# @param output_index [Integer] The index of the output item that failed.
+#
 # @param sequence_number [Integer] The sequence number of this event.
 #
 # @param type [Symbol, :"response.mcp_list_tools.failed"] The type of the event. Always 'response.mcp_list_tools.failed'.
data/lib/openai/models/responses/response_mcp_list_tools_in_progress_event.rb
CHANGED
@@ -4,6 +4,18 @@ module OpenAI
 module Models
 module Responses
 class ResponseMcpListToolsInProgressEvent < OpenAI::Internal::Type::BaseModel
+# @!attribute item_id
+# The ID of the MCP tool call item that is being processed.
+#
+# @return [String]
+required :item_id, String
+
+# @!attribute output_index
+# The index of the output item that is being processed.
+#
+# @return [Integer]
+required :output_index, Integer
+
 # @!attribute sequence_number
 # The sequence number of this event.
 #
@@ -16,10 +28,14 @@ module OpenAI
 # @return [Symbol, :"response.mcp_list_tools.in_progress"]
 required :type, const: :"response.mcp_list_tools.in_progress"
 
-# @!method initialize(sequence_number:, type: :"response.mcp_list_tools.in_progress")
+# @!method initialize(item_id:, output_index:, sequence_number:, type: :"response.mcp_list_tools.in_progress")
 # Emitted when the system is in the process of retrieving the list of available
 # MCP tools.
 #
+# @param item_id [String] The ID of the MCP tool call item that is being processed.
+#
+# @param output_index [Integer] The index of the output item that is being processed.
+#
 # @param sequence_number [Integer] The sequence number of this event.
 #
 # @param type [Symbol, :"response.mcp_list_tools.in_progress"] The type of the event. Always 'response.mcp_list_tools.in_progress'.
data/lib/openai/models/responses/response_stream_event.rb
CHANGED
@@ -191,12 +191,6 @@ module OpenAI
 # Emitted when a response is queued and waiting to be processed.
 variant :"response.queued", -> { OpenAI::Responses::ResponseQueuedEvent }
 
-# Emitted when there is a delta (partial update) to the reasoning content.
-variant :"response.reasoning.delta", -> { OpenAI::Responses::ResponseReasoningDeltaEvent }
-
-# Emitted when the reasoning content is finalized for an item.
-variant :"response.reasoning.done", -> { OpenAI::Responses::ResponseReasoningDoneEvent }
-
 # Emitted when there is a delta (partial update) to the reasoning summary content.
 variant :"response.reasoning_summary.delta",
 -> {
@@ -210,7 +204,7 @@ module OpenAI
 }
 
 # @!method self.variants
-# @return [Array(OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, 
OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, OpenAI::Models::Responses::
|
207
|
+
# @return [Array(OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, 
OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryDoneEvent)]
|
214
208
|
end
|
215
209
|
end
|
216
210
|
end
|
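Consumers that pattern-match on streamed event types should note the two removed variants above. A minimal sketch of such a dispatch (the method and handler names here are hypothetical, not part of the gem):

```ruby
# Hypothetical dispatcher over stream-event type symbols. As of openai 0.15.0,
# :"response.reasoning.delta" and :"response.reasoning.done" are no longer
# ResponseStreamEvent variants, while the reasoning-summary events remain.
def classify_stream_event(type)
  case type
  when :"response.reasoning_summary.delta"    then :reasoning_summary_delta
  when :"response.mcp_list_tools.in_progress" then :mcp_list_tools_in_progress
  when :"response.queued"                     then :queued
  else :unhandled # e.g. the removed :"response.reasoning.delta"
  end
end
```

A `case`/`when` on the type symbol with an `else` fallback degrades gracefully when variants are removed, whereas code that assumed every reasoning event type exists would need updating for this release.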