openai 0.13.1 → 0.15.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +26 -0
- data/README.md +3 -3
- data/lib/openai/models/audio/speech_create_params.rb +0 -9
- data/lib/openai/models/chat/chat_completion.rb +2 -2
- data/lib/openai/models/chat/chat_completion_audio_param.rb +0 -9
- data/lib/openai/models/chat/chat_completion_chunk.rb +2 -2
- data/lib/openai/models/chat/completion_create_params.rb +2 -2
- data/lib/openai/models/function_definition.rb +1 -1
- data/lib/openai/models/image_edit_completed_event.rb +198 -0
- data/lib/openai/models/image_edit_params.rb +39 -1
- data/lib/openai/models/image_edit_partial_image_event.rb +135 -0
- data/lib/openai/models/image_edit_stream_event.rb +21 -0
- data/lib/openai/models/image_gen_completed_event.rb +198 -0
- data/lib/openai/models/image_gen_partial_image_event.rb +135 -0
- data/lib/openai/models/image_gen_stream_event.rb +21 -0
- data/lib/openai/models/image_generate_params.rb +16 -1
- data/lib/openai/models/images_response.rb +2 -2
- data/lib/openai/models/responses/response.rb +2 -2
- data/lib/openai/models/responses/response_code_interpreter_tool_call.rb +5 -3
- data/lib/openai/models/responses/response_create_params.rb +2 -2
- data/lib/openai/models/responses/response_mcp_call_arguments_delta_event.rb +9 -4
- data/lib/openai/models/responses/response_mcp_call_arguments_done_event.rb +7 -4
- data/lib/openai/models/responses/response_mcp_call_completed_event.rb +17 -1
- data/lib/openai/models/responses/response_mcp_call_failed_event.rb +17 -1
- data/lib/openai/models/responses/response_mcp_list_tools_completed_event.rb +17 -1
- data/lib/openai/models/responses/response_mcp_list_tools_failed_event.rb +17 -1
- data/lib/openai/models/responses/response_mcp_list_tools_in_progress_event.rb +17 -1
- data/lib/openai/models/responses/response_output_refusal.rb +2 -2
- data/lib/openai/models/responses/response_stream_event.rb +1 -7
- data/lib/openai/models/responses/response_text_delta_event.rb +66 -1
- data/lib/openai/models/responses/response_text_done_event.rb +66 -1
- data/lib/openai/models/responses/tool.rb +30 -1
- data/lib/openai/models.rb +12 -0
- data/lib/openai/resources/images.rb +140 -2
- data/lib/openai/resources/responses.rb +2 -2
- data/lib/openai/version.rb +1 -1
- data/lib/openai.rb +6 -2
- data/rbi/openai/models/audio/speech_create_params.rbi +0 -9
- data/rbi/openai/models/chat/chat_completion.rbi +3 -3
- data/rbi/openai/models/chat/chat_completion_audio_param.rbi +0 -15
- data/rbi/openai/models/chat/chat_completion_chunk.rbi +3 -3
- data/rbi/openai/models/chat/completion_create_params.rbi +3 -3
- data/rbi/openai/models/function_definition.rbi +2 -2
- data/rbi/openai/models/image_edit_completed_event.rbi +346 -0
- data/rbi/openai/models/image_edit_params.rbi +57 -0
- data/rbi/openai/models/image_edit_partial_image_event.rbi +249 -0
- data/rbi/openai/models/image_edit_stream_event.rbi +22 -0
- data/rbi/openai/models/image_gen_completed_event.rbi +339 -0
- data/rbi/openai/models/image_gen_partial_image_event.rbi +243 -0
- data/rbi/openai/models/image_gen_stream_event.rbi +22 -0
- data/rbi/openai/models/image_generate_params.rbi +18 -0
- data/rbi/openai/models/images_response.rbi +2 -2
- data/rbi/openai/models/responses/response.rbi +3 -3
- data/rbi/openai/models/responses/response_code_interpreter_tool_call.rbi +6 -3
- data/rbi/openai/models/responses/response_create_params.rbi +3 -3
- data/rbi/openai/models/responses/response_mcp_call_arguments_delta_event.rbi +7 -5
- data/rbi/openai/models/responses/response_mcp_call_arguments_done_event.rbi +5 -5
- data/rbi/openai/models/responses/response_mcp_call_completed_event.rbi +28 -4
- data/rbi/openai/models/responses/response_mcp_call_failed_event.rbi +28 -4
- data/rbi/openai/models/responses/response_mcp_list_tools_completed_event.rbi +28 -4
- data/rbi/openai/models/responses/response_mcp_list_tools_failed_event.rbi +28 -4
- data/rbi/openai/models/responses/response_mcp_list_tools_in_progress_event.rbi +28 -4
- data/rbi/openai/models/responses/response_output_refusal.rbi +2 -2
- data/rbi/openai/models/responses/response_stream_event.rbi +0 -2
- data/rbi/openai/models/responses/response_text_delta_event.rbi +131 -0
- data/rbi/openai/models/responses/response_text_done_event.rbi +131 -0
- data/rbi/openai/models/responses/tool.rbi +61 -0
- data/rbi/openai/models.rbi +12 -0
- data/rbi/openai/resources/chat/completions.rbi +2 -2
- data/rbi/openai/resources/images.rbi +237 -0
- data/rbi/openai/resources/responses.rbi +2 -2
- data/sig/openai/models/audio/speech_create_params.rbs +0 -6
- data/sig/openai/models/chat/chat_completion_audio_param.rbs +0 -6
- data/sig/openai/models/image_edit_completed_event.rbs +150 -0
- data/sig/openai/models/image_edit_params.rbs +21 -0
- data/sig/openai/models/image_edit_partial_image_event.rbs +105 -0
- data/sig/openai/models/image_edit_stream_event.rbs +12 -0
- data/sig/openai/models/image_gen_completed_event.rbs +150 -0
- data/sig/openai/models/image_gen_partial_image_event.rbs +105 -0
- data/sig/openai/models/image_gen_stream_event.rbs +12 -0
- data/sig/openai/models/image_generate_params.rbs +5 -0
- data/sig/openai/models/responses/response_mcp_call_arguments_delta_event.rbs +4 -4
- data/sig/openai/models/responses/response_mcp_call_arguments_done_event.rbs +4 -4
- data/sig/openai/models/responses/response_mcp_call_completed_event.rbs +14 -1
- data/sig/openai/models/responses/response_mcp_call_failed_event.rbs +14 -1
- data/sig/openai/models/responses/response_mcp_list_tools_completed_event.rbs +14 -1
- data/sig/openai/models/responses/response_mcp_list_tools_failed_event.rbs +14 -1
- data/sig/openai/models/responses/response_mcp_list_tools_in_progress_event.rbs +10 -0
- data/sig/openai/models/responses/response_stream_event.rbs +0 -2
- data/sig/openai/models/responses/response_text_delta_event.rbs +52 -0
- data/sig/openai/models/responses/response_text_done_event.rbs +52 -0
- data/sig/openai/models/responses/tool.rbs +16 -0
- data/sig/openai/models.rbs +12 -0
- data/sig/openai/resources/images.rbs +38 -0
- metadata +20 -8
- data/lib/openai/models/responses/response_reasoning_delta_event.rb +0 -60
- data/lib/openai/models/responses/response_reasoning_done_event.rb +0 -60
- data/rbi/openai/models/responses/response_reasoning_delta_event.rbi +0 -83
- data/rbi/openai/models/responses/response_reasoning_done_event.rbi +0 -83
- data/sig/openai/models/responses/response_reasoning_delta_event.rbs +0 -47
- data/sig/openai/models/responses/response_reasoning_done_event.rbs +0 -47
data/lib/openai/models/responses/response_mcp_list_tools_failed_event.rb CHANGED

@@ -4,6 +4,18 @@ module OpenAI
   module Models
     module Responses
       class ResponseMcpListToolsFailedEvent < OpenAI::Internal::Type::BaseModel
+        # @!attribute item_id
+        #   The ID of the MCP tool call item that failed.
+        #
+        #   @return [String]
+        required :item_id, String
+
+        # @!attribute output_index
+        #   The index of the output item that failed.
+        #
+        #   @return [Integer]
+        required :output_index, Integer
+
         # @!attribute sequence_number
         #   The sequence number of this event.
         #
@@ -16,9 +28,13 @@ module OpenAI
         #   @return [Symbol, :"response.mcp_list_tools.failed"]
         required :type, const: :"response.mcp_list_tools.failed"

-        # @!method initialize(sequence_number:, type: :"response.mcp_list_tools.failed")
+        # @!method initialize(item_id:, output_index:, sequence_number:, type: :"response.mcp_list_tools.failed")
         #   Emitted when the attempt to list available MCP tools has failed.
         #
+        #   @param item_id [String] The ID of the MCP tool call item that failed.
+        #
+        #   @param output_index [Integer] The index of the output item that failed.
+        #
         #   @param sequence_number [Integer] The sequence number of this event.
         #
         #   @param type [Symbol, :"response.mcp_list_tools.failed"] The type of the event. Always 'response.mcp_list_tools.failed'.
data/lib/openai/models/responses/response_mcp_list_tools_in_progress_event.rb CHANGED

@@ -4,6 +4,18 @@ module OpenAI
   module Models
     module Responses
       class ResponseMcpListToolsInProgressEvent < OpenAI::Internal::Type::BaseModel
+        # @!attribute item_id
+        #   The ID of the MCP tool call item that is being processed.
+        #
+        #   @return [String]
+        required :item_id, String
+
+        # @!attribute output_index
+        #   The index of the output item that is being processed.
+        #
+        #   @return [Integer]
+        required :output_index, Integer
+
         # @!attribute sequence_number
         #   The sequence number of this event.
         #
@@ -16,10 +28,14 @@ module OpenAI
         #   @return [Symbol, :"response.mcp_list_tools.in_progress"]
         required :type, const: :"response.mcp_list_tools.in_progress"

-        # @!method initialize(sequence_number:, type: :"response.mcp_list_tools.in_progress")
+        # @!method initialize(item_id:, output_index:, sequence_number:, type: :"response.mcp_list_tools.in_progress")
         #   Emitted when the system is in the process of retrieving the list of available
         #   MCP tools.
         #
+        #   @param item_id [String] The ID of the MCP tool call item that is being processed.
+        #
+        #   @param output_index [Integer] The index of the output item that is being processed.
+        #
         #   @param sequence_number [Integer] The sequence number of this event.
         #
         #   @param type [Symbol, :"response.mcp_list_tools.in_progress"] The type of the event. Always 'response.mcp_list_tools.in_progress'.
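The two hunks above add `item_id` and `output_index` to the MCP tool-listing lifecycle events, so a streaming consumer can now tell which output item a listing attempt belongs to. A minimal sketch of reading these fields from a Responses stream; the client setup, model name, and MCP tool hash are assumptions for illustration, not part of this diff:

```ruby
require "openai"

# Assumed setup: OPENAI_API_KEY in the environment and an MCP server to list tools from.
client = OpenAI::Client.new(api_key: ENV.fetch("OPENAI_API_KEY"))

stream = client.responses.stream_raw(
  model: "gpt-4.1", # placeholder model name
  input: "What tools does my MCP server expose?",
  tools: [{type: "mcp", server_label: "demo", server_url: "https://example.com/mcp"}] # assumed MCP tool shape
)

stream.each do |event|
  case event
  when OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent
    # New in 0.15.0: the event now names the item it refers to.
    puts "listing MCP tools for item #{event.item_id} (output ##{event.output_index})"
  when OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent
    warn "MCP tool listing failed for item #{event.item_id} (output ##{event.output_index})"
  end
end
```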
data/lib/openai/models/responses/response_output_refusal.rb CHANGED

@@ -5,7 +5,7 @@ module OpenAI
     module Responses
       class ResponseOutputRefusal < OpenAI::Internal::Type::BaseModel
         # @!attribute refusal
-        #   The refusal
+        #   The refusal explanation from the model.
         #
         #   @return [String]
         required :refusal, String
@@ -19,7 +19,7 @@ module OpenAI
         # @!method initialize(refusal:, type: :refusal)
         #   A refusal from the model.
         #
-        #   @param refusal [String] The refusal
+        #   @param refusal [String] The refusal explanation from the model.
         #
         #   @param type [Symbol, :refusal] The type of the refusal. Always `refusal`.
       end
data/lib/openai/models/responses/response_stream_event.rb CHANGED

@@ -191,12 +191,6 @@ module OpenAI
         # Emitted when a response is queued and waiting to be processed.
         variant :"response.queued", -> { OpenAI::Responses::ResponseQueuedEvent }

-        # Emitted when there is a delta (partial update) to the reasoning content.
-        variant :"response.reasoning.delta", -> { OpenAI::Responses::ResponseReasoningDeltaEvent }
-
-        # Emitted when the reasoning content is finalized for an item.
-        variant :"response.reasoning.done", -> { OpenAI::Responses::ResponseReasoningDoneEvent }
-
         # Emitted when there is a delta (partial update) to the reasoning summary content.
         variant :"response.reasoning_summary.delta",
                 -> {
@@ -210,7 +204,7 @@ module OpenAI
                 }

         # @!method self.variants
-        #   @return [Array(OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, OpenAI::Models::Responses::
+        #   @return [Array(OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryDoneEvent)]
       end
     end
   end
data/lib/openai/models/responses/response_text_delta_event.rb CHANGED

@@ -22,6 +22,13 @@ module OpenAI
         #   @return [String]
         required :item_id, String

+        # @!attribute logprobs
+        #   The log probabilities of the tokens in the delta.
+        #
+        #   @return [Array<OpenAI::Models::Responses::ResponseTextDeltaEvent::Logprob>]
+        required :logprobs,
+                 -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Responses::ResponseTextDeltaEvent::Logprob] }
+
         # @!attribute output_index
         #   The index of the output item that the text delta was added to.
         #
@@ -40,7 +47,7 @@ module OpenAI
         #   @return [Symbol, :"response.output_text.delta"]
         required :type, const: :"response.output_text.delta"

-        # @!method initialize(content_index:, delta:, item_id:, output_index:, sequence_number:, type: :"response.output_text.delta")
+        # @!method initialize(content_index:, delta:, item_id:, logprobs:, output_index:, sequence_number:, type: :"response.output_text.delta")
         #   Some parameter documentations has been truncated, see
         #   {OpenAI::Models::Responses::ResponseTextDeltaEvent} for more details.
         #
@@ -52,11 +59,69 @@ module OpenAI
         #
         #   @param item_id [String] The ID of the output item that the text delta was added to.
         #
+        #   @param logprobs [Array<OpenAI::Models::Responses::ResponseTextDeltaEvent::Logprob>] The log probabilities of the tokens in the delta.
+        #
         #   @param output_index [Integer] The index of the output item that the text delta was added to.
         #
         #   @param sequence_number [Integer] The sequence number for this event.
         #
         #   @param type [Symbol, :"response.output_text.delta"] The type of the event. Always `response.output_text.delta`.
+
+        class Logprob < OpenAI::Internal::Type::BaseModel
+          # @!attribute token
+          #   A possible text token.
+          #
+          #   @return [String]
+          required :token, String
+
+          # @!attribute logprob
+          #   The log probability of this token.
+          #
+          #   @return [Float]
+          required :logprob, Float
+
+          # @!attribute top_logprobs
+          #   The log probability of the top 20 most likely tokens.
+          #
+          #   @return [Array<OpenAI::Models::Responses::ResponseTextDeltaEvent::Logprob::TopLogprob>, nil]
+          optional :top_logprobs,
+                   -> {
+                     OpenAI::Internal::Type::ArrayOf[OpenAI::Responses::ResponseTextDeltaEvent::Logprob::TopLogprob]
+                   }
+
+          # @!method initialize(token:, logprob:, top_logprobs: nil)
+          #   Some parameter documentations has been truncated, see
+          #   {OpenAI::Models::Responses::ResponseTextDeltaEvent::Logprob} for more details.
+          #
+          #   A logprob is the logarithmic probability that the model assigns to producing a
+          #   particular token at a given position in the sequence. Less-negative (higher)
+          #   logprob values indicate greater model confidence in that token choice.
+          #
+          #   @param token [String] A possible text token.
+          #
+          #   @param logprob [Float] The log probability of this token.
+          #
+          #   @param top_logprobs [Array<OpenAI::Models::Responses::ResponseTextDeltaEvent::Logprob::TopLogprob>] The log probability of the top 20 most likely tokens.
+
+          class TopLogprob < OpenAI::Internal::Type::BaseModel
+            # @!attribute token
+            #   A possible text token.
+            #
+            #   @return [String, nil]
+            optional :token, String
+
+            # @!attribute logprob
+            #   The log probability of this token.
+            #
+            #   @return [Float, nil]
+            optional :logprob, Float
+
+            # @!method initialize(token: nil, logprob: nil)
+            #   @param token [String] A possible text token.
+            #
+            #   @param logprob [Float] The log probability of this token.
+          end
+        end
       end
     end
   end
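With `logprobs` now required on `response.output_text.delta` events, per-token confidence can be inspected while text streams in. A sketch under the same assumptions as the earlier snippet (configured client, placeholder model), and assuming log probabilities were requested on the call:

```ruby
require "openai"

client = OpenAI::Client.new(api_key: ENV.fetch("OPENAI_API_KEY"))
stream = client.responses.stream_raw(model: "gpt-4.1", input: "Name three prime numbers")

stream.each do |event|
  next unless event.is_a?(OpenAI::Models::Responses::ResponseTextDeltaEvent)

  event.logprobs.each do |lp|
    # Logprob#token and #logprob are required; #top_logprobs is optional and may be nil.
    alt = lp.top_logprobs&.first
    puts format("%-12s %8.3f  top alt: %s", lp.token.inspect, lp.logprob, alt&.token.inspect)
  end
end
```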
data/lib/openai/models/responses/response_text_done_event.rb CHANGED

@@ -16,6 +16,13 @@ module OpenAI
         #   @return [String]
         required :item_id, String

+        # @!attribute logprobs
+        #   The log probabilities of the tokens in the delta.
+        #
+        #   @return [Array<OpenAI::Models::Responses::ResponseTextDoneEvent::Logprob>]
+        required :logprobs,
+                 -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Responses::ResponseTextDoneEvent::Logprob] }
+
         # @!attribute output_index
         #   The index of the output item that the text content is finalized.
         #
@@ -40,7 +47,7 @@ module OpenAI
         #   @return [Symbol, :"response.output_text.done"]
         required :type, const: :"response.output_text.done"

-        # @!method initialize(content_index:, item_id:, output_index:, sequence_number:, text:, type: :"response.output_text.done")
+        # @!method initialize(content_index:, item_id:, logprobs:, output_index:, sequence_number:, text:, type: :"response.output_text.done")
         #   Some parameter documentations has been truncated, see
         #   {OpenAI::Models::Responses::ResponseTextDoneEvent} for more details.
         #
@@ -50,6 +57,8 @@ module OpenAI
         #
         #   @param item_id [String] The ID of the output item that the text content is finalized.
         #
+        #   @param logprobs [Array<OpenAI::Models::Responses::ResponseTextDoneEvent::Logprob>] The log probabilities of the tokens in the delta.
+        #
         #   @param output_index [Integer] The index of the output item that the text content is finalized.
         #
         #   @param sequence_number [Integer] The sequence number for this event.
@@ -57,6 +66,62 @@ module OpenAI
         #   @param text [String] The text content that is finalized.
         #
         #   @param type [Symbol, :"response.output_text.done"] The type of the event. Always `response.output_text.done`.
+
+        class Logprob < OpenAI::Internal::Type::BaseModel
+          # @!attribute token
+          #   A possible text token.
+          #
+          #   @return [String]
+          required :token, String
+
+          # @!attribute logprob
+          #   The log probability of this token.
+          #
+          #   @return [Float]
+          required :logprob, Float
+
+          # @!attribute top_logprobs
+          #   The log probability of the top 20 most likely tokens.
+          #
+          #   @return [Array<OpenAI::Models::Responses::ResponseTextDoneEvent::Logprob::TopLogprob>, nil]
+          optional :top_logprobs,
+                   -> {
+                     OpenAI::Internal::Type::ArrayOf[OpenAI::Responses::ResponseTextDoneEvent::Logprob::TopLogprob]
+                   }
+
+          # @!method initialize(token:, logprob:, top_logprobs: nil)
+          #   Some parameter documentations has been truncated, see
+          #   {OpenAI::Models::Responses::ResponseTextDoneEvent::Logprob} for more details.
+          #
+          #   A logprob is the logarithmic probability that the model assigns to producing a
+          #   particular token at a given position in the sequence. Less-negative (higher)
+          #   logprob values indicate greater model confidence in that token choice.
+          #
+          #   @param token [String] A possible text token.
+          #
+          #   @param logprob [Float] The log probability of this token.
+          #
+          #   @param top_logprobs [Array<OpenAI::Models::Responses::ResponseTextDoneEvent::Logprob::TopLogprob>] The log probability of the top 20 most likely tokens.
+
+          class TopLogprob < OpenAI::Internal::Type::BaseModel
+            # @!attribute token
+            #   A possible text token.
+            #
+            #   @return [String, nil]
+            optional :token, String
+
+            # @!attribute logprob
+            #   The log probability of this token.
+            #
+            #   @return [Float, nil]
+            optional :logprob, Float
+
+            # @!method initialize(token: nil, logprob: nil)
+            #   @param token [String] A possible text token.
+            #
+            #   @param logprob [Float] The log probability of this token.
+          end
+        end
       end
     end
   end
data/lib/openai/models/responses/tool.rb CHANGED

@@ -305,6 +305,18 @@ module OpenAI
           #   @return [Symbol, OpenAI::Models::Responses::Tool::ImageGeneration::Background, nil]
           optional :background, enum: -> { OpenAI::Responses::Tool::ImageGeneration::Background }

+          # @!attribute input_fidelity
+          #   Control how much effort the model will exert to match the style and features,
+          #   especially facial features, of input images. This parameter is only supported
+          #   for `gpt-image-1`. Supports `high` and `low`. Defaults to `low`.
+          #
+          #   @return [Symbol, OpenAI::Models::Responses::Tool::ImageGeneration::InputFidelity, nil]
+          optional :input_fidelity,
+                   enum: -> {
+                     OpenAI::Responses::Tool::ImageGeneration::InputFidelity
+                   },
+                   nil?: true
+
           # @!attribute input_image_mask
           #   Optional mask for inpainting. Contains `image_url` (string, optional) and
           #   `file_id` (string, optional).
@@ -358,7 +370,7 @@ module OpenAI
           #   @return [Symbol, OpenAI::Models::Responses::Tool::ImageGeneration::Size, nil]
           optional :size, enum: -> { OpenAI::Responses::Tool::ImageGeneration::Size }

-          # @!method initialize(background: nil, input_image_mask: nil, model: nil, moderation: nil, output_compression: nil, output_format: nil, partial_images: nil, quality: nil, size: nil, type: :image_generation)
+          # @!method initialize(background: nil, input_fidelity: nil, input_image_mask: nil, model: nil, moderation: nil, output_compression: nil, output_format: nil, partial_images: nil, quality: nil, size: nil, type: :image_generation)
           #   Some parameter documentations has been truncated, see
           #   {OpenAI::Models::Responses::Tool::ImageGeneration} for more details.
           #
@@ -366,6 +378,8 @@ module OpenAI
           #
           #   @param background [Symbol, OpenAI::Models::Responses::Tool::ImageGeneration::Background] Background type for the generated image. One of `transparent`,
           #
+          #   @param input_fidelity [Symbol, OpenAI::Models::Responses::Tool::ImageGeneration::InputFidelity, nil] Control how much effort the model will exert to match the style and features,
+          #
           #   @param input_image_mask [OpenAI::Models::Responses::Tool::ImageGeneration::InputImageMask] Optional mask for inpainting. Contains `image_url`
           #
           #   @param model [Symbol, OpenAI::Models::Responses::Tool::ImageGeneration::Model] The image generation model to use. Default: `gpt-image-1`.
@@ -399,6 +413,21 @@ module OpenAI
             #   @return [Array<Symbol>]
           end

+          # Control how much effort the model will exert to match the style and features,
+          # especially facial features, of input images. This parameter is only supported
+          # for `gpt-image-1`. Supports `high` and `low`. Defaults to `low`.
+          #
+          # @see OpenAI::Models::Responses::Tool::ImageGeneration#input_fidelity
+          module InputFidelity
+            extend OpenAI::Internal::Type::Enum
+
+            HIGH = :high
+            LOW = :low
+
+            # @!method self.values
+            #   @return [Array<Symbol>]
+          end
+
           # @see OpenAI::Models::Responses::Tool::ImageGeneration#input_image_mask
           class InputImageMask < OpenAI::Internal::Type::BaseModel
             # @!attribute file_id
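The built-in `image_generation` tool gains the same `input_fidelity` switch that the Images API receives elsewhere in this release. A sketch of passing it through `responses.create`; the prompt and model are placeholders, and the tool hash simply mirrors the fields added in this hunk:

```ruby
require "openai"

client = OpenAI::Client.new(api_key: ENV.fetch("OPENAI_API_KEY"))

response = client.responses.create(
  model: "gpt-4.1", # placeholder
  input: "Render this scene as a watercolor while keeping faces recognizable",
  tools: [
    {
      type: "image_generation",
      model: "gpt-image-1",
      input_fidelity: :high, # new in 0.15.0; defaults to :low
      quality: :high
    }
  ]
)
```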
data/lib/openai/models.rb CHANGED

@@ -152,10 +152,22 @@ module OpenAI

   ImageCreateVariationParams = OpenAI::Models::ImageCreateVariationParams

+  ImageEditCompletedEvent = OpenAI::Models::ImageEditCompletedEvent
+
   ImageEditParams = OpenAI::Models::ImageEditParams

+  ImageEditPartialImageEvent = OpenAI::Models::ImageEditPartialImageEvent
+
+  ImageEditStreamEvent = OpenAI::Models::ImageEditStreamEvent
+
+  ImageGenCompletedEvent = OpenAI::Models::ImageGenCompletedEvent
+
   ImageGenerateParams = OpenAI::Models::ImageGenerateParams

+  ImageGenPartialImageEvent = OpenAI::Models::ImageGenPartialImageEvent
+
+  ImageGenStreamEvent = OpenAI::Models::ImageGenStreamEvent
+
   ImageModel = OpenAI::Models::ImageModel

   ImagesResponse = OpenAI::Models::ImagesResponse
data/lib/openai/resources/images.rb CHANGED

@@ -39,13 +39,15 @@ module OpenAI
         )
       end

+      # See {OpenAI::Resources::Images#edit_stream_raw} for streaming counterpart.
+      #
       # Some parameter documentations has been truncated, see
       # {OpenAI::Models::ImageEditParams} for more details.
       #
       # Creates an edited or extended image given one or more source images and a
       # prompt. This endpoint only supports `gpt-image-1` and `dall-e-2`.
       #
-      # @overload edit(image:, prompt:, background: nil, mask: nil, model: nil, n: nil, output_compression: nil, output_format: nil, quality: nil, response_format: nil, size: nil, user: nil, request_options: {})
+      # @overload edit(image:, prompt:, background: nil, input_fidelity: nil, mask: nil, model: nil, n: nil, output_compression: nil, output_format: nil, partial_images: nil, quality: nil, response_format: nil, size: nil, user: nil, request_options: {})
       #
       # @param image [Pathname, StringIO, IO, String, OpenAI::FilePart, Array<Pathname, StringIO, IO, String, OpenAI::FilePart>] The image(s) to edit. Must be a supported image file or an array of images.
       #
@@ -53,6 +55,8 @@ module OpenAI
       #
       # @param background [Symbol, OpenAI::Models::ImageEditParams::Background, nil] Allows to set transparency for the background of the generated image(s).
       #
+      # @param input_fidelity [Symbol, OpenAI::Models::ImageEditParams::InputFidelity, nil] Control how much effort the model will exert to match the style and features,
+      #
       # @param mask [Pathname, StringIO, IO, String, OpenAI::FilePart] An additional image whose fully transparent areas (e.g. where alpha is zero) ind
       #
       # @param model [String, Symbol, OpenAI::Models::ImageModel, nil] The model to use for image generation. Only `dall-e-2` and `gpt-image-1` are sup
@@ -63,6 +67,8 @@ module OpenAI
       #
       # @param output_format [Symbol, OpenAI::Models::ImageEditParams::OutputFormat, nil] The format in which the generated images are returned. This parameter is
       #
+      # @param partial_images [Integer, nil] The number of partial images to generate. This parameter is used for
+      #
       # @param quality [Symbol, OpenAI::Models::ImageEditParams::Quality, nil] The quality of the image that will be generated. `high`, `medium` and `low` are
       #
       # @param response_format [Symbol, OpenAI::Models::ImageEditParams::ResponseFormat, nil] The format in which the generated images are returned. Must be one of `url` or `
@@ -78,6 +84,10 @@ module OpenAI
       # @see OpenAI::Models::ImageEditParams
       def edit(params)
         parsed, options = OpenAI::ImageEditParams.dump_request(params)
+        if parsed[:stream]
+          message = "Please use `#edit_stream_raw` for the streaming use case."
+          raise ArgumentError.new(message)
+        end
         @client.request(
           method: :post,
           path: "images/edits",
@@ -88,13 +98,76 @@ module OpenAI
         )
       end

+      # See {OpenAI::Resources::Images#edit} for non-streaming counterpart.
+      #
+      # Some parameter documentations has been truncated, see
+      # {OpenAI::Models::ImageEditParams} for more details.
+      #
+      # Creates an edited or extended image given one or more source images and a
+      # prompt. This endpoint only supports `gpt-image-1` and `dall-e-2`.
+      #
+      # @overload edit_stream_raw(image:, prompt:, background: nil, input_fidelity: nil, mask: nil, model: nil, n: nil, output_compression: nil, output_format: nil, partial_images: nil, quality: nil, response_format: nil, size: nil, user: nil, request_options: {})
+      #
+      # @param image [Pathname, StringIO, IO, String, OpenAI::FilePart, Array<Pathname, StringIO, IO, String, OpenAI::FilePart>] The image(s) to edit. Must be a supported image file or an array of images.
+      #
+      # @param prompt [String] A text description of the desired image(s). The maximum length is 1000 character
+      #
+      # @param background [Symbol, OpenAI::Models::ImageEditParams::Background, nil] Allows to set transparency for the background of the generated image(s).
+      #
+      # @param input_fidelity [Symbol, OpenAI::Models::ImageEditParams::InputFidelity, nil] Control how much effort the model will exert to match the style and features,
+      #
+      # @param mask [Pathname, StringIO, IO, String, OpenAI::FilePart] An additional image whose fully transparent areas (e.g. where alpha is zero) ind
+      #
+      # @param model [String, Symbol, OpenAI::Models::ImageModel, nil] The model to use for image generation. Only `dall-e-2` and `gpt-image-1` are sup
+      #
+      # @param n [Integer, nil] The number of images to generate. Must be between 1 and 10.
+      #
+      # @param output_compression [Integer, nil] The compression level (0-100%) for the generated images. This parameter
+      #
+      # @param output_format [Symbol, OpenAI::Models::ImageEditParams::OutputFormat, nil] The format in which the generated images are returned. This parameter is
+      #
+      # @param partial_images [Integer, nil] The number of partial images to generate. This parameter is used for
+      #
+      # @param quality [Symbol, OpenAI::Models::ImageEditParams::Quality, nil] The quality of the image that will be generated. `high`, `medium` and `low` are
+      #
+      # @param response_format [Symbol, OpenAI::Models::ImageEditParams::ResponseFormat, nil] The format in which the generated images are returned. Must be one of `url` or `
+      #
+      # @param size [Symbol, OpenAI::Models::ImageEditParams::Size, nil] The size of the generated images. Must be one of `1024x1024`, `1536x1024` (lands
+      #
+      # @param user [String] A unique identifier representing your end-user, which can help OpenAI to monitor
+      #
+      # @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}, nil]
+      #
+      # @return [OpenAI::Internal::Stream<OpenAI::Models::ImageEditPartialImageEvent, OpenAI::Models::ImageEditCompletedEvent>]
+      #
+      # @see OpenAI::Models::ImageEditParams
+      def edit_stream_raw(params)
+        parsed, options = OpenAI::ImageEditParams.dump_request(params)
+        unless parsed.fetch(:stream, true)
+          message = "Please use `#edit` for the non-streaming use case."
+          raise ArgumentError.new(message)
+        end
+        parsed.store(:stream, true)
+        @client.request(
+          method: :post,
+          path: "images/edits",
+          headers: {"content-type" => "multipart/form-data", "accept" => "text/event-stream"},
+          body: parsed,
+          stream: OpenAI::Internal::Stream,
+          model: OpenAI::ImageEditStreamEvent,
+          options: options
+        )
+      end
+
+      # See {OpenAI::Resources::Images#generate_stream_raw} for streaming counterpart.
+      #
       # Some parameter documentations has been truncated, see
       # {OpenAI::Models::ImageGenerateParams} for more details.
       #
       # Creates an image given a prompt.
       # [Learn more](https://platform.openai.com/docs/guides/images).
       #
-      # @overload generate(prompt:, background: nil, model: nil, moderation: nil, n: nil, output_compression: nil, output_format: nil, quality: nil, response_format: nil, size: nil, style: nil, user: nil, request_options: {})
+      # @overload generate(prompt:, background: nil, model: nil, moderation: nil, n: nil, output_compression: nil, output_format: nil, partial_images: nil, quality: nil, response_format: nil, size: nil, style: nil, user: nil, request_options: {})
       #
       # @param prompt [String] A text description of the desired image(s). The maximum length is 32000 characte
       #
@@ -110,6 +183,8 @@ module OpenAI
       #
       # @param output_format [Symbol, OpenAI::Models::ImageGenerateParams::OutputFormat, nil] The format in which the generated images are returned. This parameter is only su
       #
+      # @param partial_images [Integer, nil] The number of partial images to generate. This parameter is used for
+      #
       # @param quality [Symbol, OpenAI::Models::ImageGenerateParams::Quality, nil] The quality of the image that will be generated.
       #
       # @param response_format [Symbol, OpenAI::Models::ImageGenerateParams::ResponseFormat, nil] The format in which generated images with `dall-e-2` and `dall-e-3` are returned
@@ -127,6 +202,10 @@ module OpenAI
       # @see OpenAI::Models::ImageGenerateParams
       def generate(params)
         parsed, options = OpenAI::ImageGenerateParams.dump_request(params)
+        if parsed[:stream]
+          message = "Please use `#generate_stream_raw` for the streaming use case."
+          raise ArgumentError.new(message)
+        end
         @client.request(
           method: :post,
           path: "images/generations",
@@ -136,6 +215,65 @@ module OpenAI
         )
       end

+      # See {OpenAI::Resources::Images#generate} for non-streaming counterpart.
+      #
+      # Some parameter documentations has been truncated, see
+      # {OpenAI::Models::ImageGenerateParams} for more details.
+      #
+      # Creates an image given a prompt.
+      # [Learn more](https://platform.openai.com/docs/guides/images).
+      #
+      # @overload generate_stream_raw(prompt:, background: nil, model: nil, moderation: nil, n: nil, output_compression: nil, output_format: nil, partial_images: nil, quality: nil, response_format: nil, size: nil, style: nil, user: nil, request_options: {})
+      #
+      # @param prompt [String] A text description of the desired image(s). The maximum length is 32000 characte
+      #
+      # @param background [Symbol, OpenAI::Models::ImageGenerateParams::Background, nil] Allows to set transparency for the background of the generated image(s).
+      #
+      # @param model [String, Symbol, OpenAI::Models::ImageModel, nil] The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or `gpt-im
+      #
+      # @param moderation [Symbol, OpenAI::Models::ImageGenerateParams::Moderation, nil] Control the content-moderation level for images generated by `gpt-image-1`. Must
+      #
+      # @param n [Integer, nil] The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only
+      #
+      # @param output_compression [Integer, nil] The compression level (0-100%) for the generated images. This parameter is only
+      #
+      # @param output_format [Symbol, OpenAI::Models::ImageGenerateParams::OutputFormat, nil] The format in which the generated images are returned. This parameter is only su
+      #
+      # @param partial_images [Integer, nil] The number of partial images to generate. This parameter is used for
+      #
+      # @param quality [Symbol, OpenAI::Models::ImageGenerateParams::Quality, nil] The quality of the image that will be generated.
+      #
+      # @param response_format [Symbol, OpenAI::Models::ImageGenerateParams::ResponseFormat, nil] The format in which generated images with `dall-e-2` and `dall-e-3` are returned
+      #
+      # @param size [Symbol, OpenAI::Models::ImageGenerateParams::Size, nil] The size of the generated images. Must be one of `1024x1024`, `1536x1024` (lands
+      #
+      # @param style [Symbol, OpenAI::Models::ImageGenerateParams::Style, nil] The style of the generated images. This parameter is only supported for `dall-e-
+      #
+      # @param user [String] A unique identifier representing your end-user, which can help OpenAI to monitor
+      #
+      # @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}, nil]
+      #
+      # @return [OpenAI::Internal::Stream<OpenAI::Models::ImageGenPartialImageEvent, OpenAI::Models::ImageGenCompletedEvent>]
+      #
+      # @see OpenAI::Models::ImageGenerateParams
+      def generate_stream_raw(params)
+        parsed, options = OpenAI::ImageGenerateParams.dump_request(params)
+        unless parsed.fetch(:stream, true)
+          message = "Please use `#generate` for the non-streaming use case."
+          raise ArgumentError.new(message)
+        end
+        parsed.store(:stream, true)
+        @client.request(
+          method: :post,
+          path: "images/generations",
+          headers: {"accept" => "text/event-stream"},
+          body: parsed,
+          stream: OpenAI::Internal::Stream,
+          model: OpenAI::ImageGenStreamEvent,
+          options: options
+        )
+      end
+
       # @api private
       #
       # @param client [OpenAI::Client]
data/lib/openai/resources/responses.rb CHANGED

@@ -270,7 +270,7 @@ module OpenAI
       #
       # @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}, nil]
       #
-      # @return [OpenAI::Internal::Stream<OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, OpenAI::Models::Responses::
+      # @return [OpenAI::Internal::Stream<OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryDoneEvent>]
       #
       # @see OpenAI::Models::Responses::ResponseCreateParams
       def stream_raw(params = {})
@@ -344,7 +344,7 @@ module OpenAI
       #
       # @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}, nil]
       #
-      # @return [OpenAI::Internal::Stream<OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, OpenAI::Models::Responses::
+      # @return [OpenAI::Internal::Stream<OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryDoneEvent>]
       #
       # @see OpenAI::Models::Responses::ResponseRetrieveParams
       def retrieve_streaming(response_id, params = {})
data/lib/openai/version.rb CHANGED