openai 0.8.0 → 0.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +21 -0
  3. data/README.md +115 -4
  4. data/lib/openai/models/chat/chat_completion.rb +1 -0
  5. data/lib/openai/models/chat/chat_completion_chunk.rb +1 -0
  6. data/lib/openai/models/chat/completion_create_params.rb +1 -0
  7. data/lib/openai/models/fine_tuning/checkpoints/permission_retrieve_response.rb +25 -60
  8. data/lib/openai/models/fine_tuning/job_create_params.rb +4 -2
  9. data/lib/openai/models/image_edit_params.rb +35 -1
  10. data/lib/openai/models/responses/response.rb +41 -6
  11. data/lib/openai/models/responses/response_create_params.rb +13 -4
  12. data/lib/openai/models/responses/response_prompt.rb +63 -0
  13. data/lib/openai/resources/fine_tuning/checkpoints/permissions.rb +2 -1
  14. data/lib/openai/resources/fine_tuning/jobs.rb +2 -2
  15. data/lib/openai/resources/images.rb +5 -1
  16. data/lib/openai/resources/responses.rb +8 -4
  17. data/lib/openai/version.rb +1 -1
  18. data/lib/openai.rb +1 -0
  19. data/rbi/openai/models/chat/chat_completion.rbi +5 -0
  20. data/rbi/openai/models/chat/chat_completion_chunk.rbi +5 -0
  21. data/rbi/openai/models/chat/completion_create_params.rbi +5 -0
  22. data/rbi/openai/models/fine_tuning/checkpoints/permission_retrieve_response.rbi +26 -95
  23. data/rbi/openai/models/fine_tuning/job_create_params.rbi +8 -4
  24. data/rbi/openai/models/image_edit_params.rbi +51 -0
  25. data/rbi/openai/models/responses/response.rbi +66 -7
  26. data/rbi/openai/models/responses/response_create_params.rbi +24 -4
  27. data/rbi/openai/models/responses/response_prompt.rbi +120 -0
  28. data/rbi/openai/resources/fine_tuning/checkpoints/permissions.rbi +3 -1
  29. data/rbi/openai/resources/fine_tuning/jobs.rbi +6 -4
  30. data/rbi/openai/resources/images.rbi +11 -0
  31. data/rbi/openai/resources/responses.rbi +10 -4
  32. data/sig/openai/models/chat/chat_completion.rbs +2 -1
  33. data/sig/openai/models/chat/chat_completion_chunk.rbs +2 -1
  34. data/sig/openai/models/chat/completion_create_params.rbs +2 -1
  35. data/sig/openai/models/fine_tuning/checkpoints/permission_retrieve_response.rbs +16 -53
  36. data/sig/openai/models/image_edit_params.rbs +22 -0
  37. data/sig/openai/models/responses/response.rbs +22 -5
  38. data/sig/openai/models/responses/response_create_params.rbs +7 -1
  39. data/sig/openai/models/responses/response_prompt.rbs +44 -0
  40. data/sig/openai/resources/fine_tuning/checkpoints/permissions.rbs +1 -1
  41. data/sig/openai/resources/images.rbs +2 -0
  42. data/sig/openai/resources/responses.rbs +2 -0
  43. metadata +5 -2
checksums.yaml CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 778fba439d6ccb684d26309e1c8b6df0a91dfbeea730d878b27dbe5d3866c006
-  data.tar.gz: 1e01ee7a200d5fecc53101441600a2d5ff68de95c35daa39ca3a686faf687825
+  metadata.gz: e0a273910412c6b05028da1b9e7fceab0b048d4d34df4b1bee6a781dcbad33dd
+  data.tar.gz: 51a3719def2827a793c2b9974cc4104e00000cea78893ff36d327254bb8e586e
 SHA512:
-  metadata.gz: 8f8eae9d8696ab749f14921f0680faf5e4e48c80c7a22a8e956221882c2609596067036d6b8403adc6f6d6facfce355b1d708ee18030b2b44ed36db98a3399e9
-  data.tar.gz: fa5ffa61094c569e2d3c8c19f0d4fb2295bd12574f41f19b43176957fe0c7b4966054ace9f6484ec54865797bfc7a93d0a74d984d57761410e531b4c981fd2a0
+  metadata.gz: d65777601ceac3d125750949f80a0ca4d50f62593e971d067f7c164ab1dbbddf51266cee8ab752ef058f2313f09b7ad48a6312c8df7a465e0807e956ed55cedf
+  data.tar.gz: 9156aee0d88ad6be1e794acede9961d774c40ade621282aff54899879e718f6f514f61007d25f614e507796eefefbf694508042e1e126fc0412aac76b003ff44
data/CHANGELOG.md CHANGED

@@ -1,5 +1,26 @@
 # Changelog
 
+## 0.9.0 (2025-06-17)
+
+Full Changelog: [v0.8.0...v0.9.0](https://github.com/openai/openai-ruby/compare/v0.8.0...v0.9.0)
+
+### Features
+
+* **api:** add reusable prompt IDs ([72e35ad](https://github.com/openai/openai-ruby/commit/72e35ad4a677a70a98db291a20aa212e53c367ea))
+* **api:** manual updates ([a4bcab7](https://github.com/openai/openai-ruby/commit/a4bcab736d59404c61b148a468d3bf0bc570fa39))
+
+
+### Chores
+
+* **ci:** enable for pull requests ([e8dfcf9](https://github.com/openai/openai-ruby/commit/e8dfcf97f3af426d3ad83472fa6eaac718acbd4d))
+* **ci:** link to correct github repo ([7b34316](https://github.com/openai/openai-ruby/commit/7b3431612ea66d123bc114ec55bdf07f6081439e))
+
+
+### Documentation
+
+* structured outputs in README ([#723](https://github.com/openai/openai-ruby/issues/723)) ([7212e61](https://github.com/openai/openai-ruby/commit/7212e61ee2fb9ebff0576b0bff4424f43ae54af2))
+* use image edit example in readme ([#722](https://github.com/openai/openai-ruby/issues/722)) ([eaa5055](https://github.com/openai/openai-ruby/commit/eaa5055eebca620c261c749ae4945845532c012d))
+
 ## 0.8.0 (2025-06-10)
 
 Full Changelog: [v0.7.0...v0.8.0](https://github.com/openai/openai-ruby/compare/v0.7.0...v0.8.0)
data/README.md CHANGED

@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
 <!-- x-release-please-start-version -->
 
 ```ruby
-gem "openai", "~> 0.8.0"
+gem "openai", "~> 0.9.0"
 ```
 
 <!-- x-release-please-end -->
@@ -96,15 +96,126 @@ file_object = openai.files.create(file: Pathname("input.jsonl"), purpose: "fine-
 # Alternatively, pass file contents or a `StringIO` directly:
 file_object = openai.files.create(file: File.read("input.jsonl"), purpose: "fine-tune")
 
+puts(file_object.id)
+
 # Or, to control the filename and/or content type:
-file = OpenAI::FilePart.new(File.read("input.jsonl"), filename: "input.jsonl", content_type: "…")
-file_object = openai.files.create(file: file, purpose: "fine-tune")
+image = OpenAI::FilePart.new(Pathname('dog.jpg'), content_type: 'image/jpeg')
+edited = openai.images.edit(
+  prompt: "make this image look like a painting",
+  model: "gpt-image-1",
+  size: '1024x1024',
+  image: image
+)
 
-puts(file_object.id)
+puts(edited.data.first)
 ```
 
 Note that you can also pass a raw `IO` descriptor, but this disables retries, as the library can't be sure if the descriptor is a file or pipe (which cannot be rewound).
 
+### [Structured outputs](https://platform.openai.com/docs/guides/structured-outputs) and function calling
+
+This SDK ships with helpers in `OpenAI::BaseModel`, `OpenAI::ArrayOf`, `OpenAI::EnumOf`, and `OpenAI::UnionOf` to help you define the supported JSON schemas used in making structured outputs and function calling requests.
+
+<details>
+<summary>Snippet</summary>
+
+```ruby
+# Participant model with an optional last_name and an enum for status
+class Participant < OpenAI::BaseModel
+  required :first_name, String
+  required :last_name, String, nil?: true
+  required :status, OpenAI::EnumOf[:confirmed, :unconfirmed, :tentative]
+end
+
+# CalendarEvent model with a list of participants.
+class CalendarEvent < OpenAI::BaseModel
+  required :name, String
+  required :date, String
+  required :participants, OpenAI::ArrayOf[Participant]
+end
+
+
+client = OpenAI::Client.new
+
+response = client.responses.create(
+  model: "gpt-4o-2024-08-06",
+  input: [
+    {role: :system, content: "Extract the event information."},
+    {
+      role: :user,
+      content: <<~CONTENT
+        Alice Shah and Lena are going to a science fair on Friday at 123 Main St. in San Diego.
+        They have also invited Jasper Vellani and Talia Groves - Jasper has not responded and Talia said she is thinking about it.
+      CONTENT
+    }
+  ],
+  text: CalendarEvent
+)
+
+response
+  .output
+  .flat_map { _1.content }
+  # filter out refusal responses
+  .grep_v(OpenAI::Models::Responses::ResponseOutputRefusal)
+  .each do |content|
+    # parsed is an instance of `CalendarEvent`
+    pp(content.parsed)
+  end
+```
+
+</details>
+
+See the [examples](https://github.com/openai/openai-ruby/tree/main/examples) directory for more usage examples for helper usage.
+
+To make the equivalent request using raw JSON schema format, you would do the following:
+
+<details>
+<summary>Snippet</summary>
+
+```ruby
+response = client.responses.create(
+  model: "gpt-4o-2024-08-06",
+  input: [
+    {role: :system, content: "Extract the event information."},
+    {
+      role: :user,
+      content: "..."
+    }
+  ],
+  text: {
+    format: {
+      type: :json_schema,
+      name: "CalendarEvent",
+      strict: true,
+      schema: {
+        type: "object",
+        properties: {
+          name: {type: "string"},
+          date: {type: "string"},
+          participants: {
+            type: "array",
+            items: {
+              type: "object",
+              properties: {
+                first_name: {type: "string"},
+                last_name: {type: %w[string null]},
+                status: {type: "string", enum: %w[confirmed unconfirmed tentative]}
+              },
+              required: %w[first_name last_name status],
+              additionalProperties: false
+            }
+          }
+        },
+        required: %w[name date participants],
+        additionalProperties: false
+      }
+    }
+  }
+)
+```
+
+</details>
+
 ### Handling errors
 
 When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., 4xx or 5xx response), a subclass of `OpenAI::Errors::APIError` will be thrown:
data/lib/openai/models/chat/chat_completion.rb CHANGED

@@ -213,6 +213,7 @@ module OpenAI
           AUTO = :auto
           DEFAULT = :default
           FLEX = :flex
+          SCALE = :scale
 
           # @!method self.values
           #   @return [Array<Symbol>]
data/lib/openai/models/chat/chat_completion_chunk.rb CHANGED

@@ -396,6 +396,7 @@ module OpenAI
           AUTO = :auto
           DEFAULT = :default
           FLEX = :flex
+          SCALE = :scale
 
           # @!method self.values
           #   @return [Array<Symbol>]
data/lib/openai/models/chat/completion_create_params.rb CHANGED

@@ -569,6 +569,7 @@ module OpenAI
           AUTO = :auto
           DEFAULT = :default
           FLEX = :flex
+          SCALE = :scale
 
           # @!method self.values
           #   @return [Array<Symbol>]
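The three hunks above add the same `SCALE` member to each `ServiceTier` enum. A minimal sketch of requesting the new tier, assuming the usual chat-completions call shape; this builds the argument hash only (nothing is sent to the API, and the model name is illustrative):

```ruby
# Parameters as they would be passed to `client.chat.completions.create`.
# :scale joins the existing :auto, :default, and :flex tiers.
params = {
  model: "gpt-4o",
  messages: [{role: :user, content: "Hello!"}],
  service_tier: :scale # new in 0.9.0
}

puts params[:service_tier] # => scale
```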
data/lib/openai/models/fine_tuning/checkpoints/permission_retrieve_response.rb CHANGED

@@ -6,76 +6,41 @@ module OpenAI
       module Checkpoints
         # @see OpenAI::Resources::FineTuning::Checkpoints::Permissions#retrieve
         class PermissionRetrieveResponse < OpenAI::Internal::Type::BaseModel
-          # @!attribute data
+          # @!attribute id
+          #   The permission identifier, which can be referenced in the API endpoints.
           #
-          #   @return [Array<OpenAI::Models::FineTuning::Checkpoints::PermissionRetrieveResponse::Data>]
-          required :data,
-                   -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Models::FineTuning::Checkpoints::PermissionRetrieveResponse::Data] }
+          #   @return [String]
+          required :id, String
 
-          # @!attribute has_more
+          # @!attribute created_at
+          #   The Unix timestamp (in seconds) for when the permission was created.
           #
-          #   @return [Boolean]
-          required :has_more, OpenAI::Internal::Type::Boolean
+          #   @return [Integer]
+          required :created_at, Integer
 
           # @!attribute object
+          #   The object type, which is always "checkpoint.permission".
           #
-          #   @return [Symbol, :list]
-          required :object, const: :list
+          #   @return [Symbol, :"checkpoint.permission"]
+          required :object, const: :"checkpoint.permission"
 
-          # @!attribute first_id
+          # @!attribute project_id
+          #   The project identifier that the permission is for.
           #
-          #   @return [String, nil]
-          optional :first_id, String, nil?: true
+          #   @return [String]
+          required :project_id, String
 
-          # @!attribute last_id
+          # @!method initialize(id:, created_at:, project_id:, object: :"checkpoint.permission")
+          #   The `checkpoint.permission` object represents a permission for a fine-tuned
+          #   model checkpoint.
           #
-          #   @return [String, nil]
-          optional :last_id, String, nil?: true
-
-          # @!method initialize(data:, has_more:, first_id: nil, last_id: nil, object: :list)
-          #   @param data [Array<OpenAI::Models::FineTuning::Checkpoints::PermissionRetrieveResponse::Data>]
-          #   @param has_more [Boolean]
-          #   @param first_id [String, nil]
-          #   @param last_id [String, nil]
-          #   @param object [Symbol, :list]
-
-          class Data < OpenAI::Internal::Type::BaseModel
-            # @!attribute id
-            #   The permission identifier, which can be referenced in the API endpoints.
-            #
-            #   @return [String]
-            required :id, String
-
-            # @!attribute created_at
-            #   The Unix timestamp (in seconds) for when the permission was created.
-            #
-            #   @return [Integer]
-            required :created_at, Integer
-
-            # @!attribute object
-            #   The object type, which is always "checkpoint.permission".
-            #
-            #   @return [Symbol, :"checkpoint.permission"]
-            required :object, const: :"checkpoint.permission"
-
-            # @!attribute project_id
-            #   The project identifier that the permission is for.
-            #
-            #   @return [String]
-            required :project_id, String
-
-            # @!method initialize(id:, created_at:, project_id:, object: :"checkpoint.permission")
-            #   The `checkpoint.permission` object represents a permission for a fine-tuned
-            #   model checkpoint.
-            #
-            #   @param id [String] The permission identifier, which can be referenced in the API endpoints.
-            #
-            #   @param created_at [Integer] The Unix timestamp (in seconds) for when the permission was created.
-            #
-            #   @param project_id [String] The project identifier that the permission is for.
-            #
-            #   @param object [Symbol, :"checkpoint.permission"] The object type, which is always "checkpoint.permission".
-          end
+          #   @param id [String] The permission identifier, which can be referenced in the API endpoints.
+          #
+          #   @param created_at [Integer] The Unix timestamp (in seconds) for when the permission was created.
+          #
+          #   @param project_id [String] The project identifier that the permission is for.
+          #
+          #   @param object [Symbol, :"checkpoint.permission"] The object type, which is always "checkpoint.permission".
         end
       end
     end
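The wrapper list (`data`, `has_more`, `first_id`, `last_id`) is gone: the response model is now a single `checkpoint.permission` object. A sketch of the wire payload one such object would deserialize from, matching the four flattened attributes; all field values here are made up for illustration:

```ruby
require "json"

# Illustrative JSON for one checkpoint permission, mirroring the
# attributes of the new PermissionRetrieveResponse shape.
payload = <<~JSON
  {
    "id": "cp_abc123",
    "created_at": 1718600000,
    "object": "checkpoint.permission",
    "project_id": "proj_xyz"
  }
JSON

permission = JSON.parse(payload)
puts permission["object"] # => checkpoint.permission
```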
data/lib/openai/models/fine_tuning/job_create_params.rb CHANGED

@@ -31,7 +31,8 @@ module OpenAI
       #   [preference](https://platform.openai.com/docs/api-reference/fine-tuning/preference-input)
       #   format.
       #
-      #   See the [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning)
+      #   See the
+      #   [fine-tuning guide](https://platform.openai.com/docs/guides/model-optimization)
       #   for more details.
       #
       #   @return [String]
@@ -100,7 +101,8 @@ module OpenAI
       #   Your dataset must be formatted as a JSONL file. You must upload your file with
       #   the purpose `fine-tune`.
       #
-      #   See the [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning)
+      #   See the
+      #   [fine-tuning guide](https://platform.openai.com/docs/guides/model-optimization)
       #   for more details.
       #
       #   @return [String, nil]
data/lib/openai/models/image_edit_params.rb CHANGED

@@ -61,6 +61,22 @@ module OpenAI
       #   @return [Integer, nil]
       optional :n, Integer, nil?: true
 
+      # @!attribute output_compression
+      #   The compression level (0-100%) for the generated images. This parameter is only
+      #   supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and
+      #   defaults to 100.
+      #
+      #   @return [Integer, nil]
+      optional :output_compression, Integer, nil?: true
+
+      # @!attribute output_format
+      #   The format in which the generated images are returned. This parameter is only
+      #   supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`. The
+      #   default value is `png`.
+      #
+      #   @return [Symbol, OpenAI::Models::ImageEditParams::OutputFormat, nil]
+      optional :output_format, enum: -> { OpenAI::ImageEditParams::OutputFormat }, nil?: true
+
       # @!attribute quality
       #   The quality of the image that will be generated. `high`, `medium` and `low` are
       #   only supported for `gpt-image-1`. `dall-e-2` only supports `standard` quality.
@@ -94,7 +110,7 @@ module OpenAI
       #   @return [String, nil]
       optional :user, String
 
-      # @!method initialize(image:, prompt:, background: nil, mask: nil, model: nil, n: nil, quality: nil, response_format: nil, size: nil, user: nil, request_options: {})
+      # @!method initialize(image:, prompt:, background: nil, mask: nil, model: nil, n: nil, output_compression: nil, output_format: nil, quality: nil, response_format: nil, size: nil, user: nil, request_options: {})
       #   Some parameter documentations has been truncated, see
       #   {OpenAI::Models::ImageEditParams} for more details.
       #
@@ -110,6 +126,10 @@ module OpenAI
       #
       #   @param n [Integer, nil] The number of images to generate. Must be between 1 and 10.
       #
+      #   @param output_compression [Integer, nil] The compression level (0-100%) for the generated images. This parameter
+      #
+      #   @param output_format [Symbol, OpenAI::Models::ImageEditParams::OutputFormat, nil] The format in which the generated images are returned. This parameter is
+      #
       #   @param quality [Symbol, OpenAI::Models::ImageEditParams::Quality, nil] The quality of the image that will be generated. `high`, `medium` and `low` are
       #
       #   @param response_format [Symbol, OpenAI::Models::ImageEditParams::ResponseFormat, nil] The format in which the generated images are returned. Must be one of `url` or `
@@ -174,6 +194,20 @@ module OpenAI
        #   @return [Array(String, Symbol, OpenAI::Models::ImageModel)]
      end
 
+      # The format in which the generated images are returned. This parameter is only
+      # supported for `gpt-image-1`. Must be one of `png`, `jpeg`, or `webp`. The
+      # default value is `png`.
+      module OutputFormat
+        extend OpenAI::Internal::Type::Enum
+
+        PNG = :png
+        JPEG = :jpeg
+        WEBP = :webp
+
+        # @!method self.values
+        #   @return [Array<Symbol>]
+      end
+
       # The quality of the image that will be generated. `high`, `medium` and `low` are
       # only supported for `gpt-image-1`. `dall-e-2` only supports `standard` quality.
       # Defaults to `auto`.
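The two new parameters slot into the existing image-edit call. A sketch of the argument hash as it would be passed to `openai.images.edit`, built as a plain hash and not sent anywhere; note the documented constraints (gpt-image-1 only; `output_compression` applies only to `webp`/`jpeg`):

```ruby
# Arguments for an image edit exercising the 0.9.0 additions.
edit_params = {
  model: "gpt-image-1",
  prompt: "make this image look like a painting",
  size: "1024x1024",
  output_format: :webp,   # new: one of :png, :jpeg, :webp (default :png)
  output_compression: 80  # new: 0-100, webp/jpeg only, defaults to 100
}

puts edit_params[:output_format] # => webp
```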
data/lib/openai/models/responses/response.rb CHANGED

@@ -32,15 +32,14 @@ module OpenAI
         required :incomplete_details, -> { OpenAI::Responses::Response::IncompleteDetails }, nil?: true
 
         # @!attribute instructions
-        #   Inserts a system (or developer) message as the first item in the model's
-        #   context.
+        #   A system (or developer) message inserted into the model's context.
         #
         #   When using along with `previous_response_id`, the instructions from a previous
         #   response will not be carried over to the next response. This makes it simple to
         #   swap out system (or developer) messages in new responses.
         #
-        #   @return [String, nil]
-        required :instructions, String, nil?: true
+        #   @return [String, Array<OpenAI::Models::Responses::EasyInputMessage, OpenAI::Models::Responses::ResponseInputItem::Message, OpenAI::Models::Responses::ResponseOutputMessage, OpenAI::Models::Responses::ResponseFileSearchToolCall, OpenAI::Models::Responses::ResponseComputerToolCall, OpenAI::Models::Responses::ResponseInputItem::ComputerCallOutput, OpenAI::Models::Responses::ResponseFunctionWebSearch, OpenAI::Models::Responses::ResponseFunctionToolCall, OpenAI::Models::Responses::ResponseInputItem::FunctionCallOutput, OpenAI::Models::Responses::ResponseReasoningItem, OpenAI::Models::Responses::ResponseInputItem::ImageGenerationCall, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCallOutput, OpenAI::Models::Responses::ResponseInputItem::McpListTools, OpenAI::Models::Responses::ResponseInputItem::McpApprovalRequest, OpenAI::Models::Responses::ResponseInputItem::McpApprovalResponse, OpenAI::Models::Responses::ResponseInputItem::McpCall, OpenAI::Models::Responses::ResponseInputItem::ItemReference>, nil]
+        required :instructions, union: -> { OpenAI::Responses::Response::Instructions }, nil?: true
 
         # @!attribute metadata
         #   Set of 16 key-value pairs that can be attached to an object. This can be useful
@@ -156,6 +155,13 @@ module OpenAI
         #   @return [String, nil]
         optional :previous_response_id, String, nil?: true
 
+        # @!attribute prompt
+        #   Reference to a prompt template and its variables.
+        #   [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
+        #
+        #   @return [OpenAI::Models::Responses::ResponsePrompt, nil]
+        optional :prompt, -> { OpenAI::Responses::ResponsePrompt }, nil?: true
+
         # @!attribute reasoning
         #   **o-series models only**
         #
@@ -231,7 +237,7 @@ module OpenAI
         #   @return [String, nil]
         optional :user, String
 
-        # @!method initialize(id:, created_at:, error:, incomplete_details:, instructions:, metadata:, model:, output:, parallel_tool_calls:, temperature:, tool_choice:, tools:, top_p:, background: nil, max_output_tokens: nil, previous_response_id: nil, reasoning: nil, service_tier: nil, status: nil, text: nil, truncation: nil, usage: nil, user: nil, object: :response)
+        # @!method initialize(id:, created_at:, error:, incomplete_details:, instructions:, metadata:, model:, output:, parallel_tool_calls:, temperature:, tool_choice:, tools:, top_p:, background: nil, max_output_tokens: nil, previous_response_id: nil, prompt: nil, reasoning: nil, service_tier: nil, status: nil, text: nil, truncation: nil, usage: nil, user: nil, object: :response)
         #   Some parameter documentations has been truncated, see
         #   {OpenAI::Models::Responses::Response} for more details.
         #
@@ -243,7 +249,7 @@ module OpenAI
         #
         #   @param incomplete_details [OpenAI::Models::Responses::Response::IncompleteDetails, nil] Details about why the response is incomplete.
         #
-        #   @param instructions [String, nil] Inserts a system (or developer) message as the first item in the model's context
+        #   @param instructions [String, Array<OpenAI::Models::Responses::EasyInputMessage, OpenAI::Models::Responses::ResponseInputItem::Message, OpenAI::Models::Responses::ResponseOutputMessage, OpenAI::Models::Responses::ResponseFileSearchToolCall, OpenAI::Models::Responses::ResponseComputerToolCall, OpenAI::Models::Responses::ResponseInputItem::ComputerCallOutput, OpenAI::Models::Responses::ResponseFunctionWebSearch, OpenAI::Models::Responses::ResponseFunctionToolCall, OpenAI::Models::Responses::ResponseInputItem::FunctionCallOutput, OpenAI::Models::Responses::ResponseReasoningItem, OpenAI::Models::Responses::ResponseInputItem::ImageGenerationCall, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCallOutput, OpenAI::Models::Responses::ResponseInputItem::McpListTools, OpenAI::Models::Responses::ResponseInputItem::McpApprovalRequest, OpenAI::Models::Responses::ResponseInputItem::McpApprovalResponse, OpenAI::Models::Responses::ResponseInputItem::McpCall, OpenAI::Models::Responses::ResponseInputItem::ItemReference>, nil] A system (or developer) message inserted into the model's context.
         #
         #   @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. This can be
         #
@@ -267,6 +273,8 @@ module OpenAI
         #
         #   @param previous_response_id [String, nil] The unique ID of the previous response to the model. Use this to
         #
+        #   @param prompt [OpenAI::Models::Responses::ResponsePrompt, nil] Reference to a prompt template and its variables.
+        #
         #   @param reasoning [OpenAI::Models::Reasoning, nil] **o-series models only**
         #
         #   @param service_tier [Symbol, OpenAI::Models::Responses::Response::ServiceTier, nil] Specifies the latency tier to use for processing the request. This parameter is
@@ -310,6 +318,32 @@ module OpenAI
           end
         end
 
+        # A system (or developer) message inserted into the model's context.
+        #
+        # When using along with `previous_response_id`, the instructions from a previous
+        # response will not be carried over to the next response. This makes it simple to
+        # swap out system (or developer) messages in new responses.
+        #
+        # @see OpenAI::Models::Responses::Response#instructions
+        module Instructions
+          extend OpenAI::Internal::Type::Union
+
+          # A text input to the model, equivalent to a text input with the
+          # `developer` role.
+          variant String
+
+          # A list of one or many input items to the model, containing
+          # different content types.
+          variant -> { OpenAI::Models::Responses::Response::Instructions::ResponseInputItemArray }
+
+          # @!method self.variants
+          #   @return [Array(String, Array<OpenAI::Models::Responses::EasyInputMessage, OpenAI::Models::Responses::ResponseInputItem::Message, OpenAI::Models::Responses::ResponseOutputMessage, OpenAI::Models::Responses::ResponseFileSearchToolCall, OpenAI::Models::Responses::ResponseComputerToolCall, OpenAI::Models::Responses::ResponseInputItem::ComputerCallOutput, OpenAI::Models::Responses::ResponseFunctionWebSearch, OpenAI::Models::Responses::ResponseFunctionToolCall, OpenAI::Models::Responses::ResponseInputItem::FunctionCallOutput, OpenAI::Models::Responses::ResponseReasoningItem, OpenAI::Models::Responses::ResponseInputItem::ImageGenerationCall, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCallOutput, OpenAI::Models::Responses::ResponseInputItem::McpListTools, OpenAI::Models::Responses::ResponseInputItem::McpApprovalRequest, OpenAI::Models::Responses::ResponseInputItem::McpApprovalResponse, OpenAI::Models::Responses::ResponseInputItem::McpCall, OpenAI::Models::Responses::ResponseInputItem::ItemReference>)]
+
+          # @type [OpenAI::Internal::Type::Converter]
+          ResponseInputItemArray =
+            OpenAI::Internal::Type::ArrayOf[union: -> { OpenAI::Responses::ResponseInputItem }]
+        end
+
         # How the model should select which tool (or tools) to use when generating a
         # response. See the `tools` parameter to see how to specify which tools the model
         # can call.
@@ -364,6 +398,7 @@ module OpenAI
           AUTO = :auto
           DEFAULT = :default
           FLEX = :flex
+          SCALE = :scale
 
           # @!method self.values
           #   @return [Array<Symbol>]
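With `instructions` widened to a union, a deserialized `Response` may carry either form. Both shapes below are plain-Ruby stand-ins for valid members of the union (the SDK itself would wrap array items in typed input-item models):

```ruby
# String variant: equivalent to a single developer-role text input.
string_form = "Answer in French."

# Array variant: a list of input items; an EasyInputMessage-style
# hash is shown here for illustration.
array_form = [
  {role: :developer, content: "Answer in French."}
]

# Prints String then Array.
[string_form, array_form].each { |v| puts v.class }
```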
data/lib/openai/models/responses/response_create_params.rb CHANGED

@@ -64,8 +64,7 @@ module OpenAI
                  nil?: true
 
        # @!attribute instructions
-        #   Inserts a system (or developer) message as the first item in the model's
-        #   context.
+        #   A system (or developer) message inserted into the model's context.
         #
         #   When using along with `previous_response_id`, the instructions from a previous
         #   response will not be carried over to the next response. This makes it simple to
@@ -107,6 +106,13 @@ module OpenAI
         #   @return [String, nil]
         optional :previous_response_id, String, nil?: true
 
+        # @!attribute prompt
+        #   Reference to a prompt template and its variables.
+        #   [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
+        #
+        #   @return [OpenAI::Models::Responses::ResponsePrompt, nil]
+        optional :prompt, -> { OpenAI::Responses::ResponsePrompt }, nil?: true
+
         # @!attribute reasoning
         #   **o-series models only**
         #
@@ -226,7 +232,7 @@ module OpenAI
         #   @return [String, nil]
         optional :user, String
 
-        # @!method initialize(input:, model:, background: nil, include: nil, instructions: nil, max_output_tokens: nil, metadata: nil, parallel_tool_calls: nil, previous_response_id: nil, reasoning: nil, service_tier: nil, store: nil, temperature: nil, text: nil, tool_choice: nil, tools: nil, top_p: nil, truncation: nil, user: nil, request_options: {})
+        # @!method initialize(input:, model:, background: nil, include: nil, instructions: nil, max_output_tokens: nil, metadata: nil, parallel_tool_calls: nil, previous_response_id: nil, prompt: nil, reasoning: nil, service_tier: nil, store: nil, temperature: nil, text: nil, tool_choice: nil, tools: nil, top_p: nil, truncation: nil, user: nil, request_options: {})
         #   Some parameter documentations has been truncated, see
         #   {OpenAI::Models::Responses::ResponseCreateParams} for more details.
         #
@@ -238,7 +244,7 @@ module OpenAI
         #
         #   @param include [Array<Symbol, OpenAI::Models::Responses::ResponseIncludable>, nil] Specify additional output data to include in the model response. Currently
         #
-        #   @param instructions [String, nil] Inserts a system (or developer) message as the first item in the model's context
+        #   @param instructions [String, nil] A system (or developer) message inserted into the model's context.
         #
         #   @param max_output_tokens [Integer, nil] An upper bound for the number of tokens that can be generated for a response, in
         #
@@ -248,6 +254,8 @@ module OpenAI
         #
         #   @param previous_response_id [String, nil] The unique ID of the previous response to the model. Use this to
         #
+        #   @param prompt [OpenAI::Models::Responses::ResponsePrompt, nil] Reference to a prompt template and its variables.
+        #
         #   @param reasoning [OpenAI::Models::Reasoning, nil] **o-series models only**
         #
         #   @param service_tier [Symbol, OpenAI::Models::Responses::ResponseCreateParams::ServiceTier, nil] Specifies the latency tier to use for processing the request. This parameter is
@@ -317,6 +325,7 @@ module OpenAI
           AUTO = :auto
           DEFAULT = :default
           FLEX = :flex
+          SCALE = :scale
 
           # @!method self.values
           #   @return [Array<Symbol>]
data/lib/openai/models/responses/response_prompt.rb ADDED

@@ -0,0 +1,63 @@
+# frozen_string_literal: true
+
+module OpenAI
+  module Models
+    module Responses
+      class ResponsePrompt < OpenAI::Internal::Type::BaseModel
+        # @!attribute id
+        #   The unique identifier of the prompt template to use.
+        #
+        #   @return [String]
+        required :id, String
+
+        # @!attribute variables
+        #   Optional map of values to substitute in for variables in your prompt. The
+        #   substitution values can either be strings, or other Response input types like
+        #   images or files.
+        #
+        #   @return [Hash{Symbol=>String, OpenAI::Models::Responses::ResponseInputText, OpenAI::Models::Responses::ResponseInputImage, OpenAI::Models::Responses::ResponseInputFile}, nil]
+        optional :variables,
+                 -> { OpenAI::Internal::Type::HashOf[union: OpenAI::Responses::ResponsePrompt::Variable] },
+                 nil?: true
+
+        # @!attribute version
+        #   Optional version of the prompt template.
+        #
+        #   @return [String, nil]
+        optional :version, String, nil?: true
+
+        # @!method initialize(id:, variables: nil, version: nil)
+        #   Some parameter documentations has been truncated, see
+        #   {OpenAI::Models::Responses::ResponsePrompt} for more details.
+        #
+        #   Reference to a prompt template and its variables.
+        #   [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
+        #
+        #   @param id [String] The unique identifier of the prompt template to use.
+        #
+        #   @param variables [Hash{Symbol=>String, OpenAI::Models::Responses::ResponseInputText, OpenAI::Models::Responses::ResponseInputImage, OpenAI::Models::Responses::ResponseInputFile}, nil] Optional map of values to substitute in for variables in your
+        #
+        #   @param version [String, nil] Optional version of the prompt template.
+
+        # A text input to the model.
+        module Variable
+          extend OpenAI::Internal::Type::Union
+
+          variant String
+
+          # A text input to the model.
+          variant -> { OpenAI::Responses::ResponseInputText }
+
+          # An image input to the model. Learn about [image inputs](https://platform.openai.com/docs/guides/vision).
+          variant -> { OpenAI::Responses::ResponseInputImage }
+
+          # A file input to the model.
+          variant -> { OpenAI::Responses::ResponseInputFile }
+
+          # @!method self.variants
+          #   @return [Array(String, OpenAI::Models::Responses::ResponseInputText, OpenAI::Models::Responses::ResponseInputImage, OpenAI::Models::Responses::ResponseInputFile)]
+        end
+      end
+    end
+  end
+end
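This new model backs the release's headline feature, reusable prompt IDs. A sketch of a `client.responses.create` argument hash referencing a saved template; it is built locally and not sent to the API, and the prompt ID and variable name are hypothetical:

```ruby
# Request referencing a saved prompt template by ID. Variable values
# may be strings or other Response input types (text, image, file).
request = {
  model: "gpt-4o",
  prompt: {
    id: "pmpt_abc123",                     # hypothetical template ID
    version: "2",                          # optional template version
    variables: {customer_name: "Jane Doe"} # substituted into the template
  }
}

puts request[:prompt][:id] # => pmpt_abc123
```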
data/lib/openai/resources/fine_tuning/checkpoints/permissions.rb CHANGED

@@ -60,7 +60,7 @@ module OpenAI
         #
         # @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}, nil]
         #
-        # @return [OpenAI::Models::FineTuning::Checkpoints::PermissionRetrieveResponse]
+        # @return [OpenAI::Internal::CursorPage<OpenAI::Models::FineTuning::Checkpoints::PermissionRetrieveResponse>]
         #
         # @see OpenAI::Models::FineTuning::Checkpoints::PermissionRetrieveParams
         def retrieve(fine_tuned_model_checkpoint, params = {})
@@ -69,6 +69,7 @@ module OpenAI
             method: :get,
             path: ["fine_tuning/checkpoints/%1$s/permissions", fine_tuned_model_checkpoint],
             query: parsed,
+            page: OpenAI::Internal::CursorPage,
             model: OpenAI::Models::FineTuning::Checkpoints::PermissionRetrieveResponse,
             options: options
           )
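With `page: OpenAI::Internal::CursorPage`, `retrieve` now returns a page object rather than the old list wrapper. As a rough mental model of cursor paging, here is a toy, self-contained simulation (fabricated data; this is not the SDK's implementation):

```ruby
# Toy cursor pagination: keep fetching pages, keyed by the last seen
# cursor, until a page reports has_more: false.
PAGES = {
  nil    => {data: %w[cp_1 cp_2], has_more: true,  last_id: "cp_2"},
  "cp_2" => {data: %w[cp_3],      has_more: false, last_id: "cp_3"}
}.freeze

def each_permission
  cursor = nil
  loop do
    page = PAGES[cursor]
    page[:data].each { |id| yield id }
    break unless page[:has_more]
    cursor = page[:last_id] # advance the cursor to the next page
  end
end

ids = []
each_permission { |id| ids << id }
puts ids.inspect # => ["cp_1", "cp_2", "cp_3"]
```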
data/lib/openai/resources/fine_tuning/jobs.rb CHANGED

@@ -16,7 +16,7 @@ module OpenAI
       # Response includes details of the enqueued job including job status and the name
       # of the fine-tuned models once complete.
       #
-      # [Learn more about fine-tuning](https://platform.openai.com/docs/guides/fine-tuning)
+      # [Learn more about fine-tuning](https://platform.openai.com/docs/guides/model-optimization)
       #
       # @overload create(model:, training_file:, hyperparameters: nil, integrations: nil, metadata: nil, method_: nil, seed: nil, suffix: nil, validation_file: nil, request_options: {})
       #
@@ -59,7 +59,7 @@ module OpenAI
       #
       # Get info about a fine-tuning job.
       #
-      # [Learn more about fine-tuning](https://platform.openai.com/docs/guides/fine-tuning)
+      # [Learn more about fine-tuning](https://platform.openai.com/docs/guides/model-optimization)
       #
       # @overload retrieve(fine_tuning_job_id, request_options: {})
       #