openai 0.17.1 → 0.18.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (55)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +30 -0
  3. data/README.md +1 -1
  4. data/lib/openai/helpers/structured_output/array_of.rb +2 -10
  5. data/lib/openai/helpers/structured_output/base_model.rb +4 -11
  6. data/lib/openai/helpers/structured_output/json_schema_converter.rb +19 -3
  7. data/lib/openai/helpers/structured_output/union_of.rb +2 -10
  8. data/lib/openai/models/batch_create_params.rb +38 -1
  9. data/lib/openai/models/beta/thread_create_and_run_params.rb +2 -2
  10. data/lib/openai/models/beta/threads/run.rb +2 -2
  11. data/lib/openai/models/beta/threads/run_create_params.rb +2 -2
  12. data/lib/openai/models/chat/chat_completion.rb +6 -6
  13. data/lib/openai/models/chat/chat_completion_chunk.rb +6 -6
  14. data/lib/openai/models/chat/completion_create_params.rb +7 -7
  15. data/lib/openai/models/file_create_params.rb +37 -1
  16. data/lib/openai/models/graders/text_similarity_grader.rb +6 -5
  17. data/lib/openai/models/reasoning.rb +1 -1
  18. data/lib/openai/models/responses/response.rb +6 -8
  19. data/lib/openai/models/responses/response_create_params.rb +6 -8
  20. data/lib/openai/models/upload_create_params.rb +37 -1
  21. data/lib/openai/resources/batches.rb +3 -1
  22. data/lib/openai/resources/files.rb +4 -2
  23. data/lib/openai/resources/responses.rb +2 -2
  24. data/lib/openai/resources/uploads.rb +3 -1
  25. data/lib/openai/version.rb +1 -1
  26. data/rbi/openai/helpers/structured_output/array_of.rbi +0 -3
  27. data/rbi/openai/helpers/structured_output/json_schema_converter.rbi +10 -0
  28. data/rbi/openai/models/batch_create_params.rbi +60 -0
  29. data/rbi/openai/models/beta/thread_create_and_run_params.rbi +3 -3
  30. data/rbi/openai/models/beta/threads/run.rbi +3 -3
  31. data/rbi/openai/models/beta/threads/run_create_params.rbi +3 -3
  32. data/rbi/openai/models/chat/chat_completion.rbi +6 -9
  33. data/rbi/openai/models/chat/chat_completion_chunk.rbi +6 -9
  34. data/rbi/openai/models/chat/completion_create_params.rbi +8 -11
  35. data/rbi/openai/models/file_create_params.rbi +56 -0
  36. data/rbi/openai/models/graders/text_similarity_grader.rbi +11 -6
  37. data/rbi/openai/models/reasoning.rbi +1 -1
  38. data/rbi/openai/models/responses/response.rbi +8 -11
  39. data/rbi/openai/models/responses/response_create_params.rbi +8 -11
  40. data/rbi/openai/models/upload_create_params.rbi +56 -0
  41. data/rbi/openai/resources/batches.rbi +5 -0
  42. data/rbi/openai/resources/beta/threads/runs.rbi +2 -2
  43. data/rbi/openai/resources/beta/threads.rbi +2 -2
  44. data/rbi/openai/resources/chat/completions.rbi +6 -8
  45. data/rbi/openai/resources/files.rbi +5 -1
  46. data/rbi/openai/resources/responses.rbi +6 -8
  47. data/rbi/openai/resources/uploads.rbi +4 -0
  48. data/sig/openai/models/batch_create_params.rbs +22 -1
  49. data/sig/openai/models/file_create_params.rbs +22 -1
  50. data/sig/openai/models/graders/text_similarity_grader.rbs +3 -1
  51. data/sig/openai/models/upload_create_params.rbs +22 -1
  52. data/sig/openai/resources/batches.rbs +1 -0
  53. data/sig/openai/resources/files.rbs +1 -0
  54. data/sig/openai/resources/uploads.rbs +1 -0
  55. metadata +2 -2
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: f9090d21fb7dcdd5e12c607bbe02aa7c1e7336e02148fec0465b21cc43bb9d45
- data.tar.gz: 61662c62cde77fefb5bec16ce51127efd60b4f71542d2bcf2c48f0225ecb29f9
+ metadata.gz: c35bbfd04ead89e0106c7e37becbe9d9cded43bfb68f678052a09fa76a8b5c1c
+ data.tar.gz: b6870bb326394a91a1e1c2244ba8b25f8792fcbf4c7a54146505883a1d7cd4ef
  SHA512:
- metadata.gz: '08c00969160cc09128778848c99704501b192ca61ed7a845f5d9ead7a59eedd7762b5c0e3e941b4b016b2baa935200d59e8d480a55b82dbd5ebeeb58d94660a4'
- data.tar.gz: 3386e4858cfe7ce20c46b10ac51f2cfa403f963bf0ef33e1eedb70f8eeb14076f539729700f442f65b6535bea30b8a9638e8c43bc79e6a7af56a8efaaf26f907
+ metadata.gz: 3e3159ea33f98d814305fb82ff5d91a58c7824938fb008ecf29f2fc7993db3ed9fb809d5db6a69f0f28a9aa17fd5d5a5412b4c9c9b29de666ab6d7fb17c4fb3a
+ data.tar.gz: f9b4351d29b599c9e539684863f46ebada1bf5a1cb56c8a7d759d627a37fa603b53efc866708ac260b0b9a111b07313055d171bf7c8f0ebc9cb56b199b86f334
data/CHANGELOG.md CHANGED
@@ -1,5 +1,35 @@
  # Changelog
 
+ ## 0.18.1 (2025-08-19)
+
+ Full Changelog: [v0.18.0...v0.18.1](https://github.com/openai/openai-ruby/compare/v0.18.0...v0.18.1)
+
+ ### Chores
+
+ * **api:** accurately represent shape for verbosity on Chat Completions ([a19cd00](https://github.com/openai/openai-ruby/commit/a19cd00e6df3cc3f47239a25fe15f33c2cb77962))
+
+ ## 0.18.0 (2025-08-15)
+
+ Full Changelog: [v0.17.1...v0.18.0](https://github.com/openai/openai-ruby/compare/v0.17.1...v0.18.0)
+
+ ### ⚠ BREAKING CHANGES
+
+ * structured output desc should go on array items not array itself ([#799](https://github.com/openai/openai-ruby/issues/799))
+
+ ### Features
+
+ * **api:** add new text parameters, expiration options ([f318432](https://github.com/openai/openai-ruby/commit/f318432b19800ab42d5b0c5f179f0cdd02dbf596))
+
+ ### Bug Fixes
+
+ * structured output desc should go on array items not array itself ([#799](https://github.com/openai/openai-ruby/issues/799)) ([ff507d0](https://github.com/openai/openai-ruby/commit/ff507d095ff703ba3b44ab82b06eb4314688d4eb))
+
+ ### Chores
+
+ * **internal:** update test skipping reason ([c815703](https://github.com/openai/openai-ruby/commit/c815703062ce79d2cb14f252ee5d23cf4ebf15ca))
+
  ## 0.17.1 (2025-08-09)
 
  Full Changelog: [v0.17.0...v0.17.1](https://github.com/openai/openai-ruby/compare/v0.17.0...v0.17.1)
data/README.md CHANGED
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
  <!-- x-release-please-start-version -->
 
  ```ruby
- gem "openai", "~> 0.17.1"
+ gem "openai", "~> 0.18.1"
  ```
 
  <!-- x-release-please-end -->
data/lib/openai/helpers/structured_output/array_of.rb CHANGED
@@ -30,19 +30,11 @@ module OpenAI
  state: state
  )
  items = OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.to_nilable(items) if nilable?
+ OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.assoc_meta!(items, meta: @meta)
 
- schema = {type: "array", items: items}
- description.nil? ? schema : schema.update(description: description)
+ {type: "array", items: items}
  end
  end
-
- # @return [String, nil]
- attr_reader :description
-
- def initialize(type_info, spec = {})
- super
- @description = [type_info, spec].grep(Hash).filter_map { _1[:doc] }.first
- end
  end
  end
  end
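The breaking change above moves a `doc:` description from the array schema itself onto the array's items. A minimal before/after sketch of the emitted JSON schema shape (illustrative hashes, not the gem's actual output):

```ruby
# Before 0.18.0: the description landed on the array schema itself.
before = {type: "array", items: {type: "string"}, description: "user tags"}

# From 0.18.0: metadata is attached to the items schema instead.
after = {type: "array", items: {type: "string", description: "user tags"}}
```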
data/lib/openai/helpers/structured_output/base_model.rb CHANGED
@@ -28,15 +28,13 @@ module OpenAI
  OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.cache_def!(state, type: self) do
  path = state.fetch(:path)
  properties = fields.to_h do |name, field|
- type, nilable = field.fetch_values(:type, :nilable)
+ type, nilable, meta = field.fetch_values(:type, :nilable, :meta)
  new_state = {**state, path: [*path, ".#{name}"]}
 
  schema =
  case type
- in {"$ref": String}
- type
  in OpenAI::Helpers::StructuredOutput::JsonSchemaConverter
- type.to_json_schema_inner(state: new_state).update(field.slice(:description))
+ type.to_json_schema_inner(state: new_state)
  else
  OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.to_json_schema_inner(
  type,
@@ -44,6 +42,8 @@ module OpenAI
  )
  end
  schema = OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.to_nilable(schema) if nilable
+ OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.assoc_meta!(schema, meta: meta)
+
  [name, schema]
  end
 
@@ -58,13 +58,6 @@ module OpenAI
  end
 
  class << self
- def required(name_sym, type_info, spec = {})
- super
-
- doc = [type_info, spec].grep(Hash).filter_map { _1[:doc] }.first
- known_fields.fetch(name_sym).update(description: doc) unless doc.nil?
- end
-
  def optional(...)
  # rubocop:disable Layout/LineLength
  message = "`optional` is not supported for structured output APIs, use `#required` with `nil?: true` instead"
data/lib/openai/helpers/structured_output/json_schema_converter.rb CHANGED
@@ -46,7 +46,7 @@ module OpenAI
  in {"$ref": String}
  {
  anyOf: [
- schema.update(OpenAI::Helpers::StructuredOutput::JsonSchemaConverter::NO_REF => true),
+ schema.merge!(OpenAI::Helpers::StructuredOutput::JsonSchemaConverter::NO_REF => true),
  {type: null}
  ]
  }
@@ -60,6 +60,17 @@ module OpenAI
  end
  end
 
+ # @api private
+ #
+ # @param schema [Hash{Symbol=>Object}]
+ def assoc_meta!(schema, meta:)
+ xformed = meta.transform_keys(doc: :description)
+ if schema.key?(:$ref) && !xformed.empty?
+ schema.merge!(OpenAI::Helpers::StructuredOutput::JsonSchemaConverter::NO_REF => true)
+ end
+ schema.merge!(xformed)
+ end
+
  # @api private
  #
  # @param state [Hash{Symbol=>Object}]
@@ -116,12 +127,17 @@
 
  case refs
  in [ref]
- ref.replace(sch)
+ ref.replace(ref.except(:$ref).merge(sch))
  in [_, ref, *]
  reused_defs.store(ref.fetch(:$ref), sch)
+ refs.each do
+ unless (meta = _1.except(:$ref)).empty?
+ _1.replace(allOf: [_1.slice(:$ref), meta])
+ end
+ end
  else
  end
- no_refs.each { _1.replace(sch) }
+ no_refs.each { _1.replace(_1.except(:$ref).merge(sch)) }
  end
 
  xformed = reused_defs.transform_keys { _1.delete_prefix("#/$defs/") }
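The new `assoc_meta!` helper merges field metadata into a schema, renaming `doc:` to `description:` and flagging `$ref` schemas so they get inlined later (a bare `$ref` cannot carry sibling keys). A standalone sketch of that logic, with `NO_REF` as a stand-in for the converter's internal sentinel constant:

```ruby
NO_REF = :__no_ref__ # stand-in for JsonSchemaConverter::NO_REF

# Merge metadata into a schema in place, turning :doc into :description.
# If the schema is a $ref and carries metadata, mark it for inlining.
def assoc_meta!(schema, meta:)
  xformed = meta.transform_keys(doc: :description) # Hash#transform_keys with a mapping needs Ruby 3.0+
  if schema.key?(:"$ref") && !xformed.empty?
    schema.merge!(NO_REF => true)
  end
  schema.merge!(xformed)
end

schema = {"$ref": "#/$defs/Tag"}
assoc_meta!(schema, meta: {doc: "a user-visible tag"})
```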
data/lib/openai/helpers/structured_output/union_of.rb CHANGED
@@ -56,16 +56,8 @@ module OpenAI
 
  # @param variants [Array<generic<Member>>]
  def initialize(*variants)
- case variants
- in [Symbol => d, Hash => vs]
- discriminator(d)
- vs.each do |k, v|
- v.is_a?(Proc) ? variant(k, v) : variant(k, -> { v })
- end
- else
- variants.each do |v|
- v.is_a?(Proc) ? variant(v) : variant(-> { v })
- end
+ variants.each do |v|
+ v.is_a?(Proc) ? variant(v) : variant(-> { v })
  end
  end
  end
data/lib/openai/models/batch_create_params.rb CHANGED
@@ -48,7 +48,14 @@ module OpenAI
  # @return [Hash{Symbol=>String}, nil]
  optional :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true
 
- # @!method initialize(completion_window:, endpoint:, input_file_id:, metadata: nil, request_options: {})
+ # @!attribute output_expires_after
+ # The expiration policy for the output and/or error file that are generated for a
+ # batch.
+ #
+ # @return [OpenAI::Models::BatchCreateParams::OutputExpiresAfter, nil]
+ optional :output_expires_after, -> { OpenAI::BatchCreateParams::OutputExpiresAfter }
+
+ # @!method initialize(completion_window:, endpoint:, input_file_id:, metadata: nil, output_expires_after: nil, request_options: {})
  # Some parameter documentations has been truncated, see
  # {OpenAI::Models::BatchCreateParams} for more details.
  #
@@ -60,6 +67,8 @@ module OpenAI
  #
  # @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. This can be
  #
+ # @param output_expires_after [OpenAI::Models::BatchCreateParams::OutputExpiresAfter] The expiration policy for the output and/or error file that are generated for a
+ #
  # @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}]
 
  # The time frame within which the batch should be processed. Currently only `24h`
@@ -88,6 +97,34 @@ module OpenAI
  # @!method self.values
  # @return [Array<Symbol>]
  end
+
+ class OutputExpiresAfter < OpenAI::Internal::Type::BaseModel
+ # @!attribute anchor
+ # Anchor timestamp after which the expiration policy applies. Supported anchors:
+ # `created_at`. Note that the anchor is the file creation time, not the time the
+ # batch is created.
+ #
+ # @return [Symbol, :created_at]
+ required :anchor, const: :created_at
+
+ # @!attribute seconds
+ # The number of seconds after the anchor time that the file will expire. Must be
+ # between 3600 (1 hour) and 2592000 (30 days).
+ #
+ # @return [Integer]
+ required :seconds, Integer
+
+ # @!method initialize(seconds:, anchor: :created_at)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::BatchCreateParams::OutputExpiresAfter} for more details.
+ #
+ # The expiration policy for the output and/or error file that are generated for a
+ # batch.
+ #
+ # @param seconds [Integer] The number of seconds after the anchor time that the file will expire. Must be b
+ #
+ # @param anchor [Symbol, :created_at] Anchor timestamp after which the expiration policy applies. Supported anchors: `
+ end
  end
  end
  end
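The new `output_expires_after` parameter could be passed to a batch-create call as a plain params hash like the one below (illustrative only: the file ID is hypothetical and no request is made here):

```ruby
# Expire the batch's output/error files 7 days after they are created.
# Note the anchor refers to the output file's creation time, per the docs above.
batch_params = {
  input_file_id: "file-abc123", # hypothetical file ID
  endpoint: "/v1/chat/completions",
  completion_window: "24h",
  output_expires_after: {anchor: :created_at, seconds: 7 * 24 * 3600}
}
```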
data/lib/openai/models/beta/thread_create_and_run_params.rb CHANGED
@@ -157,7 +157,7 @@ module OpenAI
 
  # @!attribute truncation_strategy
  # Controls for how a thread will be truncated prior to the run. Use this to
- # control the intial context window of the run.
+ # control the initial context window of the run.
  #
  # @return [OpenAI::Models::Beta::ThreadCreateAndRunParams::TruncationStrategy, nil]
  optional :truncation_strategy,
@@ -694,7 +694,7 @@ module OpenAI
  # details.
  #
  # Controls for how a thread will be truncated prior to the run. Use this to
- # control the intial context window of the run.
+ # control the initial context window of the run.
  #
  # @param type [Symbol, OpenAI::Models::Beta::ThreadCreateAndRunParams::TruncationStrategy::Type] The truncation strategy to use for the thread. The default is `auto`. If set to
  #
data/lib/openai/models/beta/threads/run.rb CHANGED
@@ -195,7 +195,7 @@ module OpenAI
 
  # @!attribute truncation_strategy
  # Controls for how a thread will be truncated prior to the run. Use this to
- # control the intial context window of the run.
+ # control the initial context window of the run.
  #
  # @return [OpenAI::Models::Beta::Threads::Run::TruncationStrategy, nil]
  required :truncation_strategy, -> { OpenAI::Beta::Threads::Run::TruncationStrategy }, nil?: true
@@ -415,7 +415,7 @@ module OpenAI
  # {OpenAI::Models::Beta::Threads::Run::TruncationStrategy} for more details.
  #
  # Controls for how a thread will be truncated prior to the run. Use this to
- # control the intial context window of the run.
+ # control the initial context window of the run.
  #
  # @param type [Symbol, OpenAI::Models::Beta::Threads::Run::TruncationStrategy::Type] The truncation strategy to use for the thread. The default is `auto`. If set to
  #
data/lib/openai/models/beta/threads/run_create_params.rb CHANGED
@@ -184,7 +184,7 @@ module OpenAI
 
  # @!attribute truncation_strategy
  # Controls for how a thread will be truncated prior to the run. Use this to
- # control the intial context window of the run.
+ # control the initial context window of the run.
  #
  # @return [OpenAI::Models::Beta::Threads::RunCreateParams::TruncationStrategy, nil]
  optional :truncation_strategy,
@@ -413,7 +413,7 @@ module OpenAI
  # details.
  #
  # Controls for how a thread will be truncated prior to the run. Use this to
- # control the intial context window of the run.
+ # control the initial context window of the run.
  #
  # @param type [Symbol, OpenAI::Models::Beta::Threads::RunCreateParams::TruncationStrategy::Type] The truncation strategy to use for the thread. The default is `auto`. If set to
  #
data/lib/openai/models/chat/chat_completion.rb CHANGED
@@ -47,9 +47,8 @@ module OpenAI
  # - If set to 'default', then the request will be processed with the standard
  # pricing and performance for the selected model.
  # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
- # 'priority', then the request will be processed with the corresponding service
- # tier. [Contact sales](https://openai.com/contact-sales) to learn more about
- # Priority processing.
+ # '[priority](https://openai.com/api-priority-processing/)', then the request
+ # will be processed with the corresponding service tier.
  # - When not set, the default behavior is 'auto'.
  #
  # When the `service_tier` parameter is set, the response body will include the
@@ -61,6 +60,8 @@ module OpenAI
  optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletion::ServiceTier }, nil?: true
 
  # @!attribute system_fingerprint
+ # @deprecated
+ #
  # This fingerprint represents the backend configuration that the model runs with.
  #
  # Can be used in conjunction with the `seed` request parameter to understand when
@@ -196,9 +197,8 @@ module OpenAI
  # - If set to 'default', then the request will be processed with the standard
  # pricing and performance for the selected model.
  # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
- # 'priority', then the request will be processed with the corresponding service
- # tier. [Contact sales](https://openai.com/contact-sales) to learn more about
- # Priority processing.
+ # '[priority](https://openai.com/api-priority-processing/)', then the request
+ # will be processed with the corresponding service tier.
  # - When not set, the default behavior is 'auto'.
  #
  # When the `service_tier` parameter is set, the response body will include the
data/lib/openai/models/chat/chat_completion_chunk.rb CHANGED
@@ -46,9 +46,8 @@ module OpenAI
  # - If set to 'default', then the request will be processed with the standard
  # pricing and performance for the selected model.
  # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
- # 'priority', then the request will be processed with the corresponding service
- # tier. [Contact sales](https://openai.com/contact-sales) to learn more about
- # Priority processing.
+ # '[priority](https://openai.com/api-priority-processing/)', then the request
+ # will be processed with the corresponding service tier.
  # - When not set, the default behavior is 'auto'.
  #
  # When the `service_tier` parameter is set, the response body will include the
@@ -60,6 +59,8 @@ module OpenAI
  optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletionChunk::ServiceTier }, nil?: true
 
  # @!attribute system_fingerprint
+ # @deprecated
+ #
  # This fingerprint represents the backend configuration that the model runs with.
  # Can be used in conjunction with the `seed` request parameter to understand when
  # backend changes have been made that might impact determinism.
@@ -379,9 +380,8 @@ module OpenAI
  # - If set to 'default', then the request will be processed with the standard
  # pricing and performance for the selected model.
  # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
- # 'priority', then the request will be processed with the corresponding service
- # tier. [Contact sales](https://openai.com/contact-sales) to learn more about
- # Priority processing.
+ # '[priority](https://openai.com/api-priority-processing/)', then the request
+ # will be processed with the corresponding service tier.
  # - When not set, the default behavior is 'auto'.
  #
  # When the `service_tier` parameter is set, the response body will include the
data/lib/openai/models/chat/completion_create_params.rb CHANGED
@@ -226,6 +226,8 @@ module OpenAI
  optional :safety_identifier, String
 
  # @!attribute seed
+ # @deprecated
+ #
  # This feature is in Beta. If specified, our system will make a best effort to
  # sample deterministically, such that repeated requests with the same `seed` and
  # parameters should return the same result. Determinism is not guaranteed, and you
@@ -244,9 +246,8 @@ module OpenAI
  # - If set to 'default', then the request will be processed with the standard
  # pricing and performance for the selected model.
  # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
- # 'priority', then the request will be processed with the corresponding service
- # tier. [Contact sales](https://openai.com/contact-sales) to learn more about
- # Priority processing.
+ # '[priority](https://openai.com/api-priority-processing/)', then the request
+ # will be processed with the corresponding service tier.
  # - When not set, the default behavior is 'auto'.
  #
  # When the `service_tier` parameter is set, the response body will include the
@@ -271,7 +272,7 @@ module OpenAI
  # our [model distillation](https://platform.openai.com/docs/guides/distillation)
  # or [evals](https://platform.openai.com/docs/guides/evals) products.
  #
- # Supports text and image inputs. Note: image inputs over 10MB will be dropped.
+ # Supports text and image inputs. Note: image inputs over 8MB will be dropped.
  #
  # @return [Boolean, nil]
  optional :store, OpenAI::Internal::Type::Boolean, nil?: true
@@ -591,9 +592,8 @@ module OpenAI
  # - If set to 'default', then the request will be processed with the standard
  # pricing and performance for the selected model.
  # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
- # 'priority', then the request will be processed with the corresponding service
- # tier. [Contact sales](https://openai.com/contact-sales) to learn more about
- # Priority processing.
+ # '[priority](https://openai.com/api-priority-processing/)', then the request
+ # will be processed with the corresponding service tier.
  # - When not set, the default behavior is 'auto'.
  #
  # When the `service_tier` parameter is set, the response body will include the
data/lib/openai/models/file_create_params.rb CHANGED
@@ -22,7 +22,14 @@ module OpenAI
  # @return [Symbol, OpenAI::Models::FilePurpose]
  required :purpose, enum: -> { OpenAI::FilePurpose }
 
- # @!method initialize(file:, purpose:, request_options: {})
+ # @!attribute expires_after
+ # The expiration policy for a file. By default, files with `purpose=batch` expire
+ # after 30 days and all other files are persisted until they are manually deleted.
+ #
+ # @return [OpenAI::Models::FileCreateParams::ExpiresAfter, nil]
+ optional :expires_after, -> { OpenAI::FileCreateParams::ExpiresAfter }
+
+ # @!method initialize(file:, purpose:, expires_after: nil, request_options: {})
  # Some parameter documentations has been truncated, see
  # {OpenAI::Models::FileCreateParams} for more details.
  #
@@ -30,7 +37,36 @@ module OpenAI
  #
  # @param purpose [Symbol, OpenAI::Models::FilePurpose] The intended purpose of the uploaded file. One of: - `assistants`: Used in the A
  #
+ # @param expires_after [OpenAI::Models::FileCreateParams::ExpiresAfter] The expiration policy for a file. By default, files with `purpose=batch` expire
+ #
  # @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}]
+
+ class ExpiresAfter < OpenAI::Internal::Type::BaseModel
+ # @!attribute anchor
+ # Anchor timestamp after which the expiration policy applies. Supported anchors:
+ # `created_at`.
+ #
+ # @return [Symbol, :created_at]
+ required :anchor, const: :created_at
+
+ # @!attribute seconds
+ # The number of seconds after the anchor time that the file will expire. Must be
+ # between 3600 (1 hour) and 2592000 (30 days).
+ #
+ # @return [Integer]
+ required :seconds, Integer
+
+ # @!method initialize(seconds:, anchor: :created_at)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::FileCreateParams::ExpiresAfter} for more details.
+ #
+ # The expiration policy for a file. By default, files with `purpose=batch` expire
+ # after 30 days and all other files are persisted until they are manually deleted.
+ #
+ # @param seconds [Integer] The number of seconds after the anchor time that the file will expire. Must be b
+ #
+ # @param anchor [Symbol, :created_at] Anchor timestamp after which the expiration policy applies. Supported anchors: `
+ end
  end
  end
  end
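The docs above bound `seconds` to 3600–2592000 with `created_at` as the only supported anchor; a small client-side pre-check mirroring those constraints might look like this (hypothetical helper, not part of the gem):

```ruby
# Documented bounds for expires_after.seconds: 1 hour .. 30 days.
EXPIRES_AFTER_SECONDS = 3_600..2_592_000

# Returns true when the policy would pass the documented API constraints.
def valid_expires_after?(anchor:, seconds:)
  anchor == :created_at && EXPIRES_AFTER_SECONDS.cover?(seconds)
end
```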
data/lib/openai/models/graders/text_similarity_grader.rb CHANGED
@@ -5,8 +5,8 @@ module OpenAI
  module Graders
  class TextSimilarityGrader < OpenAI::Internal::Type::BaseModel
  # @!attribute evaluation_metric
- # The evaluation metric to use. One of `fuzzy_match`, `bleu`, `gleu`, `meteor`,
- # `rouge_1`, `rouge_2`, `rouge_3`, `rouge_4`, `rouge_5`, or `rouge_l`.
+ # The evaluation metric to use. One of `cosine`, `fuzzy_match`, `bleu`, `gleu`,
+ # `meteor`, `rouge_1`, `rouge_2`, `rouge_3`, `rouge_4`, `rouge_5`, or `rouge_l`.
  #
  # @return [Symbol, OpenAI::Models::Graders::TextSimilarityGrader::EvaluationMetric]
  required :evaluation_metric, enum: -> { OpenAI::Graders::TextSimilarityGrader::EvaluationMetric }
@@ -41,7 +41,7 @@ module OpenAI
  #
  # A TextSimilarityGrader object which grades text based on similarity metrics.
  #
- # @param evaluation_metric [Symbol, OpenAI::Models::Graders::TextSimilarityGrader::EvaluationMetric] The evaluation metric to use. One of `fuzzy_match`, `bleu`, `gleu`, `meteor`, `r
+ # @param evaluation_metric [Symbol, OpenAI::Models::Graders::TextSimilarityGrader::EvaluationMetric] The evaluation metric to use. One of `cosine`, `fuzzy_match`, `bleu`,
  #
  # @param input [String] The text being graded.
  #
@@ -51,13 +51,14 @@ module OpenAI
  #
  # @param type [Symbol, :text_similarity] The type of grader.
 
- # The evaluation metric to use. One of `fuzzy_match`, `bleu`, `gleu`, `meteor`,
- # `rouge_1`, `rouge_2`, `rouge_3`, `rouge_4`, `rouge_5`, or `rouge_l`.
+ # The evaluation metric to use. One of `cosine`, `fuzzy_match`, `bleu`, `gleu`,
+ # `meteor`, `rouge_1`, `rouge_2`, `rouge_3`, `rouge_4`, `rouge_5`, or `rouge_l`.
  #
  # @see OpenAI::Models::Graders::TextSimilarityGrader#evaluation_metric
  module EvaluationMetric
  extend OpenAI::Internal::Type::Enum
 
+ COSINE = :cosine
  FUZZY_MATCH = :fuzzy_match
  BLEU = :bleu
  GLEU = :gleu
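0.18.0 adds `cosine` to the grader's metric enum. The full value list, per the doc comment above, collected into a plain array for reference (the constant name here is illustrative, not the gem's):

```ruby
# Values of TextSimilarityGrader::EvaluationMetric as of 0.18.0,
# taken from the documented metric list in the diff above.
EVALUATION_METRICS = %i[
  cosine fuzzy_match bleu gleu meteor
  rouge_1 rouge_2 rouge_3 rouge_4 rouge_5 rouge_l
].freeze
```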
data/lib/openai/models/reasoning.rb CHANGED
@@ -37,7 +37,7 @@ module OpenAI
  # Some parameter documentations has been truncated, see
  # {OpenAI::Models::Reasoning} for more details.
  #
- # **o-series models only**
+ # **gpt-5 and o-series models only**
  #
  # Configuration options for
  # [reasoning models](https://platform.openai.com/docs/guides/reasoning).
data/lib/openai/models/responses/response.rb CHANGED
@@ -182,7 +182,7 @@ module OpenAI
  optional :prompt_cache_key, String
 
  # @!attribute reasoning
- # **o-series models only**
+ # **gpt-5 and o-series models only**
  #
  # Configuration options for
  # [reasoning models](https://platform.openai.com/docs/guides/reasoning).
@@ -209,9 +209,8 @@ module OpenAI
  # - If set to 'default', then the request will be processed with the standard
  # pricing and performance for the selected model.
  # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
- # 'priority', then the request will be processed with the corresponding service
- # tier. [Contact sales](https://openai.com/contact-sales) to learn more about
- # Priority processing.
+ # '[priority](https://openai.com/api-priority-processing/)', then the request
+ # will be processed with the corresponding service tier.
  # - When not set, the default behavior is 'auto'.
  #
  # When the `service_tier` parameter is set, the response body will include the
@@ -339,7 +338,7 @@ module OpenAI
  #
  # @param prompt_cache_key [String] Used by OpenAI to cache responses for similar requests to optimize your cache hi
  #
- # @param reasoning [OpenAI::Models::Reasoning, nil] **o-series models only**
+ # @param reasoning [OpenAI::Models::Reasoning, nil] **gpt-5 and o-series models only**
  #
  # @param safety_identifier [String] A stable identifier used to help detect users of your application that may be vi
  #
@@ -458,9 +457,8 @@ module OpenAI
  # - If set to 'default', then the request will be processed with the standard
  # pricing and performance for the selected model.
  # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
- # 'priority', then the request will be processed with the corresponding service
- # tier. [Contact sales](https://openai.com/contact-sales) to learn more about
- # Priority processing.
+ # '[priority](https://openai.com/api-priority-processing/)', then the request
+ # will be processed with the corresponding service tier.
  # - When not set, the default behavior is 'auto'.
  #
  # When the `service_tier` parameter is set, the response body will include the
data/lib/openai/models/responses/response_create_params.rb CHANGED
@@ -132,7 +132,7 @@ module OpenAI
  optional :prompt_cache_key, String
 
  # @!attribute reasoning
- # **o-series models only**
+ # **gpt-5 and o-series models only**
  #
  # Configuration options for
  # [reasoning models](https://platform.openai.com/docs/guides/reasoning).
@@ -159,9 +159,8 @@ module OpenAI
  # - If set to 'default', then the request will be processed with the standard
  # pricing and performance for the selected model.
  # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
- # 'priority', then the request will be processed with the corresponding service
- # tier. [Contact sales](https://openai.com/contact-sales) to learn more about
- # Priority processing.
+ # '[priority](https://openai.com/api-priority-processing/)', then the request
+ # will be processed with the corresponding service tier.
  # - When not set, the default behavior is 'auto'.
  #
  # When the `service_tier` parameter is set, the response body will include the
@@ -307,7 +306,7 @@ module OpenAI
  #
  # @param prompt_cache_key [String] Used by OpenAI to cache responses for similar requests to optimize your cache hi
  #
- # @param reasoning [OpenAI::Models::Reasoning, nil] **o-series models only**
+ # @param reasoning [OpenAI::Models::Reasoning, nil] **gpt-5 and o-series models only**
  #
  # @param safety_identifier [String] A stable identifier used to help detect users of your application that may be vi
  #
@@ -367,9 +366,8 @@ module OpenAI
  # - If set to 'default', then the request will be processed with the standard
  # pricing and performance for the selected model.
  # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
- # 'priority', then the request will be processed with the corresponding service
- # tier. [Contact sales](https://openai.com/contact-sales) to learn more about
- # Priority processing.
+ # '[priority](https://openai.com/api-priority-processing/)', then the request
+ # will be processed with the corresponding service tier.
  # - When not set, the default behavior is 'auto'.
  #
  # When the `service_tier` parameter is set, the response body will include the