openai 0.6.0 → 0.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 21263ba1eaa5418778087ba8a46167dd382ce7acba095a667ac797995bcb69bb
- data.tar.gz: b65b9f8e9c44b24e52c971edd800d35793a16f7dc9fccdd4214bc95ac0950548
+ metadata.gz: 0d38e57fa6db265eb6e87675116fb1dd895c72f307722b6042636f53e71095b0
+ data.tar.gz: 7c71ce88ab29e5f23be8d54e6c3f8ac75a55d96d2cbed45fde122cbaf61f73f5
  SHA512:
- metadata.gz: 8b780c50641a4b766ecd7888c530ad1d6551d5df36012afb9cf2887529ddd07d7b9823c7475be796391587801fc999f1e2f3874aad74bf4eef542015fcd5893e
- data.tar.gz: 776e6392df55b594509180dbc0f966d5dc6618ab215fd2b25b84831fdf7b1afa9dde10182c59f798049ffb385664dad2320cb54a3cece002735715a17ae08fb2
+ metadata.gz: 87a50675909aa588fd8f882149d204ee66a2e8b5477e4d12c110899c9472b5d33611a19dc9741a16dbc67f028e591673fe21cc37fb30de4e8836c7b0622ae8d2
+ data.tar.gz: 04bf845a79c77cd297c7e044892b28a43ee4a8b70957f0bf27cf160050f6cfc7dd6b469b8e2347cecde5a4fb709d6fdfbee0c8455afb82b55e2f57683b1f2310
data/CHANGELOG.md CHANGED
@@ -1,5 +1,20 @@
  # Changelog

+ ## 0.7.0 (2025-06-09)
+
+ Full Changelog: [v0.6.0...v0.7.0](https://github.com/openai/openai-ruby/compare/v0.6.0...v0.7.0)
+
+ ### Features
+
+ * **api:** Add tools and structured outputs to evals ([6ee3392](https://github.com/openai/openai-ruby/commit/6ee33924e9146e2450e9c43d052886ed3214cbde))
+
+
+ ### Bug Fixes
+
+ * default content-type for text in multi-part formdata uploads should be text/plain ([105cf47](https://github.com/openai/openai-ruby/commit/105cf4717993c744ee6c453d2a99ae03f51035d4))
+ * tool parameter mapping for chat completions ([#156](https://github.com/openai/openai-ruby/issues/156)) ([5999b9f](https://github.com/openai/openai-ruby/commit/5999b9f6ad6dc73a290a8ef7b1b52bd89897039c))
+ * tool parameter mapping for responses ([#704](https://github.com/openai/openai-ruby/issues/704)) ([ac8bf11](https://github.com/openai/openai-ruby/commit/ac8bf11cf59fcc778f1658429a1fc06eaca79bba))
+
  ## 0.6.0 (2025-06-03)

  Full Changelog: [v0.5.1...v0.6.0](https://github.com/openai/openai-ruby/compare/v0.5.1...v0.6.0)
data/README.md CHANGED
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
  <!-- x-release-please-start-version -->

  ```ruby
- gem "openai", "~> 0.6.0"
+ gem "openai", "~> 0.7.0"
  ```

  <!-- x-release-please-end -->
@@ -497,7 +497,7 @@ module OpenAI
  # @param closing [Array<Proc>]
  # @param content_type [String, nil]
  private def write_multipart_content(y, val:, closing:, content_type: nil)
- content_type ||= "application/octet-stream"
+ content_line = "Content-Type: %s\r\n\r\n"

  case val
  in OpenAI::FilePart
@@ -508,24 +508,21 @@ module OpenAI
  content_type: val.content_type
  )
  in Pathname
- y << "Content-Type: #{content_type}\r\n\r\n"
+ y << format(content_line, content_type || "application/octet-stream")
  io = val.open(binmode: true)
  closing << io.method(:close)
  IO.copy_stream(io, y)
  in IO
- y << "Content-Type: #{content_type}\r\n\r\n"
+ y << format(content_line, content_type || "application/octet-stream")
  IO.copy_stream(val, y)
  in StringIO
- y << "Content-Type: #{content_type}\r\n\r\n"
+ y << format(content_line, content_type || "application/octet-stream")
  y << val.string
- in String
- y << "Content-Type: #{content_type}\r\n\r\n"
- y << val.to_s
  in -> { primitive?(_1) }
- y << "Content-Type: text/plain\r\n\r\n"
+ y << format(content_line, content_type || "text/plain")
  y << val.to_s
  else
- y << "Content-Type: application/json\r\n\r\n"
+ y << format(content_line, content_type || "application/json")
  y << JSON.generate(val)
  end
  y << "\r\n"
@@ -563,6 +560,8 @@ module OpenAI

  # @api private
  #
+ # https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.1.md#special-considerations-for-multipart-content
+ #
  # @param body [Object]
  #
  # @return [Array(String, Enumerable<String>)]
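The hunks above replace the single `application/octet-stream` default with per-branch fallbacks, so string and other primitive parts now default to `text/plain` (per the changelog fix) while a caller-supplied `content_type` still takes precedence. A minimal standalone sketch of that fallback logic — `default_content_type` is a hypothetical helper for illustration only; the gem keeps this logic inside its private `write_multipart_content`:

```ruby
# Illustration only: mirrors the per-branch content-type fallback above.
require "stringio"
require "pathname"

def default_content_type(val, content_type: nil)
  case val
  in Pathname | IO | StringIO
    content_type || "application/octet-stream"   # binary-ish parts keep the old default
  in ->(v) { v.is_a?(String) || v.is_a?(Numeric) || v == true || v == false || v.nil? }
    content_type || "text/plain"                  # 0.7.0: text/primitive parts are text/plain
  else
    content_type || "application/json"            # everything else is serialized as JSON
  end
end

default_content_type("hello")                         # => "text/plain"
default_content_type(StringIO.new("bytes"))           # => "application/octet-stream"
default_content_type({a: 1})                          # => "application/json"
default_content_type("a,b", content_type: "text/csv") # => "text/csv"
```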
@@ -432,6 +432,24 @@ module OpenAI
  # @return [Integer, nil]
  optional :max_completion_tokens, Integer

+ # @!attribute response_format
+ # An object specifying the format that the model must output.
+ #
+ # Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
+ # Outputs which ensures the model will match your supplied JSON schema. Learn more
+ # in the
+ # [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
+ #
+ # Setting to `{ "type": "json_object" }` enables the older JSON mode, which
+ # ensures the message the model generates is valid JSON. Using `json_schema` is
+ # preferred for models that support it.
+ #
+ # @return [OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONSchema, OpenAI::Models::ResponseFormatJSONObject, nil]
+ optional :response_format,
+ union: -> {
+ OpenAI::Evals::CreateEvalCompletionsRunDataSource::SamplingParams::ResponseFormat
+ }
+
  # @!attribute seed
  # A seed value to initialize the randomness, during sampling.
  #
@@ -444,20 +462,68 @@ module OpenAI
  # @return [Float, nil]
  optional :temperature, Float

+ # @!attribute tools
+ # A list of tools the model may call. Currently, only functions are supported as a
+ # tool. Use this to provide a list of functions the model may generate JSON inputs
+ # for. A max of 128 functions are supported.
+ #
+ # @return [Array<OpenAI::Models::Chat::ChatCompletionTool>, nil]
+ optional :tools, -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Chat::ChatCompletionTool] }
+
  # @!attribute top_p
  # An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
  #
  # @return [Float, nil]
  optional :top_p, Float

- # @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, top_p: nil)
+ # @!method initialize(max_completion_tokens: nil, response_format: nil, seed: nil, temperature: nil, tools: nil, top_p: nil)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams} for
+ # more details.
+ #
  # @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
  #
+ # @param response_format [OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONSchema, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
+ #
  # @param seed [Integer] A seed value to initialize the randomness, during sampling.
  #
  # @param temperature [Float] A higher temperature increases randomness in the outputs.
  #
+ # @param tools [Array<OpenAI::Models::Chat::ChatCompletionTool>] A list of tools the model may call. Currently, only functions are supported as a
+ #
  # @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
+
+ # An object specifying the format that the model must output.
+ #
+ # Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
+ # Outputs which ensures the model will match your supplied JSON schema. Learn more
+ # in the
+ # [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
+ #
+ # Setting to `{ "type": "json_object" }` enables the older JSON mode, which
+ # ensures the message the model generates is valid JSON. Using `json_schema` is
+ # preferred for models that support it.
+ #
+ # @see OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource::SamplingParams#response_format
+ module ResponseFormat
+ extend OpenAI::Internal::Type::Union
+
+ # Default response format. Used to generate text responses.
+ variant -> { OpenAI::ResponseFormatText }
+
+ # JSON Schema response format. Used to generate structured JSON responses.
+ # Learn more about [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs).
+ variant -> { OpenAI::ResponseFormatJSONSchema }
+
+ # JSON object response format. An older method of generating JSON responses.
+ # Using `json_schema` is recommended for models that support it. Note that the
+ # model will not generate JSON without a system or user message instructing it
+ # to do so.
+ variant -> { OpenAI::ResponseFormatJSONObject }
+
+ # @!method self.variants
+ # @return [Array(OpenAI::Models::ResponseFormatText, OpenAI::Models::ResponseFormatJSONSchema, OpenAI::Models::ResponseFormatJSONObject)]
+ end
  end
  end
  end
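A hedged usage sketch of the new `response_format` and `tools` sampling parameters on a completions-based eval run. It assumes the gem's usual `client.evals.runs.create(eval_id, ...)` call shape and plain-hash request bodies; the eval id, file id, model, and JSON schema are placeholders, and other data-source fields your eval may require are omitted.

```ruby
# Hedged sketch only: field names follow the SamplingParams model above,
# but the ids, source payload, and schema are illustrative placeholders.
require "openai"

client = OpenAI::Client.new # reads OPENAI_API_KEY from the environment

run = client.evals.runs.create(
  "eval_123", # hypothetical eval id
  name: "structured-output-run",
  data_source: {
    type: "completions",
    source: {type: "file_id", id: "file_123"}, # hypothetical file-backed source
    model: "gpt-4o-mini",
    sampling_params: {
      max_completion_tokens: 256,
      # New in 0.7.0: structured outputs for eval sampling.
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "verdict",
          schema: {
            type: "object",
            properties: {label: {type: "string"}},
            required: ["label"]
          }
        }
      },
      # New in 0.7.0: function tools for eval sampling.
      tools: [
        {
          type: "function",
          function: {
            name: "lookup",
            parameters: {type: "object", properties: {q: {type: "string"}}}
          }
        }
      ]
    }
  }
)

puts run.id
```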
@@ -616,20 +616,96 @@ module OpenAI
  # @return [Float, nil]
  optional :temperature, Float

+ # @!attribute text
+ # Configuration options for a text response from the model. Can be plain text or
+ # structured JSON data. Learn more:
+ #
+ # - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
+ # - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
+ #
+ # @return [OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text, nil]
+ optional :text,
+ -> { OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text }
+
+ # @!attribute tools
+ # An array of tools the model may call while generating a response. You can
+ # specify which tool to use by setting the `tool_choice` parameter.
+ #
+ # The two categories of tools you can provide the model are:
+ #
+ # - **Built-in tools**: Tools that are provided by OpenAI that extend the model's
+ # capabilities, like
+ # [web search](https://platform.openai.com/docs/guides/tools-web-search) or
+ # [file search](https://platform.openai.com/docs/guides/tools-file-search).
+ # Learn more about
+ # [built-in tools](https://platform.openai.com/docs/guides/tools).
+ # - **Function calls (custom tools)**: Functions that are defined by you, enabling
+ # the model to call your own code. Learn more about
+ # [function calling](https://platform.openai.com/docs/guides/function-calling).
+ #
+ # @return [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>, nil]
+ optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }
+
  # @!attribute top_p
  # An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
  #
  # @return [Float, nil]
  optional :top_p, Float

- # @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, top_p: nil)
+ # @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, text: nil, tools: nil, top_p: nil)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams}
+ # for more details.
+ #
  # @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
  #
  # @param seed [Integer] A seed value to initialize the randomness, during sampling.
  #
  # @param temperature [Float] A higher temperature increases randomness in the outputs.
  #
+ # @param text [OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text] Configuration options for a text response from the model. Can be plain
+ #
+ # @param tools [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
+ #
  # @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
+
+ # @see OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams#text
+ class Text < OpenAI::Internal::Type::BaseModel
+ # @!attribute format_
+ # An object specifying the format that the model must output.
+ #
+ # Configuring `{ "type": "json_schema" }` enables Structured Outputs, which
+ # ensures the model will match your supplied JSON schema. Learn more in the
+ # [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
+ #
+ # The default format is `{ "type": "text" }` with no additional options.
+ #
+ # **Not recommended for gpt-4o and newer models:**
+ #
+ # Setting to `{ "type": "json_object" }` enables the older JSON mode, which
+ # ensures the message the model generates is valid JSON. Using `json_schema` is
+ # preferred for models that support it.
+ #
+ # @return [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject, nil]
+ optional :format_,
+ union: -> {
+ OpenAI::Responses::ResponseFormatTextConfig
+ },
+ api_name: :format
+
+ # @!method initialize(format_: nil)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::Evals::RunCancelResponse::DataSource::Responses::SamplingParams::Text}
+ # for more details.
+ #
+ # Configuration options for a text response from the model. Can be plain text or
+ # structured JSON data. Learn more:
+ #
+ # - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
+ # - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
+ #
+ # @param format_ [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
+ end
  end
  end

@@ -576,20 +576,98 @@ module OpenAI
  # @return [Float, nil]
  optional :temperature, Float

+ # @!attribute text
+ # Configuration options for a text response from the model. Can be plain text or
+ # structured JSON data. Learn more:
+ #
+ # - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
+ # - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
+ #
+ # @return [OpenAI::Models::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams::Text, nil]
+ optional :text,
+ -> {
+ OpenAI::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams::Text
+ }
+
+ # @!attribute tools
+ # An array of tools the model may call while generating a response. You can
+ # specify which tool to use by setting the `tool_choice` parameter.
+ #
+ # The two categories of tools you can provide the model are:
+ #
+ # - **Built-in tools**: Tools that are provided by OpenAI that extend the model's
+ # capabilities, like
+ # [web search](https://platform.openai.com/docs/guides/tools-web-search) or
+ # [file search](https://platform.openai.com/docs/guides/tools-file-search).
+ # Learn more about
+ # [built-in tools](https://platform.openai.com/docs/guides/tools).
+ # - **Function calls (custom tools)**: Functions that are defined by you, enabling
+ # the model to call your own code. Learn more about
+ # [function calling](https://platform.openai.com/docs/guides/function-calling).
+ #
+ # @return [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>, nil]
+ optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }
+
  # @!attribute top_p
  # An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
  #
  # @return [Float, nil]
  optional :top_p, Float

- # @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, top_p: nil)
+ # @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, text: nil, tools: nil, top_p: nil)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams}
+ # for more details.
+ #
  # @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
  #
  # @param seed [Integer] A seed value to initialize the randomness, during sampling.
  #
  # @param temperature [Float] A higher temperature increases randomness in the outputs.
  #
+ # @param text [OpenAI::Models::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams::Text] Configuration options for a text response from the model. Can be plain
+ #
+ # @param tools [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
+ #
  # @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
+
+ # @see OpenAI::Models::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams#text
+ class Text < OpenAI::Internal::Type::BaseModel
+ # @!attribute format_
+ # An object specifying the format that the model must output.
+ #
+ # Configuring `{ "type": "json_schema" }` enables Structured Outputs, which
+ # ensures the model will match your supplied JSON schema. Learn more in the
+ # [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
+ #
+ # The default format is `{ "type": "text" }` with no additional options.
+ #
+ # **Not recommended for gpt-4o and newer models:**
+ #
+ # Setting to `{ "type": "json_object" }` enables the older JSON mode, which
+ # ensures the message the model generates is valid JSON. Using `json_schema` is
+ # preferred for models that support it.
+ #
+ # @return [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject, nil]
+ optional :format_,
+ union: -> {
+ OpenAI::Responses::ResponseFormatTextConfig
+ },
+ api_name: :format
+
+ # @!method initialize(format_: nil)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::Evals::RunCreateParams::DataSource::CreateEvalResponsesRunDataSource::SamplingParams::Text}
+ # for more details.
+ #
+ # Configuration options for a text response from the model. Can be plain text or
+ # structured JSON data. Learn more:
+ #
+ # - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
+ # - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
+ #
+ # @param format_ [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
+ end
  end
  end

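For responses-based run data sources such as the one above, structured output is configured through `text.format` rather than `response_format`, and `tools` accepts the Responses tool union. A hedged sketch under the same assumptions as the earlier example; ids are placeholders and the flat function-tool shape shown here is assumed rather than verified.

```ruby
# Hedged sketch only: ids and payload shapes are illustrative placeholders.
require "openai"

client = OpenAI::Client.new # reads OPENAI_API_KEY from the environment

run = client.evals.runs.create(
  "eval_123", # hypothetical eval id
  name: "responses-structured-run",
  data_source: {
    type: "responses",
    source: {type: "file_id", id: "file_123"}, # hypothetical file-backed source
    model: "gpt-4o-mini",
    sampling_params: {
      # New in 0.7.0: text.format mirrors the Responses API structured-output config.
      text: {
        format: {
          type: "json_schema",
          name: "verdict",
          schema: {type: "object", properties: {label: {type: "string"}}, required: ["label"]}
        }
      },
      # New in 0.7.0: Responses tools (built-in or function tools).
      tools: [
        {type: "function", name: "lookup", parameters: {type: "object", properties: {q: {type: "string"}}}}
      ]
    }
  }
)

puts run.id
```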
@@ -616,20 +616,96 @@ module OpenAI
  # @return [Float, nil]
  optional :temperature, Float

+ # @!attribute text
+ # Configuration options for a text response from the model. Can be plain text or
+ # structured JSON data. Learn more:
+ #
+ # - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
+ # - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
+ #
+ # @return [OpenAI::Models::Evals::RunCreateResponse::DataSource::Responses::SamplingParams::Text, nil]
+ optional :text,
+ -> { OpenAI::Models::Evals::RunCreateResponse::DataSource::Responses::SamplingParams::Text }
+
+ # @!attribute tools
+ # An array of tools the model may call while generating a response. You can
+ # specify which tool to use by setting the `tool_choice` parameter.
+ #
+ # The two categories of tools you can provide the model are:
+ #
+ # - **Built-in tools**: Tools that are provided by OpenAI that extend the model's
+ # capabilities, like
+ # [web search](https://platform.openai.com/docs/guides/tools-web-search) or
+ # [file search](https://platform.openai.com/docs/guides/tools-file-search).
+ # Learn more about
+ # [built-in tools](https://platform.openai.com/docs/guides/tools).
+ # - **Function calls (custom tools)**: Functions that are defined by you, enabling
+ # the model to call your own code. Learn more about
+ # [function calling](https://platform.openai.com/docs/guides/function-calling).
+ #
+ # @return [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>, nil]
+ optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }
+
  # @!attribute top_p
  # An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
  #
  # @return [Float, nil]
  optional :top_p, Float

- # @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, top_p: nil)
+ # @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, text: nil, tools: nil, top_p: nil)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::Evals::RunCreateResponse::DataSource::Responses::SamplingParams}
+ # for more details.
+ #
  # @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
  #
  # @param seed [Integer] A seed value to initialize the randomness, during sampling.
  #
  # @param temperature [Float] A higher temperature increases randomness in the outputs.
  #
+ # @param text [OpenAI::Models::Evals::RunCreateResponse::DataSource::Responses::SamplingParams::Text] Configuration options for a text response from the model. Can be plain
+ #
+ # @param tools [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
+ #
  # @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
+
+ # @see OpenAI::Models::Evals::RunCreateResponse::DataSource::Responses::SamplingParams#text
+ class Text < OpenAI::Internal::Type::BaseModel
+ # @!attribute format_
+ # An object specifying the format that the model must output.
+ #
+ # Configuring `{ "type": "json_schema" }` enables Structured Outputs, which
+ # ensures the model will match your supplied JSON schema. Learn more in the
+ # [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
+ #
+ # The default format is `{ "type": "text" }` with no additional options.
+ #
+ # **Not recommended for gpt-4o and newer models:**
+ #
+ # Setting to `{ "type": "json_object" }` enables the older JSON mode, which
+ # ensures the message the model generates is valid JSON. Using `json_schema` is
+ # preferred for models that support it.
+ #
+ # @return [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject, nil]
+ optional :format_,
+ union: -> {
+ OpenAI::Responses::ResponseFormatTextConfig
+ },
+ api_name: :format
+
+ # @!method initialize(format_: nil)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::Evals::RunCreateResponse::DataSource::Responses::SamplingParams::Text}
+ # for more details.
+ #
+ # Configuration options for a text response from the model. Can be plain text or
+ # structured JSON data. Learn more:
+ #
+ # - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
+ # - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
+ #
+ # @param format_ [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
+ end
  end
  end

@@ -616,20 +616,95 @@ module OpenAI
  # @return [Float, nil]
  optional :temperature, Float

+ # @!attribute text
+ # Configuration options for a text response from the model. Can be plain text or
+ # structured JSON data. Learn more:
+ #
+ # - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
+ # - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
+ #
+ # @return [OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams::Text, nil]
+ optional :text, -> { OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams::Text }
+
+ # @!attribute tools
+ # An array of tools the model may call while generating a response. You can
+ # specify which tool to use by setting the `tool_choice` parameter.
+ #
+ # The two categories of tools you can provide the model are:
+ #
+ # - **Built-in tools**: Tools that are provided by OpenAI that extend the model's
+ # capabilities, like
+ # [web search](https://platform.openai.com/docs/guides/tools-web-search) or
+ # [file search](https://platform.openai.com/docs/guides/tools-file-search).
+ # Learn more about
+ # [built-in tools](https://platform.openai.com/docs/guides/tools).
+ # - **Function calls (custom tools)**: Functions that are defined by you, enabling
+ # the model to call your own code. Learn more about
+ # [function calling](https://platform.openai.com/docs/guides/function-calling).
+ #
+ # @return [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>, nil]
+ optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }
+
  # @!attribute top_p
  # An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
  #
  # @return [Float, nil]
  optional :top_p, Float

- # @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, top_p: nil)
+ # @!method initialize(max_completion_tokens: nil, seed: nil, temperature: nil, text: nil, tools: nil, top_p: nil)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams}
+ # for more details.
+ #
  # @param max_completion_tokens [Integer] The maximum number of tokens in the generated output.
  #
  # @param seed [Integer] A seed value to initialize the randomness, during sampling.
  #
  # @param temperature [Float] A higher temperature increases randomness in the outputs.
  #
+ # @param text [OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams::Text] Configuration options for a text response from the model. Can be plain
+ #
+ # @param tools [Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
+ #
  # @param top_p [Float] An alternative to temperature for nucleus sampling; 1.0 includes all tokens.
+
+ # @see OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams#text
+ class Text < OpenAI::Internal::Type::BaseModel
+ # @!attribute format_
+ # An object specifying the format that the model must output.
+ #
+ # Configuring `{ "type": "json_schema" }` enables Structured Outputs, which
+ # ensures the model will match your supplied JSON schema. Learn more in the
+ # [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
+ #
+ # The default format is `{ "type": "text" }` with no additional options.
+ #
+ # **Not recommended for gpt-4o and newer models:**
+ #
+ # Setting to `{ "type": "json_object" }` enables the older JSON mode, which
+ # ensures the message the model generates is valid JSON. Using `json_schema` is
+ # preferred for models that support it.
+ #
+ # @return [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject, nil]
+ optional :format_,
+ union: -> {
+ OpenAI::Responses::ResponseFormatTextConfig
+ },
+ api_name: :format
+
+ # @!method initialize(format_: nil)
+ # Some parameter documentations has been truncated, see
+ # {OpenAI::Models::Evals::RunListResponse::DataSource::Responses::SamplingParams::Text}
+ # for more details.
+ #
+ # Configuration options for a text response from the model. Can be plain text or
+ # structured JSON data. Learn more:
+ #
+ # - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
+ # - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
+ #
+ # @param format_ [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
+ end
  end
  end