openapi_openai 1.0.0 → 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/Gemfile.lock +2 -2
- data/README.md +274 -51
- data/docs/AssistantFileObject.md +24 -0
- data/docs/AssistantObject.md +36 -0
- data/docs/AssistantObjectToolsInner.md +51 -0
- data/docs/AssistantStreamEvent.md +57 -0
- data/docs/AssistantToolsCode.md +18 -0
- data/docs/AssistantToolsFunction.md +20 -0
- data/docs/AssistantToolsRetrieval.md +18 -0
- data/docs/AssistantsApi.md +2017 -0
- data/docs/AssistantsApiNamedToolChoice.md +20 -0
- data/docs/AssistantsApiResponseFormat.md +18 -0
- data/docs/AssistantsApiResponseFormatOption.md +49 -0
- data/docs/AssistantsApiToolChoiceOption.md +49 -0
- data/docs/AudioApi.md +235 -0
- data/docs/ChatApi.md +75 -0
- data/docs/{CreateChatCompletionRequestFunctionCallOneOf.md → ChatCompletionFunctionCallOption.md} +2 -2
- data/docs/ChatCompletionFunctions.md +3 -3
- data/docs/ChatCompletionMessageToolCall.md +22 -0
- data/docs/ChatCompletionMessageToolCallChunk.md +24 -0
- data/docs/{ChatCompletionRequestMessageFunctionCall.md → ChatCompletionMessageToolCallChunkFunction.md} +2 -2
- data/docs/ChatCompletionMessageToolCallFunction.md +20 -0
- data/docs/ChatCompletionNamedToolChoice.md +20 -0
- data/docs/ChatCompletionNamedToolChoiceFunction.md +18 -0
- data/docs/ChatCompletionRequestAssistantMessage.md +26 -0
- data/docs/ChatCompletionRequestAssistantMessageFunctionCall.md +20 -0
- data/docs/ChatCompletionRequestFunctionMessage.md +22 -0
- data/docs/ChatCompletionRequestMessage.md +45 -14
- data/docs/ChatCompletionRequestMessageContentPart.md +49 -0
- data/docs/ChatCompletionRequestMessageContentPartImage.md +20 -0
- data/docs/ChatCompletionRequestMessageContentPartImageImageUrl.md +20 -0
- data/docs/ChatCompletionRequestMessageContentPartText.md +20 -0
- data/docs/ChatCompletionRequestSystemMessage.md +22 -0
- data/docs/ChatCompletionRequestToolMessage.md +22 -0
- data/docs/ChatCompletionRequestUserMessage.md +22 -0
- data/docs/ChatCompletionRequestUserMessageContent.md +49 -0
- data/docs/ChatCompletionResponseMessage.md +5 -3
- data/docs/ChatCompletionRole.md +15 -0
- data/docs/ChatCompletionStreamResponseDelta.md +6 -4
- data/docs/ChatCompletionStreamResponseDeltaFunctionCall.md +20 -0
- data/docs/ChatCompletionTokenLogprob.md +24 -0
- data/docs/ChatCompletionTokenLogprobTopLogprobsInner.md +22 -0
- data/docs/ChatCompletionTool.md +20 -0
- data/docs/ChatCompletionToolChoiceOption.md +49 -0
- data/docs/CompletionUsage.md +22 -0
- data/docs/CompletionsApi.md +75 -0
- data/docs/CreateAssistantFileRequest.md +18 -0
- data/docs/CreateAssistantRequest.md +30 -0
- data/docs/CreateAssistantRequestModel.md +15 -0
- data/docs/CreateChatCompletionFunctionResponse.md +30 -0
- data/docs/CreateChatCompletionFunctionResponseChoicesInner.md +22 -0
- data/docs/CreateChatCompletionRequest.md +33 -21
- data/docs/CreateChatCompletionRequestFunctionCall.md +3 -3
- data/docs/CreateChatCompletionRequestModel.md +5 -37
- data/docs/CreateChatCompletionRequestResponseFormat.md +18 -0
- data/docs/CreateChatCompletionResponse.md +10 -8
- data/docs/CreateChatCompletionResponseChoicesInner.md +6 -4
- data/docs/CreateChatCompletionResponseChoicesInnerLogprobs.md +18 -0
- data/docs/CreateChatCompletionStreamResponse.md +9 -7
- data/docs/CreateChatCompletionStreamResponseChoicesInner.md +7 -5
- data/docs/CreateCompletionRequest.md +23 -21
- data/docs/CreateCompletionRequestModel.md +5 -37
- data/docs/CreateCompletionResponse.md +10 -8
- data/docs/CreateCompletionResponseChoicesInner.md +4 -4
- data/docs/CreateCompletionResponseChoicesInnerLogprobs.md +6 -6
- data/docs/CreateEmbeddingRequest.md +6 -2
- data/docs/CreateEmbeddingRequestModel.md +5 -37
- data/docs/CreateEmbeddingResponse.md +5 -5
- data/docs/CreateEmbeddingResponseUsage.md +2 -2
- data/docs/CreateFineTuningJobRequest.md +30 -0
- data/docs/CreateFineTuningJobRequestHyperparameters.md +22 -0
- data/docs/CreateFineTuningJobRequestHyperparametersBatchSize.md +49 -0
- data/docs/CreateFineTuningJobRequestHyperparametersLearningRateMultiplier.md +49 -0
- data/docs/CreateFineTuningJobRequestHyperparametersNEpochs.md +49 -0
- data/docs/CreateFineTuningJobRequestIntegrationsInner.md +20 -0
- data/docs/{CreateFineTuneRequestModel.md → CreateFineTuningJobRequestIntegrationsInnerType.md} +4 -4
- data/docs/CreateFineTuningJobRequestIntegrationsInnerWandb.md +24 -0
- data/docs/CreateFineTuningJobRequestModel.md +15 -0
- data/docs/CreateImageEditRequestModel.md +15 -0
- data/docs/CreateImageRequest.md +11 -5
- data/docs/CreateImageRequestModel.md +15 -0
- data/docs/CreateMessageRequest.md +24 -0
- data/docs/CreateModerationRequestModel.md +5 -37
- data/docs/CreateModerationResponse.md +3 -3
- data/docs/CreateModerationResponseResultsInner.md +1 -1
- data/docs/CreateModerationResponseResultsInnerCategories.md +15 -7
- data/docs/CreateModerationResponseResultsInnerCategoryScores.md +15 -7
- data/docs/CreateRunRequest.md +44 -0
- data/docs/CreateRunRequestModel.md +15 -0
- data/docs/CreateSpeechRequest.md +26 -0
- data/docs/CreateSpeechRequestModel.md +15 -0
- data/docs/CreateThreadAndRunRequest.md +42 -0
- data/docs/CreateThreadAndRunRequestToolsInner.md +51 -0
- data/docs/CreateThreadRequest.md +20 -0
- data/docs/CreateTranscription200Response.md +49 -0
- data/docs/CreateTranscriptionRequestModel.md +5 -37
- data/docs/CreateTranscriptionResponseJson.md +18 -0
- data/docs/CreateTranscriptionResponseVerboseJson.md +26 -0
- data/docs/CreateTranslation200Response.md +49 -0
- data/docs/{CreateTranscriptionResponse.md → CreateTranslationResponseJson.md} +2 -2
- data/docs/CreateTranslationResponseVerboseJson.md +24 -0
- data/docs/DeleteAssistantFileResponse.md +22 -0
- data/docs/DeleteAssistantResponse.md +22 -0
- data/docs/DeleteMessageResponse.md +22 -0
- data/docs/DeleteModelResponse.md +3 -3
- data/docs/DeleteThreadResponse.md +22 -0
- data/docs/DoneEvent.md +20 -0
- data/docs/Embedding.md +22 -0
- data/docs/EmbeddingsApi.md +75 -0
- data/docs/Error.md +4 -4
- data/docs/ErrorEvent.md +20 -0
- data/docs/FilesApi.md +351 -0
- data/docs/FineTuningApi.md +431 -0
- data/docs/FineTuningIntegration.md +20 -0
- data/docs/FineTuningJob.md +48 -0
- data/docs/FineTuningJobCheckpoint.md +30 -0
- data/docs/FineTuningJobCheckpointMetrics.md +30 -0
- data/docs/FineTuningJobError.md +22 -0
- data/docs/{FineTuneEvent.md → FineTuningJobEvent.md} +7 -5
- data/docs/FineTuningJobHyperparameters.md +18 -0
- data/docs/FineTuningJobHyperparametersNEpochs.md +49 -0
- data/docs/FineTuningJobIntegrationsInner.md +47 -0
- data/docs/FunctionObject.md +22 -0
- data/docs/Image.md +22 -0
- data/docs/ImagesApi.md +239 -0
- data/docs/ImagesResponse.md +1 -1
- data/docs/ListAssistantFilesResponse.md +26 -0
- data/docs/ListAssistantsResponse.md +26 -0
- data/docs/ListFilesResponse.md +3 -3
- data/docs/ListFineTuningJobCheckpointsResponse.md +26 -0
- data/docs/ListFineTuningJobEventsResponse.md +20 -0
- data/docs/ListMessageFilesResponse.md +26 -0
- data/docs/ListMessagesResponse.md +26 -0
- data/docs/ListPaginatedFineTuningJobsResponse.md +22 -0
- data/docs/ListRunStepsResponse.md +26 -0
- data/docs/ListRunsResponse.md +26 -0
- data/docs/ListThreadsResponse.md +26 -0
- data/docs/MessageContentImageFileObject.md +20 -0
- data/docs/MessageContentImageFileObjectImageFile.md +18 -0
- data/docs/MessageContentTextAnnotationsFileCitationObject.md +26 -0
- data/docs/MessageContentTextAnnotationsFileCitationObjectFileCitation.md +20 -0
- data/docs/MessageContentTextAnnotationsFilePathObject.md +26 -0
- data/docs/MessageContentTextAnnotationsFilePathObjectFilePath.md +18 -0
- data/docs/MessageContentTextObject.md +20 -0
- data/docs/MessageContentTextObjectText.md +20 -0
- data/docs/MessageContentTextObjectTextAnnotationsInner.md +49 -0
- data/docs/MessageDeltaContentImageFileObject.md +22 -0
- data/docs/MessageDeltaContentImageFileObjectImageFile.md +18 -0
- data/docs/MessageDeltaContentTextAnnotationsFileCitationObject.md +28 -0
- data/docs/MessageDeltaContentTextAnnotationsFileCitationObjectFileCitation.md +20 -0
- data/docs/MessageDeltaContentTextAnnotationsFilePathObject.md +28 -0
- data/docs/MessageDeltaContentTextAnnotationsFilePathObjectFilePath.md +18 -0
- data/docs/MessageDeltaContentTextObject.md +22 -0
- data/docs/MessageDeltaContentTextObjectText.md +20 -0
- data/docs/MessageDeltaContentTextObjectTextAnnotationsInner.md +49 -0
- data/docs/MessageDeltaObject.md +22 -0
- data/docs/MessageDeltaObjectDelta.md +22 -0
- data/docs/MessageDeltaObjectDeltaContentInner.md +49 -0
- data/docs/MessageFileObject.md +24 -0
- data/docs/MessageObject.md +44 -0
- data/docs/MessageObjectContentInner.md +49 -0
- data/docs/MessageObjectIncompleteDetails.md +18 -0
- data/docs/MessageStreamEvent.md +55 -0
- data/docs/MessageStreamEventOneOf.md +20 -0
- data/docs/MessageStreamEventOneOf1.md +20 -0
- data/docs/MessageStreamEventOneOf2.md +20 -0
- data/docs/MessageStreamEventOneOf3.md +20 -0
- data/docs/MessageStreamEventOneOf4.md +20 -0
- data/docs/Model.md +5 -5
- data/docs/ModelsApi.md +208 -0
- data/docs/ModerationsApi.md +75 -0
- data/docs/ModifyAssistantRequest.md +30 -0
- data/docs/ModifyMessageRequest.md +18 -0
- data/docs/ModifyRunRequest.md +18 -0
- data/docs/ModifyThreadRequest.md +18 -0
- data/docs/OpenAIFile.md +9 -9
- data/docs/RunCompletionUsage.md +22 -0
- data/docs/RunObject.md +68 -0
- data/docs/RunObjectIncompleteDetails.md +18 -0
- data/docs/RunObjectLastError.md +20 -0
- data/docs/RunObjectRequiredAction.md +20 -0
- data/docs/RunObjectRequiredActionSubmitToolOutputs.md +18 -0
- data/docs/RunStepCompletionUsage.md +22 -0
- data/docs/RunStepDeltaObject.md +22 -0
- data/docs/RunStepDeltaObjectDelta.md +18 -0
- data/docs/RunStepDeltaObjectDeltaStepDetails.md +49 -0
- data/docs/RunStepDeltaStepDetailsMessageCreationObject.md +20 -0
- data/docs/RunStepDeltaStepDetailsMessageCreationObjectMessageCreation.md +18 -0
- data/docs/RunStepDeltaStepDetailsToolCallsCodeObject.md +24 -0
- data/docs/RunStepDeltaStepDetailsToolCallsCodeObjectCodeInterpreter.md +20 -0
- data/docs/RunStepDeltaStepDetailsToolCallsCodeObjectCodeInterpreterOutputsInner.md +49 -0
- data/docs/RunStepDeltaStepDetailsToolCallsCodeOutputImageObject.md +22 -0
- data/docs/RunStepDeltaStepDetailsToolCallsCodeOutputImageObjectImage.md +18 -0
- data/docs/RunStepDeltaStepDetailsToolCallsCodeOutputLogsObject.md +22 -0
- data/docs/RunStepDeltaStepDetailsToolCallsFunctionObject.md +24 -0
- data/docs/RunStepDeltaStepDetailsToolCallsFunctionObjectFunction.md +22 -0
- data/docs/RunStepDeltaStepDetailsToolCallsObject.md +20 -0
- data/docs/RunStepDeltaStepDetailsToolCallsObjectToolCallsInner.md +51 -0
- data/docs/RunStepDeltaStepDetailsToolCallsRetrievalObject.md +24 -0
- data/docs/RunStepDetailsMessageCreationObject.md +20 -0
- data/docs/RunStepDetailsMessageCreationObjectMessageCreation.md +18 -0
- data/docs/RunStepDetailsToolCallsCodeObject.md +22 -0
- data/docs/RunStepDetailsToolCallsCodeObjectCodeInterpreter.md +20 -0
- data/docs/RunStepDetailsToolCallsCodeObjectCodeInterpreterOutputsInner.md +49 -0
- data/docs/RunStepDetailsToolCallsCodeOutputImageObject.md +20 -0
- data/docs/RunStepDetailsToolCallsCodeOutputImageObjectImage.md +18 -0
- data/docs/RunStepDetailsToolCallsCodeOutputLogsObject.md +20 -0
- data/docs/RunStepDetailsToolCallsFunctionObject.md +22 -0
- data/docs/RunStepDetailsToolCallsFunctionObjectFunction.md +22 -0
- data/docs/RunStepDetailsToolCallsObject.md +20 -0
- data/docs/RunStepDetailsToolCallsObjectToolCallsInner.md +51 -0
- data/docs/RunStepDetailsToolCallsRetrievalObject.md +22 -0
- data/docs/RunStepObject.md +48 -0
- data/docs/RunStepObjectLastError.md +20 -0
- data/docs/RunStepObjectStepDetails.md +49 -0
- data/docs/RunStepStreamEvent.md +59 -0
- data/docs/RunStepStreamEventOneOf.md +20 -0
- data/docs/RunStepStreamEventOneOf1.md +20 -0
- data/docs/RunStepStreamEventOneOf2.md +20 -0
- data/docs/RunStepStreamEventOneOf3.md +20 -0
- data/docs/RunStepStreamEventOneOf4.md +20 -0
- data/docs/RunStepStreamEventOneOf5.md +20 -0
- data/docs/RunStepStreamEventOneOf6.md +20 -0
- data/docs/RunStreamEvent.md +63 -0
- data/docs/RunStreamEventOneOf.md +20 -0
- data/docs/RunStreamEventOneOf1.md +20 -0
- data/docs/RunStreamEventOneOf2.md +20 -0
- data/docs/RunStreamEventOneOf3.md +20 -0
- data/docs/RunStreamEventOneOf4.md +20 -0
- data/docs/RunStreamEventOneOf5.md +20 -0
- data/docs/RunStreamEventOneOf6.md +20 -0
- data/docs/RunStreamEventOneOf7.md +20 -0
- data/docs/RunStreamEventOneOf8.md +20 -0
- data/docs/RunToolCallObject.md +22 -0
- data/docs/RunToolCallObjectFunction.md +20 -0
- data/docs/SubmitToolOutputsRunRequest.md +20 -0
- data/docs/SubmitToolOutputsRunRequestToolOutputsInner.md +20 -0
- data/docs/ThreadObject.md +24 -0
- data/docs/{CreateEditRequestModel.md → ThreadStreamEvent.md} +7 -7
- data/docs/ThreadStreamEventOneOf.md +20 -0
- data/docs/TranscriptionSegment.md +36 -0
- data/docs/TranscriptionWord.md +22 -0
- data/docs/TruncationObject.md +20 -0
- data/lib/openapi_openai/api/assistants_api.rb +2006 -0
- data/lib/openapi_openai/api/audio_api.rb +268 -0
- data/lib/openapi_openai/api/chat_api.rb +88 -0
- data/lib/openapi_openai/api/completions_api.rb +88 -0
- data/lib/openapi_openai/api/embeddings_api.rb +88 -0
- data/lib/openapi_openai/api/files_api.rb +342 -0
- data/lib/openapi_openai/api/fine_tuning_api.rb +405 -0
- data/lib/openapi_openai/api/images_api.rb +294 -0
- data/lib/openapi_openai/api/models_api.rb +199 -0
- data/lib/openapi_openai/api/moderations_api.rb +88 -0
- data/lib/openapi_openai/api_client.rb +2 -1
- data/lib/openapi_openai/api_error.rb +1 -1
- data/lib/openapi_openai/configuration.rb +8 -1
- data/lib/openapi_openai/models/assistant_file_object.rb +308 -0
- data/lib/openapi_openai/models/assistant_object.rb +481 -0
- data/lib/openapi_openai/models/assistant_object_tools_inner.rb +106 -0
- data/lib/openapi_openai/models/assistant_stream_event.rb +110 -0
- data/lib/openapi_openai/models/assistant_tools_code.rb +256 -0
- data/lib/openapi_openai/models/assistant_tools_function.rb +272 -0
- data/lib/openapi_openai/models/assistant_tools_retrieval.rb +256 -0
- data/lib/openapi_openai/models/assistants_api_named_tool_choice.rb +266 -0
- data/lib/openapi_openai/models/assistants_api_response_format.rb +252 -0
- data/lib/openapi_openai/models/assistants_api_response_format_option.rb +106 -0
- data/lib/openapi_openai/models/assistants_api_tool_choice_option.rb +106 -0
- data/lib/openapi_openai/models/chat_completion_function_call_option.rb +223 -0
- data/lib/openapi_openai/models/chat_completion_functions.rb +13 -13
- data/lib/openapi_openai/models/chat_completion_message_tool_call.rb +289 -0
- data/lib/openapi_openai/models/chat_completion_message_tool_call_chunk.rb +284 -0
- data/lib/openapi_openai/models/{chat_completion_request_message_function_call.rb → chat_completion_message_tool_call_chunk_function.rb} +4 -5
- data/lib/openapi_openai/models/chat_completion_message_tool_call_function.rb +240 -0
- data/lib/openapi_openai/models/chat_completion_named_tool_choice.rb +273 -0
- data/lib/openapi_openai/models/{create_chat_completion_request_function_call_one_of.rb → chat_completion_named_tool_choice_function.rb} +4 -4
- data/lib/openapi_openai/models/chat_completion_request_assistant_message.rb +298 -0
- data/lib/openapi_openai/models/chat_completion_request_assistant_message_function_call.rb +240 -0
- data/lib/openapi_openai/models/chat_completion_request_function_message.rb +286 -0
- data/lib/openapi_openai/models/chat_completion_request_message.rb +78 -255
- data/lib/openapi_openai/models/chat_completion_request_message_content_part.rb +105 -0
- data/lib/openapi_openai/models/chat_completion_request_message_content_part_image.rb +272 -0
- data/lib/openapi_openai/models/chat_completion_request_message_content_part_image_image_url.rb +268 -0
- data/lib/openapi_openai/models/chat_completion_request_message_content_part_text.rb +273 -0
- data/lib/openapi_openai/models/chat_completion_request_system_message.rb +283 -0
- data/lib/openapi_openai/models/chat_completion_request_tool_message.rb +290 -0
- data/lib/openapi_openai/models/chat_completion_request_user_message.rb +282 -0
- data/lib/openapi_openai/models/{create_fine_tune_request_model.rb → chat_completion_request_user_message_content.rb} +4 -3
- data/lib/openapi_openai/models/chat_completion_response_message.rb +30 -15
- data/lib/openapi_openai/models/chat_completion_role.rb +43 -0
- data/lib/openapi_openai/models/chat_completion_stream_response_delta.rb +29 -17
- data/lib/openapi_openai/models/chat_completion_stream_response_delta_function_call.rb +226 -0
- data/lib/openapi_openai/models/chat_completion_token_logprob.rb +273 -0
- data/lib/openapi_openai/models/chat_completion_token_logprob_top_logprobs_inner.rb +254 -0
- data/lib/openapi_openai/models/chat_completion_tool.rb +272 -0
- data/lib/openapi_openai/models/chat_completion_tool_choice_option.rb +106 -0
- data/lib/openapi_openai/models/{create_completion_response_usage.rb → completion_usage.rb} +25 -21
- data/lib/openapi_openai/models/create_assistant_file_request.rb +222 -0
- data/lib/openapi_openai/models/create_assistant_request.rb +372 -0
- data/lib/openapi_openai/models/create_assistant_request_model.rb +104 -0
- data/lib/openapi_openai/models/create_chat_completion_function_response.rb +346 -0
- data/lib/openapi_openai/models/create_chat_completion_function_response_choices_inner.rb +289 -0
- data/lib/openapi_openai/models/create_chat_completion_request.rb +277 -152
- data/lib/openapi_openai/models/create_chat_completion_request_function_call.rb +3 -3
- data/lib/openapi_openai/models/create_chat_completion_request_model.rb +8 -9
- data/lib/openapi_openai/models/create_chat_completion_request_response_format.rb +252 -0
- data/lib/openapi_openai/models/create_chat_completion_request_stop.rb +1 -1
- data/lib/openapi_openai/models/create_chat_completion_response.rb +75 -25
- data/lib/openapi_openai/models/create_chat_completion_response_choices_inner.rb +45 -10
- data/lib/openapi_openai/models/create_chat_completion_response_choices_inner_logprobs.rb +221 -0
- data/lib/openapi_openai/models/create_chat_completion_stream_response.rb +74 -24
- data/lib/openapi_openai/models/create_chat_completion_stream_response_choices_inner.rb +45 -16
- data/lib/openapi_openai/models/create_completion_request.rb +219 -182
- data/lib/openapi_openai/models/create_completion_request_model.rb +8 -9
- data/lib/openapi_openai/models/create_completion_request_prompt.rb +1 -1
- data/lib/openapi_openai/models/create_completion_request_stop.rb +1 -1
- data/lib/openapi_openai/models/create_completion_response.rb +75 -25
- data/lib/openapi_openai/models/create_completion_response_choices_inner.rb +25 -24
- data/lib/openapi_openai/models/create_completion_response_choices_inner_logprobs.rb +23 -23
- data/lib/openapi_openai/models/create_embedding_request.rb +87 -12
- data/lib/openapi_openai/models/create_embedding_request_input.rb +2 -2
- data/lib/openapi_openai/models/create_embedding_request_model.rb +8 -9
- data/lib/openapi_openai/models/create_embedding_response.rb +61 -24
- data/lib/openapi_openai/models/create_embedding_response_usage.rb +4 -1
- data/lib/openapi_openai/models/{create_fine_tune_request.rb → create_fine_tuning_job_request.rb} +78 -111
- data/lib/openapi_openai/models/create_fine_tuning_job_request_hyperparameters.rb +233 -0
- data/lib/openapi_openai/models/create_fine_tuning_job_request_hyperparameters_batch_size.rb +106 -0
- data/lib/openapi_openai/models/create_fine_tuning_job_request_hyperparameters_learning_rate_multiplier.rb +106 -0
- data/lib/openapi_openai/models/create_fine_tuning_job_request_hyperparameters_n_epochs.rb +106 -0
- data/lib/openapi_openai/models/{list_fine_tunes_response.rb → create_fine_tuning_job_request_integrations_inner.rb} +25 -27
- data/lib/openapi_openai/models/create_fine_tuning_job_request_integrations_inner_type.rb +105 -0
- data/lib/openapi_openai/models/create_fine_tuning_job_request_integrations_inner_wandb.rb +257 -0
- data/lib/openapi_openai/models/create_fine_tuning_job_request_model.rb +104 -0
- data/lib/openapi_openai/models/create_image_edit_request_model.rb +104 -0
- data/lib/openapi_openai/models/create_image_request.rb +81 -22
- data/lib/openapi_openai/models/create_image_request_model.rb +104 -0
- data/lib/openapi_openai/models/create_message_request.rb +352 -0
- data/lib/openapi_openai/models/create_moderation_request.rb +1 -1
- data/lib/openapi_openai/models/create_moderation_request_input.rb +1 -1
- data/lib/openapi_openai/models/create_moderation_request_model.rb +8 -9
- data/lib/openapi_openai/models/create_moderation_response.rb +5 -1
- data/lib/openapi_openai/models/create_moderation_response_results_inner.rb +2 -1
- data/lib/openapi_openai/models/create_moderation_response_results_inner_categories.rb +78 -2
- data/lib/openapi_openai/models/create_moderation_response_results_inner_category_scores.rb +78 -2
- data/lib/openapi_openai/models/create_run_request.rb +433 -0
- data/lib/openapi_openai/models/create_run_request_model.rb +104 -0
- data/lib/openapi_openai/models/{create_edit_request.rb → create_speech_request.rb} +105 -95
- data/lib/openapi_openai/models/create_speech_request_model.rb +104 -0
- data/lib/openapi_openai/models/create_thread_and_run_request.rb +418 -0
- data/lib/openapi_openai/models/create_thread_and_run_request_tools_inner.rb +106 -0
- data/lib/openapi_openai/models/{list_fine_tune_events_response.rb → create_thread_request.rb} +22 -33
- data/lib/openapi_openai/models/create_transcription200_response.rb +105 -0
- data/lib/openapi_openai/models/create_transcription_request_model.rb +9 -10
- data/lib/openapi_openai/models/{create_transcription_response.rb → create_transcription_response_json.rb} +6 -4
- data/lib/openapi_openai/models/create_transcription_response_verbose_json.rb +281 -0
- data/lib/openapi_openai/models/create_translation200_response.rb +105 -0
- data/lib/openapi_openai/models/{create_translation_response.rb → create_translation_response_json.rb} +4 -4
- data/lib/openapi_openai/models/create_translation_response_verbose_json.rb +268 -0
- data/lib/openapi_openai/models/delete_assistant_file_response.rb +288 -0
- data/lib/openapi_openai/models/delete_assistant_response.rb +287 -0
- data/lib/openapi_openai/models/delete_file_response.rb +35 -1
- data/lib/openapi_openai/models/delete_message_response.rb +287 -0
- data/lib/openapi_openai/models/delete_model_response.rb +21 -21
- data/lib/openapi_openai/models/delete_thread_response.rb +287 -0
- data/lib/openapi_openai/models/done_event.rb +284 -0
- data/lib/openapi_openai/models/embedding.rb +293 -0
- data/lib/openapi_openai/models/error.rb +22 -22
- data/lib/openapi_openai/models/error_event.rb +272 -0
- data/lib/openapi_openai/models/error_response.rb +1 -1
- data/lib/openapi_openai/models/fine_tuning_integration.rb +272 -0
- data/lib/openapi_openai/models/fine_tuning_job.rb +515 -0
- data/lib/openapi_openai/models/{fine_tune.rb → fine_tuning_job_checkpoint.rb} +94 -146
- data/lib/openapi_openai/models/fine_tuning_job_checkpoint_metrics.rb +269 -0
- data/lib/openapi_openai/models/{fine_tune_event.rb → fine_tuning_job_error.rb} +34 -50
- data/lib/openapi_openai/models/fine_tuning_job_event.rb +332 -0
- data/lib/openapi_openai/models/fine_tuning_job_hyperparameters.rb +222 -0
- data/lib/openapi_openai/models/fine_tuning_job_hyperparameters_n_epochs.rb +106 -0
- data/lib/openapi_openai/models/{create_edit_request_model.rb → fine_tuning_job_integrations_inner.rb} +3 -4
- data/lib/openapi_openai/models/function_object.rb +244 -0
- data/lib/openapi_openai/models/image.rb +236 -0
- data/lib/openapi_openai/models/images_response.rb +2 -2
- data/lib/openapi_openai/models/list_assistant_files_response.rb +287 -0
- data/lib/openapi_openai/models/list_assistants_response.rb +287 -0
- data/lib/openapi_openai/models/list_files_response.rb +54 -20
- data/lib/openapi_openai/models/list_fine_tuning_job_checkpoints_response.rb +309 -0
- data/lib/openapi_openai/models/{create_edit_response.rb → list_fine_tuning_job_events_response.rb} +57 -55
- data/lib/openapi_openai/models/list_message_files_response.rb +287 -0
- data/lib/openapi_openai/models/list_messages_response.rb +287 -0
- data/lib/openapi_openai/models/list_models_response.rb +35 -1
- data/lib/openapi_openai/models/list_paginated_fine_tuning_jobs_response.rb +289 -0
- data/lib/openapi_openai/models/list_run_steps_response.rb +287 -0
- data/lib/openapi_openai/models/list_runs_response.rb +287 -0
- data/lib/openapi_openai/models/list_threads_response.rb +287 -0
- data/lib/openapi_openai/models/message_content_image_file_object.rb +273 -0
- data/lib/openapi_openai/models/message_content_image_file_object_image_file.rb +222 -0
- data/lib/openapi_openai/models/message_content_text_annotations_file_citation_object.rb +360 -0
- data/lib/openapi_openai/models/message_content_text_annotations_file_citation_object_file_citation.rb +239 -0
- data/lib/openapi_openai/models/message_content_text_annotations_file_path_object.rb +360 -0
- data/lib/openapi_openai/models/message_content_text_annotations_file_path_object_file_path.rb +222 -0
- data/lib/openapi_openai/models/message_content_text_object.rb +273 -0
- data/lib/openapi_openai/models/message_content_text_object_text.rb +240 -0
- data/lib/openapi_openai/models/message_content_text_object_text_annotations_inner.rb +105 -0
- data/lib/openapi_openai/models/message_delta_content_image_file_object.rb +283 -0
- data/lib/openapi_openai/models/message_delta_content_image_file_object_image_file.rb +215 -0
- data/lib/openapi_openai/models/message_delta_content_text_annotations_file_citation_object.rb +349 -0
- data/lib/openapi_openai/models/message_delta_content_text_annotations_file_citation_object_file_citation.rb +225 -0
- data/lib/openapi_openai/models/message_delta_content_text_annotations_file_path_object.rb +349 -0
- data/lib/openapi_openai/models/message_delta_content_text_annotations_file_path_object_file_path.rb +215 -0
- data/lib/openapi_openai/models/{create_edit_response_choices_inner.rb → message_delta_content_text_object.rb} +42 -35
- data/lib/openapi_openai/models/message_delta_content_text_object_text.rb +226 -0
- data/lib/openapi_openai/models/message_delta_content_text_object_text_annotations_inner.rb +105 -0
- data/lib/openapi_openai/models/message_delta_object.rb +290 -0
- data/lib/openapi_openai/models/message_delta_object_delta.rb +293 -0
- data/lib/openapi_openai/models/message_delta_object_delta_content_inner.rb +105 -0
- data/lib/openapi_openai/models/message_file_object.rb +308 -0
- data/lib/openapi_openai/models/message_object.rb +500 -0
- data/lib/openapi_openai/models/message_object_content_inner.rb +105 -0
- data/lib/openapi_openai/models/message_object_incomplete_details.rb +257 -0
- data/lib/openapi_openai/models/message_stream_event.rb +108 -0
- data/lib/openapi_openai/models/message_stream_event_one_of.rb +272 -0
- data/lib/openapi_openai/models/message_stream_event_one_of1.rb +272 -0
- data/lib/openapi_openai/models/message_stream_event_one_of2.rb +272 -0
- data/lib/openapi_openai/models/message_stream_event_one_of3.rb +272 -0
- data/lib/openapi_openai/models/message_stream_event_one_of4.rb +272 -0
- data/lib/openapi_openai/models/model.rb +57 -18
- data/lib/openapi_openai/models/modify_assistant_request.rb +365 -0
- data/lib/openapi_openai/models/modify_message_request.rb +216 -0
- data/lib/openapi_openai/models/modify_run_request.rb +216 -0
- data/lib/openapi_openai/models/modify_thread_request.rb +216 -0
- data/lib/openapi_openai/models/open_ai_file.rb +93 -20
- data/lib/openapi_openai/models/run_completion_usage.rb +257 -0
- data/lib/openapi_openai/models/run_object.rb +686 -0
- data/lib/openapi_openai/models/run_object_incomplete_details.rb +250 -0
- data/lib/openapi_openai/models/run_object_last_error.rb +274 -0
- data/lib/openapi_openai/models/run_object_required_action.rb +273 -0
- data/lib/openapi_openai/models/run_object_required_action_submit_tool_outputs.rb +225 -0
- data/lib/openapi_openai/models/run_step_completion_usage.rb +257 -0
- data/lib/openapi_openai/models/run_step_delta_object.rb +290 -0
- data/lib/openapi_openai/models/{images_response_data_inner.rb → run_step_delta_object_delta.rb} +12 -20
- data/lib/openapi_openai/models/run_step_delta_object_delta_step_details.rb +106 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_message_creation_object.rb +266 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_message_creation_object_message_creation.rb +215 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_object.rb +293 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_object_code_interpreter.rb +228 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_object_code_interpreter_outputs_inner.rb +105 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_output_image_object.rb +282 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_output_image_object_image.rb +215 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_output_logs_object.rb +284 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_function_object.rb +292 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_function_object_function.rb +237 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_object.rb +269 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_object_tool_calls_inner.rb +106 -0
- data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_retrieval_object.rb +293 -0
- data/lib/openapi_openai/models/run_step_details_message_creation_object.rb +273 -0
- data/lib/openapi_openai/models/run_step_details_message_creation_object_message_creation.rb +222 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_code_object.rb +290 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_code_object_code_interpreter.rb +242 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_code_object_code_interpreter_outputs_inner.rb +105 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_code_output_image_object.rb +272 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_code_output_image_object_image.rb +222 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_code_output_logs_object.rb +274 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_function_object.rb +289 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_function_object_function.rb +253 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_object.rb +276 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_object_tool_calls_inner.rb +106 -0
- data/lib/openapi_openai/models/run_step_details_tool_calls_retrieval_object.rb +290 -0
- data/lib/openapi_openai/models/run_step_object.rb +505 -0
- data/lib/openapi_openai/models/run_step_object_last_error.rb +274 -0
- data/lib/openapi_openai/models/run_step_object_step_details.rb +106 -0
- data/lib/openapi_openai/models/run_step_stream_event.rb +110 -0
- data/lib/openapi_openai/models/run_step_stream_event_one_of.rb +272 -0
- data/lib/openapi_openai/models/run_step_stream_event_one_of1.rb +272 -0
- data/lib/openapi_openai/models/run_step_stream_event_one_of2.rb +272 -0
- data/lib/openapi_openai/models/run_step_stream_event_one_of3.rb +272 -0
- data/lib/openapi_openai/models/run_step_stream_event_one_of4.rb +272 -0
- data/lib/openapi_openai/models/run_step_stream_event_one_of5.rb +272 -0
- data/lib/openapi_openai/models/run_step_stream_event_one_of6.rb +272 -0
- data/lib/openapi_openai/models/run_stream_event.rb +112 -0
- data/lib/openapi_openai/models/run_stream_event_one_of.rb +272 -0
- data/lib/openapi_openai/models/run_stream_event_one_of1.rb +272 -0
- data/lib/openapi_openai/models/run_stream_event_one_of2.rb +272 -0
- data/lib/openapi_openai/models/run_stream_event_one_of3.rb +272 -0
- data/lib/openapi_openai/models/run_stream_event_one_of4.rb +272 -0
- data/lib/openapi_openai/models/run_stream_event_one_of5.rb +272 -0
- data/lib/openapi_openai/models/run_stream_event_one_of6.rb +272 -0
- data/lib/openapi_openai/models/run_stream_event_one_of7.rb +272 -0
- data/lib/openapi_openai/models/run_stream_event_one_of8.rb +272 -0
- data/lib/openapi_openai/models/run_tool_call_object.rb +290 -0
- data/lib/openapi_openai/models/run_tool_call_object_function.rb +240 -0
- data/lib/openapi_openai/models/submit_tool_outputs_run_request.rb +235 -0
- data/lib/openapi_openai/models/submit_tool_outputs_run_request_tool_outputs_inner.rb +225 -0
- data/lib/openapi_openai/models/thread_object.rb +304 -0
- data/lib/openapi_openai/models/thread_stream_event.rb +104 -0
- data/lib/openapi_openai/models/thread_stream_event_one_of.rb +272 -0
- data/lib/openapi_openai/models/transcription_segment.rb +377 -0
- data/lib/openapi_openai/models/{create_embedding_response_data_inner.rb → transcription_word.rb} +38 -37
- data/lib/openapi_openai/models/truncation_object.rb +275 -0
- data/lib/openapi_openai/version.rb +2 -2
- data/lib/openapi_openai.rb +209 -19
- data/openapi_openai.gemspec +2 -2
- data/spec/api/assistants_api_spec.rb +389 -0
- data/spec/api/audio_api_spec.rb +78 -0
- data/spec/api/chat_api_spec.rb +46 -0
- data/spec/api/completions_api_spec.rb +46 -0
- data/spec/api/embeddings_api_spec.rb +46 -0
- data/spec/api/files_api_spec.rb +91 -0
- data/spec/api/fine_tuning_api_spec.rb +106 -0
- data/spec/api/images_api_spec.rb +80 -0
- data/spec/api/models_api_spec.rb +67 -0
- data/spec/api/moderations_api_spec.rb +46 -0
- data/spec/models/assistant_file_object_spec.rb +58 -0
- data/spec/models/assistant_object_spec.rb +94 -0
- data/spec/models/assistant_object_tools_inner_spec.rb +32 -0
- data/spec/models/assistant_stream_event_spec.rb +32 -0
- data/spec/models/assistant_tools_code_spec.rb +40 -0
- data/spec/models/assistant_tools_function_spec.rb +46 -0
- data/spec/models/assistant_tools_retrieval_spec.rb +40 -0
- data/spec/models/assistants_api_named_tool_choice_spec.rb +46 -0
- data/spec/models/assistants_api_response_format_option_spec.rb +32 -0
- data/spec/models/assistants_api_response_format_spec.rb +40 -0
- data/spec/models/assistants_api_tool_choice_option_spec.rb +32 -0
- data/spec/models/chat_completion_function_call_option_spec.rb +36 -0
- data/spec/models/chat_completion_functions_spec.rb +3 -3
- data/spec/models/chat_completion_message_tool_call_chunk_function_spec.rb +42 -0
- data/spec/models/{create_edit_response_choices_inner_spec.rb → chat_completion_message_tool_call_chunk_spec.rb} +15 -15
- data/spec/models/{chat_completion_request_message_function_call_spec.rb → chat_completion_message_tool_call_function_spec.rb} +7 -7
- data/spec/models/chat_completion_message_tool_call_spec.rb +52 -0
- data/spec/models/chat_completion_named_tool_choice_function_spec.rb +36 -0
- data/spec/models/chat_completion_named_tool_choice_spec.rb +46 -0
- data/spec/models/chat_completion_request_assistant_message_function_call_spec.rb +42 -0
- data/spec/models/chat_completion_request_assistant_message_spec.rb +64 -0
- data/spec/models/chat_completion_request_function_message_spec.rb +52 -0
- data/spec/models/chat_completion_request_message_content_part_image_image_url_spec.rb +46 -0
- data/spec/models/chat_completion_request_message_content_part_image_spec.rb +46 -0
- data/spec/models/chat_completion_request_message_content_part_spec.rb +32 -0
- data/spec/models/chat_completion_request_message_content_part_text_spec.rb +46 -0
- data/spec/models/chat_completion_request_message_spec.rb +6 -32
- data/spec/models/chat_completion_request_system_message_spec.rb +52 -0
- data/spec/models/chat_completion_request_tool_message_spec.rb +52 -0
- data/spec/models/chat_completion_request_user_message_content_spec.rb +32 -0
- data/spec/models/chat_completion_request_user_message_spec.rb +52 -0
- data/spec/models/chat_completion_response_message_spec.rb +13 -7
- data/spec/models/chat_completion_role_spec.rb +30 -0
- data/spec/models/chat_completion_stream_response_delta_function_call_spec.rb +42 -0
- data/spec/models/chat_completion_stream_response_delta_spec.rb +14 -8
- data/spec/models/{create_edit_response_spec.rb → chat_completion_token_logprob_spec.rb} +11 -11
- data/spec/models/chat_completion_token_logprob_top_logprobs_inner_spec.rb +48 -0
- data/spec/models/chat_completion_tool_choice_option_spec.rb +32 -0
- data/spec/models/chat_completion_tool_spec.rb +46 -0
- data/spec/models/{create_completion_response_usage_spec.rb → completion_usage_spec.rb} +9 -9
- data/spec/models/create_assistant_file_request_spec.rb +36 -0
- data/spec/models/create_assistant_request_model_spec.rb +21 -0
- data/spec/models/{create_edit_request_spec.rb → create_assistant_request_spec.rb} +18 -12
- data/spec/models/create_chat_completion_function_response_choices_inner_spec.rb +52 -0
- data/spec/models/create_chat_completion_function_response_spec.rb +76 -0
- data/spec/models/create_chat_completion_request_function_call_spec.rb +1 -1
- data/spec/models/create_chat_completion_request_model_spec.rb +1 -12
- data/spec/models/{create_chat_completion_request_function_call_one_of_spec.rb → create_chat_completion_request_response_format_spec.rb} +12 -8
- data/spec/models/create_chat_completion_request_spec.rb +47 -11
- data/spec/models/create_chat_completion_request_stop_spec.rb +1 -1
- data/spec/models/create_chat_completion_response_choices_inner_logprobs_spec.rb +36 -0
- data/spec/models/create_chat_completion_response_choices_inner_spec.rb +12 -6
- data/spec/models/create_chat_completion_response_spec.rb +13 -3
- data/spec/models/create_chat_completion_stream_response_choices_inner_spec.rb +10 -4
- data/spec/models/create_chat_completion_stream_response_spec.rb +13 -3
- data/spec/models/create_completion_request_model_spec.rb +1 -12
- data/spec/models/create_completion_request_prompt_spec.rb +1 -1
- data/spec/models/create_completion_request_spec.rb +19 -13
- data/spec/models/create_completion_request_stop_spec.rb +1 -1
- data/spec/models/create_completion_response_choices_inner_logprobs_spec.rb +4 -4
- data/spec/models/create_completion_response_choices_inner_spec.rb +7 -7
- data/spec/models/create_completion_response_spec.rb +13 -3
- data/spec/models/create_embedding_request_input_spec.rb +1 -1
- data/spec/models/create_embedding_request_model_spec.rb +1 -12
- data/spec/models/create_embedding_request_spec.rb +18 -2
- data/spec/models/create_embedding_response_spec.rb +7 -3
- data/spec/models/create_embedding_response_usage_spec.rb +1 -1
- data/spec/models/create_fine_tuning_job_request_hyperparameters_batch_size_spec.rb +32 -0
- data/spec/models/create_fine_tuning_job_request_hyperparameters_learning_rate_multiplier_spec.rb +32 -0
- data/spec/models/create_fine_tuning_job_request_hyperparameters_n_epochs_spec.rb +32 -0
- data/spec/models/create_fine_tuning_job_request_hyperparameters_spec.rb +48 -0
- data/spec/models/create_fine_tuning_job_request_integrations_inner_spec.rb +42 -0
- data/spec/models/create_fine_tuning_job_request_integrations_inner_type_spec.rb +32 -0
- data/spec/models/create_fine_tuning_job_request_integrations_inner_wandb_spec.rb +54 -0
- data/spec/models/create_fine_tuning_job_request_model_spec.rb +21 -0
- data/spec/models/create_fine_tuning_job_request_spec.rb +72 -0
- data/spec/models/create_image_edit_request_model_spec.rb +21 -0
- data/spec/models/create_image_request_model_spec.rb +21 -0
- data/spec/models/create_image_request_spec.rb +30 -4
- data/spec/models/create_message_request_spec.rb +58 -0
- data/spec/models/create_moderation_request_input_spec.rb +1 -1
- data/spec/models/create_moderation_request_model_spec.rb +1 -12
- data/spec/models/create_moderation_request_spec.rb +1 -1
- data/spec/models/create_moderation_response_results_inner_categories_spec.rb +25 -1
- data/spec/models/create_moderation_response_results_inner_category_scores_spec.rb +25 -1
- data/spec/models/create_moderation_response_results_inner_spec.rb +1 -1
- data/spec/models/create_moderation_response_spec.rb +1 -1
- data/spec/models/create_run_request_model_spec.rb +21 -0
- data/spec/models/{create_fine_tune_request_spec.rb → create_run_request_spec.rb} +31 -19
- data/spec/models/create_speech_request_model_spec.rb +21 -0
- data/spec/models/create_speech_request_spec.rb +68 -0
- data/spec/models/{fine_tune_spec.rb → create_thread_and_run_request_spec.rb} +20 -20
- data/spec/models/create_thread_and_run_request_tools_inner_spec.rb +32 -0
- data/spec/models/{list_fine_tune_events_response_spec.rb → create_thread_request_spec.rb} +9 -9
- data/spec/models/create_transcription200_response_spec.rb +32 -0
- data/spec/models/create_transcription_request_model_spec.rb +1 -12
- data/spec/models/{create_transcription_response_spec.rb → create_transcription_response_json_spec.rb} +7 -7
- data/spec/models/create_transcription_response_verbose_json_spec.rb +60 -0
- data/spec/models/create_translation200_response_spec.rb +32 -0
- data/spec/models/{create_translation_response_spec.rb → create_translation_response_json_spec.rb} +7 -7
- data/spec/models/create_translation_response_verbose_json_spec.rb +54 -0
- data/spec/models/delete_assistant_file_response_spec.rb +52 -0
- data/spec/models/delete_assistant_response_spec.rb +52 -0
- data/spec/models/delete_file_response_spec.rb +5 -1
- data/spec/models/delete_message_response_spec.rb +52 -0
- data/spec/models/delete_model_response_spec.rb +3 -3
- data/spec/models/delete_thread_response_spec.rb +52 -0
- data/spec/models/done_event_spec.rb +50 -0
- data/spec/models/{create_embedding_response_data_inner_spec.rb → embedding_spec.rb} +13 -9
- data/spec/models/error_event_spec.rb +46 -0
- data/spec/models/error_response_spec.rb +1 -1
- data/spec/models/error_spec.rb +3 -3
- data/spec/models/fine_tuning_integration_spec.rb +46 -0
- data/spec/models/fine_tuning_job_checkpoint_metrics_spec.rb +72 -0
- data/spec/models/fine_tuning_job_checkpoint_spec.rb +76 -0
- data/spec/models/{fine_tune_event_spec.rb → fine_tuning_job_error_spec.rb} +10 -16
- data/spec/models/fine_tuning_job_event_spec.rb +68 -0
- data/spec/models/fine_tuning_job_hyperparameters_n_epochs_spec.rb +32 -0
- data/spec/models/fine_tuning_job_hyperparameters_spec.rb +36 -0
- data/spec/models/fine_tuning_job_integrations_inner_spec.rb +32 -0
- data/spec/models/fine_tuning_job_spec.rb +134 -0
- data/spec/models/function_object_spec.rb +48 -0
- data/spec/models/{images_response_data_inner_spec.rb → image_spec.rb} +14 -8
- data/spec/models/images_response_spec.rb +1 -1
- data/spec/models/list_assistant_files_response_spec.rb +60 -0
- data/spec/models/list_assistants_response_spec.rb +60 -0
- data/spec/models/list_files_response_spec.rb +7 -3
- data/spec/models/list_fine_tuning_job_checkpoints_response_spec.rb +64 -0
- data/spec/models/{list_fine_tunes_response_spec.rb → list_fine_tuning_job_events_response_spec.rb} +13 -9
- data/spec/models/list_message_files_response_spec.rb +60 -0
- data/spec/models/list_messages_response_spec.rb +60 -0
- data/spec/models/list_models_response_spec.rb +5 -1
- data/spec/models/list_paginated_fine_tuning_jobs_response_spec.rb +52 -0
- data/spec/models/list_run_steps_response_spec.rb +60 -0
- data/spec/models/list_runs_response_spec.rb +60 -0
- data/spec/models/list_threads_response_spec.rb +60 -0
- data/spec/models/message_content_image_file_object_image_file_spec.rb +36 -0
- data/spec/models/message_content_image_file_object_spec.rb +46 -0
- data/spec/models/message_content_text_annotations_file_citation_object_file_citation_spec.rb +42 -0
- data/spec/models/message_content_text_annotations_file_citation_object_spec.rb +64 -0
- data/spec/models/message_content_text_annotations_file_path_object_file_path_spec.rb +36 -0
- data/spec/models/message_content_text_annotations_file_path_object_spec.rb +64 -0
- data/spec/models/message_content_text_object_spec.rb +46 -0
- data/spec/models/message_content_text_object_text_annotations_inner_spec.rb +32 -0
- data/spec/models/message_content_text_object_text_spec.rb +42 -0
- data/spec/models/message_delta_content_image_file_object_image_file_spec.rb +36 -0
- data/spec/models/message_delta_content_image_file_object_spec.rb +52 -0
- data/spec/models/message_delta_content_text_annotations_file_citation_object_file_citation_spec.rb +42 -0
- data/spec/models/message_delta_content_text_annotations_file_citation_object_spec.rb +70 -0
- data/spec/models/message_delta_content_text_annotations_file_path_object_file_path_spec.rb +36 -0
- data/spec/models/message_delta_content_text_annotations_file_path_object_spec.rb +70 -0
- data/spec/models/message_delta_content_text_object_spec.rb +52 -0
- data/spec/models/message_delta_content_text_object_text_annotations_inner_spec.rb +32 -0
- data/spec/models/message_delta_content_text_object_text_spec.rb +42 -0
- data/spec/models/message_delta_object_delta_content_inner_spec.rb +32 -0
- data/spec/models/message_delta_object_delta_spec.rb +52 -0
- data/spec/models/message_delta_object_spec.rb +52 -0
- data/spec/models/message_file_object_spec.rb +58 -0
- data/spec/models/message_object_content_inner_spec.rb +32 -0
- data/spec/models/message_object_incomplete_details_spec.rb +40 -0
- data/spec/models/message_object_spec.rb +126 -0
- data/spec/models/message_stream_event_one_of1_spec.rb +46 -0
- data/spec/models/message_stream_event_one_of2_spec.rb +46 -0
- data/spec/models/message_stream_event_one_of3_spec.rb +46 -0
- data/spec/models/message_stream_event_one_of4_spec.rb +46 -0
- data/spec/models/message_stream_event_one_of_spec.rb +46 -0
- data/spec/models/message_stream_event_spec.rb +32 -0
- data/spec/models/model_spec.rb +7 -3
- data/spec/models/modify_assistant_request_spec.rb +72 -0
- data/spec/models/modify_message_request_spec.rb +36 -0
- data/spec/models/modify_run_request_spec.rb +36 -0
- data/spec/models/modify_thread_request_spec.rb +36 -0
- data/spec/models/open_ai_file_spec.rb +17 -5
- data/spec/models/run_completion_usage_spec.rb +48 -0
- data/spec/models/run_object_incomplete_details_spec.rb +40 -0
- data/spec/models/run_object_last_error_spec.rb +46 -0
- data/spec/models/run_object_required_action_spec.rb +46 -0
- data/spec/models/run_object_required_action_submit_tool_outputs_spec.rb +36 -0
- data/spec/models/run_object_spec.rb +194 -0
- data/spec/models/run_step_completion_usage_spec.rb +48 -0
- data/spec/models/run_step_delta_object_delta_spec.rb +36 -0
- data/spec/models/run_step_delta_object_delta_step_details_spec.rb +32 -0
- data/spec/models/run_step_delta_object_spec.rb +52 -0
- data/spec/models/run_step_delta_step_details_message_creation_object_message_creation_spec.rb +36 -0
- data/spec/models/run_step_delta_step_details_message_creation_object_spec.rb +46 -0
- data/spec/models/run_step_delta_step_details_tool_calls_code_object_code_interpreter_outputs_inner_spec.rb +32 -0
- data/spec/models/run_step_delta_step_details_tool_calls_code_object_code_interpreter_spec.rb +42 -0
- data/spec/models/run_step_delta_step_details_tool_calls_code_object_spec.rb +58 -0
- data/spec/models/run_step_delta_step_details_tool_calls_code_output_image_object_image_spec.rb +36 -0
- data/spec/models/run_step_delta_step_details_tool_calls_code_output_image_object_spec.rb +52 -0
- data/spec/models/run_step_delta_step_details_tool_calls_code_output_logs_object_spec.rb +52 -0
- data/spec/models/run_step_delta_step_details_tool_calls_function_object_function_spec.rb +48 -0
- data/spec/models/run_step_delta_step_details_tool_calls_function_object_spec.rb +58 -0
- data/spec/models/run_step_delta_step_details_tool_calls_object_spec.rb +46 -0
- data/spec/models/run_step_delta_step_details_tool_calls_object_tool_calls_inner_spec.rb +32 -0
- data/spec/models/run_step_delta_step_details_tool_calls_retrieval_object_spec.rb +58 -0
- data/spec/models/run_step_details_message_creation_object_message_creation_spec.rb +36 -0
- data/spec/models/run_step_details_message_creation_object_spec.rb +46 -0
- data/spec/models/run_step_details_tool_calls_code_object_code_interpreter_outputs_inner_spec.rb +32 -0
- data/spec/models/run_step_details_tool_calls_code_object_code_interpreter_spec.rb +42 -0
- data/spec/models/run_step_details_tool_calls_code_object_spec.rb +52 -0
- data/spec/models/run_step_details_tool_calls_code_output_image_object_image_spec.rb +36 -0
- data/spec/models/run_step_details_tool_calls_code_output_image_object_spec.rb +46 -0
- data/spec/models/run_step_details_tool_calls_code_output_logs_object_spec.rb +46 -0
- data/spec/models/run_step_details_tool_calls_function_object_function_spec.rb +48 -0
- data/spec/models/run_step_details_tool_calls_function_object_spec.rb +52 -0
- data/spec/models/run_step_details_tool_calls_object_spec.rb +46 -0
- data/spec/models/run_step_details_tool_calls_object_tool_calls_inner_spec.rb +32 -0
- data/spec/models/run_step_details_tool_calls_retrieval_object_spec.rb +52 -0
- data/spec/models/run_step_object_last_error_spec.rb +46 -0
- data/spec/models/run_step_object_spec.rb +138 -0
- data/spec/models/run_step_object_step_details_spec.rb +32 -0
- data/spec/models/run_step_stream_event_one_of1_spec.rb +46 -0
- data/spec/models/run_step_stream_event_one_of2_spec.rb +46 -0
- data/spec/models/run_step_stream_event_one_of3_spec.rb +46 -0
- data/spec/models/run_step_stream_event_one_of4_spec.rb +46 -0
- data/spec/models/run_step_stream_event_one_of5_spec.rb +46 -0
- data/spec/models/run_step_stream_event_one_of6_spec.rb +46 -0
- data/spec/models/run_step_stream_event_one_of_spec.rb +46 -0
- data/spec/models/run_step_stream_event_spec.rb +32 -0
- data/spec/models/run_stream_event_one_of1_spec.rb +46 -0
- data/spec/models/run_stream_event_one_of2_spec.rb +46 -0
- data/spec/models/run_stream_event_one_of3_spec.rb +46 -0
- data/spec/models/run_stream_event_one_of4_spec.rb +46 -0
- data/spec/models/run_stream_event_one_of5_spec.rb +46 -0
- data/spec/models/run_stream_event_one_of6_spec.rb +46 -0
- data/spec/models/run_stream_event_one_of7_spec.rb +46 -0
- data/spec/models/run_stream_event_one_of8_spec.rb +46 -0
- data/spec/models/run_stream_event_one_of_spec.rb +46 -0
- data/spec/models/{create_fine_tune_request_model_spec.rb → run_stream_event_spec.rb} +3 -3
- data/spec/models/run_tool_call_object_function_spec.rb +42 -0
- data/spec/models/run_tool_call_object_spec.rb +52 -0
- data/spec/models/submit_tool_outputs_run_request_spec.rb +42 -0
- data/spec/models/submit_tool_outputs_run_request_tool_outputs_inner_spec.rb +42 -0
- data/spec/models/thread_object_spec.rb +58 -0
- data/spec/models/thread_stream_event_one_of_spec.rb +46 -0
- data/spec/models/{create_edit_request_model_spec.rb → thread_stream_event_spec.rb} +3 -3
- data/spec/models/transcription_segment_spec.rb +90 -0
- data/spec/models/transcription_word_spec.rb +48 -0
- data/spec/models/truncation_object_spec.rb +46 -0
- data/spec/spec_helper.rb +1 -1
- metadata +867 -106
- data/docs/CreateCompletionResponseUsage.md +0 -22
- data/docs/CreateEditRequest.md +0 -28
- data/docs/CreateEditResponse.md +0 -24
- data/docs/CreateEditResponseChoicesInner.md +0 -24
- data/docs/CreateEmbeddingResponseDataInner.md +0 -22
- data/docs/CreateFineTuneRequest.md +0 -40
- data/docs/CreateTranslationResponse.md +0 -18
- data/docs/FineTune.md +0 -42
- data/docs/ImagesResponseDataInner.md +0 -20
- data/docs/ListFineTuneEventsResponse.md +0 -20
- data/docs/ListFineTunesResponse.md +0 -20
- data/docs/OpenAIApi.md +0 -1499
- data/lib/openapi_openai/api/open_ai_api.rb +0 -1583
- data/spec/api/open_ai_api_spec.rb +0 -306
data/docs/CreateAssistantRequest.md (new file):

````diff
@@ -0,0 +1,30 @@
+# OpenApiOpenAIClient::CreateAssistantRequest
+
+## Properties
+
+| Name | Type | Description | Notes |
+| ---- | ---- | ----------- | ----- |
+| **model** | [**CreateAssistantRequestModel**](CreateAssistantRequestModel.md) | | |
+| **name** | **String** | The name of the assistant. The maximum length is 256 characters. | [optional] |
+| **description** | **String** | The description of the assistant. The maximum length is 512 characters. | [optional] |
+| **instructions** | **String** | The system instructions that the assistant uses. The maximum length is 256,000 characters. | [optional] |
+| **tools** | [**Array<AssistantObjectToolsInner>**](AssistantObjectToolsInner.md) | A list of tool enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `retrieval`, or `function`. | [optional] |
+| **file_ids** | **Array<String>** | A list of [file](/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order. | [optional] |
+| **metadata** | **Object** | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maxium of 512 characters long. | [optional] |
+
+## Example
+
+```ruby
+require 'openapi_openai'
+
+instance = OpenApiOpenAIClient::CreateAssistantRequest.new(
+  model: null,
+  name: null,
+  description: null,
+  instructions: null,
+  tools: null,
+  file_ids: null,
+  metadata: null
+)
+```
+
````
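As a rough usage sketch only (hypothetical model name and values; this assumes the anyOf `model` wrapper accepts a plain model-name string, which the generated docs leave unspecified):

```ruby
require 'openapi_openai'

# Hypothetical example values -- property names follow the table above.
request = OpenApiOpenAIClient::CreateAssistantRequest.new(
  model: 'gpt-4-turbo',                      # assumed to pass through as a plain string
  name: 'Data Analyst',
  description: 'Answers questions about uploaded files.',
  instructions: 'You are a helpful data analyst.',
  file_ids: ['file-abc123'],                 # hypothetical file ID
  metadata: { 'team' => 'analytics' }
)
```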
data/docs/CreateAssistantRequestModel.md (new file):

````diff
@@ -0,0 +1,15 @@
+# OpenApiOpenAIClient::CreateAssistantRequestModel
+
+## Properties
+
+| Name | Type | Description | Notes |
+| ---- | ---- | ----------- | ----- |
+
+## Example
+
+```ruby
+require 'openapi_openai'
+
+instance = OpenApiOpenAIClient::CreateAssistantRequestModel.new()
+```
+
````
data/docs/CreateChatCompletionFunctionResponse.md (new file):

````diff
@@ -0,0 +1,30 @@
+# OpenApiOpenAIClient::CreateChatCompletionFunctionResponse
+
+## Properties
+
+| Name | Type | Description | Notes |
+| ---- | ---- | ----------- | ----- |
+| **id** | **String** | A unique identifier for the chat completion. | |
+| **choices** | [**Array<CreateChatCompletionFunctionResponseChoicesInner>**](CreateChatCompletionFunctionResponseChoicesInner.md) | A list of chat completion choices. Can be more than one if `n` is greater than 1. | |
+| **created** | **Integer** | The Unix timestamp (in seconds) of when the chat completion was created. | |
+| **model** | **String** | The model used for the chat completion. | |
+| **system_fingerprint** | **String** | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism. | [optional] |
+| **object** | **String** | The object type, which is always `chat.completion`. | |
+| **usage** | [**CompletionUsage**](CompletionUsage.md) | | [optional] |
+
+## Example
+
+```ruby
+require 'openapi_openai'
+
+instance = OpenApiOpenAIClient::CreateChatCompletionFunctionResponse.new(
+  id: null,
+  choices: null,
+  created: null,
+  model: null,
+  system_fingerprint: null,
+  object: null,
+  usage: null
+)
+```
+
````
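For orientation, a brief sketch of reading the top-level fields off an already-deserialized response (attribute names follow the table above; `total_tokens` on `CompletionUsage` is assumed from the usage schema):

```ruby
# `response` is assumed to be a deserialized
# OpenApiOpenAIClient::CreateChatCompletionFunctionResponse instance.
puts response.id                  # unique identifier for the chat completion
puts response.model               # model used for the completion
puts response.system_fingerprint  # may be nil (optional field)
puts response.usage.total_tokens if response.usage
```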
data/docs/CreateChatCompletionFunctionResponseChoicesInner.md (new file):

````diff
@@ -0,0 +1,22 @@
+# OpenApiOpenAIClient::CreateChatCompletionFunctionResponseChoicesInner
+
+## Properties
+
+| Name | Type | Description | Notes |
+| ---- | ---- | ----------- | ----- |
+| **finish_reason** | **String** | The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, or `function_call` if the model called a function. | |
+| **index** | **Integer** | The index of the choice in the list of choices. | |
+| **message** | [**ChatCompletionResponseMessage**](ChatCompletionResponseMessage.md) | | |
+
+## Example
+
+```ruby
+require 'openapi_openai'
+
+instance = OpenApiOpenAIClient::CreateChatCompletionFunctionResponseChoicesInner.new(
+  finish_reason: null,
+  index: null,
+  message: null
+)
+```
+
````
@@ -4,20 +4,26 @@
 
 | Name | Type | Description | Notes |
 | ---- | ---- | ----------- | ----- |
+| **messages** | [**Array<ChatCompletionRequestMessage>**](ChatCompletionRequestMessage.md) | A list of messages comprising the conversation so far. [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models). | |
 | **model** | [**CreateChatCompletionRequestModel**](CreateChatCompletionRequestModel.md) | | |
-| **
-| **
-| **
+| **frequency_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) | [optional][default to 0] |
+| **logit_bias** | **Hash<String, Integer>** | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | [optional] |
+| **logprobs** | **Boolean** | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. | [optional][default to false] |
+| **top_logprobs** | **Integer** | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. | [optional] |
+| **max_tokens** | **Integer** | The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens. | [optional] |
+| **n** | **Integer** | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. | [optional][default to 1] |
+| **presence_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) | [optional][default to 0] |
+| **response_format** | [**CreateChatCompletionRequestResponseFormat**](CreateChatCompletionRequestResponseFormat.md) | | [optional] |
+| **seed** | **Integer** | This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. | [optional] |
+| **stop** | [**CreateChatCompletionRequestStop**](CreateChatCompletionRequestStop.md) | | [optional] |
+| **stream** | **Boolean** | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions). | [optional][default to false] |
 | **temperature** | **Float** | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. | [optional][default to 1] |
 | **top_p** | **Float** | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. | [optional][default to 1] |
-| **
-| **
-| **stop** | [**CreateChatCompletionRequestStop**](CreateChatCompletionRequestStop.md) | | [optional] |
-| **max_tokens** | **Integer** | The maximum number of [tokens](/tokenizer) to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens. | [optional] |
-| **presence_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details) | [optional][default to 0] |
-| **frequency_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details) | [optional][default to 0] |
-| **logit_bias** | **Object** | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | [optional] |
+| **tools** | [**Array<ChatCompletionTool>**](ChatCompletionTool.md) | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported. | [optional] |
+| **tool_choice** | [**ChatCompletionToolChoiceOption**](ChatCompletionToolChoiceOption.md) | | [optional] |
 | **user** | **String** | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). | [optional] |
+| **function_call** | [**CreateChatCompletionRequestFunctionCall**](CreateChatCompletionRequestFunctionCall.md) | | [optional] |
+| **functions** | [**Array<ChatCompletionFunctions>**](ChatCompletionFunctions.md) | Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for. | [optional] |
 
 ## Example
 
@@ -25,20 +31,26 @@
 require 'openapi_openai'
 
 instance = OpenApiOpenAIClient::CreateChatCompletionRequest.new(
-model: null,
 messages: null,
-
-function_call: null,
-temperature: 1,
-top_p: 1,
-n: 1,
-stream: null,
-stop: null,
-max_tokens: null,
-presence_penalty: null,
+model: null,
 frequency_penalty: null,
 logit_bias: null,
-
+logprobs: null,
+top_logprobs: null,
+max_tokens: null,
+n: 1,
+presence_penalty: null,
+response_format: null,
+seed: null,
+stop: null,
+stream: null,
+temperature: 1,
+top_p: 1,
+tools: null,
+tool_choice: null,
+user: user-1234,
+function_call: null,
+functions: null
 )
 ```
 
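Taken together, the rewritten request documentation above adds tool calling, JSON-mode output, `seed`-based reproducibility, and log-probability options. A minimal sketch of a request exercising some of the new fields follows; every concrete value is an illustrative assumption, not something shipped in this package.

```ruby
require 'openapi_openai'

request = OpenApiOpenAIClient::CreateChatCompletionRequest.new(
  model: 'gpt-4-turbo-preview',                     # assumed model identifier
  messages: [{ role: 'user', content: 'List three primes as JSON.' }],
  response_format: { type: 'json_object' },         # JSON mode, see CreateChatCompletionRequestResponseFormat
  seed: 42,                                         # best-effort determinism
  logprobs: true,
  top_logprobs: 3,                                  # only valid when logprobs is true
  tools: [{
    type: 'function',
    function: { name: 'save_primes', parameters: { type: 'object' } }
  }],
  tool_choice: 'auto'
)
```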
@@ -14,7 +14,7 @@ require 'openapi_openai'
 OpenApiOpenAIClient::CreateChatCompletionRequestFunctionCall.openapi_one_of
 # =>
 # [
-# :'
+# :'ChatCompletionFunctionCallOption',
 # :'String'
 # ]
 ```
@@ -29,7 +29,7 @@ Find the appropriate object from the `openapi_one_of` list and casts the data in
 require 'openapi_openai'
 
 OpenApiOpenAIClient::CreateChatCompletionRequestFunctionCall.build(data)
-# => #<
+# => #<ChatCompletionFunctionCallOption:0x00007fdd4aab02a0>
 
 OpenApiOpenAIClient::CreateChatCompletionRequestFunctionCall.build(data_that_doesnt_match)
 # => nil
@@ -43,7 +43,7 @@ OpenApiOpenAIClient::CreateChatCompletionRequestFunctionCall.build(data_that_doe
 
 #### Return type
 
-- `
+- `ChatCompletionFunctionCallOption`
 - `String`
 - `nil` (if no type matches)
 
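As the hunks above show, the `function_call` oneOf now resolves to `ChatCompletionFunctionCallOption` rather than the old inline class. A small sketch of how `build` is expected to behave with hypothetical inputs:

```ruby
require 'openapi_openai'

# Hypothetical data; `build` casts the input into the first matching oneOf schema.
named = OpenApiOpenAIClient::CreateChatCompletionRequestFunctionCall.build({ 'name' => 'get_weather' })
# expected: a ChatCompletionFunctionCallOption wrapping the function name

mode = OpenApiOpenAIClient::CreateChatCompletionRequestFunctionCall.build('auto')
# expected: the plain String form ('none' or 'auto')
```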
@@ -1,47 +1,15 @@
 # OpenApiOpenAIClient::CreateChatCompletionRequestModel
 
-##
+## Properties
 
-
+| Name | Type | Description | Notes |
+| ---- | ---- | ----------- | ----- |
 
-
-
-#### Example
-
-```ruby
-require 'openapi_openai'
-
-OpenApiOpenAIClient::CreateChatCompletionRequestModel.openapi_one_of
-# =>
-# [
-# :'String'
-# ]
-```
-
-### build
-
-Find the appropriate object from the `openapi_one_of` list and casts the data into it.
-
-#### Example
+## Example
 
 ```ruby
 require 'openapi_openai'
 
-OpenApiOpenAIClient::CreateChatCompletionRequestModel.
-# => #<String:0x00007fdd4aab02a0>
-
-OpenApiOpenAIClient::CreateChatCompletionRequestModel.build(data_that_doesnt_match)
-# => nil
+instance = OpenApiOpenAIClient::CreateChatCompletionRequestModel.new()
 ```
 
-#### Parameters
-
-| Name | Type | Description |
-| ---- | ---- | ----------- |
-| **data** | **Mixed** | data to be matched against the list of oneOf items |
-
-#### Return type
-
-- `String`
-- `nil` (if no type matches)
-
@@ -0,0 +1,18 @@
+# OpenApiOpenAIClient::CreateChatCompletionRequestResponseFormat
+
+## Properties
+
+| Name | Type | Description | Notes |
+| ---- | ---- | ----------- | ----- |
+| **type** | **String** | Must be one of `text` or `json_object`. | [optional][default to 'text'] |
+
+## Example
+
+```ruby
+require 'openapi_openai'
+
+instance = OpenApiOpenAIClient::CreateChatCompletionRequestResponseFormat.new(
+type: json_object
+)
+```
+
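The generated example above prints a bare `json_object`; in actual Ruby code the type would be passed as a string. A minimal sketch (the upstream API additionally expects the prompt itself to ask for JSON when JSON mode is enabled):

```ruby
require 'openapi_openai'

# Illustrative only; 'text' is the documented default.
format = OpenApiOpenAIClient::CreateChatCompletionRequestResponseFormat.new(type: 'json_object')
```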
@@ -4,12 +4,13 @@
 
 | Name | Type | Description | Notes |
 | ---- | ---- | ----------- | ----- |
-| **id** | **String** |
-| **
-| **created** | **Integer** |
-| **model** | **String** |
-| **
-| **
+| **id** | **String** | A unique identifier for the chat completion. | |
+| **choices** | [**Array<CreateChatCompletionResponseChoicesInner>**](CreateChatCompletionResponseChoicesInner.md) | A list of chat completion choices. Can be more than one if `n` is greater than 1. | |
+| **created** | **Integer** | The Unix timestamp (in seconds) of when the chat completion was created. | |
+| **model** | **String** | The model used for the chat completion. | |
+| **system_fingerprint** | **String** | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism. | [optional] |
+| **object** | **String** | The object type, which is always `chat.completion`. | |
+| **usage** | [**CompletionUsage**](CompletionUsage.md) | | [optional] |
 
 ## Example
 
@@ -18,10 +19,11 @@ require 'openapi_openai'
 
 instance = OpenApiOpenAIClient::CreateChatCompletionResponse.new(
 id: null,
-
+choices: null,
 created: null,
 model: null,
-
+system_fingerprint: null,
+object: null,
 usage: null
 )
 ```
@@ -4,9 +4,10 @@
 
 | Name | Type | Description | Notes |
 | ---- | ---- | ----------- | ----- |
-| **
-| **
-| **
+| **finish_reason** | **String** | The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. | |
+| **index** | **Integer** | The index of the choice in the list of choices. | |
+| **message** | [**ChatCompletionResponseMessage**](ChatCompletionResponseMessage.md) | | |
+| **logprobs** | [**CreateChatCompletionResponseChoicesInnerLogprobs**](CreateChatCompletionResponseChoicesInnerLogprobs.md) | | |
 
 ## Example
 
@@ -14,9 +15,10 @@
 require 'openapi_openai'
 
 instance = OpenApiOpenAIClient::CreateChatCompletionResponseChoicesInner.new(
+finish_reason: null,
 index: null,
 message: null,
-
+logprobs: null
 )
 ```
 
@@ -0,0 +1,18 @@
+# OpenApiOpenAIClient::CreateChatCompletionResponseChoicesInnerLogprobs
+
+## Properties
+
+| Name | Type | Description | Notes |
+| ---- | ---- | ----------- | ----- |
+| **content** | [**Array<ChatCompletionTokenLogprob>**](ChatCompletionTokenLogprob.md) | A list of message content tokens with log probability information. | |
+
+## Example
+
+```ruby
+require 'openapi_openai'
+
+instance = OpenApiOpenAIClient::CreateChatCompletionResponseChoicesInnerLogprobs.new(
+content: null
+)
+```
+
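For orientation (not part of the diff): with the new `logprobs` object documented above, reading token-level log probabilities from a parsed response could look like the sketch below, assuming `response` is a CreateChatCompletionResponse obtained with `logprobs: true` and that ChatCompletionTokenLogprob exposes `token` and `logprob` as listed in this release's docs.

```ruby
choice = response.choices.first
unless choice.logprobs.nil?
  choice.logprobs.content.each do |tok|
    puts format('%-12s %.4f', tok.token, tok.logprob)
  end
end
```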
@@ -4,11 +4,12 @@
 
 | Name | Type | Description | Notes |
 | ---- | ---- | ----------- | ----- |
-| **id** | **String** |
-| **
-| **created** | **Integer** |
-| **model** | **String** |
-| **
+| **id** | **String** | A unique identifier for the chat completion. Each chunk has the same ID. | |
+| **choices** | [**Array<CreateChatCompletionStreamResponseChoicesInner>**](CreateChatCompletionStreamResponseChoicesInner.md) | A list of chat completion choices. Can be more than one if `n` is greater than 1. | |
+| **created** | **Integer** | The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp. | |
+| **model** | **String** | The model to generate the completion. | |
+| **system_fingerprint** | **String** | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism. | [optional] |
+| **object** | **String** | The object type, which is always `chat.completion.chunk`. | |
 
 ## Example
 
@@ -17,10 +18,11 @@ require 'openapi_openai'
 
 instance = OpenApiOpenAIClient::CreateChatCompletionStreamResponse.new(
 id: null,
-
+choices: null,
 created: null,
 model: null,
-
+system_fingerprint: null,
+object: null
 )
 ```
 
@@ -4,9 +4,10 @@
 
 | Name | Type | Description | Notes |
 | ---- | ---- | ----------- | ----- |
-| **
-| **
-| **finish_reason** | **String** |
+| **delta** | [**ChatCompletionStreamResponseDelta**](ChatCompletionStreamResponseDelta.md) | | |
+| **logprobs** | [**CreateChatCompletionResponseChoicesInnerLogprobs**](CreateChatCompletionResponseChoicesInnerLogprobs.md) | | [optional] |
+| **finish_reason** | **String** | The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. | |
+| **index** | **Integer** | The index of the choice in the list of choices. | |
 
 ## Example
 
@@ -14,9 +15,10 @@
 require 'openapi_openai'
 
 instance = OpenApiOpenAIClient::CreateChatCompletionStreamResponseChoicesInner.new(
-index: null,
 delta: null,
-
+logprobs: null,
+finish_reason: null,
+index: null
 )
 ```
 
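Not part of the diff: a rough sketch of how the chunk objects documented above are typically consumed, assuming `chunk` is a parsed CreateChatCompletionStreamResponse and that ChatCompletionStreamResponseDelta exposes a `content` attribute (as listed in this release). SSE transport handling is out of scope here.

```ruby
chunk.choices.each do |c|
  print c.delta.content unless c.delta.content.nil?
  puts "\n[#{c.finish_reason}]" unless c.finish_reason.nil?
end
```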
@@ -6,19 +6,20 @@
 | ---- | ---- | ----------- | ----- |
 | **model** | [**CreateCompletionRequestModel**](CreateCompletionRequestModel.md) | | |
 | **prompt** | [**CreateCompletionRequestPrompt**](CreateCompletionRequestPrompt.md) | | |
-| **
-| **max_tokens** | **Integer** | The maximum number of [tokens](/tokenizer) to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens. | [optional][default to 16] |
-| **temperature** | **Float** | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. | [optional][default to 1] |
-| **top_p** | **Float** | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. | [optional][default to 1] |
-| **n** | **Integer** | How many completions to generate for each prompt. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`. | [optional][default to 1] |
-| **stream** | **Boolean** | Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb). | [optional][default to false] |
-| **logprobs** | **Integer** | Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. | [optional] |
+| **best_of** | **Integer** | Generates `best_of` completions server-side and returns the \"best\" (the one with the highest log probability per token). Results cannot be streamed. When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return – `best_of` must be greater than `n`. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`. | [optional][default to 1] |
 | **echo** | **Boolean** | Echo back the prompt in addition to the completion | [optional][default to false] |
+| **frequency_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) | [optional][default to 0] |
+| **logit_bias** | **Hash<String, Integer>** | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass `{\"50256\": -100}` to prevent the <|endoftext|> token from being generated. | [optional] |
+| **logprobs** | **Integer** | Include the log probabilities on the `logprobs` most likely output tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. | [optional] |
+| **max_tokens** | **Integer** | The maximum number of [tokens](/tokenizer) that can be generated in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens. | [optional][default to 16] |
+| **n** | **Integer** | How many completions to generate for each prompt. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`. | [optional][default to 1] |
+| **presence_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) | [optional][default to 0] |
+| **seed** | **Integer** | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. | [optional] |
 | **stop** | [**CreateCompletionRequestStop**](CreateCompletionRequestStop.md) | | [optional] |
-| **
-| **
-| **
-| **
+| **stream** | **Boolean** | Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions). | [optional][default to false] |
+| **suffix** | **String** | The suffix that comes after a completion of inserted text. This parameter is only supported for `gpt-3.5-turbo-instruct`. | [optional] |
+| **temperature** | **Float** | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. | [optional][default to 1] |
+| **top_p** | **Float** | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. | [optional][default to 1] |
 | **user** | **String** | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). | [optional] |
 
 ## Example
@@ -29,19 +30,20 @@ require 'openapi_openai'
 instance = OpenApiOpenAIClient::CreateCompletionRequest.new(
 model: null,
 prompt: null,
-
-max_tokens: 16,
-temperature: 1,
-top_p: 1,
-n: 1,
-stream: null,
-logprobs: null,
+best_of: null,
 echo: null,
-stop: null,
-presence_penalty: null,
 frequency_penalty: null,
-best_of: null,
 logit_bias: null,
+logprobs: null,
+max_tokens: 16,
+n: 1,
+presence_penalty: null,
+seed: null,
+stop: null,
+stream: null,
+suffix: test.,
+temperature: 1,
+top_p: 1,
 user: user-1234
 )
 ```
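A brief, illustrative sketch of a legacy-completions request using the newly documented fields (`seed`, `suffix`, `logit_bias`); the model name and prompt are assumptions, while the `50256` bias example comes from the description above.

```ruby
require 'openapi_openai'

request = OpenApiOpenAIClient::CreateCompletionRequest.new(
  model: 'gpt-3.5-turbo-instruct',        # assumed model identifier
  prompt: 'Write a haiku about diffs',
  max_tokens: 16,
  seed: 42,                               # best-effort determinism
  suffix: '.',                            # appended after the inserted completion
  logit_bias: { '50256' => -100 }         # discourage <|endoftext|>, per the logit_bias description
)
```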
@@ -1,47 +1,15 @@
 # OpenApiOpenAIClient::CreateCompletionRequestModel
 
-##
+## Properties
 
-
+| Name | Type | Description | Notes |
+| ---- | ---- | ----------- | ----- |
 
-
-
-#### Example
-
-```ruby
-require 'openapi_openai'
-
-OpenApiOpenAIClient::CreateCompletionRequestModel.openapi_one_of
-# =>
-# [
-# :'String'
-# ]
-```
-
-### build
-
-Find the appropriate object from the `openapi_one_of` list and casts the data into it.
-
-#### Example
+## Example
 
 ```ruby
 require 'openapi_openai'
 
-OpenApiOpenAIClient::CreateCompletionRequestModel.
-# => #<String:0x00007fdd4aab02a0>
-
-OpenApiOpenAIClient::CreateCompletionRequestModel.build(data_that_doesnt_match)
-# => nil
+instance = OpenApiOpenAIClient::CreateCompletionRequestModel.new()
 ```
 
-#### Parameters
-
-| Name | Type | Description |
-| ---- | ---- | ----------- |
-| **data** | **Mixed** | data to be matched against the list of oneOf items |
-
-#### Return type
-
-- `String`
-- `nil` (if no type matches)
-
@@ -4,12 +4,13 @@
 
 | Name | Type | Description | Notes |
 | ---- | ---- | ----------- | ----- |
-| **id** | **String** |
-| **
-| **created** | **Integer** |
-| **model** | **String** |
-| **
-| **
+| **id** | **String** | A unique identifier for the completion. | |
+| **choices** | [**Array<CreateCompletionResponseChoicesInner>**](CreateCompletionResponseChoicesInner.md) | The list of completion choices the model generated for the input prompt. | |
+| **created** | **Integer** | The Unix timestamp (in seconds) of when the completion was created. | |
+| **model** | **String** | The model used for completion. | |
+| **system_fingerprint** | **String** | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism. | [optional] |
+| **object** | **String** | The object type, which is always \"text_completion\" | |
+| **usage** | [**CompletionUsage**](CompletionUsage.md) | | [optional] |
 
 ## Example
 
@@ -18,10 +19,11 @@ require 'openapi_openai'
 
 instance = OpenApiOpenAIClient::CreateCompletionResponse.new(
 id: null,
-
+choices: null,
 created: null,
 model: null,
-
+system_fingerprint: null,
+object: null,
 usage: null
 )
 ```
@@ -4,10 +4,10 @@
 
 | Name | Type | Description | Notes |
 | ---- | ---- | ----------- | ----- |
-| **
+| **finish_reason** | **String** | The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, or `content_filter` if content was omitted due to a flag from our content filters. | |
 | **index** | **Integer** | | |
 | **logprobs** | [**CreateCompletionResponseChoicesInnerLogprobs**](CreateCompletionResponseChoicesInnerLogprobs.md) | | |
-| **
+| **text** | **String** | | |
 
 ## Example
 
@@ -15,10 +15,10 @@
 require 'openapi_openai'
 
 instance = OpenApiOpenAIClient::CreateCompletionResponseChoicesInner.new(
-
+finish_reason: null,
 index: null,
 logprobs: null,
-
+text: null
 )
 ```
 
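Finally, as an orientation sketch (not part of the generated docs), the fields documented above can be read off a parsed response, assuming `completion` is a CreateCompletionResponse:

```ruby
completion.choices.each do |c|
  puts "#{c.index}: #{c.text.inspect} (finish_reason: #{c.finish_reason})"
end
```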