openapi_openai 1.0.0 → 1.1.0

Files changed (766)
  1. checksums.yaml +4 -4
  2. data/Gemfile.lock +2 -2
  3. data/README.md +274 -51
  4. data/docs/AssistantFileObject.md +24 -0
  5. data/docs/AssistantObject.md +36 -0
  6. data/docs/AssistantObjectToolsInner.md +51 -0
  7. data/docs/AssistantStreamEvent.md +57 -0
  8. data/docs/AssistantToolsCode.md +18 -0
  9. data/docs/AssistantToolsFunction.md +20 -0
  10. data/docs/AssistantToolsRetrieval.md +18 -0
  11. data/docs/AssistantsApi.md +2017 -0
  12. data/docs/AssistantsApiNamedToolChoice.md +20 -0
  13. data/docs/AssistantsApiResponseFormat.md +18 -0
  14. data/docs/AssistantsApiResponseFormatOption.md +49 -0
  15. data/docs/AssistantsApiToolChoiceOption.md +49 -0
  16. data/docs/AudioApi.md +235 -0
  17. data/docs/ChatApi.md +75 -0
  18. data/docs/{CreateChatCompletionRequestFunctionCallOneOf.md → ChatCompletionFunctionCallOption.md} +2 -2
  19. data/docs/ChatCompletionFunctions.md +3 -3
  20. data/docs/ChatCompletionMessageToolCall.md +22 -0
  21. data/docs/ChatCompletionMessageToolCallChunk.md +24 -0
  22. data/docs/{ChatCompletionRequestMessageFunctionCall.md → ChatCompletionMessageToolCallChunkFunction.md} +2 -2
  23. data/docs/ChatCompletionMessageToolCallFunction.md +20 -0
  24. data/docs/ChatCompletionNamedToolChoice.md +20 -0
  25. data/docs/ChatCompletionNamedToolChoiceFunction.md +18 -0
  26. data/docs/ChatCompletionRequestAssistantMessage.md +26 -0
  27. data/docs/ChatCompletionRequestAssistantMessageFunctionCall.md +20 -0
  28. data/docs/ChatCompletionRequestFunctionMessage.md +22 -0
  29. data/docs/ChatCompletionRequestMessage.md +45 -14
  30. data/docs/ChatCompletionRequestMessageContentPart.md +49 -0
  31. data/docs/ChatCompletionRequestMessageContentPartImage.md +20 -0
  32. data/docs/ChatCompletionRequestMessageContentPartImageImageUrl.md +20 -0
  33. data/docs/ChatCompletionRequestMessageContentPartText.md +20 -0
  34. data/docs/ChatCompletionRequestSystemMessage.md +22 -0
  35. data/docs/ChatCompletionRequestToolMessage.md +22 -0
  36. data/docs/ChatCompletionRequestUserMessage.md +22 -0
  37. data/docs/ChatCompletionRequestUserMessageContent.md +49 -0
  38. data/docs/ChatCompletionResponseMessage.md +5 -3
  39. data/docs/ChatCompletionRole.md +15 -0
  40. data/docs/ChatCompletionStreamResponseDelta.md +6 -4
  41. data/docs/ChatCompletionStreamResponseDeltaFunctionCall.md +20 -0
  42. data/docs/ChatCompletionTokenLogprob.md +24 -0
  43. data/docs/ChatCompletionTokenLogprobTopLogprobsInner.md +22 -0
  44. data/docs/ChatCompletionTool.md +20 -0
  45. data/docs/ChatCompletionToolChoiceOption.md +49 -0
  46. data/docs/CompletionUsage.md +22 -0
  47. data/docs/CompletionsApi.md +75 -0
  48. data/docs/CreateAssistantFileRequest.md +18 -0
  49. data/docs/CreateAssistantRequest.md +30 -0
  50. data/docs/CreateAssistantRequestModel.md +15 -0
  51. data/docs/CreateChatCompletionFunctionResponse.md +30 -0
  52. data/docs/CreateChatCompletionFunctionResponseChoicesInner.md +22 -0
  53. data/docs/CreateChatCompletionRequest.md +33 -21
  54. data/docs/CreateChatCompletionRequestFunctionCall.md +3 -3
  55. data/docs/CreateChatCompletionRequestModel.md +5 -37
  56. data/docs/CreateChatCompletionRequestResponseFormat.md +18 -0
  57. data/docs/CreateChatCompletionResponse.md +10 -8
  58. data/docs/CreateChatCompletionResponseChoicesInner.md +6 -4
  59. data/docs/CreateChatCompletionResponseChoicesInnerLogprobs.md +18 -0
  60. data/docs/CreateChatCompletionStreamResponse.md +9 -7
  61. data/docs/CreateChatCompletionStreamResponseChoicesInner.md +7 -5
  62. data/docs/CreateCompletionRequest.md +23 -21
  63. data/docs/CreateCompletionRequestModel.md +5 -37
  64. data/docs/CreateCompletionResponse.md +10 -8
  65. data/docs/CreateCompletionResponseChoicesInner.md +4 -4
  66. data/docs/CreateCompletionResponseChoicesInnerLogprobs.md +6 -6
  67. data/docs/CreateEmbeddingRequest.md +6 -2
  68. data/docs/CreateEmbeddingRequestModel.md +5 -37
  69. data/docs/CreateEmbeddingResponse.md +5 -5
  70. data/docs/CreateEmbeddingResponseUsage.md +2 -2
  71. data/docs/CreateFineTuningJobRequest.md +30 -0
  72. data/docs/CreateFineTuningJobRequestHyperparameters.md +22 -0
  73. data/docs/CreateFineTuningJobRequestHyperparametersBatchSize.md +49 -0
  74. data/docs/CreateFineTuningJobRequestHyperparametersLearningRateMultiplier.md +49 -0
  75. data/docs/CreateFineTuningJobRequestHyperparametersNEpochs.md +49 -0
  76. data/docs/CreateFineTuningJobRequestIntegrationsInner.md +20 -0
  77. data/docs/{CreateFineTuneRequestModel.md → CreateFineTuningJobRequestIntegrationsInnerType.md} +4 -4
  78. data/docs/CreateFineTuningJobRequestIntegrationsInnerWandb.md +24 -0
  79. data/docs/CreateFineTuningJobRequestModel.md +15 -0
  80. data/docs/CreateImageEditRequestModel.md +15 -0
  81. data/docs/CreateImageRequest.md +11 -5
  82. data/docs/CreateImageRequestModel.md +15 -0
  83. data/docs/CreateMessageRequest.md +24 -0
  84. data/docs/CreateModerationRequestModel.md +5 -37
  85. data/docs/CreateModerationResponse.md +3 -3
  86. data/docs/CreateModerationResponseResultsInner.md +1 -1
  87. data/docs/CreateModerationResponseResultsInnerCategories.md +15 -7
  88. data/docs/CreateModerationResponseResultsInnerCategoryScores.md +15 -7
  89. data/docs/CreateRunRequest.md +44 -0
  90. data/docs/CreateRunRequestModel.md +15 -0
  91. data/docs/CreateSpeechRequest.md +26 -0
  92. data/docs/CreateSpeechRequestModel.md +15 -0
  93. data/docs/CreateThreadAndRunRequest.md +42 -0
  94. data/docs/CreateThreadAndRunRequestToolsInner.md +51 -0
  95. data/docs/CreateThreadRequest.md +20 -0
  96. data/docs/CreateTranscription200Response.md +49 -0
  97. data/docs/CreateTranscriptionRequestModel.md +5 -37
  98. data/docs/CreateTranscriptionResponseJson.md +18 -0
  99. data/docs/CreateTranscriptionResponseVerboseJson.md +26 -0
  100. data/docs/CreateTranslation200Response.md +49 -0
  101. data/docs/{CreateTranscriptionResponse.md → CreateTranslationResponseJson.md} +2 -2
  102. data/docs/CreateTranslationResponseVerboseJson.md +24 -0
  103. data/docs/DeleteAssistantFileResponse.md +22 -0
  104. data/docs/DeleteAssistantResponse.md +22 -0
  105. data/docs/DeleteMessageResponse.md +22 -0
  106. data/docs/DeleteModelResponse.md +3 -3
  107. data/docs/DeleteThreadResponse.md +22 -0
  108. data/docs/DoneEvent.md +20 -0
  109. data/docs/Embedding.md +22 -0
  110. data/docs/EmbeddingsApi.md +75 -0
  111. data/docs/Error.md +4 -4
  112. data/docs/ErrorEvent.md +20 -0
  113. data/docs/FilesApi.md +351 -0
  114. data/docs/FineTuningApi.md +431 -0
  115. data/docs/FineTuningIntegration.md +20 -0
  116. data/docs/FineTuningJob.md +48 -0
  117. data/docs/FineTuningJobCheckpoint.md +30 -0
  118. data/docs/FineTuningJobCheckpointMetrics.md +30 -0
  119. data/docs/FineTuningJobError.md +22 -0
  120. data/docs/{FineTuneEvent.md → FineTuningJobEvent.md} +7 -5
  121. data/docs/FineTuningJobHyperparameters.md +18 -0
  122. data/docs/FineTuningJobHyperparametersNEpochs.md +49 -0
  123. data/docs/FineTuningJobIntegrationsInner.md +47 -0
  124. data/docs/FunctionObject.md +22 -0
  125. data/docs/Image.md +22 -0
  126. data/docs/ImagesApi.md +239 -0
  127. data/docs/ImagesResponse.md +1 -1
  128. data/docs/ListAssistantFilesResponse.md +26 -0
  129. data/docs/ListAssistantsResponse.md +26 -0
  130. data/docs/ListFilesResponse.md +3 -3
  131. data/docs/ListFineTuningJobCheckpointsResponse.md +26 -0
  132. data/docs/ListFineTuningJobEventsResponse.md +20 -0
  133. data/docs/ListMessageFilesResponse.md +26 -0
  134. data/docs/ListMessagesResponse.md +26 -0
  135. data/docs/ListPaginatedFineTuningJobsResponse.md +22 -0
  136. data/docs/ListRunStepsResponse.md +26 -0
  137. data/docs/ListRunsResponse.md +26 -0
  138. data/docs/ListThreadsResponse.md +26 -0
  139. data/docs/MessageContentImageFileObject.md +20 -0
  140. data/docs/MessageContentImageFileObjectImageFile.md +18 -0
  141. data/docs/MessageContentTextAnnotationsFileCitationObject.md +26 -0
  142. data/docs/MessageContentTextAnnotationsFileCitationObjectFileCitation.md +20 -0
  143. data/docs/MessageContentTextAnnotationsFilePathObject.md +26 -0
  144. data/docs/MessageContentTextAnnotationsFilePathObjectFilePath.md +18 -0
  145. data/docs/MessageContentTextObject.md +20 -0
  146. data/docs/MessageContentTextObjectText.md +20 -0
  147. data/docs/MessageContentTextObjectTextAnnotationsInner.md +49 -0
  148. data/docs/MessageDeltaContentImageFileObject.md +22 -0
  149. data/docs/MessageDeltaContentImageFileObjectImageFile.md +18 -0
  150. data/docs/MessageDeltaContentTextAnnotationsFileCitationObject.md +28 -0
  151. data/docs/MessageDeltaContentTextAnnotationsFileCitationObjectFileCitation.md +20 -0
  152. data/docs/MessageDeltaContentTextAnnotationsFilePathObject.md +28 -0
  153. data/docs/MessageDeltaContentTextAnnotationsFilePathObjectFilePath.md +18 -0
  154. data/docs/MessageDeltaContentTextObject.md +22 -0
  155. data/docs/MessageDeltaContentTextObjectText.md +20 -0
  156. data/docs/MessageDeltaContentTextObjectTextAnnotationsInner.md +49 -0
  157. data/docs/MessageDeltaObject.md +22 -0
  158. data/docs/MessageDeltaObjectDelta.md +22 -0
  159. data/docs/MessageDeltaObjectDeltaContentInner.md +49 -0
  160. data/docs/MessageFileObject.md +24 -0
  161. data/docs/MessageObject.md +44 -0
  162. data/docs/MessageObjectContentInner.md +49 -0
  163. data/docs/MessageObjectIncompleteDetails.md +18 -0
  164. data/docs/MessageStreamEvent.md +55 -0
  165. data/docs/MessageStreamEventOneOf.md +20 -0
  166. data/docs/MessageStreamEventOneOf1.md +20 -0
  167. data/docs/MessageStreamEventOneOf2.md +20 -0
  168. data/docs/MessageStreamEventOneOf3.md +20 -0
  169. data/docs/MessageStreamEventOneOf4.md +20 -0
  170. data/docs/Model.md +5 -5
  171. data/docs/ModelsApi.md +208 -0
  172. data/docs/ModerationsApi.md +75 -0
  173. data/docs/ModifyAssistantRequest.md +30 -0
  174. data/docs/ModifyMessageRequest.md +18 -0
  175. data/docs/ModifyRunRequest.md +18 -0
  176. data/docs/ModifyThreadRequest.md +18 -0
  177. data/docs/OpenAIFile.md +9 -9
  178. data/docs/RunCompletionUsage.md +22 -0
  179. data/docs/RunObject.md +68 -0
  180. data/docs/RunObjectIncompleteDetails.md +18 -0
  181. data/docs/RunObjectLastError.md +20 -0
  182. data/docs/RunObjectRequiredAction.md +20 -0
  183. data/docs/RunObjectRequiredActionSubmitToolOutputs.md +18 -0
  184. data/docs/RunStepCompletionUsage.md +22 -0
  185. data/docs/RunStepDeltaObject.md +22 -0
  186. data/docs/RunStepDeltaObjectDelta.md +18 -0
  187. data/docs/RunStepDeltaObjectDeltaStepDetails.md +49 -0
  188. data/docs/RunStepDeltaStepDetailsMessageCreationObject.md +20 -0
  189. data/docs/RunStepDeltaStepDetailsMessageCreationObjectMessageCreation.md +18 -0
  190. data/docs/RunStepDeltaStepDetailsToolCallsCodeObject.md +24 -0
  191. data/docs/RunStepDeltaStepDetailsToolCallsCodeObjectCodeInterpreter.md +20 -0
  192. data/docs/RunStepDeltaStepDetailsToolCallsCodeObjectCodeInterpreterOutputsInner.md +49 -0
  193. data/docs/RunStepDeltaStepDetailsToolCallsCodeOutputImageObject.md +22 -0
  194. data/docs/RunStepDeltaStepDetailsToolCallsCodeOutputImageObjectImage.md +18 -0
  195. data/docs/RunStepDeltaStepDetailsToolCallsCodeOutputLogsObject.md +22 -0
  196. data/docs/RunStepDeltaStepDetailsToolCallsFunctionObject.md +24 -0
  197. data/docs/RunStepDeltaStepDetailsToolCallsFunctionObjectFunction.md +22 -0
  198. data/docs/RunStepDeltaStepDetailsToolCallsObject.md +20 -0
  199. data/docs/RunStepDeltaStepDetailsToolCallsObjectToolCallsInner.md +51 -0
  200. data/docs/RunStepDeltaStepDetailsToolCallsRetrievalObject.md +24 -0
  201. data/docs/RunStepDetailsMessageCreationObject.md +20 -0
  202. data/docs/RunStepDetailsMessageCreationObjectMessageCreation.md +18 -0
  203. data/docs/RunStepDetailsToolCallsCodeObject.md +22 -0
  204. data/docs/RunStepDetailsToolCallsCodeObjectCodeInterpreter.md +20 -0
  205. data/docs/RunStepDetailsToolCallsCodeObjectCodeInterpreterOutputsInner.md +49 -0
  206. data/docs/RunStepDetailsToolCallsCodeOutputImageObject.md +20 -0
  207. data/docs/RunStepDetailsToolCallsCodeOutputImageObjectImage.md +18 -0
  208. data/docs/RunStepDetailsToolCallsCodeOutputLogsObject.md +20 -0
  209. data/docs/RunStepDetailsToolCallsFunctionObject.md +22 -0
  210. data/docs/RunStepDetailsToolCallsFunctionObjectFunction.md +22 -0
  211. data/docs/RunStepDetailsToolCallsObject.md +20 -0
  212. data/docs/RunStepDetailsToolCallsObjectToolCallsInner.md +51 -0
  213. data/docs/RunStepDetailsToolCallsRetrievalObject.md +22 -0
  214. data/docs/RunStepObject.md +48 -0
  215. data/docs/RunStepObjectLastError.md +20 -0
  216. data/docs/RunStepObjectStepDetails.md +49 -0
  217. data/docs/RunStepStreamEvent.md +59 -0
  218. data/docs/RunStepStreamEventOneOf.md +20 -0
  219. data/docs/RunStepStreamEventOneOf1.md +20 -0
  220. data/docs/RunStepStreamEventOneOf2.md +20 -0
  221. data/docs/RunStepStreamEventOneOf3.md +20 -0
  222. data/docs/RunStepStreamEventOneOf4.md +20 -0
  223. data/docs/RunStepStreamEventOneOf5.md +20 -0
  224. data/docs/RunStepStreamEventOneOf6.md +20 -0
  225. data/docs/RunStreamEvent.md +63 -0
  226. data/docs/RunStreamEventOneOf.md +20 -0
  227. data/docs/RunStreamEventOneOf1.md +20 -0
  228. data/docs/RunStreamEventOneOf2.md +20 -0
  229. data/docs/RunStreamEventOneOf3.md +20 -0
  230. data/docs/RunStreamEventOneOf4.md +20 -0
  231. data/docs/RunStreamEventOneOf5.md +20 -0
  232. data/docs/RunStreamEventOneOf6.md +20 -0
  233. data/docs/RunStreamEventOneOf7.md +20 -0
  234. data/docs/RunStreamEventOneOf8.md +20 -0
  235. data/docs/RunToolCallObject.md +22 -0
  236. data/docs/RunToolCallObjectFunction.md +20 -0
  237. data/docs/SubmitToolOutputsRunRequest.md +20 -0
  238. data/docs/SubmitToolOutputsRunRequestToolOutputsInner.md +20 -0
  239. data/docs/ThreadObject.md +24 -0
  240. data/docs/{CreateEditRequestModel.md → ThreadStreamEvent.md} +7 -7
  241. data/docs/ThreadStreamEventOneOf.md +20 -0
  242. data/docs/TranscriptionSegment.md +36 -0
  243. data/docs/TranscriptionWord.md +22 -0
  244. data/docs/TruncationObject.md +20 -0
  245. data/lib/openapi_openai/api/assistants_api.rb +2006 -0
  246. data/lib/openapi_openai/api/audio_api.rb +268 -0
  247. data/lib/openapi_openai/api/chat_api.rb +88 -0
  248. data/lib/openapi_openai/api/completions_api.rb +88 -0
  249. data/lib/openapi_openai/api/embeddings_api.rb +88 -0
  250. data/lib/openapi_openai/api/files_api.rb +342 -0
  251. data/lib/openapi_openai/api/fine_tuning_api.rb +405 -0
  252. data/lib/openapi_openai/api/images_api.rb +294 -0
  253. data/lib/openapi_openai/api/models_api.rb +199 -0
  254. data/lib/openapi_openai/api/moderations_api.rb +88 -0
  255. data/lib/openapi_openai/api_client.rb +2 -1
  256. data/lib/openapi_openai/api_error.rb +1 -1
  257. data/lib/openapi_openai/configuration.rb +8 -1
  258. data/lib/openapi_openai/models/assistant_file_object.rb +308 -0
  259. data/lib/openapi_openai/models/assistant_object.rb +481 -0
  260. data/lib/openapi_openai/models/assistant_object_tools_inner.rb +106 -0
  261. data/lib/openapi_openai/models/assistant_stream_event.rb +110 -0
  262. data/lib/openapi_openai/models/assistant_tools_code.rb +256 -0
  263. data/lib/openapi_openai/models/assistant_tools_function.rb +272 -0
  264. data/lib/openapi_openai/models/assistant_tools_retrieval.rb +256 -0
  265. data/lib/openapi_openai/models/assistants_api_named_tool_choice.rb +266 -0
  266. data/lib/openapi_openai/models/assistants_api_response_format.rb +252 -0
  267. data/lib/openapi_openai/models/assistants_api_response_format_option.rb +106 -0
  268. data/lib/openapi_openai/models/assistants_api_tool_choice_option.rb +106 -0
  269. data/lib/openapi_openai/models/chat_completion_function_call_option.rb +223 -0
  270. data/lib/openapi_openai/models/chat_completion_functions.rb +13 -13
  271. data/lib/openapi_openai/models/chat_completion_message_tool_call.rb +289 -0
  272. data/lib/openapi_openai/models/chat_completion_message_tool_call_chunk.rb +284 -0
  273. data/lib/openapi_openai/models/{chat_completion_request_message_function_call.rb → chat_completion_message_tool_call_chunk_function.rb} +4 -5
  274. data/lib/openapi_openai/models/chat_completion_message_tool_call_function.rb +240 -0
  275. data/lib/openapi_openai/models/chat_completion_named_tool_choice.rb +273 -0
  276. data/lib/openapi_openai/models/{create_chat_completion_request_function_call_one_of.rb → chat_completion_named_tool_choice_function.rb} +4 -4
  277. data/lib/openapi_openai/models/chat_completion_request_assistant_message.rb +298 -0
  278. data/lib/openapi_openai/models/chat_completion_request_assistant_message_function_call.rb +240 -0
  279. data/lib/openapi_openai/models/chat_completion_request_function_message.rb +286 -0
  280. data/lib/openapi_openai/models/chat_completion_request_message.rb +78 -255
  281. data/lib/openapi_openai/models/chat_completion_request_message_content_part.rb +105 -0
  282. data/lib/openapi_openai/models/chat_completion_request_message_content_part_image.rb +272 -0
  283. data/lib/openapi_openai/models/chat_completion_request_message_content_part_image_image_url.rb +268 -0
  284. data/lib/openapi_openai/models/chat_completion_request_message_content_part_text.rb +273 -0
  285. data/lib/openapi_openai/models/chat_completion_request_system_message.rb +283 -0
  286. data/lib/openapi_openai/models/chat_completion_request_tool_message.rb +290 -0
  287. data/lib/openapi_openai/models/chat_completion_request_user_message.rb +282 -0
  288. data/lib/openapi_openai/models/{create_fine_tune_request_model.rb → chat_completion_request_user_message_content.rb} +4 -3
  289. data/lib/openapi_openai/models/chat_completion_response_message.rb +30 -15
  290. data/lib/openapi_openai/models/chat_completion_role.rb +43 -0
  291. data/lib/openapi_openai/models/chat_completion_stream_response_delta.rb +29 -17
  292. data/lib/openapi_openai/models/chat_completion_stream_response_delta_function_call.rb +226 -0
  293. data/lib/openapi_openai/models/chat_completion_token_logprob.rb +273 -0
  294. data/lib/openapi_openai/models/chat_completion_token_logprob_top_logprobs_inner.rb +254 -0
  295. data/lib/openapi_openai/models/chat_completion_tool.rb +272 -0
  296. data/lib/openapi_openai/models/chat_completion_tool_choice_option.rb +106 -0
  297. data/lib/openapi_openai/models/{create_completion_response_usage.rb → completion_usage.rb} +25 -21
  298. data/lib/openapi_openai/models/create_assistant_file_request.rb +222 -0
  299. data/lib/openapi_openai/models/create_assistant_request.rb +372 -0
  300. data/lib/openapi_openai/models/create_assistant_request_model.rb +104 -0
  301. data/lib/openapi_openai/models/create_chat_completion_function_response.rb +346 -0
  302. data/lib/openapi_openai/models/create_chat_completion_function_response_choices_inner.rb +289 -0
  303. data/lib/openapi_openai/models/create_chat_completion_request.rb +277 -152
  304. data/lib/openapi_openai/models/create_chat_completion_request_function_call.rb +3 -3
  305. data/lib/openapi_openai/models/create_chat_completion_request_model.rb +8 -9
  306. data/lib/openapi_openai/models/create_chat_completion_request_response_format.rb +252 -0
  307. data/lib/openapi_openai/models/create_chat_completion_request_stop.rb +1 -1
  308. data/lib/openapi_openai/models/create_chat_completion_response.rb +75 -25
  309. data/lib/openapi_openai/models/create_chat_completion_response_choices_inner.rb +45 -10
  310. data/lib/openapi_openai/models/create_chat_completion_response_choices_inner_logprobs.rb +221 -0
  311. data/lib/openapi_openai/models/create_chat_completion_stream_response.rb +74 -24
  312. data/lib/openapi_openai/models/create_chat_completion_stream_response_choices_inner.rb +45 -16
  313. data/lib/openapi_openai/models/create_completion_request.rb +219 -182
  314. data/lib/openapi_openai/models/create_completion_request_model.rb +8 -9
  315. data/lib/openapi_openai/models/create_completion_request_prompt.rb +1 -1
  316. data/lib/openapi_openai/models/create_completion_request_stop.rb +1 -1
  317. data/lib/openapi_openai/models/create_completion_response.rb +75 -25
  318. data/lib/openapi_openai/models/create_completion_response_choices_inner.rb +25 -24
  319. data/lib/openapi_openai/models/create_completion_response_choices_inner_logprobs.rb +23 -23
  320. data/lib/openapi_openai/models/create_embedding_request.rb +87 -12
  321. data/lib/openapi_openai/models/create_embedding_request_input.rb +2 -2
  322. data/lib/openapi_openai/models/create_embedding_request_model.rb +8 -9
  323. data/lib/openapi_openai/models/create_embedding_response.rb +61 -24
  324. data/lib/openapi_openai/models/create_embedding_response_usage.rb +4 -1
  325. data/lib/openapi_openai/models/{create_fine_tune_request.rb → create_fine_tuning_job_request.rb} +78 -111
  326. data/lib/openapi_openai/models/create_fine_tuning_job_request_hyperparameters.rb +233 -0
  327. data/lib/openapi_openai/models/create_fine_tuning_job_request_hyperparameters_batch_size.rb +106 -0
  328. data/lib/openapi_openai/models/create_fine_tuning_job_request_hyperparameters_learning_rate_multiplier.rb +106 -0
  329. data/lib/openapi_openai/models/create_fine_tuning_job_request_hyperparameters_n_epochs.rb +106 -0
  330. data/lib/openapi_openai/models/{list_fine_tunes_response.rb → create_fine_tuning_job_request_integrations_inner.rb} +25 -27
  331. data/lib/openapi_openai/models/create_fine_tuning_job_request_integrations_inner_type.rb +105 -0
  332. data/lib/openapi_openai/models/create_fine_tuning_job_request_integrations_inner_wandb.rb +257 -0
  333. data/lib/openapi_openai/models/create_fine_tuning_job_request_model.rb +104 -0
  334. data/lib/openapi_openai/models/create_image_edit_request_model.rb +104 -0
  335. data/lib/openapi_openai/models/create_image_request.rb +81 -22
  336. data/lib/openapi_openai/models/create_image_request_model.rb +104 -0
  337. data/lib/openapi_openai/models/create_message_request.rb +352 -0
  338. data/lib/openapi_openai/models/create_moderation_request.rb +1 -1
  339. data/lib/openapi_openai/models/create_moderation_request_input.rb +1 -1
  340. data/lib/openapi_openai/models/create_moderation_request_model.rb +8 -9
  341. data/lib/openapi_openai/models/create_moderation_response.rb +5 -1
  342. data/lib/openapi_openai/models/create_moderation_response_results_inner.rb +2 -1
  343. data/lib/openapi_openai/models/create_moderation_response_results_inner_categories.rb +78 -2
  344. data/lib/openapi_openai/models/create_moderation_response_results_inner_category_scores.rb +78 -2
  345. data/lib/openapi_openai/models/create_run_request.rb +433 -0
  346. data/lib/openapi_openai/models/create_run_request_model.rb +104 -0
  347. data/lib/openapi_openai/models/{create_edit_request.rb → create_speech_request.rb} +105 -95
  348. data/lib/openapi_openai/models/create_speech_request_model.rb +104 -0
  349. data/lib/openapi_openai/models/create_thread_and_run_request.rb +418 -0
  350. data/lib/openapi_openai/models/create_thread_and_run_request_tools_inner.rb +106 -0
  351. data/lib/openapi_openai/models/{list_fine_tune_events_response.rb → create_thread_request.rb} +22 -33
  352. data/lib/openapi_openai/models/create_transcription200_response.rb +105 -0
  353. data/lib/openapi_openai/models/create_transcription_request_model.rb +9 -10
  354. data/lib/openapi_openai/models/{create_transcription_response.rb → create_transcription_response_json.rb} +6 -4
  355. data/lib/openapi_openai/models/create_transcription_response_verbose_json.rb +281 -0
  356. data/lib/openapi_openai/models/create_translation200_response.rb +105 -0
  357. data/lib/openapi_openai/models/{create_translation_response.rb → create_translation_response_json.rb} +4 -4
  358. data/lib/openapi_openai/models/create_translation_response_verbose_json.rb +268 -0
  359. data/lib/openapi_openai/models/delete_assistant_file_response.rb +288 -0
  360. data/lib/openapi_openai/models/delete_assistant_response.rb +287 -0
  361. data/lib/openapi_openai/models/delete_file_response.rb +35 -1
  362. data/lib/openapi_openai/models/delete_message_response.rb +287 -0
  363. data/lib/openapi_openai/models/delete_model_response.rb +21 -21
  364. data/lib/openapi_openai/models/delete_thread_response.rb +287 -0
  365. data/lib/openapi_openai/models/done_event.rb +284 -0
  366. data/lib/openapi_openai/models/embedding.rb +293 -0
  367. data/lib/openapi_openai/models/error.rb +22 -22
  368. data/lib/openapi_openai/models/error_event.rb +272 -0
  369. data/lib/openapi_openai/models/error_response.rb +1 -1
  370. data/lib/openapi_openai/models/fine_tuning_integration.rb +272 -0
  371. data/lib/openapi_openai/models/fine_tuning_job.rb +515 -0
  372. data/lib/openapi_openai/models/{fine_tune.rb → fine_tuning_job_checkpoint.rb} +94 -146
  373. data/lib/openapi_openai/models/fine_tuning_job_checkpoint_metrics.rb +269 -0
  374. data/lib/openapi_openai/models/{fine_tune_event.rb → fine_tuning_job_error.rb} +34 -50
  375. data/lib/openapi_openai/models/fine_tuning_job_event.rb +332 -0
  376. data/lib/openapi_openai/models/fine_tuning_job_hyperparameters.rb +222 -0
  377. data/lib/openapi_openai/models/fine_tuning_job_hyperparameters_n_epochs.rb +106 -0
  378. data/lib/openapi_openai/models/{create_edit_request_model.rb → fine_tuning_job_integrations_inner.rb} +3 -4
  379. data/lib/openapi_openai/models/function_object.rb +244 -0
  380. data/lib/openapi_openai/models/image.rb +236 -0
  381. data/lib/openapi_openai/models/images_response.rb +2 -2
  382. data/lib/openapi_openai/models/list_assistant_files_response.rb +287 -0
  383. data/lib/openapi_openai/models/list_assistants_response.rb +287 -0
  384. data/lib/openapi_openai/models/list_files_response.rb +54 -20
  385. data/lib/openapi_openai/models/list_fine_tuning_job_checkpoints_response.rb +309 -0
  386. data/lib/openapi_openai/models/{create_edit_response.rb → list_fine_tuning_job_events_response.rb} +57 -55
  387. data/lib/openapi_openai/models/list_message_files_response.rb +287 -0
  388. data/lib/openapi_openai/models/list_messages_response.rb +287 -0
  389. data/lib/openapi_openai/models/list_models_response.rb +35 -1
  390. data/lib/openapi_openai/models/list_paginated_fine_tuning_jobs_response.rb +289 -0
  391. data/lib/openapi_openai/models/list_run_steps_response.rb +287 -0
  392. data/lib/openapi_openai/models/list_runs_response.rb +287 -0
  393. data/lib/openapi_openai/models/list_threads_response.rb +287 -0
  394. data/lib/openapi_openai/models/message_content_image_file_object.rb +273 -0
  395. data/lib/openapi_openai/models/message_content_image_file_object_image_file.rb +222 -0
  396. data/lib/openapi_openai/models/message_content_text_annotations_file_citation_object.rb +360 -0
  397. data/lib/openapi_openai/models/message_content_text_annotations_file_citation_object_file_citation.rb +239 -0
  398. data/lib/openapi_openai/models/message_content_text_annotations_file_path_object.rb +360 -0
  399. data/lib/openapi_openai/models/message_content_text_annotations_file_path_object_file_path.rb +222 -0
  400. data/lib/openapi_openai/models/message_content_text_object.rb +273 -0
  401. data/lib/openapi_openai/models/message_content_text_object_text.rb +240 -0
  402. data/lib/openapi_openai/models/message_content_text_object_text_annotations_inner.rb +105 -0
  403. data/lib/openapi_openai/models/message_delta_content_image_file_object.rb +283 -0
  404. data/lib/openapi_openai/models/message_delta_content_image_file_object_image_file.rb +215 -0
  405. data/lib/openapi_openai/models/message_delta_content_text_annotations_file_citation_object.rb +349 -0
  406. data/lib/openapi_openai/models/message_delta_content_text_annotations_file_citation_object_file_citation.rb +225 -0
  407. data/lib/openapi_openai/models/message_delta_content_text_annotations_file_path_object.rb +349 -0
  408. data/lib/openapi_openai/models/message_delta_content_text_annotations_file_path_object_file_path.rb +215 -0
  409. data/lib/openapi_openai/models/{create_edit_response_choices_inner.rb → message_delta_content_text_object.rb} +42 -35
  410. data/lib/openapi_openai/models/message_delta_content_text_object_text.rb +226 -0
  411. data/lib/openapi_openai/models/message_delta_content_text_object_text_annotations_inner.rb +105 -0
  412. data/lib/openapi_openai/models/message_delta_object.rb +290 -0
  413. data/lib/openapi_openai/models/message_delta_object_delta.rb +293 -0
  414. data/lib/openapi_openai/models/message_delta_object_delta_content_inner.rb +105 -0
  415. data/lib/openapi_openai/models/message_file_object.rb +308 -0
  416. data/lib/openapi_openai/models/message_object.rb +500 -0
  417. data/lib/openapi_openai/models/message_object_content_inner.rb +105 -0
  418. data/lib/openapi_openai/models/message_object_incomplete_details.rb +257 -0
  419. data/lib/openapi_openai/models/message_stream_event.rb +108 -0
  420. data/lib/openapi_openai/models/message_stream_event_one_of.rb +272 -0
  421. data/lib/openapi_openai/models/message_stream_event_one_of1.rb +272 -0
  422. data/lib/openapi_openai/models/message_stream_event_one_of2.rb +272 -0
  423. data/lib/openapi_openai/models/message_stream_event_one_of3.rb +272 -0
  424. data/lib/openapi_openai/models/message_stream_event_one_of4.rb +272 -0
  425. data/lib/openapi_openai/models/model.rb +57 -18
  426. data/lib/openapi_openai/models/modify_assistant_request.rb +365 -0
  427. data/lib/openapi_openai/models/modify_message_request.rb +216 -0
  428. data/lib/openapi_openai/models/modify_run_request.rb +216 -0
  429. data/lib/openapi_openai/models/modify_thread_request.rb +216 -0
  430. data/lib/openapi_openai/models/open_ai_file.rb +93 -20
  431. data/lib/openapi_openai/models/run_completion_usage.rb +257 -0
  432. data/lib/openapi_openai/models/run_object.rb +686 -0
  433. data/lib/openapi_openai/models/run_object_incomplete_details.rb +250 -0
  434. data/lib/openapi_openai/models/run_object_last_error.rb +274 -0
  435. data/lib/openapi_openai/models/run_object_required_action.rb +273 -0
  436. data/lib/openapi_openai/models/run_object_required_action_submit_tool_outputs.rb +225 -0
  437. data/lib/openapi_openai/models/run_step_completion_usage.rb +257 -0
  438. data/lib/openapi_openai/models/run_step_delta_object.rb +290 -0
  439. data/lib/openapi_openai/models/{images_response_data_inner.rb → run_step_delta_object_delta.rb} +12 -20
  440. data/lib/openapi_openai/models/run_step_delta_object_delta_step_details.rb +106 -0
  441. data/lib/openapi_openai/models/run_step_delta_step_details_message_creation_object.rb +266 -0
  442. data/lib/openapi_openai/models/run_step_delta_step_details_message_creation_object_message_creation.rb +215 -0
  443. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_object.rb +293 -0
  444. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_object_code_interpreter.rb +228 -0
  445. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_object_code_interpreter_outputs_inner.rb +105 -0
  446. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_output_image_object.rb +282 -0
  447. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_output_image_object_image.rb +215 -0
  448. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_code_output_logs_object.rb +284 -0
  449. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_function_object.rb +292 -0
  450. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_function_object_function.rb +237 -0
  451. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_object.rb +269 -0
  452. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_object_tool_calls_inner.rb +106 -0
  453. data/lib/openapi_openai/models/run_step_delta_step_details_tool_calls_retrieval_object.rb +293 -0
  454. data/lib/openapi_openai/models/run_step_details_message_creation_object.rb +273 -0
  455. data/lib/openapi_openai/models/run_step_details_message_creation_object_message_creation.rb +222 -0
  456. data/lib/openapi_openai/models/run_step_details_tool_calls_code_object.rb +290 -0
  457. data/lib/openapi_openai/models/run_step_details_tool_calls_code_object_code_interpreter.rb +242 -0
  458. data/lib/openapi_openai/models/run_step_details_tool_calls_code_object_code_interpreter_outputs_inner.rb +105 -0
  459. data/lib/openapi_openai/models/run_step_details_tool_calls_code_output_image_object.rb +272 -0
  460. data/lib/openapi_openai/models/run_step_details_tool_calls_code_output_image_object_image.rb +222 -0
  461. data/lib/openapi_openai/models/run_step_details_tool_calls_code_output_logs_object.rb +274 -0
  462. data/lib/openapi_openai/models/run_step_details_tool_calls_function_object.rb +289 -0
  463. data/lib/openapi_openai/models/run_step_details_tool_calls_function_object_function.rb +253 -0
  464. data/lib/openapi_openai/models/run_step_details_tool_calls_object.rb +276 -0
  465. data/lib/openapi_openai/models/run_step_details_tool_calls_object_tool_calls_inner.rb +106 -0
  466. data/lib/openapi_openai/models/run_step_details_tool_calls_retrieval_object.rb +290 -0
  467. data/lib/openapi_openai/models/run_step_object.rb +505 -0
  468. data/lib/openapi_openai/models/run_step_object_last_error.rb +274 -0
  469. data/lib/openapi_openai/models/run_step_object_step_details.rb +106 -0
  470. data/lib/openapi_openai/models/run_step_stream_event.rb +110 -0
  471. data/lib/openapi_openai/models/run_step_stream_event_one_of.rb +272 -0
  472. data/lib/openapi_openai/models/run_step_stream_event_one_of1.rb +272 -0
  473. data/lib/openapi_openai/models/run_step_stream_event_one_of2.rb +272 -0
  474. data/lib/openapi_openai/models/run_step_stream_event_one_of3.rb +272 -0
  475. data/lib/openapi_openai/models/run_step_stream_event_one_of4.rb +272 -0
  476. data/lib/openapi_openai/models/run_step_stream_event_one_of5.rb +272 -0
  477. data/lib/openapi_openai/models/run_step_stream_event_one_of6.rb +272 -0
  478. data/lib/openapi_openai/models/run_stream_event.rb +112 -0
  479. data/lib/openapi_openai/models/run_stream_event_one_of.rb +272 -0
  480. data/lib/openapi_openai/models/run_stream_event_one_of1.rb +272 -0
  481. data/lib/openapi_openai/models/run_stream_event_one_of2.rb +272 -0
  482. data/lib/openapi_openai/models/run_stream_event_one_of3.rb +272 -0
  483. data/lib/openapi_openai/models/run_stream_event_one_of4.rb +272 -0
  484. data/lib/openapi_openai/models/run_stream_event_one_of5.rb +272 -0
  485. data/lib/openapi_openai/models/run_stream_event_one_of6.rb +272 -0
  486. data/lib/openapi_openai/models/run_stream_event_one_of7.rb +272 -0
  487. data/lib/openapi_openai/models/run_stream_event_one_of8.rb +272 -0
  488. data/lib/openapi_openai/models/run_tool_call_object.rb +290 -0
  489. data/lib/openapi_openai/models/run_tool_call_object_function.rb +240 -0
  490. data/lib/openapi_openai/models/submit_tool_outputs_run_request.rb +235 -0
  491. data/lib/openapi_openai/models/submit_tool_outputs_run_request_tool_outputs_inner.rb +225 -0
  492. data/lib/openapi_openai/models/thread_object.rb +304 -0
  493. data/lib/openapi_openai/models/thread_stream_event.rb +104 -0
  494. data/lib/openapi_openai/models/thread_stream_event_one_of.rb +272 -0
  495. data/lib/openapi_openai/models/transcription_segment.rb +377 -0
  496. data/lib/openapi_openai/models/{create_embedding_response_data_inner.rb → transcription_word.rb} +38 -37
  497. data/lib/openapi_openai/models/truncation_object.rb +275 -0
  498. data/lib/openapi_openai/version.rb +2 -2
  499. data/lib/openapi_openai.rb +209 -19
  500. data/openapi_openai.gemspec +2 -2
  501. data/spec/api/assistants_api_spec.rb +389 -0
  502. data/spec/api/audio_api_spec.rb +78 -0
  503. data/spec/api/chat_api_spec.rb +46 -0
  504. data/spec/api/completions_api_spec.rb +46 -0
  505. data/spec/api/embeddings_api_spec.rb +46 -0
  506. data/spec/api/files_api_spec.rb +91 -0
  507. data/spec/api/fine_tuning_api_spec.rb +106 -0
  508. data/spec/api/images_api_spec.rb +80 -0
  509. data/spec/api/models_api_spec.rb +67 -0
  510. data/spec/api/moderations_api_spec.rb +46 -0
  511. data/spec/models/assistant_file_object_spec.rb +58 -0
  512. data/spec/models/assistant_object_spec.rb +94 -0
  513. data/spec/models/assistant_object_tools_inner_spec.rb +32 -0
  514. data/spec/models/assistant_stream_event_spec.rb +32 -0
  515. data/spec/models/assistant_tools_code_spec.rb +40 -0
  516. data/spec/models/assistant_tools_function_spec.rb +46 -0
  517. data/spec/models/assistant_tools_retrieval_spec.rb +40 -0
  518. data/spec/models/assistants_api_named_tool_choice_spec.rb +46 -0
  519. data/spec/models/assistants_api_response_format_option_spec.rb +32 -0
  520. data/spec/models/assistants_api_response_format_spec.rb +40 -0
  521. data/spec/models/assistants_api_tool_choice_option_spec.rb +32 -0
  522. data/spec/models/chat_completion_function_call_option_spec.rb +36 -0
  523. data/spec/models/chat_completion_functions_spec.rb +3 -3
  524. data/spec/models/chat_completion_message_tool_call_chunk_function_spec.rb +42 -0
  525. data/spec/models/{create_edit_response_choices_inner_spec.rb → chat_completion_message_tool_call_chunk_spec.rb} +15 -15
  526. data/spec/models/{chat_completion_request_message_function_call_spec.rb → chat_completion_message_tool_call_function_spec.rb} +7 -7
  527. data/spec/models/chat_completion_message_tool_call_spec.rb +52 -0
  528. data/spec/models/chat_completion_named_tool_choice_function_spec.rb +36 -0
  529. data/spec/models/chat_completion_named_tool_choice_spec.rb +46 -0
  530. data/spec/models/chat_completion_request_assistant_message_function_call_spec.rb +42 -0
  531. data/spec/models/chat_completion_request_assistant_message_spec.rb +64 -0
  532. data/spec/models/chat_completion_request_function_message_spec.rb +52 -0
  533. data/spec/models/chat_completion_request_message_content_part_image_image_url_spec.rb +46 -0
  534. data/spec/models/chat_completion_request_message_content_part_image_spec.rb +46 -0
  535. data/spec/models/chat_completion_request_message_content_part_spec.rb +32 -0
  536. data/spec/models/chat_completion_request_message_content_part_text_spec.rb +46 -0
  537. data/spec/models/chat_completion_request_message_spec.rb +6 -32
  538. data/spec/models/chat_completion_request_system_message_spec.rb +52 -0
  539. data/spec/models/chat_completion_request_tool_message_spec.rb +52 -0
  540. data/spec/models/chat_completion_request_user_message_content_spec.rb +32 -0
  541. data/spec/models/chat_completion_request_user_message_spec.rb +52 -0
  542. data/spec/models/chat_completion_response_message_spec.rb +13 -7
  543. data/spec/models/chat_completion_role_spec.rb +30 -0
  544. data/spec/models/chat_completion_stream_response_delta_function_call_spec.rb +42 -0
  545. data/spec/models/chat_completion_stream_response_delta_spec.rb +14 -8
  546. data/spec/models/{create_edit_response_spec.rb → chat_completion_token_logprob_spec.rb} +11 -11
  547. data/spec/models/chat_completion_token_logprob_top_logprobs_inner_spec.rb +48 -0
  548. data/spec/models/chat_completion_tool_choice_option_spec.rb +32 -0
  549. data/spec/models/chat_completion_tool_spec.rb +46 -0
  550. data/spec/models/{create_completion_response_usage_spec.rb → completion_usage_spec.rb} +9 -9
  551. data/spec/models/create_assistant_file_request_spec.rb +36 -0
  552. data/spec/models/create_assistant_request_model_spec.rb +21 -0
  553. data/spec/models/{create_edit_request_spec.rb → create_assistant_request_spec.rb} +18 -12
  554. data/spec/models/create_chat_completion_function_response_choices_inner_spec.rb +52 -0
  555. data/spec/models/create_chat_completion_function_response_spec.rb +76 -0
  556. data/spec/models/create_chat_completion_request_function_call_spec.rb +1 -1
  557. data/spec/models/create_chat_completion_request_model_spec.rb +1 -12
  558. data/spec/models/{create_chat_completion_request_function_call_one_of_spec.rb → create_chat_completion_request_response_format_spec.rb} +12 -8
  559. data/spec/models/create_chat_completion_request_spec.rb +47 -11
  560. data/spec/models/create_chat_completion_request_stop_spec.rb +1 -1
  561. data/spec/models/create_chat_completion_response_choices_inner_logprobs_spec.rb +36 -0
  562. data/spec/models/create_chat_completion_response_choices_inner_spec.rb +12 -6
  563. data/spec/models/create_chat_completion_response_spec.rb +13 -3
  564. data/spec/models/create_chat_completion_stream_response_choices_inner_spec.rb +10 -4
  565. data/spec/models/create_chat_completion_stream_response_spec.rb +13 -3
  566. data/spec/models/create_completion_request_model_spec.rb +1 -12
  567. data/spec/models/create_completion_request_prompt_spec.rb +1 -1
  568. data/spec/models/create_completion_request_spec.rb +19 -13
  569. data/spec/models/create_completion_request_stop_spec.rb +1 -1
  570. data/spec/models/create_completion_response_choices_inner_logprobs_spec.rb +4 -4
  571. data/spec/models/create_completion_response_choices_inner_spec.rb +7 -7
  572. data/spec/models/create_completion_response_spec.rb +13 -3
  573. data/spec/models/create_embedding_request_input_spec.rb +1 -1
  574. data/spec/models/create_embedding_request_model_spec.rb +1 -12
  575. data/spec/models/create_embedding_request_spec.rb +18 -2
  576. data/spec/models/create_embedding_response_spec.rb +7 -3
  577. data/spec/models/create_embedding_response_usage_spec.rb +1 -1
  578. data/spec/models/create_fine_tuning_job_request_hyperparameters_batch_size_spec.rb +32 -0
  579. data/spec/models/create_fine_tuning_job_request_hyperparameters_learning_rate_multiplier_spec.rb +32 -0
  580. data/spec/models/create_fine_tuning_job_request_hyperparameters_n_epochs_spec.rb +32 -0
  581. data/spec/models/create_fine_tuning_job_request_hyperparameters_spec.rb +48 -0
  582. data/spec/models/create_fine_tuning_job_request_integrations_inner_spec.rb +42 -0
  583. data/spec/models/create_fine_tuning_job_request_integrations_inner_type_spec.rb +32 -0
  584. data/spec/models/create_fine_tuning_job_request_integrations_inner_wandb_spec.rb +54 -0
  585. data/spec/models/create_fine_tuning_job_request_model_spec.rb +21 -0
  586. data/spec/models/create_fine_tuning_job_request_spec.rb +72 -0
  587. data/spec/models/create_image_edit_request_model_spec.rb +21 -0
  588. data/spec/models/create_image_request_model_spec.rb +21 -0
  589. data/spec/models/create_image_request_spec.rb +30 -4
  590. data/spec/models/create_message_request_spec.rb +58 -0
  591. data/spec/models/create_moderation_request_input_spec.rb +1 -1
  592. data/spec/models/create_moderation_request_model_spec.rb +1 -12
  593. data/spec/models/create_moderation_request_spec.rb +1 -1
  594. data/spec/models/create_moderation_response_results_inner_categories_spec.rb +25 -1
  595. data/spec/models/create_moderation_response_results_inner_category_scores_spec.rb +25 -1
  596. data/spec/models/create_moderation_response_results_inner_spec.rb +1 -1
  597. data/spec/models/create_moderation_response_spec.rb +1 -1
  598. data/spec/models/create_run_request_model_spec.rb +21 -0
  599. data/spec/models/{create_fine_tune_request_spec.rb → create_run_request_spec.rb} +31 -19
  600. data/spec/models/create_speech_request_model_spec.rb +21 -0
  601. data/spec/models/create_speech_request_spec.rb +68 -0
  602. data/spec/models/{fine_tune_spec.rb → create_thread_and_run_request_spec.rb} +20 -20
  603. data/spec/models/create_thread_and_run_request_tools_inner_spec.rb +32 -0
  604. data/spec/models/{list_fine_tune_events_response_spec.rb → create_thread_request_spec.rb} +9 -9
  605. data/spec/models/create_transcription200_response_spec.rb +32 -0
  606. data/spec/models/create_transcription_request_model_spec.rb +1 -12
  607. data/spec/models/{create_transcription_response_spec.rb → create_transcription_response_json_spec.rb} +7 -7
  608. data/spec/models/create_transcription_response_verbose_json_spec.rb +60 -0
  609. data/spec/models/create_translation200_response_spec.rb +32 -0
  610. data/spec/models/{create_translation_response_spec.rb → create_translation_response_json_spec.rb} +7 -7
  611. data/spec/models/create_translation_response_verbose_json_spec.rb +54 -0
  612. data/spec/models/delete_assistant_file_response_spec.rb +52 -0
  613. data/spec/models/delete_assistant_response_spec.rb +52 -0
  614. data/spec/models/delete_file_response_spec.rb +5 -1
  615. data/spec/models/delete_message_response_spec.rb +52 -0
  616. data/spec/models/delete_model_response_spec.rb +3 -3
  617. data/spec/models/delete_thread_response_spec.rb +52 -0
  618. data/spec/models/done_event_spec.rb +50 -0
  619. data/spec/models/{create_embedding_response_data_inner_spec.rb → embedding_spec.rb} +13 -9
  620. data/spec/models/error_event_spec.rb +46 -0
  621. data/spec/models/error_response_spec.rb +1 -1
  622. data/spec/models/error_spec.rb +3 -3
  623. data/spec/models/fine_tuning_integration_spec.rb +46 -0
  624. data/spec/models/fine_tuning_job_checkpoint_metrics_spec.rb +72 -0
  625. data/spec/models/fine_tuning_job_checkpoint_spec.rb +76 -0
  626. data/spec/models/{fine_tune_event_spec.rb → fine_tuning_job_error_spec.rb} +10 -16
  627. data/spec/models/fine_tuning_job_event_spec.rb +68 -0
  628. data/spec/models/fine_tuning_job_hyperparameters_n_epochs_spec.rb +32 -0
  629. data/spec/models/fine_tuning_job_hyperparameters_spec.rb +36 -0
  630. data/spec/models/fine_tuning_job_integrations_inner_spec.rb +32 -0
  631. data/spec/models/fine_tuning_job_spec.rb +134 -0
  632. data/spec/models/function_object_spec.rb +48 -0
  633. data/spec/models/{images_response_data_inner_spec.rb → image_spec.rb} +14 -8
  634. data/spec/models/images_response_spec.rb +1 -1
  635. data/spec/models/list_assistant_files_response_spec.rb +60 -0
  636. data/spec/models/list_assistants_response_spec.rb +60 -0
  637. data/spec/models/list_files_response_spec.rb +7 -3
  638. data/spec/models/list_fine_tuning_job_checkpoints_response_spec.rb +64 -0
  639. data/spec/models/{list_fine_tunes_response_spec.rb → list_fine_tuning_job_events_response_spec.rb} +13 -9
  640. data/spec/models/list_message_files_response_spec.rb +60 -0
  641. data/spec/models/list_messages_response_spec.rb +60 -0
  642. data/spec/models/list_models_response_spec.rb +5 -1
  643. data/spec/models/list_paginated_fine_tuning_jobs_response_spec.rb +52 -0
  644. data/spec/models/list_run_steps_response_spec.rb +60 -0
  645. data/spec/models/list_runs_response_spec.rb +60 -0
  646. data/spec/models/list_threads_response_spec.rb +60 -0
  647. data/spec/models/message_content_image_file_object_image_file_spec.rb +36 -0
  648. data/spec/models/message_content_image_file_object_spec.rb +46 -0
  649. data/spec/models/message_content_text_annotations_file_citation_object_file_citation_spec.rb +42 -0
  650. data/spec/models/message_content_text_annotations_file_citation_object_spec.rb +64 -0
  651. data/spec/models/message_content_text_annotations_file_path_object_file_path_spec.rb +36 -0
  652. data/spec/models/message_content_text_annotations_file_path_object_spec.rb +64 -0
  653. data/spec/models/message_content_text_object_spec.rb +46 -0
  654. data/spec/models/message_content_text_object_text_annotations_inner_spec.rb +32 -0
  655. data/spec/models/message_content_text_object_text_spec.rb +42 -0
  656. data/spec/models/message_delta_content_image_file_object_image_file_spec.rb +36 -0
  657. data/spec/models/message_delta_content_image_file_object_spec.rb +52 -0
  658. data/spec/models/message_delta_content_text_annotations_file_citation_object_file_citation_spec.rb +42 -0
  659. data/spec/models/message_delta_content_text_annotations_file_citation_object_spec.rb +70 -0
  660. data/spec/models/message_delta_content_text_annotations_file_path_object_file_path_spec.rb +36 -0
  661. data/spec/models/message_delta_content_text_annotations_file_path_object_spec.rb +70 -0
  662. data/spec/models/message_delta_content_text_object_spec.rb +52 -0
  663. data/spec/models/message_delta_content_text_object_text_annotations_inner_spec.rb +32 -0
  664. data/spec/models/message_delta_content_text_object_text_spec.rb +42 -0
  665. data/spec/models/message_delta_object_delta_content_inner_spec.rb +32 -0
  666. data/spec/models/message_delta_object_delta_spec.rb +52 -0
  667. data/spec/models/message_delta_object_spec.rb +52 -0
  668. data/spec/models/message_file_object_spec.rb +58 -0
  669. data/spec/models/message_object_content_inner_spec.rb +32 -0
  670. data/spec/models/message_object_incomplete_details_spec.rb +40 -0
  671. data/spec/models/message_object_spec.rb +126 -0
  672. data/spec/models/message_stream_event_one_of1_spec.rb +46 -0
  673. data/spec/models/message_stream_event_one_of2_spec.rb +46 -0
  674. data/spec/models/message_stream_event_one_of3_spec.rb +46 -0
  675. data/spec/models/message_stream_event_one_of4_spec.rb +46 -0
  676. data/spec/models/message_stream_event_one_of_spec.rb +46 -0
  677. data/spec/models/message_stream_event_spec.rb +32 -0
  678. data/spec/models/model_spec.rb +7 -3
  679. data/spec/models/modify_assistant_request_spec.rb +72 -0
  680. data/spec/models/modify_message_request_spec.rb +36 -0
  681. data/spec/models/modify_run_request_spec.rb +36 -0
  682. data/spec/models/modify_thread_request_spec.rb +36 -0
  683. data/spec/models/open_ai_file_spec.rb +17 -5
  684. data/spec/models/run_completion_usage_spec.rb +48 -0
  685. data/spec/models/run_object_incomplete_details_spec.rb +40 -0
  686. data/spec/models/run_object_last_error_spec.rb +46 -0
  687. data/spec/models/run_object_required_action_spec.rb +46 -0
  688. data/spec/models/run_object_required_action_submit_tool_outputs_spec.rb +36 -0
  689. data/spec/models/run_object_spec.rb +194 -0
  690. data/spec/models/run_step_completion_usage_spec.rb +48 -0
  691. data/spec/models/run_step_delta_object_delta_spec.rb +36 -0
  692. data/spec/models/run_step_delta_object_delta_step_details_spec.rb +32 -0
  693. data/spec/models/run_step_delta_object_spec.rb +52 -0
  694. data/spec/models/run_step_delta_step_details_message_creation_object_message_creation_spec.rb +36 -0
  695. data/spec/models/run_step_delta_step_details_message_creation_object_spec.rb +46 -0
  696. data/spec/models/run_step_delta_step_details_tool_calls_code_object_code_interpreter_outputs_inner_spec.rb +32 -0
  697. data/spec/models/run_step_delta_step_details_tool_calls_code_object_code_interpreter_spec.rb +42 -0
  698. data/spec/models/run_step_delta_step_details_tool_calls_code_object_spec.rb +58 -0
  699. data/spec/models/run_step_delta_step_details_tool_calls_code_output_image_object_image_spec.rb +36 -0
  700. data/spec/models/run_step_delta_step_details_tool_calls_code_output_image_object_spec.rb +52 -0
  701. data/spec/models/run_step_delta_step_details_tool_calls_code_output_logs_object_spec.rb +52 -0
  702. data/spec/models/run_step_delta_step_details_tool_calls_function_object_function_spec.rb +48 -0
  703. data/spec/models/run_step_delta_step_details_tool_calls_function_object_spec.rb +58 -0
  704. data/spec/models/run_step_delta_step_details_tool_calls_object_spec.rb +46 -0
  705. data/spec/models/run_step_delta_step_details_tool_calls_object_tool_calls_inner_spec.rb +32 -0
  706. data/spec/models/run_step_delta_step_details_tool_calls_retrieval_object_spec.rb +58 -0
  707. data/spec/models/run_step_details_message_creation_object_message_creation_spec.rb +36 -0
  708. data/spec/models/run_step_details_message_creation_object_spec.rb +46 -0
  709. data/spec/models/run_step_details_tool_calls_code_object_code_interpreter_outputs_inner_spec.rb +32 -0
  710. data/spec/models/run_step_details_tool_calls_code_object_code_interpreter_spec.rb +42 -0
  711. data/spec/models/run_step_details_tool_calls_code_object_spec.rb +52 -0
  712. data/spec/models/run_step_details_tool_calls_code_output_image_object_image_spec.rb +36 -0
  713. data/spec/models/run_step_details_tool_calls_code_output_image_object_spec.rb +46 -0
  714. data/spec/models/run_step_details_tool_calls_code_output_logs_object_spec.rb +46 -0
  715. data/spec/models/run_step_details_tool_calls_function_object_function_spec.rb +48 -0
  716. data/spec/models/run_step_details_tool_calls_function_object_spec.rb +52 -0
  717. data/spec/models/run_step_details_tool_calls_object_spec.rb +46 -0
  718. data/spec/models/run_step_details_tool_calls_object_tool_calls_inner_spec.rb +32 -0
  719. data/spec/models/run_step_details_tool_calls_retrieval_object_spec.rb +52 -0
  720. data/spec/models/run_step_object_last_error_spec.rb +46 -0
  721. data/spec/models/run_step_object_spec.rb +138 -0
  722. data/spec/models/run_step_object_step_details_spec.rb +32 -0
  723. data/spec/models/run_step_stream_event_one_of1_spec.rb +46 -0
  724. data/spec/models/run_step_stream_event_one_of2_spec.rb +46 -0
  725. data/spec/models/run_step_stream_event_one_of3_spec.rb +46 -0
  726. data/spec/models/run_step_stream_event_one_of4_spec.rb +46 -0
  727. data/spec/models/run_step_stream_event_one_of5_spec.rb +46 -0
  728. data/spec/models/run_step_stream_event_one_of6_spec.rb +46 -0
  729. data/spec/models/run_step_stream_event_one_of_spec.rb +46 -0
  730. data/spec/models/run_step_stream_event_spec.rb +32 -0
  731. data/spec/models/run_stream_event_one_of1_spec.rb +46 -0
  732. data/spec/models/run_stream_event_one_of2_spec.rb +46 -0
  733. data/spec/models/run_stream_event_one_of3_spec.rb +46 -0
  734. data/spec/models/run_stream_event_one_of4_spec.rb +46 -0
  735. data/spec/models/run_stream_event_one_of5_spec.rb +46 -0
  736. data/spec/models/run_stream_event_one_of6_spec.rb +46 -0
  737. data/spec/models/run_stream_event_one_of7_spec.rb +46 -0
  738. data/spec/models/run_stream_event_one_of8_spec.rb +46 -0
  739. data/spec/models/run_stream_event_one_of_spec.rb +46 -0
  740. data/spec/models/{create_fine_tune_request_model_spec.rb → run_stream_event_spec.rb} +3 -3
  741. data/spec/models/run_tool_call_object_function_spec.rb +42 -0
  742. data/spec/models/run_tool_call_object_spec.rb +52 -0
  743. data/spec/models/submit_tool_outputs_run_request_spec.rb +42 -0
  744. data/spec/models/submit_tool_outputs_run_request_tool_outputs_inner_spec.rb +42 -0
  745. data/spec/models/thread_object_spec.rb +58 -0
  746. data/spec/models/thread_stream_event_one_of_spec.rb +46 -0
  747. data/spec/models/{create_edit_request_model_spec.rb → thread_stream_event_spec.rb} +3 -3
  748. data/spec/models/transcription_segment_spec.rb +90 -0
  749. data/spec/models/transcription_word_spec.rb +48 -0
  750. data/spec/models/truncation_object_spec.rb +46 -0
  751. data/spec/spec_helper.rb +1 -1
  752. metadata +867 -106
  753. data/docs/CreateCompletionResponseUsage.md +0 -22
  754. data/docs/CreateEditRequest.md +0 -28
  755. data/docs/CreateEditResponse.md +0 -24
  756. data/docs/CreateEditResponseChoicesInner.md +0 -24
  757. data/docs/CreateEmbeddingResponseDataInner.md +0 -22
  758. data/docs/CreateFineTuneRequest.md +0 -40
  759. data/docs/CreateTranslationResponse.md +0 -18
  760. data/docs/FineTune.md +0 -42
  761. data/docs/ImagesResponseDataInner.md +0 -20
  762. data/docs/ListFineTuneEventsResponse.md +0 -20
  763. data/docs/ListFineTunesResponse.md +0 -20
  764. data/docs/OpenAIApi.md +0 -1499
  765. data/lib/openapi_openai/api/open_ai_api.rb +0 -1583
  766. data/spec/api/open_ai_api_spec.rb +0 -306
@@ -0,0 +1,30 @@
+ # OpenApiOpenAIClient::CreateAssistantRequest
+
+ ## Properties
+
+ | Name | Type | Description | Notes |
+ | ---- | ---- | ----------- | ----- |
+ | **model** | [**CreateAssistantRequestModel**](CreateAssistantRequestModel.md) | | |
+ | **name** | **String** | The name of the assistant. The maximum length is 256 characters. | [optional] |
+ | **description** | **String** | The description of the assistant. The maximum length is 512 characters. | [optional] |
+ | **instructions** | **String** | The system instructions that the assistant uses. The maximum length is 256,000 characters. | [optional] |
+ | **tools** | [**Array<AssistantObjectToolsInner>**](AssistantObjectToolsInner.md) | A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `retrieval`, or `function`. | [optional] |
+ | **file_ids** | **Array<String>** | A list of [file](/docs/api-reference/files) IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order. | [optional] |
+ | **metadata** | **Object** | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. | [optional] |
+
+ ## Example
+
+ ```ruby
+ require 'openapi_openai'
+
+ instance = OpenApiOpenAIClient::CreateAssistantRequest.new(
+   model: nil,
+   name: nil,
+   description: nil,
+   instructions: nil,
+   tools: nil,
+   file_ids: nil,
+   metadata: nil
+ )
+ ```
+
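The properties documented above translate directly into a plain request payload. Below is a minimal sketch using only the Ruby standard library; the model id and field values are illustrative assumptions, not values from this gem, and the gem's `CreateAssistantRequest` model wraps the same field names:

```ruby
require 'json'

# Illustrative payload mirroring the CreateAssistantRequest fields above
# (values are made up; only the field names and limits come from the table).
assistant_request = {
  model: 'gpt-4-turbo',                          # required
  name: 'Data Helper',                           # optional, max 256 characters
  description: 'Answers questions about CSVs.',  # optional, max 512 characters
  instructions: 'You are a careful analyst.',    # optional, max 256,000 characters
  tools: [{ type: 'code_interpreter' }],         # optional, max 128 tools
  file_ids: [],                                  # optional, max 20 files
  metadata: { 'team' => 'analytics' }            # optional, max 16 key-value pairs
}

# Enforce the documented length limit on the optional name before sending.
name = assistant_request[:name]
raise ArgumentError, 'name exceeds 256 characters' if name && name.length > 256

puts JSON.generate(assistant_request)
```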
@@ -0,0 +1,15 @@
+ # OpenApiOpenAIClient::CreateAssistantRequestModel
+
+ ## Properties
+
+ | Name | Type | Description | Notes |
+ | ---- | ---- | ----------- | ----- |
+
+ ## Example
+
+ ```ruby
+ require 'openapi_openai'
+
+ instance = OpenApiOpenAIClient::CreateAssistantRequestModel.new()
+ ```
+
@@ -0,0 +1,30 @@
+ # OpenApiOpenAIClient::CreateChatCompletionFunctionResponse
+
+ ## Properties
+
+ | Name | Type | Description | Notes |
+ | ---- | ---- | ----------- | ----- |
+ | **id** | **String** | A unique identifier for the chat completion. | |
+ | **choices** | [**Array<CreateChatCompletionFunctionResponseChoicesInner>**](CreateChatCompletionFunctionResponseChoicesInner.md) | A list of chat completion choices. Can be more than one if `n` is greater than 1. | |
+ | **created** | **Integer** | The Unix timestamp (in seconds) of when the chat completion was created. | |
+ | **model** | **String** | The model used for the chat completion. | |
+ | **system_fingerprint** | **String** | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism. | [optional] |
+ | **object** | **String** | The object type, which is always `chat.completion`. | |
+ | **usage** | [**CompletionUsage**](CompletionUsage.md) | | [optional] |
+
+ ## Example
+
+ ```ruby
+ require 'openapi_openai'
+
+ instance = OpenApiOpenAIClient::CreateChatCompletionFunctionResponse.new(
+   id: nil,
+   choices: nil,
+   created: nil,
+   model: nil,
+   system_fingerprint: nil,
+   object: nil,
+   usage: nil
+ )
+ ```
+
@@ -0,0 +1,22 @@
+ # OpenApiOpenAIClient::CreateChatCompletionFunctionResponseChoicesInner
+
+ ## Properties
+
+ | Name | Type | Description | Notes |
+ | ---- | ---- | ----------- | ----- |
+ | **finish_reason** | **String** | The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, or `function_call` if the model called a function. | |
+ | **index** | **Integer** | The index of the choice in the list of choices. | |
+ | **message** | [**ChatCompletionResponseMessage**](ChatCompletionResponseMessage.md) | | |
+
+ ## Example
+
+ ```ruby
+ require 'openapi_openai'
+
+ instance = OpenApiOpenAIClient::CreateChatCompletionFunctionResponseChoicesInner.new(
+   finish_reason: nil,
+   index: nil,
+   message: nil
+ )
+ ```
+
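The `finish_reason` values documented above are what client code typically branches on. The following is a hedged, stdlib-only sketch; the JSON reply is illustrative and hand-written, not a real API response, and only the field names and the four documented `finish_reason` values come from the tables above:

```ruby
require 'json'

# Hypothetical parsed reply shaped like CreateChatCompletionFunctionResponse
# (ids, timestamps, and the function name are made up for illustration).
raw = <<~JSON
  {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1700000000,
    "model": "gpt-3.5-turbo",
    "choices": [
      {
        "index": 0,
        "finish_reason": "function_call",
        "message": {
          "role": "assistant",
          "function_call": { "name": "get_weather", "arguments": "{}" }
        }
      }
    ]
  }
JSON

response = JSON.parse(raw)

# Branch on the documented finish_reason values.
def describe_finish(choice)
  case choice['finish_reason']
  when 'stop'           then 'natural stop or stop sequence'
  when 'length'         then 'max_tokens reached'
  when 'content_filter' then 'content omitted by filter'
  when 'function_call'  then "model called #{choice.dig('message', 'function_call', 'name')}"
  else 'unknown'
  end
end

puts describe_finish(response['choices'].first)  # => model called get_weather
```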
@@ -4,20 +4,26 @@
 
  | Name | Type | Description | Notes |
  | ---- | ---- | ----------- | ----- |
+ | **messages** | [**Array<ChatCompletionRequestMessage>**](ChatCompletionRequestMessage.md) | A list of messages comprising the conversation so far. [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models). | |
  | **model** | [**CreateChatCompletionRequestModel**](CreateChatCompletionRequestModel.md) | | |
- | **messages** | [**Array<ChatCompletionRequestMessage>**](ChatCompletionRequestMessage.md) | A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb). | |
- | **functions** | [**Array<ChatCompletionFunctions>**](ChatCompletionFunctions.md) | A list of functions the model may generate JSON inputs for. | [optional] |
- | **function_call** | [**CreateChatCompletionRequestFunctionCall**](CreateChatCompletionRequestFunctionCall.md) | | [optional] |
+ | **frequency_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) | [optional][default to 0] |
+ | **logit_bias** | **Hash<String, Integer>** | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | [optional] |
+ | **logprobs** | **Boolean** | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. | [optional][default to false] |
+ | **top_logprobs** | **Integer** | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used. | [optional] |
+ | **max_tokens** | **Integer** | The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens. | [optional] |
+ | **n** | **Integer** | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. | [optional][default to 1] |
+ | **presence_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) | [optional][default to 0] |
+ | **response_format** | [**CreateChatCompletionRequestResponseFormat**](CreateChatCompletionRequestResponseFormat.md) | | [optional] |
+ | **seed** | **Integer** | This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend. | [optional] |
+ | **stop** | [**CreateChatCompletionRequestStop**](CreateChatCompletionRequestStop.md) | | [optional] |
+ | **stream** | **Boolean** | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions). | [optional][default to false] |
  | **temperature** | **Float** | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. | [optional][default to 1] |
  | **top_p** | **Float** | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. | [optional][default to 1] |
- | **n** | **Integer** | How many chat completion choices to generate for each input message. | [optional][default to 1] |
- | **stream** | **Boolean** | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb). | [optional][default to false] |
- | **stop** | [**CreateChatCompletionRequestStop**](CreateChatCompletionRequestStop.md) | | [optional] |
- | **max_tokens** | **Integer** | The maximum number of [tokens](/tokenizer) to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens. | [optional] |
17
- | **presence_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details) | [optional][default to 0] |
18
- | **frequency_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details) | [optional][default to 0] |
19
- | **logit_bias** | **Object** | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | [optional] |
22
+ | **tools** | [**Array<ChatCompletionTool>**](ChatCompletionTool.md) | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported. | [optional] |
23
+ | **tool_choice** | [**ChatCompletionToolChoiceOption**](ChatCompletionToolChoiceOption.md) | | [optional] |
20
24
  | **user** | **String** | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). | [optional] |
25
+ | **function_call** | [**CreateChatCompletionRequestFunctionCall**](CreateChatCompletionRequestFunctionCall.md) | | [optional] |
26
+ | **functions** | [**Array<ChatCompletionFunctions>**](ChatCompletionFunctions.md) | Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for. | [optional] |
21
27
 
22
28
  ## Example
23
29
 
@@ -25,20 +31,26 @@
  require 'openapi_openai'

  instance = OpenApiOpenAIClient::CreateChatCompletionRequest.new(
- model: null,
  messages: null,
- functions: null,
- function_call: null,
- temperature: 1,
- top_p: 1,
- n: 1,
- stream: null,
- stop: null,
- max_tokens: null,
- presence_penalty: null,
+ model: null,
  frequency_penalty: null,
  logit_bias: null,
- user: user-1234
+ logprobs: null,
+ top_logprobs: null,
+ max_tokens: null,
+ n: 1,
+ presence_penalty: null,
+ response_format: null,
+ seed: null,
+ stop: null,
+ stream: null,
+ temperature: 1,
+ top_p: 1,
+ tools: null,
+ tool_choice: null,
+ user: user-1234,
+ function_call: null,
+ functions: null
  )
  ```

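For orientation, the properties above map one-to-one onto the JSON request body sent to the API. A minimal sketch using plain Ruby hashes (no gem dependency; the model name and message are illustrative placeholders):

```ruby
require 'json'

# Hedged sketch of the wire-format body that CreateChatCompletionRequest
# serializes to. Field names and defaults follow the property table above;
# the model and message values are illustrative, not taken from this gem.
request = {
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Say hello' }],
  n: 1,            # default: one choice per input
  temperature: 1,  # default sampling temperature
  logprobs: true,  # ask for per-token log probabilities...
  top_logprobs: 2  # ...and the 2 most likely alternatives at each position
}

puts JSON.generate(request)
```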
@@ -14,7 +14,7 @@ require 'openapi_openai'
  OpenApiOpenAIClient::CreateChatCompletionRequestFunctionCall.openapi_one_of
  # =>
  # [
- # :'CreateChatCompletionRequestFunctionCallOneOf',
+ # :'ChatCompletionFunctionCallOption',
  # :'String'
  # ]
  ```
@@ -29,7 +29,7 @@ Find the appropriate object from the `openapi_one_of` list and casts the data in
  require 'openapi_openai'

  OpenApiOpenAIClient::CreateChatCompletionRequestFunctionCall.build(data)
- # => #<CreateChatCompletionRequestFunctionCallOneOf:0x00007fdd4aab02a0>
+ # => #<ChatCompletionFunctionCallOption:0x00007fdd4aab02a0>

  OpenApiOpenAIClient::CreateChatCompletionRequestFunctionCall.build(data_that_doesnt_match)
  # => nil
@@ -43,7 +43,7 @@ OpenApiOpenAIClient::CreateChatCompletionRequestFunctionCall.build(data_that_doe

  #### Return type

- - `CreateChatCompletionRequestFunctionCallOneOf`
+ - `ChatCompletionFunctionCallOption`
  - `String`
  - `nil` (if no type matches)

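The `build` contract documented above (try each oneOf candidate, return `nil` if none matches) can be sketched in plain Ruby; `FunctionCallOption` below is a stand-in Struct, not the gem's generated class:

```ruby
# Stand-in for the gem's generated oneOf variant class (illustrative).
FunctionCallOption = Struct.new(:name)

# Try each oneOf candidate in order and cast the data into the first
# matching type; return nil when nothing matches, as `build` does.
def build_function_call(data)
  case data
  when String then data                                        # e.g. "none" or "auto"
  when Hash   then FunctionCallOption.new(data[:name] || data['name'])
  end
end
```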
@@ -1,47 +1,15 @@
  # OpenApiOpenAIClient::CreateChatCompletionRequestModel

- ## Class instance methods
+ ## Properties

- ### `openapi_one_of`
+ | Name | Type | Description | Notes |
+ | ---- | ---- | ----------- | ----- |

- Returns the list of classes defined in oneOf.
-
- #### Example
-
- ```ruby
- require 'openapi_openai'
-
- OpenApiOpenAIClient::CreateChatCompletionRequestModel.openapi_one_of
- # =>
- # [
- # :'String'
- # ]
- ```
-
- ### build
-
- Find the appropriate object from the `openapi_one_of` list and casts the data into it.
-
- #### Example
+ ## Example

  ```ruby
  require 'openapi_openai'

- OpenApiOpenAIClient::CreateChatCompletionRequestModel.build(data)
- # => #<String:0x00007fdd4aab02a0>
-
- OpenApiOpenAIClient::CreateChatCompletionRequestModel.build(data_that_doesnt_match)
- # => nil
+ instance = OpenApiOpenAIClient::CreateChatCompletionRequestModel.new()
  ```

- #### Parameters
-
- | Name | Type | Description |
- | ---- | ---- | ----------- |
- | **data** | **Mixed** | data to be matched against the list of oneOf items |
-
- #### Return type
-
- - `String`
- - `nil` (if no type matches)
-
@@ -0,0 +1,18 @@
+ # OpenApiOpenAIClient::CreateChatCompletionRequestResponseFormat
+
+ ## Properties
+
+ | Name | Type | Description | Notes |
+ | ---- | ---- | ----------- | ----- |
+ | **type** | **String** | Must be one of &#x60;text&#x60; or &#x60;json_object&#x60;. | [optional][default to &#39;text&#39;] |
+
+ ## Example
+
+ ```ruby
+ require 'openapi_openai'
+
+ instance = OpenApiOpenAIClient::CreateChatCompletionRequestResponseFormat.new(
+ type: json_object
+ )
+ ```
+
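Since `type` only accepts `text` or `json_object` (defaulting to `text`), a client-side sanity check is a one-liner; the constant and method names below are illustrative, not part of the gem:

```ruby
# Allowed values for the response_format `type` field, per the table above.
RESPONSE_FORMAT_TYPES = %w[text json_object].freeze

def valid_response_format?(type)
  RESPONSE_FORMAT_TYPES.include?(type)
end
```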
@@ -4,12 +4,13 @@

  | Name | Type | Description | Notes |
  | ---- | ---- | ----------- | ----- |
- | **id** | **String** | | |
- | **object** | **String** | | |
- | **created** | **Integer** | | |
- | **model** | **String** | | |
- | **choices** | [**Array&lt;CreateChatCompletionResponseChoicesInner&gt;**](CreateChatCompletionResponseChoicesInner.md) | | |
- | **usage** | [**CreateCompletionResponseUsage**](CreateCompletionResponseUsage.md) | | [optional] |
+ | **id** | **String** | A unique identifier for the chat completion. | |
+ | **choices** | [**Array&lt;CreateChatCompletionResponseChoicesInner&gt;**](CreateChatCompletionResponseChoicesInner.md) | A list of chat completion choices. Can be more than one if &#x60;n&#x60; is greater than 1. | |
+ | **created** | **Integer** | The Unix timestamp (in seconds) of when the chat completion was created. | |
+ | **model** | **String** | The model used for the chat completion. | |
+ | **system_fingerprint** | **String** | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the &#x60;seed&#x60; request parameter to understand when backend changes have been made that might impact determinism. | [optional] |
+ | **object** | **String** | The object type, which is always &#x60;chat.completion&#x60;. | |
+ | **usage** | [**CompletionUsage**](CompletionUsage.md) | | [optional] |

  ## Example

@@ -18,10 +19,11 @@ require 'openapi_openai'

  instance = OpenApiOpenAIClient::CreateChatCompletionResponse.new(
  id: null,
- object: null,
+ choices: null,
  created: null,
  model: null,
- choices: null,
+ system_fingerprint: null,
+ object: null,
  usage: null
  )
  ```
@@ -4,9 +4,10 @@

  | Name | Type | Description | Notes |
  | ---- | ---- | ----------- | ----- |
- | **index** | **Integer** | | [optional] |
- | **message** | [**ChatCompletionResponseMessage**](ChatCompletionResponseMessage.md) | | [optional] |
- | **finish_reason** | **String** | | [optional] |
+ | **finish_reason** | **String** | The reason the model stopped generating tokens. This will be &#x60;stop&#x60; if the model hit a natural stop point or a provided stop sequence, &#x60;length&#x60; if the maximum number of tokens specified in the request was reached, &#x60;content_filter&#x60; if content was omitted due to a flag from our content filters, &#x60;tool_calls&#x60; if the model called a tool, or &#x60;function_call&#x60; (deprecated) if the model called a function. | |
+ | **index** | **Integer** | The index of the choice in the list of choices. | |
+ | **message** | [**ChatCompletionResponseMessage**](ChatCompletionResponseMessage.md) | | |
+ | **logprobs** | [**CreateChatCompletionResponseChoicesInnerLogprobs**](CreateChatCompletionResponseChoicesInnerLogprobs.md) | | |

  ## Example

@@ -14,9 +15,10 @@
  require 'openapi_openai'

  instance = OpenApiOpenAIClient::CreateChatCompletionResponseChoicesInner.new(
+ finish_reason: null,
  index: null,
  message: null,
- finish_reason: null
+ logprobs: null
  )
  ```

@@ -0,0 +1,18 @@
+ # OpenApiOpenAIClient::CreateChatCompletionResponseChoicesInnerLogprobs
+
+ ## Properties
+
+ | Name | Type | Description | Notes |
+ | ---- | ---- | ----------- | ----- |
+ | **content** | [**Array&lt;ChatCompletionTokenLogprob&gt;**](ChatCompletionTokenLogprob.md) | A list of message content tokens with log probability information. | |
+
+ ## Example
+
+ ```ruby
+ require 'openapi_openai'
+
+ instance = OpenApiOpenAIClient::CreateChatCompletionResponseChoicesInnerLogprobs.new(
+ content: null
+ )
+ ```
+
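When `logprobs: true` is set on the request, each choice carries one of these objects. A sketch of walking its `content` array; the payload below is an illustrative fragment shaped like the table above, not real API output:

```ruby
require 'json'

# Illustrative response fragment: one entry per output token,
# each with its log probability, as described above.
fragment = JSON.parse('{"content": [
  {"token": "Hello", "logprob": -0.1},
  {"token": "!", "logprob": -0.5}
]}')

# Collect (token, logprob) pairs for inspection.
tokens = fragment['content'].map { |e| [e['token'], e['logprob']] }
```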
@@ -4,11 +4,12 @@

  | Name | Type | Description | Notes |
  | ---- | ---- | ----------- | ----- |
- | **id** | **String** | | |
- | **object** | **String** | | |
- | **created** | **Integer** | | |
- | **model** | **String** | | |
- | **choices** | [**Array&lt;CreateChatCompletionStreamResponseChoicesInner&gt;**](CreateChatCompletionStreamResponseChoicesInner.md) | | |
+ | **id** | **String** | A unique identifier for the chat completion. Each chunk has the same ID. | |
+ | **choices** | [**Array&lt;CreateChatCompletionStreamResponseChoicesInner&gt;**](CreateChatCompletionStreamResponseChoicesInner.md) | A list of chat completion choices. Can be more than one if &#x60;n&#x60; is greater than 1. | |
+ | **created** | **Integer** | The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp. | |
+ | **model** | **String** | The model used to generate the completion. | |
+ | **system_fingerprint** | **String** | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the &#x60;seed&#x60; request parameter to understand when backend changes have been made that might impact determinism. | [optional] |
+ | **object** | **String** | The object type, which is always &#x60;chat.completion.chunk&#x60;. | |

  ## Example

@@ -17,10 +18,11 @@ require 'openapi_openai'

  instance = OpenApiOpenAIClient::CreateChatCompletionStreamResponse.new(
  id: null,
- object: null,
+ choices: null,
  created: null,
  model: null,
- choices: null
+ system_fingerprint: null,
+ object: null
  )
  ```

@@ -4,9 +4,10 @@

  | Name | Type | Description | Notes |
  | ---- | ---- | ----------- | ----- |
- | **index** | **Integer** | | [optional] |
- | **delta** | [**ChatCompletionStreamResponseDelta**](ChatCompletionStreamResponseDelta.md) | | [optional] |
- | **finish_reason** | **String** | | [optional] |
+ | **delta** | [**ChatCompletionStreamResponseDelta**](ChatCompletionStreamResponseDelta.md) | | |
+ | **logprobs** | [**CreateChatCompletionResponseChoicesInnerLogprobs**](CreateChatCompletionResponseChoicesInnerLogprobs.md) | | [optional] |
+ | **finish_reason** | **String** | The reason the model stopped generating tokens. This will be &#x60;stop&#x60; if the model hit a natural stop point or a provided stop sequence, &#x60;length&#x60; if the maximum number of tokens specified in the request was reached, &#x60;content_filter&#x60; if content was omitted due to a flag from our content filters, &#x60;tool_calls&#x60; if the model called a tool, or &#x60;function_call&#x60; (deprecated) if the model called a function. | |
+ | **index** | **Integer** | The index of the choice in the list of choices. | |

  ## Example

@@ -14,9 +15,10 @@
  require 'openapi_openai'

  instance = OpenApiOpenAIClient::CreateChatCompletionStreamResponseChoicesInner.new(
- index: null,
  delta: null,
- finish_reason: null
+ logprobs: null,
+ finish_reason: null,
+ index: null
  )
  ```

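Streaming consumers typically concatenate the partial `delta` fragments until a chunk arrives with a non-null `finish_reason`. A minimal sketch over illustrative chunks (not real API output):

```ruby
# Illustrative stream chunks, shaped like the choices documented above.
chunks = [
  { 'choices' => [{ 'index' => 0, 'delta' => { 'content' => 'Hel' } }] },
  { 'choices' => [{ 'index' => 0, 'delta' => { 'content' => 'lo!' } }] },
  { 'choices' => [{ 'index' => 0, 'delta' => {}, 'finish_reason' => 'stop' }] }
]

# Concatenate the partial `content` deltas into the final message;
# the last chunk carries an empty delta plus the finish_reason.
message = chunks.flat_map { |c| c['choices'] }
                .map { |choice| choice.dig('delta', 'content') }
                .compact
                .join
```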
@@ -6,19 +6,20 @@
  | ---- | ---- | ----------- | ----- |
  | **model** | [**CreateCompletionRequestModel**](CreateCompletionRequestModel.md) | | |
  | **prompt** | [**CreateCompletionRequestPrompt**](CreateCompletionRequestPrompt.md) | | |
- | **suffix** | **String** | The suffix that comes after a completion of inserted text. | [optional] |
- | **max_tokens** | **Integer** | The maximum number of [tokens](/tokenizer) to generate in the completion. The token count of your prompt plus &#x60;max_tokens&#x60; cannot exceed the model&#39;s context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens. | [optional][default to 16] |
- | **temperature** | **Float** | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or &#x60;top_p&#x60; but not both. | [optional][default to 1] |
- | **top_p** | **Float** | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or &#x60;temperature&#x60; but not both. | [optional][default to 1] |
- | **n** | **Integer** | How many completions to generate for each prompt. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for &#x60;max_tokens&#x60; and &#x60;stop&#x60;. | [optional][default to 1] |
- | **stream** | **Boolean** | Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a &#x60;data: [DONE]&#x60; message. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb). | [optional][default to false] |
- | **logprobs** | **Integer** | Include the log probabilities on the &#x60;logprobs&#x60; most likely tokens, as well the chosen tokens. For example, if &#x60;logprobs&#x60; is 5, the API will return a list of the 5 most likely tokens. The API will always return the &#x60;logprob&#x60; of the sampled token, so there may be up to &#x60;logprobs+1&#x60; elements in the response. The maximum value for &#x60;logprobs&#x60; is 5. | [optional] |
+ | **best_of** | **Integer** | Generates &#x60;best_of&#x60; completions server-side and returns the \&quot;best\&quot; (the one with the highest log probability per token). Results cannot be streamed. When used with &#x60;n&#x60;, &#x60;best_of&#x60; controls the number of candidate completions and &#x60;n&#x60; specifies how many to return – &#x60;best_of&#x60; must be greater than &#x60;n&#x60;. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for &#x60;max_tokens&#x60; and &#x60;stop&#x60;. | [optional][default to 1] |
  | **echo** | **Boolean** | Echo back the prompt in addition to the completion | [optional][default to false] |
+ | **frequency_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model&#39;s likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) | [optional][default to 0] |
+ | **logit_bias** | **Hash&lt;String, Integer&gt;** | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view&#x3D;bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass &#x60;{\&quot;50256\&quot;: -100}&#x60; to prevent the &lt;|endoftext|&gt; token from being generated. | [optional] |
+ | **logprobs** | **Integer** | Include the log probabilities on the &#x60;logprobs&#x60; most likely output tokens, as well the chosen tokens. For example, if &#x60;logprobs&#x60; is 5, the API will return a list of the 5 most likely tokens. The API will always return the &#x60;logprob&#x60; of the sampled token, so there may be up to &#x60;logprobs+1&#x60; elements in the response. The maximum value for &#x60;logprobs&#x60; is 5. | [optional] |
+ | **max_tokens** | **Integer** | The maximum number of [tokens](/tokenizer) that can be generated in the completion. The token count of your prompt plus &#x60;max_tokens&#x60; cannot exceed the model&#39;s context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens. | [optional][default to 16] |
+ | **n** | **Integer** | How many completions to generate for each prompt. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for &#x60;max_tokens&#x60; and &#x60;stop&#x60;. | [optional][default to 1] |
+ | **presence_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model&#39;s likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) | [optional][default to 0] |
+ | **seed** | **Integer** | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same &#x60;seed&#x60; and parameters should return the same result. Determinism is not guaranteed, and you should refer to the &#x60;system_fingerprint&#x60; response parameter to monitor changes in the backend. | [optional] |
  | **stop** | [**CreateCompletionRequestStop**](CreateCompletionRequestStop.md) | | [optional] |
- | **presence_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model&#39;s likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details) | [optional][default to 0] |
- | **frequency_penalty** | **Float** | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model&#39;s likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details) | [optional][default to 0] |
- | **best_of** | **Integer** | Generates &#x60;best_of&#x60; completions server-side and returns the \&quot;best\&quot; (the one with the highest log probability per token). Results cannot be streamed. When used with &#x60;n&#x60;, &#x60;best_of&#x60; controls the number of candidate completions and &#x60;n&#x60; specifies how many to return – &#x60;best_of&#x60; must be greater than &#x60;n&#x60;. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for &#x60;max_tokens&#x60; and &#x60;stop&#x60;. | [optional][default to 1] |
- | **logit_bias** | **Object** | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view&#x3D;bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass &#x60;{\&quot;50256\&quot;: -100}&#x60; to prevent the &lt;|endoftext|&gt; token from being generated. | [optional] |
+ | **stream** | **Boolean** | Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a &#x60;data: [DONE]&#x60; message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions). | [optional][default to false] |
+ | **suffix** | **String** | The suffix that comes after a completion of inserted text. This parameter is only supported for &#x60;gpt-3.5-turbo-instruct&#x60;. | [optional] |
+ | **temperature** | **Float** | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or &#x60;top_p&#x60; but not both. | [optional][default to 1] |
+ | **top_p** | **Float** | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or &#x60;temperature&#x60; but not both. | [optional][default to 1] |
  | **user** | **String** | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). | [optional] |

  ## Example
@@ -29,19 +30,20 @@ require 'openapi_openai'
  instance = OpenApiOpenAIClient::CreateCompletionRequest.new(
  model: null,
  prompt: null,
- suffix: test.,
- max_tokens: 16,
- temperature: 1,
- top_p: 1,
- n: 1,
- stream: null,
- logprobs: null,
+ best_of: null,
  echo: null,
- stop: null,
- presence_penalty: null,
  frequency_penalty: null,
- best_of: null,
  logit_bias: null,
+ logprobs: null,
+ max_tokens: 16,
+ n: 1,
+ presence_penalty: null,
+ seed: null,
+ stop: null,
+ stream: null,
+ suffix: test.,
+ temperature: 1,
+ top_p: 1,
  user: user-1234
  )
  ```
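The `best_of`/`n`/`stream` constraints described in the table lend themselves to a client-side sanity check before sending a request. The method below is an illustrative sketch, not part of the gem:

```ruby
# best_of candidates are generated server-side and n of them are returned,
# so best_of must be at least n; best_of > 1 also cannot be streamed.
def validate_completion_params!(best_of: 1, n: 1, stream: false)
  raise ArgumentError, 'best_of results cannot be streamed' if stream && best_of > 1
  raise ArgumentError, 'best_of must not be less than n' if best_of < n
  true
end
```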
@@ -1,47 +1,15 @@
  # OpenApiOpenAIClient::CreateCompletionRequestModel

- ## Class instance methods
+ ## Properties

- ### `openapi_one_of`
+ | Name | Type | Description | Notes |
+ | ---- | ---- | ----------- | ----- |

- Returns the list of classes defined in oneOf.
-
- #### Example
-
- ```ruby
- require 'openapi_openai'
-
- OpenApiOpenAIClient::CreateCompletionRequestModel.openapi_one_of
- # =>
- # [
- # :'String'
- # ]
- ```
-
- ### build
-
- Find the appropriate object from the `openapi_one_of` list and casts the data into it.
-
- #### Example
+ ## Example

  ```ruby
  require 'openapi_openai'

- OpenApiOpenAIClient::CreateCompletionRequestModel.build(data)
- # => #<String:0x00007fdd4aab02a0>
-
- OpenApiOpenAIClient::CreateCompletionRequestModel.build(data_that_doesnt_match)
- # => nil
+ instance = OpenApiOpenAIClient::CreateCompletionRequestModel.new()
  ```

- #### Parameters
-
- | Name | Type | Description |
- | ---- | ---- | ----------- |
- | **data** | **Mixed** | data to be matched against the list of oneOf items |
-
- #### Return type
-
- - `String`
- - `nil` (if no type matches)
-
@@ -4,12 +4,13 @@

  | Name | Type | Description | Notes |
  | ---- | ---- | ----------- | ----- |
- | **id** | **String** | | |
- | **object** | **String** | | |
- | **created** | **Integer** | | |
- | **model** | **String** | | |
- | **choices** | [**Array&lt;CreateCompletionResponseChoicesInner&gt;**](CreateCompletionResponseChoicesInner.md) | | |
- | **usage** | [**CreateCompletionResponseUsage**](CreateCompletionResponseUsage.md) | | [optional] |
+ | **id** | **String** | A unique identifier for the completion. | |
+ | **choices** | [**Array&lt;CreateCompletionResponseChoicesInner&gt;**](CreateCompletionResponseChoicesInner.md) | The list of completion choices the model generated for the input prompt. | |
+ | **created** | **Integer** | The Unix timestamp (in seconds) of when the completion was created. | |
+ | **model** | **String** | The model used for completion. | |
+ | **system_fingerprint** | **String** | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the &#x60;seed&#x60; request parameter to understand when backend changes have been made that might impact determinism. | [optional] |
+ | **object** | **String** | The object type, which is always \&quot;text_completion\&quot; | |
+ | **usage** | [**CompletionUsage**](CompletionUsage.md) | | [optional] |

  ## Example

@@ -18,10 +19,11 @@ require 'openapi_openai'

  instance = OpenApiOpenAIClient::CreateCompletionResponse.new(
  id: null,
- object: null,
+ choices: null,
  created: null,
  model: null,
- choices: null,
+ system_fingerprint: null,
+ object: null,
  usage: null
  )
  ```
@@ -4,10 +4,10 @@

  | Name | Type | Description | Notes |
  | ---- | ---- | ----------- | ----- |
- | **text** | **String** | | |
+ | **finish_reason** | **String** | The reason the model stopped generating tokens. This will be &#x60;stop&#x60; if the model hit a natural stop point or a provided stop sequence, &#x60;length&#x60; if the maximum number of tokens specified in the request was reached, or &#x60;content_filter&#x60; if content was omitted due to a flag from our content filters. | |
  | **index** | **Integer** | | |
  | **logprobs** | [**CreateCompletionResponseChoicesInnerLogprobs**](CreateCompletionResponseChoicesInnerLogprobs.md) | | |
- | **finish_reason** | **String** | | |
+ | **text** | **String** | | |

  ## Example

@@ -15,10 +15,10 @@
  require 'openapi_openai'

  instance = OpenApiOpenAIClient::CreateCompletionResponseChoicesInner.new(
- text: null,
+ finish_reason: null,
  index: null,
  logprobs: null,
- finish_reason: null
+ text: null
  )
  ```