openai 3.2.0 → 3.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1 +1 @@
- 6.4.0
+ 6.6.0
package/README.md CHANGED
@@ -2,17 +2,17 @@
 
  The OpenAI Node.js library provides convenient access to the OpenAI API from Node.js applications. Most of the code in this library is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi).
 
- **Important note: this library is meant for server-side usage only, as using it in client-side browser code will expose your secret API key. [See here](https://beta.openai.com/docs/api-reference/authentication) for more details.**
+ > ⚠️ **Important note: this library is meant for server-side usage only, as using it in client-side browser code will expose your secret API key. [See here](https://platform.openai.com/docs/api-reference/authentication) for more details.**
 
  ## Installation
 
  ```bash
- $ npm install openai
+ npm install openai
  ```
 
  ## Usage
 
- The library needs to be configured with your account's secret key, which is available on the [website](https://beta.openai.com/account/api-keys). We recommend setting it as an environment variable. Here's an example of initializing the library with the API key loaded from an environment variable and creating a completion:
+ The library needs to be configured with your account's secret key, which is available in your [OpenAI account page](https://platform.openai.com/account/api-keys). We recommend setting it as an environment variable. Here's an example of initializing the library with the API key loaded from an environment variable and creating a completion:
 
  ```javascript
  const { Configuration, OpenAIApi } = require("openai");
@@ -22,20 +22,19 @@ const configuration = new Configuration({
  });
  const openai = new OpenAIApi(configuration);
 
- const completion = await openai.createCompletion({
-   model: "text-davinci-003",
-   prompt: "Hello world",
+ const chatCompletion = await openai.createChatCompletion({
+   model: "gpt-3.5-turbo",
+   messages: [{role: "user", content: "Hello world"}],
  });
- console.log(completion.data.choices[0].text);
+ console.log(chatCompletion.data.choices[0].message);
  ```
 
- Check out the [full API documentation](https://beta.openai.com/docs/api-reference?lang=node.js) for examples of all the available functions.
+ Check out the [full API documentation](https://platform.openai.com/docs/api-reference?lang=node.js) for examples of all the available functions.
 
  ### Request options
 
  All of the available API request functions additionally contain an optional final parameter where you can pass custom [axios request options](https://axios-http.com/docs/req_config), for example:
 
-
  ```javascript
  const completion = await openai.createCompletion(
    {
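
The snippet above is cut off at the hunk boundary. To round out the idea, here is a minimal sketch of passing custom axios options as the trailing parameter; the `timeout` value and extra header are illustrative, not from the diff:

```typescript
// Any axios request option may be passed as the final argument.
const completion = await openai.createCompletion(
  {
    model: "text-davinci-003",
    prompt: "Hello world",
  },
  {
    timeout: 1000, // axios option: abort if the request takes longer than 1s
    headers: { "Example-Header": "example" }, // hypothetical extra header
  }
);
```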
package/api.ts CHANGED
@@ -4,7 +4,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.2.0
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
@@ -23,6 +23,31 @@ import type { RequestArgs } from './base';
  // @ts-ignore
  import { BASE_PATH, COLLECTION_FORMATS, BaseAPI, RequiredError } from './base';
 
+ /**
+  *
+  * @export
+  * @interface ChatCompletionFunctions
+  */
+ export interface ChatCompletionFunctions {
+     /**
+      * The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
+      * @type {string}
+      * @memberof ChatCompletionFunctions
+      */
+     'name': string;
+     /**
+      * The description of what the function does.
+      * @type {string}
+      * @memberof ChatCompletionFunctions
+      */
+     'description'?: string;
+     /**
+      * The parameters the function accepts, described as a JSON Schema object. See the [guide](/docs/guides/gpt/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
+      * @type {{ [key: string]: any; }}
+      * @memberof ChatCompletionFunctions
+      */
+     'parameters'?: { [key: string]: any; };
+ }
  /**
   *
   * @export
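
The new `ChatCompletionFunctions` interface is how 3.3.0 exposes function calling: each entry names a function and describes its inputs as a JSON Schema object. A minimal sketch of a definition list, with the `get_current_weather` name and schema invented purely for illustration:

```typescript
import { ChatCompletionFunctions } from "openai";

// Hypothetical function definition matching the ChatCompletionFunctions shape:
// `name` is required; `description` and the JSON Schema `parameters` are optional.
const functions: ChatCompletionFunctions[] = [
  {
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string", description: "City and state, e.g. San Francisco, CA" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["location"],
    },
  },
];
```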
@@ -30,33 +55,59 @@ import { BASE_PATH, COLLECTION_FORMATS, BaseAPI, RequiredError } from './base';
   */
  export interface ChatCompletionRequestMessage {
      /**
-      * The role of the author of this message.
+      * The role of the message's author. One of `system`, `user`, `assistant`, or `function`.
       * @type {string}
       * @memberof ChatCompletionRequestMessage
       */
      'role': ChatCompletionRequestMessageRoleEnum;
      /**
-      * The contents of the message
+      * The contents of the message. `content` is required for all messages except assistant messages with function calls.
       * @type {string}
       * @memberof ChatCompletionRequestMessage
       */
-     'content': string;
+     'content'?: string;
      /**
-      * The name of the user in a multi-user chat
+      * The name of the author of this message. `name` is required if role is `function`, and it should be the name of the function whose response is in the `content`. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.
       * @type {string}
       * @memberof ChatCompletionRequestMessage
       */
      'name'?: string;
+     /**
+      *
+      * @type {ChatCompletionRequestMessageFunctionCall}
+      * @memberof ChatCompletionRequestMessage
+      */
+     'function_call'?: ChatCompletionRequestMessageFunctionCall;
  }
 
  export const ChatCompletionRequestMessageRoleEnum = {
      System: 'system',
      User: 'user',
-     Assistant: 'assistant'
+     Assistant: 'assistant',
+     Function: 'function'
  } as const;
 
  export type ChatCompletionRequestMessageRoleEnum = typeof ChatCompletionRequestMessageRoleEnum[keyof typeof ChatCompletionRequestMessageRoleEnum];
 
+ /**
+  * The name and arguments of a function that should be called, as generated by the model.
+  * @export
+  * @interface ChatCompletionRequestMessageFunctionCall
+  */
+ export interface ChatCompletionRequestMessageFunctionCall {
+     /**
+      * The name of the function to call.
+      * @type {string}
+      * @memberof ChatCompletionRequestMessageFunctionCall
+      */
+     'name'?: string;
+     /**
+      * The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
+      * @type {string}
+      * @memberof ChatCompletionRequestMessageFunctionCall
+      */
+     'arguments'?: string;
+ }
  /**
   *
   * @export
@@ -70,17 +121,24 @@ export interface ChatCompletionResponseMessage {
   */
      'role': ChatCompletionResponseMessageRoleEnum;
      /**
-      * The contents of the message
+      * The contents of the message.
       * @type {string}
       * @memberof ChatCompletionResponseMessage
       */
-     'content': string;
+     'content'?: string;
+     /**
+      *
+      * @type {ChatCompletionRequestMessageFunctionCall}
+      * @memberof ChatCompletionResponseMessage
+      */
+     'function_call'?: ChatCompletionRequestMessageFunctionCall;
  }
 
  export const ChatCompletionResponseMessageRoleEnum = {
      System: 'system',
      User: 'user',
-     Assistant: 'assistant'
+     Assistant: 'assistant',
+     Function: 'function'
  } as const;
 
  export type ChatCompletionResponseMessageRoleEnum = typeof ChatCompletionResponseMessageRoleEnum[keyof typeof ChatCompletionResponseMessageRoleEnum];
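
With `content` now optional and `function_call` added on response messages, a reply may be a function invocation rather than text. A sketch of handling that case, assuming `messages` is the running `ChatCompletionRequestMessage[]`, `functions` is the hypothetical list above, and a `getCurrentWeather` handler exists; note `arguments` is a JSON string the model may get wrong, so parse defensively:

```typescript
const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo-0613", // assumed: a snapshot that supports function calling
  messages,
  functions,
});

const message = response.data.choices[0].message;
if (message?.function_call) {
  // The arguments are model-generated JSON and may be malformed or include
  // parameters outside the schema, so validate before acting on them.
  let args: Record<string, unknown> = {};
  try {
    args = JSON.parse(message.function_call.arguments ?? "{}");
  } catch {
    // leave args empty, or re-prompt the model for valid JSON
  }
  // Append the assistant turn, then the function result as a `function` message.
  messages.push(message);
  messages.push({
    role: "function",
    name: message.function_call.name,
    content: JSON.stringify(getCurrentWeather(args)), // hypothetical handler
  });
}
```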
@@ -146,7 +204,7 @@ export interface CreateAnswerRequest {
   */
      'temperature'?: number | null;
      /**
-      * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. If you need more than this, please contact us through our [Help center](https://help.openai.com) and describe your use case. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs.
+      * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs.
       * @type {number}
       * @memberof CreateAnswerRequest
       */
@@ -276,17 +334,29 @@ export interface CreateAnswerResponseSelectedDocumentsInner {
   */
  export interface CreateChatCompletionRequest {
      /**
-      * ID of the model to use. Currently, only `gpt-3.5-turbo` and `gpt-3.5-turbo-0301` are supported.
+      * ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
       * @type {string}
       * @memberof CreateChatCompletionRequest
       */
      'model': string;
      /**
-      * The messages to generate chat completions for, in the [chat format](/docs/guides/chat/introduction).
+      * A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
       * @type {Array<ChatCompletionRequestMessage>}
       * @memberof CreateChatCompletionRequest
       */
      'messages': Array<ChatCompletionRequestMessage>;
+     /**
+      * A list of functions the model may generate JSON inputs for.
+      * @type {Array<ChatCompletionFunctions>}
+      * @memberof CreateChatCompletionRequest
+      */
+     'functions'?: Array<ChatCompletionFunctions>;
+     /**
+      *
+      * @type {CreateChatCompletionRequestFunctionCall}
+      * @memberof CreateChatCompletionRequest
+      */
+     'function_call'?: CreateChatCompletionRequestFunctionCall;
      /**
       * What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both.
       * @type {number}
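
Tying the new request fields together: a chat completion that offers the model a function list and lets it decide whether to call one (a sketch; `functions` is the hypothetical list from earlier, and `max_tokens` is the new cap added a few hunks below):

```typescript
const chatCompletion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo-0613", // assumed function-calling-capable model
  messages: [{ role: "user", content: "What's the weather in Boston?" }],
  functions,             // hypothetical definitions from the earlier sketch
  function_call: "auto", // or "none", or { name: "..." } to force one
  max_tokens: 256,       // new optional cap on generated tokens
});
```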
@@ -306,7 +376,7 @@ export interface CreateChatCompletionRequest {
   */
      'n'?: number | null;
      /**
-      * If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message.
+      * If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
       * @type {boolean}
       * @memberof CreateChatCompletionRequest
       */
@@ -317,6 +387,12 @@ export interface CreateChatCompletionRequest {
   * @memberof CreateChatCompletionRequest
       */
      'stop'?: CreateChatCompletionRequestStop;
+     /**
+      * The maximum number of [tokens](/tokenizer) to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model\'s context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens.
+      * @type {number}
+      * @memberof CreateChatCompletionRequest
+      */
+     'max_tokens'?: number;
      /**
       * Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model\'s likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details)
       * @type {number}
@@ -342,6 +418,26 @@ export interface CreateChatCompletionRequest {
   */
      'user'?: string;
  }
+ /**
+  * @type CreateChatCompletionRequestFunctionCall
+  * Controls how the model responds to function calls. \"none\" means the model does not call a function, and responds to the end-user. \"auto\" means the model can pick between an end-user or calling a function. Specifying a particular function via `{\"name\": \"my_function\"}` forces the model to call that function. \"none\" is the default when no functions are present. \"auto\" is the default if functions are present.
+  * @export
+  */
+ export type CreateChatCompletionRequestFunctionCall = CreateChatCompletionRequestFunctionCallOneOf | string;
+
+ /**
+  *
+  * @export
+  * @interface CreateChatCompletionRequestFunctionCallOneOf
+  */
+ export interface CreateChatCompletionRequestFunctionCallOneOf {
+     /**
+      * The name of the function to call.
+      * @type {string}
+      * @memberof CreateChatCompletionRequestFunctionCallOneOf
+      */
+     'name': string;
+ }
  /**
   * @type CreateChatCompletionRequestStop
   * Up to 4 sequences where the API will stop generating further tokens.
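
The union above also admits the object form to force a particular function rather than letting the model choose. A short illustration using the hypothetical function from earlier:

```typescript
// Forces the model to call get_current_weather rather than answering directly.
const forced = await openai.createChatCompletion({
  model: "gpt-3.5-turbo-0613", // assumed
  messages: [{ role: "user", content: "Weather in Boston, please." }],
  functions,
  function_call: { name: "get_current_weather" },
});
```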
@@ -466,7 +562,7 @@ export interface CreateClassificationRequest {
   */
      'temperature'?: number | null;
      /**
-      * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. If you need more than this, please contact us through our [Help center](https://help.openai.com) and describe your use case. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs.
+      * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs.
       * @type {number}
       * @memberof CreateClassificationRequest
       */
@@ -601,7 +697,7 @@ export interface CreateCompletionRequest {
   */
      'suffix'?: string | null;
      /**
-      * The maximum number of [tokens](/tokenizer) to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model\'s context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
+      * The maximum number of [tokens](/tokenizer) to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model\'s context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens.
       * @type {number}
       * @memberof CreateCompletionRequest
       */
@@ -625,13 +721,13 @@ export interface CreateCompletionRequest {
   */
      'n'?: number | null;
      /**
-      * Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message.
+      * Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
       * @type {boolean}
       * @memberof CreateCompletionRequest
       */
      'stream'?: boolean | null;
      /**
-      * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. If you need more than this, please contact us through our [Help center](https://help.openai.com) and describe your use case.
+      * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
       * @type {number}
       * @memberof CreateCompletionRequest
       */
@@ -924,7 +1020,7 @@ export interface CreateEmbeddingRequest {
  }
  /**
   * @type CreateEmbeddingRequestInput
-  * Input text to get embeddings for, encoded as a string or array of tokens. To get embeddings for multiple inputs in a single request, pass an array of strings or array of token arrays. Each input must not exceed 8192 tokens in length.
+  * Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. Each input must not exceed the max input tokens for the model (8191 tokens for `text-embedding-ada-002`). [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens.
   * @export
   */
  export type CreateEmbeddingRequestInput = Array<any> | Array<number> | Array<string> | string;
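
Because `CreateEmbeddingRequestInput` accepts a single string or an array, several inputs can be embedded in one request. A sketch with the v3 client from the README:

```typescript
const embeddings = await openai.createEmbedding({
  model: "text-embedding-ada-002",
  input: ["First passage to embed", "Second passage to embed"], // string or array
});
// One embedding per input, in order; log the dimensionality of the first.
console.log(embeddings.data.data[0].embedding.length);
```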
@@ -1509,6 +1605,19 @@ export interface Engine {
   */
      'ready': boolean;
  }
+ /**
+  *
+  * @export
+  * @interface ErrorResponse
+  */
+ export interface ErrorResponse {
+     /**
+      *
+      * @type {Error}
+      * @memberof ErrorResponse
+      */
+     'error': Error;
+ }
  /**
   *
   * @export
@@ -1789,6 +1898,37 @@ export interface Model {
   */
      'owned_by': string;
  }
+ /**
+  *
+  * @export
+  * @interface ModelError
+  */
+ export interface ModelError {
+     /**
+      *
+      * @type {string}
+      * @memberof ModelError
+      */
+     'type': string;
+     /**
+      *
+      * @type {string}
+      * @memberof ModelError
+      */
+     'message': string;
+     /**
+      *
+      * @type {string}
+      * @memberof ModelError
+      */
+     'param': string | null;
+     /**
+      *
+      * @type {string}
+      * @memberof ModelError
+      */
+     'code': string | null;
+ }
  /**
   *
   * @export
@@ -1924,7 +2064,7 @@ export const OpenAIApiAxiosParamCreator = function (configuration?: Configuratio
  },
      /**
       *
-      * @summary Creates a completion for the chat message
+      * @summary Creates a model response for the given chat conversation.
       * @param {CreateChatCompletionRequest} createChatCompletionRequest
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
@@ -1997,7 +2137,7 @@ export const OpenAIApiAxiosParamCreator = function (configuration?: Configuratio
  },
      /**
       *
-      * @summary Creates a completion for the provided prompt and parameters
+      * @summary Creates a completion for the provided prompt and parameters.
       * @param {CreateCompletionRequest} createCompletionRequest
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
@@ -2437,15 +2577,16 @@ export const OpenAIApiAxiosParamCreator = function (configuration?: Configuratio
  /**
       *
       * @summary Transcribes audio into the input language.
-      * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+      * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
       * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
       * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
       * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
       * @param {number} [temperature] The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
+      * @param {string} [language] The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
       */
-     createTranscription: async (file: File, model: string, prompt?: string, responseFormat?: string, temperature?: number, options: AxiosRequestConfig = {}): Promise<RequestArgs> => {
+     createTranscription: async (file: File, model: string, prompt?: string, responseFormat?: string, temperature?: number, language?: string, options: AxiosRequestConfig = {}): Promise<RequestArgs> => {
          // verify required parameter 'file' is not null or undefined
          assertParamExists('createTranscription', 'file', file)
          // verify required parameter 'model' is not null or undefined
@@ -2484,6 +2625,10 @@ export const OpenAIApiAxiosParamCreator = function (configuration?: Configuratio
          localVarFormParams.append('temperature', temperature as any);
      }
 
+     if (language !== undefined) {
+         localVarFormParams.append('language', language as any);
+     }
+
 
      localVarHeaderParameter['Content-Type'] = 'multipart/form-data';
 
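The new `language` parameter slots in before `options`, so earlier optional parameters must be passed positionally. A sketch of a transcription pinned to English; streaming the file with `fs.createReadStream` is the usual Node approach, and the `as any` cast works around the `File` typing:

```typescript
import fs from "fs";

const transcript = await openai.createTranscription(
  fs.createReadStream("audio.mp3") as any, // typed as File; a stream works in Node
  "whisper-1",
  undefined, // prompt
  undefined, // responseFormat
  undefined, // temperature
  "en"       // language: ISO-639-1 code, new in 3.3.0
);
console.log(transcript.data.text);
```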
@@ -2500,7 +2645,7 @@ export const OpenAIApiAxiosParamCreator = function (configuration?: Configuratio
  /**
       *
       * @summary Translates audio into English.
-      * @param {File} file The audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+      * @param {File} file The audio file object (not file name) to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
       * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
       * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
       * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -2994,7 +3139,7 @@ export const OpenAIApiFp = function(configuration?: Configuration) {
  },
      /**
       *
-      * @summary Creates a completion for the chat message
+      * @summary Creates a model response for the given chat conversation.
       * @param {CreateChatCompletionRequest} createChatCompletionRequest
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
@@ -3017,7 +3162,7 @@ export const OpenAIApiFp = function(configuration?: Configuration) {
  },
      /**
       *
-      * @summary Creates a completion for the provided prompt and parameters
+      * @summary Creates a completion for the provided prompt and parameters.
       * @param {CreateCompletionRequest} createCompletionRequest
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
@@ -3141,22 +3286,23 @@ export const OpenAIApiFp = function(configuration?: Configuration) {
  /**
       *
       * @summary Transcribes audio into the input language.
-      * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+      * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
       * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
       * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
       * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
       * @param {number} [temperature] The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
+      * @param {string} [language] The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
       */
-     async createTranscription(file: File, model: string, prompt?: string, responseFormat?: string, temperature?: number, options?: AxiosRequestConfig): Promise<(axios?: AxiosInstance, basePath?: string) => AxiosPromise<CreateTranscriptionResponse>> {
-         const localVarAxiosArgs = await localVarAxiosParamCreator.createTranscription(file, model, prompt, responseFormat, temperature, options);
+     async createTranscription(file: File, model: string, prompt?: string, responseFormat?: string, temperature?: number, language?: string, options?: AxiosRequestConfig): Promise<(axios?: AxiosInstance, basePath?: string) => AxiosPromise<CreateTranscriptionResponse>> {
+         const localVarAxiosArgs = await localVarAxiosParamCreator.createTranscription(file, model, prompt, responseFormat, temperature, language, options);
          return createRequestFunction(localVarAxiosArgs, globalAxios, BASE_PATH, configuration);
      },
      /**
       *
       * @summary Translates audio into English.
-      * @param {File} file The audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+      * @param {File} file The audio file object (not file name) to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
       * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
       * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
       * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -3332,7 +3478,7 @@ export const OpenAIApiFactory = function (configuration?: Configuration, basePat
  },
      /**
       *
-      * @summary Creates a completion for the chat message
+      * @summary Creates a model response for the given chat conversation.
       * @param {CreateChatCompletionRequest} createChatCompletionRequest
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
@@ -3353,7 +3499,7 @@ export const OpenAIApiFactory = function (configuration?: Configuration, basePat
  },
      /**
       *
-      * @summary Creates a completion for the provided prompt and parameters
+      * @summary Creates a completion for the provided prompt and parameters.
       * @param {CreateCompletionRequest} createCompletionRequest
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
@@ -3467,21 +3613,22 @@ export const OpenAIApiFactory = function (configuration?: Configuration, basePat
  /**
       *
       * @summary Transcribes audio into the input language.
-      * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+      * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
       * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
       * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
       * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
       * @param {number} [temperature] The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
+      * @param {string} [language] The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
       */
-     createTranscription(file: File, model: string, prompt?: string, responseFormat?: string, temperature?: number, options?: any): AxiosPromise<CreateTranscriptionResponse> {
-         return localVarFp.createTranscription(file, model, prompt, responseFormat, temperature, options).then((request) => request(axios, basePath));
+     createTranscription(file: File, model: string, prompt?: string, responseFormat?: string, temperature?: number, language?: string, options?: any): AxiosPromise<CreateTranscriptionResponse> {
+         return localVarFp.createTranscription(file, model, prompt, responseFormat, temperature, language, options).then((request) => request(axios, basePath));
      },
      /**
       *
       * @summary Translates audio into English.
-      * @param {File} file The audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+      * @param {File} file The audio file object (not file name) to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
       * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
       * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
       * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -3648,7 +3795,7 @@ export class OpenAIApi extends BaseAPI {
 
      /**
       *
-      * @summary Creates a completion for the chat message
+      * @summary Creates a model response for the given chat conversation.
       * @param {CreateChatCompletionRequest} createChatCompletionRequest
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
@@ -3673,7 +3820,7 @@ export class OpenAIApi extends BaseAPI {
 
      /**
       *
-      * @summary Creates a completion for the provided prompt and parameters
+      * @summary Creates a completion for the provided prompt and parameters.
       * @param {CreateCompletionRequest} createCompletionRequest
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
@@ -3807,23 +3954,24 @@ export class OpenAIApi extends BaseAPI {
  /**
       *
       * @summary Transcribes audio into the input language.
-      * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+      * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
       * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
       * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
       * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
       * @param {number} [temperature] The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
+      * @param {string} [language] The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
       * @param {*} [options] Override http request option.
       * @throws {RequiredError}
       * @memberof OpenAIApi
       */
-     public createTranscription(file: File, model: string, prompt?: string, responseFormat?: string, temperature?: number, options?: AxiosRequestConfig) {
-         return OpenAIApiFp(this.configuration).createTranscription(file, model, prompt, responseFormat, temperature, options).then((request) => request(this.axios, this.basePath));
+     public createTranscription(file: File, model: string, prompt?: string, responseFormat?: string, temperature?: number, language?: string, options?: AxiosRequestConfig) {
+         return OpenAIApiFp(this.configuration).createTranscription(file, model, prompt, responseFormat, temperature, language, options).then((request) => request(this.axios, this.basePath));
      }
 
      /**
       *
       * @summary Translates audio into English.
-      * @param {File} file The audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+      * @param {File} file The audio file object (not file name) to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
       * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
       * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
       * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
package/base.ts CHANGED
@@ -4,7 +4,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.2.0
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
package/common.ts CHANGED
@@ -4,7 +4,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.2.0
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
package/configuration.ts CHANGED
@@ -4,7 +4,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.2.0
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).