openai 3.2.1 → 3.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.openapi-generator/VERSION +1 -1
- package/README.md +8 -9
- package/api.ts +169 -35
- package/base.ts +1 -1
- package/common.ts +1 -1
- package/configuration.ts +1 -1
- package/dist/api.d.ts +168 -33
- package/dist/api.js +21 -19
- package/dist/base.d.ts +1 -1
- package/dist/base.js +1 -1
- package/dist/common.d.ts +1 -1
- package/dist/common.js +1 -1
- package/dist/configuration.d.ts +1 -1
- package/dist/configuration.js +1 -1
- package/dist/index.d.ts +1 -1
- package/dist/index.js +1 -1
- package/index.ts +1 -1
- package/package.json +1 -1
package/.openapi-generator/VERSION
CHANGED
@@ -1 +1 @@
-6.
+6.6.0
package/README.md
CHANGED
@@ -2,17 +2,17 @@
 
 The OpenAI Node.js library provides convenient access to the OpenAI API from Node.js applications. Most of the code in this library is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi).
 
-**Important note: this library is meant for server-side usage only, as using it in client-side browser code will expose your secret API key. [See here](https://
+> ⚠️ **Important note: this library is meant for server-side usage only, as using it in client-side browser code will expose your secret API key. [See here](https://platform.openai.com/docs/api-reference/authentication) for more details.**
 
 ## Installation
 
 ```bash
-
+npm install openai
 ```
 
 ## Usage
 
-The library needs to be configured with your account's secret key, which is available
+The library needs to be configured with your account's secret key, which is available in your [OpenAI account page](https://platform.openai.com/account/api-keys). We recommend setting it as an environment variable. Here's an example of initializing the library with the API key loaded from an environment variable and creating a completion:
 
 ```javascript
 const { Configuration, OpenAIApi } = require("openai");
@@ -22,20 +22,19 @@ const configuration = new Configuration({
 });
 const openai = new OpenAIApi(configuration);
 
-const
-  model: "
-
+const chatCompletion = await openai.createChatCompletion({
+  model: "gpt-3.5-turbo",
+  messages: [{role: "user", content: "Hello world"}],
 });
-console.log(
+console.log(chatCompletion.data.choices[0].message);
 ```
 
-Check out the [full API documentation](https://
+Check out the [full API documentation](https://platform.openai.com/docs/api-reference?lang=node.js) for examples of all the available functions.
 
 ### Request options
 
 All of the available API request functions additionally contain an optional final parameter where you can pass custom [axios request options](https://axios-http.com/docs/req_config), for example:
 
-
 ```javascript
 const completion = await openai.createCompletion(
   {
package/api.ts
CHANGED
@@ -4,7 +4,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
@@ -23,6 +23,31 @@ import type { RequestArgs } from './base';
 // @ts-ignore
 import { BASE_PATH, COLLECTION_FORMATS, BaseAPI, RequiredError } from './base';
 
+/**
+ *
+ * @export
+ * @interface ChatCompletionFunctions
+ */
+export interface ChatCompletionFunctions {
+    /**
+     * The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
+     * @type {string}
+     * @memberof ChatCompletionFunctions
+     */
+    'name': string;
+    /**
+     * The description of what the function does.
+     * @type {string}
+     * @memberof ChatCompletionFunctions
+     */
+    'description'?: string;
+    /**
+     * The parameters the functions accepts, described as a JSON Schema object. See the [guide](/docs/guides/gpt/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
+     * @type {{ [key: string]: any; }}
+     * @memberof ChatCompletionFunctions
+     */
+    'parameters'?: { [key: string]: any; };
+}
 /**
  *
  * @export
@@ -30,33 +55,59 @@ import { BASE_PATH, COLLECTION_FORMATS, BaseAPI, RequiredError } from './base';
  */
 export interface ChatCompletionRequestMessage {
     /**
-     * The role of the author of
+     * The role of the messages author. One of `system`, `user`, `assistant`, or `function`.
      * @type {string}
      * @memberof ChatCompletionRequestMessage
      */
     'role': ChatCompletionRequestMessageRoleEnum;
     /**
-     * The contents of the message
+     * The contents of the message. `content` is required for all messages except assistant messages with function calls.
      * @type {string}
      * @memberof ChatCompletionRequestMessage
      */
-    'content'
+    'content'?: string;
     /**
-     * The name of the
+     * The name of the author of this message. `name` is required if role is `function`, and it should be the name of the function whose response is in the `content`. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.
      * @type {string}
      * @memberof ChatCompletionRequestMessage
      */
     'name'?: string;
+    /**
+     *
+     * @type {ChatCompletionRequestMessageFunctionCall}
+     * @memberof ChatCompletionRequestMessage
+     */
+    'function_call'?: ChatCompletionRequestMessageFunctionCall;
 }
 
 export const ChatCompletionRequestMessageRoleEnum = {
     System: 'system',
     User: 'user',
-    Assistant: 'assistant'
+    Assistant: 'assistant',
+    Function: 'function'
 } as const;
 
 export type ChatCompletionRequestMessageRoleEnum = typeof ChatCompletionRequestMessageRoleEnum[keyof typeof ChatCompletionRequestMessageRoleEnum];
 
+/**
+ * The name and arguments of a function that should be called, as generated by the model.
+ * @export
+ * @interface ChatCompletionRequestMessageFunctionCall
+ */
+export interface ChatCompletionRequestMessageFunctionCall {
+    /**
+     * The name of the function to call.
+     * @type {string}
+     * @memberof ChatCompletionRequestMessageFunctionCall
+     */
+    'name'?: string;
+    /**
+     * The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
+     * @type {string}
+     * @memberof ChatCompletionRequestMessageFunctionCall
+     */
+    'arguments'?: string;
+}
 /**
  *
  * @export
@@ -70,17 +121,24 @@ export interface ChatCompletionResponseMessage {
      */
     'role': ChatCompletionResponseMessageRoleEnum;
     /**
-     * The contents of the message
+     * The contents of the message.
      * @type {string}
      * @memberof ChatCompletionResponseMessage
      */
-    'content'
+    'content'?: string;
+    /**
+     *
+     * @type {ChatCompletionRequestMessageFunctionCall}
+     * @memberof ChatCompletionResponseMessage
+     */
+    'function_call'?: ChatCompletionRequestMessageFunctionCall;
 }
 
 export const ChatCompletionResponseMessageRoleEnum = {
     System: 'system',
     User: 'user',
-    Assistant: 'assistant'
+    Assistant: 'assistant',
+    Function: 'function'
 } as const;
 
 export type ChatCompletionResponseMessageRoleEnum = typeof ChatCompletionResponseMessageRoleEnum[keyof typeof ChatCompletionResponseMessageRoleEnum];
@@ -146,7 +204,7 @@ export interface CreateAnswerRequest {
      */
     'temperature'?: number | null;
     /**
-     * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
+     * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs.
      * @type {number}
      * @memberof CreateAnswerRequest
      */
@@ -276,17 +334,29 @@ export interface CreateAnswerResponseSelectedDocumentsInner {
  */
 export interface CreateChatCompletionRequest {
     /**
-     * ID of the model to use.
+     * ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
      * @type {string}
      * @memberof CreateChatCompletionRequest
      */
     'model': string;
     /**
-     *
+     * A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
      * @type {Array<ChatCompletionRequestMessage>}
      * @memberof CreateChatCompletionRequest
      */
     'messages': Array<ChatCompletionRequestMessage>;
+    /**
+     * A list of functions the model may generate JSON inputs for.
+     * @type {Array<ChatCompletionFunctions>}
+     * @memberof CreateChatCompletionRequest
+     */
+    'functions'?: Array<ChatCompletionFunctions>;
+    /**
+     *
+     * @type {CreateChatCompletionRequestFunctionCall}
+     * @memberof CreateChatCompletionRequest
+     */
+    'function_call'?: CreateChatCompletionRequestFunctionCall;
     /**
      * What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both.
      * @type {number}
@@ -306,7 +376,7 @@ export interface CreateChatCompletionRequest {
      */
     'n'?: number | null;
     /**
-     * If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message.
+     * If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
      * @type {boolean}
      * @memberof CreateChatCompletionRequest
      */
@@ -318,7 +388,7 @@ export interface CreateChatCompletionRequest {
      */
     'stop'?: CreateChatCompletionRequestStop;
     /**
-     * The maximum number of tokens
+     * The maximum number of [tokens](/tokenizer) to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model\'s context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens.
      * @type {number}
      * @memberof CreateChatCompletionRequest
      */
@@ -348,6 +418,26 @@ export interface CreateChatCompletionRequest {
      */
     'user'?: string;
 }
+/**
+ * @type CreateChatCompletionRequestFunctionCall
+ * Controls how the model responds to function calls. \"none\" means the model does not call a function, and responds to the end-user. \"auto\" means the model can pick between an end-user or calling a function. Specifying a particular function via `{\"name\":\\ \"my_function\"}` forces the model to call that function. \"none\" is the default when no functions are present. \"auto\" is the default if functions are present.
+ * @export
+ */
+export type CreateChatCompletionRequestFunctionCall = CreateChatCompletionRequestFunctionCallOneOf | string;
+
+/**
+ *
+ * @export
+ * @interface CreateChatCompletionRequestFunctionCallOneOf
+ */
+export interface CreateChatCompletionRequestFunctionCallOneOf {
+    /**
+     * The name of the function to call.
+     * @type {string}
+     * @memberof CreateChatCompletionRequestFunctionCallOneOf
+     */
+    'name': string;
+}
 /**
  * @type CreateChatCompletionRequestStop
  * Up to 4 sequences where the API will stop generating further tokens.
@@ -472,7 +562,7 @@ export interface CreateClassificationRequest {
      */
     'temperature'?: number | null;
     /**
-     * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
+     * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs.
      * @type {number}
      * @memberof CreateClassificationRequest
      */
@@ -607,7 +697,7 @@ export interface CreateCompletionRequest {
      */
     'suffix'?: string | null;
     /**
-     * The maximum number of [tokens](/tokenizer) to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model\'s context length.
+     * The maximum number of [tokens](/tokenizer) to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model\'s context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens.
      * @type {number}
      * @memberof CreateCompletionRequest
      */
@@ -631,13 +721,13 @@ export interface CreateCompletionRequest {
      */
     'n'?: number | null;
     /**
-     * Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message.
+     * Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
      * @type {boolean}
      * @memberof CreateCompletionRequest
      */
     'stream'?: boolean | null;
     /**
-     * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
+     * Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
      * @type {number}
      * @memberof CreateCompletionRequest
      */
@@ -930,7 +1020,7 @@ export interface CreateEmbeddingRequest {
 }
 /**
  * @type CreateEmbeddingRequestInput
- * Input text to
+ * Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. Each input must not exceed the max input tokens for the model (8191 tokens for `text-embedding-ada-002`). [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens.
  * @export
  */
 export type CreateEmbeddingRequestInput = Array<any> | Array<number> | Array<string> | string;
@@ -1515,6 +1605,19 @@ export interface Engine {
      */
     'ready': boolean;
 }
+/**
+ *
+ * @export
+ * @interface ErrorResponse
+ */
+export interface ErrorResponse {
+    /**
+     *
+     * @type {Error}
+     * @memberof ErrorResponse
+     */
+    'error': Error;
+}
 /**
  *
  * @export
@@ -1795,6 +1898,37 @@ export interface Model {
      */
     'owned_by': string;
 }
+/**
+ *
+ * @export
+ * @interface ModelError
+ */
+export interface ModelError {
+    /**
+     *
+     * @type {string}
+     * @memberof ModelError
+     */
+    'type': string;
+    /**
+     *
+     * @type {string}
+     * @memberof ModelError
+     */
+    'message': string;
+    /**
+     *
+     * @type {string}
+     * @memberof ModelError
+     */
+    'param': string | null;
+    /**
+     *
+     * @type {string}
+     * @memberof ModelError
+     */
+    'code': string | null;
+}
 /**
  *
  * @export
@@ -1930,7 +2064,7 @@ export const OpenAIApiAxiosParamCreator = function (configuration?: Configuration) {
         },
         /**
          *
-         * @summary Creates a
+         * @summary Creates a model response for the given chat conversation.
          * @param {CreateChatCompletionRequest} createChatCompletionRequest
         * @param {*} [options] Override http request option.
         * @throws {RequiredError}
@@ -2003,7 +2137,7 @@ export const OpenAIApiAxiosParamCreator = function (configuration?: Configuration) {
         },
         /**
         *
-         * @summary Creates a completion for the provided prompt and parameters
+         * @summary Creates a completion for the provided prompt and parameters.
         * @param {CreateCompletionRequest} createCompletionRequest
         * @param {*} [options] Override http request option.
         * @throws {RequiredError}
@@ -2443,7 +2577,7 @@ export const OpenAIApiAxiosParamCreator = function (configuration?: Configuration) {
         /**
         *
         * @summary Transcribes audio into the input language.
-         * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+         * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
         * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
         * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
         * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -2511,7 +2645,7 @@ export const OpenAIApiAxiosParamCreator = function (configuration?: Configuration) {
         /**
         *
         * @summary Translates audio into into English.
-         * @param {File} file The audio file
+         * @param {File} file The audio file object (not file name) translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
         * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
         * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
         * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -3005,7 +3139,7 @@ export const OpenAIApiFp = function(configuration?: Configuration) {
         },
         /**
         *
-         * @summary Creates a
+         * @summary Creates a model response for the given chat conversation.
         * @param {CreateChatCompletionRequest} createChatCompletionRequest
         * @param {*} [options] Override http request option.
         * @throws {RequiredError}
@@ -3028,7 +3162,7 @@ export const OpenAIApiFp = function(configuration?: Configuration) {
         },
         /**
         *
-         * @summary Creates a completion for the provided prompt and parameters
+         * @summary Creates a completion for the provided prompt and parameters.
         * @param {CreateCompletionRequest} createCompletionRequest
         * @param {*} [options] Override http request option.
         * @throws {RequiredError}
@@ -3152,7 +3286,7 @@ export const OpenAIApiFp = function(configuration?: Configuration) {
         /**
         *
         * @summary Transcribes audio into the input language.
-         * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+         * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
         * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
         * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
         * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -3168,7 +3302,7 @@ export const OpenAIApiFp = function(configuration?: Configuration) {
         /**
         *
         * @summary Translates audio into into English.
-         * @param {File} file The audio file
+         * @param {File} file The audio file object (not file name) translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
         * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
         * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
         * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -3344,7 +3478,7 @@ export const OpenAIApiFactory = function (configuration?: Configuration, basePath?: string, axios?: AxiosInstance) {
         },
         /**
         *
-         * @summary Creates a
+         * @summary Creates a model response for the given chat conversation.
         * @param {CreateChatCompletionRequest} createChatCompletionRequest
         * @param {*} [options] Override http request option.
         * @throws {RequiredError}
@@ -3365,7 +3499,7 @@ export const OpenAIApiFactory = function (configuration?: Configuration, basePath?: string, axios?: AxiosInstance) {
         },
         /**
         *
-         * @summary Creates a completion for the provided prompt and parameters
+         * @summary Creates a completion for the provided prompt and parameters.
         * @param {CreateCompletionRequest} createCompletionRequest
         * @param {*} [options] Override http request option.
         * @throws {RequiredError}
@@ -3479,7 +3613,7 @@ export const OpenAIApiFactory = function (configuration?: Configuration, basePath?: string, axios?: AxiosInstance) {
         /**
         *
         * @summary Transcribes audio into the input language.
-         * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+         * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
         * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
         * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
         * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -3494,7 +3628,7 @@ export const OpenAIApiFactory = function (configuration?: Configuration, basePath?: string, axios?: AxiosInstance) {
         /**
         *
         * @summary Translates audio into into English.
-         * @param {File} file The audio file
+         * @param {File} file The audio file object (not file name) translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
         * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
         * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
         * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -3661,7 +3795,7 @@ export class OpenAIApi extends BaseAPI {
 
     /**
     *
-     * @summary Creates a
+     * @summary Creates a model response for the given chat conversation.
     * @param {CreateChatCompletionRequest} createChatCompletionRequest
     * @param {*} [options] Override http request option.
     * @throws {RequiredError}
@@ -3686,7 +3820,7 @@ export class OpenAIApi extends BaseAPI {
 
     /**
     *
-     * @summary Creates a completion for the provided prompt and parameters
+     * @summary Creates a completion for the provided prompt and parameters.
     * @param {CreateCompletionRequest} createCompletionRequest
     * @param {*} [options] Override http request option.
     * @throws {RequiredError}
@@ -3820,7 +3954,7 @@ export class OpenAIApi extends BaseAPI {
     /**
     *
     * @summary Transcribes audio into the input language.
-     * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+     * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
     * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
     * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
     * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -3837,7 +3971,7 @@ export class OpenAIApi extends BaseAPI {
     /**
     *
    * @summary Translates audio into into English.
-     * @param {File} file The audio file
+     * @param {File} file The audio file object (not file name) translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
     * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
     * @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
     * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
package/base.ts
CHANGED
|
@@ -4,7 +4,7 @@
|
|
|
4
4
|
* OpenAI API
|
|
5
5
|
* APIs for sampling from and fine-tuning language models
|
|
6
6
|
*
|
|
7
|
-
* The version of the OpenAPI document: 1.
|
|
7
|
+
* The version of the OpenAPI document: 1.3.0
|
|
8
8
|
*
|
|
9
9
|
*
|
|
10
10
|
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
|
package/common.ts
CHANGED
|
@@ -4,7 +4,7 @@
|
|
|
4
4
|
* OpenAI API
|
|
5
5
|
* APIs for sampling from and fine-tuning language models
|
|
6
6
|
*
|
|
7
|
-
* The version of the OpenAPI document: 1.
|
|
7
|
+
* The version of the OpenAPI document: 1.3.0
|
|
8
8
|
*
|
|
9
9
|
*
|
|
10
10
|
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
|
package/configuration.ts
CHANGED
|
@@ -4,7 +4,7 @@
|
|
|
4
4
|
* OpenAI API
|
|
5
5
|
* APIs for sampling from and fine-tuning language models
|
|
6
6
|
*
|
|
7
|
-
* The version of the OpenAPI document: 1.
|
|
7
|
+
* The version of the OpenAPI document: 1.3.0
|
|
8
8
|
*
|
|
9
9
|
*
|
|
10
10
|
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
|
package/dist/api.d.ts
CHANGED
|
@@ -2,7 +2,7 @@
|
|
|
2
2
|
* OpenAI API
|
|
3
3
|
* APIs for sampling from and fine-tuning language models
|
|
4
4
|
*
|
|
5
|
-
* The version of the OpenAPI document: 1.
|
|
5
|
+
* The version of the OpenAPI document: 1.3.0
|
|
6
6
|
*
|
|
7
7
|
*
|
|
8
8
|
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
|
|
@@ -13,6 +13,33 @@ import type { Configuration } from './configuration';
|
|
|
13
13
|
import type { AxiosPromise, AxiosInstance, AxiosRequestConfig } from 'axios';
|
|
14
14
|
import type { RequestArgs } from './base';
|
|
15
15
|
import { BaseAPI } from './base';
|
|
16
|
+
/**
|
|
17
|
+
*
|
|
18
|
+
* @export
|
|
19
|
+
* @interface ChatCompletionFunctions
|
|
20
|
+
*/
|
|
21
|
+
export interface ChatCompletionFunctions {
|
|
22
|
+
/**
|
|
23
|
+
* The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
|
|
24
|
+
* @type {string}
|
|
25
|
+
* @memberof ChatCompletionFunctions
|
|
26
|
+
*/
|
|
27
|
+
'name': string;
|
|
28
|
+
/**
|
|
29
|
+
* The description of what the function does.
|
|
30
|
+
* @type {string}
|
|
31
|
+
* @memberof ChatCompletionFunctions
|
|
32
|
+
*/
|
|
33
|
+
'description'?: string;
|
|
34
|
+
/**
|
|
35
|
+
* The parameters the function accepts, described as a JSON Schema object. See the [guide](/docs/guides/gpt/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.
|
|
36
|
+
* @type {{ [key: string]: any; }}
|
|
37
|
+
* @memberof ChatCompletionFunctions
|
|
38
|
+
*/
|
|
39
|
+
'parameters'?: {
|
|
40
|
+
[key: string]: any;
|
|
41
|
+
};
|
|
42
|
+
}
|
|
16
43
|
/**
|
|
17
44
|
*
|
|
18
45
|
* @export
|
|
@@ -20,30 +47,56 @@ import { BaseAPI } from './base';
|
|
|
20
47
|
*/
|
|
21
48
|
export interface ChatCompletionRequestMessage {
|
|
22
49
|
/**
|
|
23
|
-
* The role of the author of
|
|
50
|
+
* The role of the message's author. One of `system`, `user`, `assistant`, or `function`.
|
|
24
51
|
* @type {string}
|
|
25
52
|
* @memberof ChatCompletionRequestMessage
|
|
26
53
|
*/
|
|
27
54
|
'role': ChatCompletionRequestMessageRoleEnum;
|
|
28
55
|
/**
|
|
29
|
-
* The contents of the message
|
|
56
|
+
* The contents of the message. `content` is required for all messages except assistant messages with function calls.
|
|
30
57
|
* @type {string}
|
|
31
58
|
* @memberof ChatCompletionRequestMessage
|
|
32
59
|
*/
|
|
33
|
-
'content'
|
|
60
|
+
'content'?: string;
|
|
34
61
|
/**
|
|
35
|
-
* The name of the
|
|
62
|
+
* The name of the author of this message. `name` is required if role is `function`, and it should be the name of the function whose response is in the `content`. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.
|
|
36
63
|
* @type {string}
|
|
37
64
|
* @memberof ChatCompletionRequestMessage
|
|
38
65
|
*/
|
|
39
66
|
'name'?: string;
|
|
67
|
+
/**
|
|
68
|
+
*
|
|
69
|
+
* @type {ChatCompletionRequestMessageFunctionCall}
|
|
70
|
+
* @memberof ChatCompletionRequestMessage
|
|
71
|
+
*/
|
|
72
|
+
'function_call'?: ChatCompletionRequestMessageFunctionCall;
|
|
40
73
|
}
|
|
41
74
|
export declare const ChatCompletionRequestMessageRoleEnum: {
|
|
42
75
|
readonly System: "system";
|
|
43
76
|
readonly User: "user";
|
|
44
77
|
readonly Assistant: "assistant";
|
|
78
|
+
readonly Function: "function";
|
|
45
79
|
};
|
|
46
80
|
export declare type ChatCompletionRequestMessageRoleEnum = typeof ChatCompletionRequestMessageRoleEnum[keyof typeof ChatCompletionRequestMessageRoleEnum];
|
|
81
|
+
/**
|
|
82
|
+
* The name and arguments of a function that should be called, as generated by the model.
|
|
83
|
+
* @export
|
|
84
|
+
* @interface ChatCompletionRequestMessageFunctionCall
|
|
85
|
+
*/
|
|
86
|
+
export interface ChatCompletionRequestMessageFunctionCall {
|
|
87
|
+
/**
|
|
88
|
+
* The name of the function to call.
|
|
89
|
+
* @type {string}
|
|
90
|
+
* @memberof ChatCompletionRequestMessageFunctionCall
|
|
91
|
+
*/
|
|
92
|
+
'name'?: string;
|
|
93
|
+
/**
|
|
94
|
+
* The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
|
|
95
|
+
* @type {string}
|
|
96
|
+
* @memberof ChatCompletionRequestMessageFunctionCall
|
|
97
|
+
*/
|
|
98
|
+
'arguments'?: string;
|
|
99
|
+
}
|
|
47
100
|
/**
|
|
48
101
|
*
|
|
49
102
|
* @export
|
|
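The new `ChatCompletionRequestMessageFunctionCall` shape above carries a model-generated function `name` plus an `arguments` string that is JSON-encoded, and, per the doc comment, may be invalid JSON or contain hallucinated parameters. A minimal defensive-parsing sketch (the `parseFunctionArguments` helper is illustrative, not part of the SDK):

```typescript
// Defensive parsing of a model-generated function call. The SDK returns
// `arguments` as a raw JSON string, so validate it before calling your
// own function. This helper is an illustrative sketch, not an SDK API.
interface FunctionCallPayload {
  name?: string;
  arguments?: string;
}

function parseFunctionArguments(
  call: FunctionCallPayload
): Record<string, unknown> | null {
  if (!call.arguments) {
    return null; // assistant message without a function-call payload
  }
  try {
    const parsed: unknown = JSON.parse(call.arguments);
    // Accept only a plain JSON object; reject primitives and arrays.
    if (typeof parsed === "object" && parsed !== null && !Array.isArray(parsed)) {
      return parsed as Record<string, unknown>;
    }
    return null;
  } catch {
    return null; // the model produced invalid JSON
  }
}
```

For example, `parseFunctionArguments({ name: "get_weather", arguments: '{"location": "Boston"}' })` yields a usable object, while malformed or missing `arguments` yields `null` instead of throwing.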
@@ -57,16 +110,23 @@ export interface ChatCompletionResponseMessage {
|
|
|
57
110
|
*/
|
|
58
111
|
'role': ChatCompletionResponseMessageRoleEnum;
|
|
59
112
|
/**
|
|
60
|
-
* The contents of the message
|
|
113
|
+
* The contents of the message.
|
|
61
114
|
* @type {string}
|
|
62
115
|
* @memberof ChatCompletionResponseMessage
|
|
63
116
|
*/
|
|
64
|
-
'content'
|
|
117
|
+
'content'?: string;
|
|
118
|
+
/**
|
|
119
|
+
*
|
|
120
|
+
* @type {ChatCompletionRequestMessageFunctionCall}
|
|
121
|
+
* @memberof ChatCompletionResponseMessage
|
|
122
|
+
*/
|
|
123
|
+
'function_call'?: ChatCompletionRequestMessageFunctionCall;
|
|
65
124
|
}
|
|
66
125
|
export declare const ChatCompletionResponseMessageRoleEnum: {
|
|
67
126
|
readonly System: "system";
|
|
68
127
|
readonly User: "user";
|
|
69
128
|
readonly Assistant: "assistant";
|
|
129
|
+
readonly Function: "function";
|
|
70
130
|
};
|
|
71
131
|
export declare type ChatCompletionResponseMessageRoleEnum = typeof ChatCompletionResponseMessageRoleEnum[keyof typeof ChatCompletionResponseMessageRoleEnum];
|
|
72
132
|
/**
|
|
@@ -130,7 +190,7 @@ export interface CreateAnswerRequest {
|
|
|
130
190
|
*/
|
|
131
191
|
'temperature'?: number | null;
|
|
132
192
|
/**
|
|
133
|
-
* Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
|
|
193
|
+
* Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs.
|
|
134
194
|
* @type {number}
|
|
135
195
|
* @memberof CreateAnswerRequest
|
|
136
196
|
*/
|
|
@@ -259,17 +319,29 @@ export interface CreateAnswerResponseSelectedDocumentsInner {
|
|
|
259
319
|
*/
|
|
260
320
|
export interface CreateChatCompletionRequest {
|
|
261
321
|
/**
|
|
262
|
-
* ID of the model to use.
|
|
322
|
+
* ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
|
|
263
323
|
* @type {string}
|
|
264
324
|
* @memberof CreateChatCompletionRequest
|
|
265
325
|
*/
|
|
266
326
|
'model': string;
|
|
267
327
|
/**
|
|
268
|
-
*
|
|
328
|
+
* A list of messages comprising the conversation so far. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
|
|
269
329
|
* @type {Array<ChatCompletionRequestMessage>}
|
|
270
330
|
* @memberof CreateChatCompletionRequest
|
|
271
331
|
*/
|
|
272
332
|
'messages': Array<ChatCompletionRequestMessage>;
|
|
333
|
+
/**
|
|
334
|
+
* A list of functions the model may generate JSON inputs for.
|
|
335
|
+
* @type {Array<ChatCompletionFunctions>}
|
|
336
|
+
* @memberof CreateChatCompletionRequest
|
|
337
|
+
*/
|
|
338
|
+
'functions'?: Array<ChatCompletionFunctions>;
|
|
339
|
+
/**
|
|
340
|
+
*
|
|
341
|
+
* @type {CreateChatCompletionRequestFunctionCall}
|
|
342
|
+
* @memberof CreateChatCompletionRequest
|
|
343
|
+
*/
|
|
344
|
+
'function_call'?: CreateChatCompletionRequestFunctionCall;
|
|
273
345
|
/**
|
|
274
346
|
* What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both.
|
|
275
347
|
* @type {number}
|
|
@@ -289,7 +361,7 @@ export interface CreateChatCompletionRequest {
|
|
|
289
361
|
*/
|
|
290
362
|
'n'?: number | null;
|
|
291
363
|
/**
|
|
292
|
-
* If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message.
|
|
364
|
+
* If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
|
|
293
365
|
* @type {boolean}
|
|
294
366
|
* @memberof CreateChatCompletionRequest
|
|
295
367
|
*/
|
|
@@ -301,7 +373,7 @@ export interface CreateChatCompletionRequest {
|
|
|
301
373
|
*/
|
|
302
374
|
'stop'?: CreateChatCompletionRequestStop;
|
|
303
375
|
/**
|
|
304
|
-
* The maximum number of tokens
|
|
376
|
+
* The maximum number of [tokens](/tokenizer) to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model\'s context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens.
|
|
305
377
|
* @type {number}
|
|
306
378
|
* @memberof CreateChatCompletionRequest
|
|
307
379
|
*/
|
|
@@ -331,6 +403,25 @@ export interface CreateChatCompletionRequest {
|
|
|
331
403
|
*/
|
|
332
404
|
'user'?: string;
|
|
333
405
|
}
|
|
406
|
+
/**
|
|
407
|
+
* @type CreateChatCompletionRequestFunctionCall
|
|
408
|
+
* Controls how the model responds to function calls. \"none\" means the model does not call a function, and responds to the end-user. \"auto\" means the model can pick between an end-user or calling a function. Specifying a particular function via `{\"name\": \"my_function\"}` forces the model to call that function. \"none\" is the default when no functions are present. \"auto\" is the default if functions are present.
|
|
409
|
+
* @export
|
|
410
|
+
*/
|
|
411
|
+
export declare type CreateChatCompletionRequestFunctionCall = CreateChatCompletionRequestFunctionCallOneOf | string;
|
|
412
|
+
/**
|
|
413
|
+
*
|
|
414
|
+
* @export
|
|
415
|
+
* @interface CreateChatCompletionRequestFunctionCallOneOf
|
|
416
|
+
*/
|
|
417
|
+
export interface CreateChatCompletionRequestFunctionCallOneOf {
|
|
418
|
+
/**
|
|
419
|
+
* The name of the function to call.
|
|
420
|
+
* @type {string}
|
|
421
|
+
* @memberof CreateChatCompletionRequestFunctionCallOneOf
|
|
422
|
+
*/
|
|
423
|
+
'name': string;
|
|
424
|
+
}
|
|
334
425
|
/**
|
|
335
426
|
* @type CreateChatCompletionRequestStop
|
|
336
427
|
* Up to 4 sequences where the API will stop generating further tokens.
|
|
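Taken together, the new `functions` and `function_call` fields above extend `CreateChatCompletionRequest`. A minimal sketch of a request body using them (the `get_current_weather` function and its JSON Schema are illustrative examples, not part of the SDK; field names mirror the interfaces above):

```typescript
// Sketch of a chat-completion request body using the fields added in 3.3.0.
// "get_current_weather" and its parameter schema are made-up examples.
const request = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "user", content: "What is the weather like in Boston?" },
  ],
  functions: [
    {
      name: "get_current_weather",
      description: "Get the current weather in a given location.",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string", description: "City name" },
        },
        required: ["location"],
      },
    },
  ],
  // "auto" lets the model decide; {"name": "get_current_weather"} would
  // force that function; "none" disables function calls.
  function_call: "auto",
};
```

In application code this object would be passed to `openai.createChatCompletion(request)`; when the model decides to call a function, the response message carries a `function_call` payload instead of `content`.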
@@ -454,7 +545,7 @@ export interface CreateClassificationRequest {
|
|
|
454
545
|
*/
|
|
455
546
|
'temperature'?: number | null;
|
|
456
547
|
/**
|
|
457
|
-
* Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
|
|
548
|
+
* Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs.
|
|
458
549
|
* @type {number}
|
|
459
550
|
* @memberof CreateClassificationRequest
|
|
460
551
|
*/
|
|
@@ -589,7 +680,7 @@ export interface CreateCompletionRequest {
|
|
|
589
680
|
*/
|
|
590
681
|
'suffix'?: string | null;
|
|
591
682
|
/**
|
|
592
|
-
* The maximum number of [tokens](/tokenizer) to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model\'s context length.
|
|
683
|
+
* The maximum number of [tokens](/tokenizer) to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model\'s context length. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens.
|
|
593
684
|
* @type {number}
|
|
594
685
|
* @memberof CreateCompletionRequest
|
|
595
686
|
*/
|
|
@@ -613,13 +704,13 @@ export interface CreateCompletionRequest {
|
|
|
613
704
|
*/
|
|
614
705
|
'n'?: number | null;
|
|
615
706
|
/**
|
|
616
|
-
* Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message.
|
|
707
|
+
* Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
|
|
617
708
|
* @type {boolean}
|
|
618
709
|
* @memberof CreateCompletionRequest
|
|
619
710
|
*/
|
|
620
711
|
'stream'?: boolean | null;
|
|
621
712
|
/**
|
|
622
|
-
* Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
|
|
713
|
+
* Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5.
|
|
623
714
|
* @type {number}
|
|
624
715
|
* @memberof CreateCompletionRequest
|
|
625
716
|
*/
|
|
@@ -910,7 +1001,7 @@ export interface CreateEmbeddingRequest {
|
|
|
910
1001
|
}
|
|
911
1002
|
/**
|
|
912
1003
|
* @type CreateEmbeddingRequestInput
|
|
913
|
-
* Input text to
|
|
1004
|
+
* Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. Each input must not exceed the max input tokens for the model (8191 tokens for `text-embedding-ada-002`). [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) for counting tokens.
|
|
914
1005
|
* @export
|
|
915
1006
|
*/
|
|
916
1007
|
export declare type CreateEmbeddingRequestInput = Array<any> | Array<number> | Array<string> | string;
|
|
@@ -1489,6 +1580,19 @@ export interface Engine {
|
|
|
1489
1580
|
*/
|
|
1490
1581
|
'ready': boolean;
|
|
1491
1582
|
}
|
|
1583
|
+
/**
|
|
1584
|
+
*
|
|
1585
|
+
* @export
|
|
1586
|
+
* @interface ErrorResponse
|
|
1587
|
+
*/
|
|
1588
|
+
export interface ErrorResponse {
|
|
1589
|
+
/**
|
|
1590
|
+
*
|
|
1591
|
+
* @type {Error}
|
|
1592
|
+
* @memberof ErrorResponse
|
|
1593
|
+
*/
|
|
1594
|
+
'error': Error;
|
|
1595
|
+
}
|
|
1492
1596
|
/**
|
|
1493
1597
|
*
|
|
1494
1598
|
* @export
|
|
@@ -1769,6 +1873,37 @@ export interface Model {
|
|
|
1769
1873
|
*/
|
|
1770
1874
|
'owned_by': string;
|
|
1771
1875
|
}
|
|
1876
|
+
/**
|
|
1877
|
+
*
|
|
1878
|
+
* @export
|
|
1879
|
+
* @interface ModelError
|
|
1880
|
+
*/
|
|
1881
|
+
export interface ModelError {
|
|
1882
|
+
/**
|
|
1883
|
+
*
|
|
1884
|
+
* @type {string}
|
|
1885
|
+
* @memberof ModelError
|
|
1886
|
+
*/
|
|
1887
|
+
'type': string;
|
|
1888
|
+
/**
|
|
1889
|
+
*
|
|
1890
|
+
* @type {string}
|
|
1891
|
+
* @memberof ModelError
|
|
1892
|
+
*/
|
|
1893
|
+
'message': string;
|
|
1894
|
+
/**
|
|
1895
|
+
*
|
|
1896
|
+
* @type {string}
|
|
1897
|
+
* @memberof ModelError
|
|
1898
|
+
*/
|
|
1899
|
+
'param': string | null;
|
|
1900
|
+
/**
|
|
1901
|
+
*
|
|
1902
|
+
* @type {string}
|
|
1903
|
+
* @memberof ModelError
|
|
1904
|
+
*/
|
|
1905
|
+
'code': string | null;
|
|
1906
|
+
}
|
|
1772
1907
|
/**
|
|
1773
1908
|
*
|
|
1774
1909
|
* @export
|
|
@@ -1848,7 +1983,7 @@ export declare const OpenAIApiAxiosParamCreator: (configuration?: Configuration)
|
|
|
1848
1983
|
createAnswer: (createAnswerRequest: CreateAnswerRequest, options?: AxiosRequestConfig) => Promise<RequestArgs>;
|
|
1849
1984
|
/**
|
|
1850
1985
|
*
|
|
1851
|
-
* @summary Creates a
|
|
1986
|
+
* @summary Creates a model response for the given chat conversation.
|
|
1852
1987
|
* @param {CreateChatCompletionRequest} createChatCompletionRequest
|
|
1853
1988
|
* @param {*} [options] Override http request option.
|
|
1854
1989
|
* @throws {RequiredError}
|
|
@@ -1865,7 +2000,7 @@ export declare const OpenAIApiAxiosParamCreator: (configuration?: Configuration)
|
|
|
1865
2000
|
createClassification: (createClassificationRequest: CreateClassificationRequest, options?: AxiosRequestConfig) => Promise<RequestArgs>;
|
|
1866
2001
|
/**
|
|
1867
2002
|
*
|
|
1868
|
-
* @summary Creates a completion for the provided prompt and parameters
|
|
2003
|
+
* @summary Creates a completion for the provided prompt and parameters.
|
|
1869
2004
|
* @param {CreateCompletionRequest} createCompletionRequest
|
|
1870
2005
|
* @param {*} [options] Override http request option.
|
|
1871
2006
|
* @throws {RequiredError}
|
|
@@ -1959,7 +2094,7 @@ export declare const OpenAIApiAxiosParamCreator: (configuration?: Configuration)
|
|
|
1959
2094
|
/**
|
|
1960
2095
|
*
|
|
1961
2096
|
* @summary Transcribes audio into the input language.
|
|
1962
|
-
* @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
2097
|
+
* @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
1963
2098
|
* @param {string} model ID of the model to use. Only `whisper-1` is currently available.
|
|
1964
2099
|
* @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
|
|
1965
2100
|
* @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
|
|
@@ -1972,7 +2107,7 @@ export declare const OpenAIApiAxiosParamCreator: (configuration?: Configuration)
|
|
|
1972
2107
|
/**
|
|
1973
2108
|
*
|
|
1974
2109
|
* @summary Translates audio into English.
|
|
1975
|
-
* @param {File} file The audio file
|
|
2110
|
+
* @param {File} file The audio file object (not file name) to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
1976
2111
|
* @param {string} model ID of the model to use. Only `whisper-1` is currently available.
|
|
1977
2112
|
* @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
|
|
1978
2113
|
* @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
|
|
@@ -2101,7 +2236,7 @@ export declare const OpenAIApiFp: (configuration?: Configuration) => {
|
|
|
2101
2236
|
createAnswer(createAnswerRequest: CreateAnswerRequest, options?: AxiosRequestConfig): Promise<(axios?: AxiosInstance, basePath?: string) => AxiosPromise<CreateAnswerResponse>>;
|
|
2102
2237
|
/**
|
|
2103
2238
|
*
|
|
2104
|
-
* @summary Creates a
|
|
2239
|
+
* @summary Creates a model response for the given chat conversation.
|
|
2105
2240
|
* @param {CreateChatCompletionRequest} createChatCompletionRequest
|
|
2106
2241
|
* @param {*} [options] Override http request option.
|
|
2107
2242
|
* @throws {RequiredError}
|
|
@@ -2118,7 +2253,7 @@ export declare const OpenAIApiFp: (configuration?: Configuration) => {
|
|
|
2118
2253
|
createClassification(createClassificationRequest: CreateClassificationRequest, options?: AxiosRequestConfig): Promise<(axios?: AxiosInstance, basePath?: string) => AxiosPromise<CreateClassificationResponse>>;
|
|
2119
2254
|
/**
|
|
2120
2255
|
*
|
|
2121
|
-
* @summary Creates a completion for the provided prompt and parameters
|
|
2256
|
+
* @summary Creates a completion for the provided prompt and parameters.
|
|
2122
2257
|
* @param {CreateCompletionRequest} createCompletionRequest
|
|
2123
2258
|
* @param {*} [options] Override http request option.
|
|
2124
2259
|
* @throws {RequiredError}
|
|
@@ -2212,7 +2347,7 @@ export declare const OpenAIApiFp: (configuration?: Configuration) => {
|
|
|
2212
2347
|
/**
|
|
2213
2348
|
*
|
|
2214
2349
|
* @summary Transcribes audio into the input language.
|
|
2215
|
-
* @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
2350
|
+
* @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
2216
2351
|
* @param {string} model ID of the model to use. Only `whisper-1` is currently available.
|
|
2217
2352
|
* @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
|
|
2218
2353
|
* @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
|
|
@@ -2225,7 +2360,7 @@ export declare const OpenAIApiFp: (configuration?: Configuration) => {
|
|
|
2225
2360
|
/**
|
|
2226
2361
|
*
|
|
2227
2362
|
* @summary Translates audio into English.
|
|
2228
|
-
* @param {File} file The audio file
|
|
2363
|
+
* @param {File} file The audio file object (not file name) to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
2229
2364
|
* @param {string} model ID of the model to use. Only `whisper-1` is currently available.
|
|
2230
2365
|
* @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
|
|
2231
2366
|
* @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
|
|
@@ -2354,7 +2489,7 @@ export declare const OpenAIApiFactory: (configuration?: Configuration, basePath?
|
|
|
2354
2489
|
createAnswer(createAnswerRequest: CreateAnswerRequest, options?: any): AxiosPromise<CreateAnswerResponse>;
|
|
2355
2490
|
/**
|
|
2356
2491
|
*
|
|
2357
|
-
* @summary Creates a
|
|
2492
|
+
* @summary Creates a model response for the given chat conversation.
|
|
2358
2493
|
* @param {CreateChatCompletionRequest} createChatCompletionRequest
|
|
2359
2494
|
* @param {*} [options] Override http request option.
|
|
2360
2495
|
* @throws {RequiredError}
|
|
@@ -2371,7 +2506,7 @@ export declare const OpenAIApiFactory: (configuration?: Configuration, basePath?
|
|
|
2371
2506
|
createClassification(createClassificationRequest: CreateClassificationRequest, options?: any): AxiosPromise<CreateClassificationResponse>;
|
|
2372
2507
|
/**
|
|
2373
2508
|
*
|
|
2374
|
-
* @summary Creates a completion for the provided prompt and parameters
|
|
2509
|
+
* @summary Creates a completion for the provided prompt and parameters.
|
|
2375
2510
|
* @param {CreateCompletionRequest} createCompletionRequest
|
|
2376
2511
|
* @param {*} [options] Override http request option.
|
|
2377
2512
|
* @throws {RequiredError}
|
|
@@ -2465,7 +2600,7 @@ export declare const OpenAIApiFactory: (configuration?: Configuration, basePath?
|
|
|
2465
2600
|
/**
|
|
2466
2601
|
*
|
|
2467
2602
|
* @summary Transcribes audio into the input language.
|
|
2468
|
-
* @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
2603
|
+
* @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
2469
2604
|
* @param {string} model ID of the model to use. Only `whisper-1` is currently available.
|
|
2470
2605
|
* @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
|
|
2471
2606
|
* @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
|
|
@@ -2478,7 +2613,7 @@ export declare const OpenAIApiFactory: (configuration?: Configuration, basePath?
|
|
|
2478
2613
|
/**
|
|
2479
2614
|
*
|
|
2480
2615
|
* @summary Translates audio into English.
|
|
2481
|
-
* @param {File} file The audio file
|
|
2616
|
+
* @param {File} file The audio file object (not file name) to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
2482
2617
|
* @param {string} model ID of the model to use. Only `whisper-1` is currently available.
|
|
2483
2618
|
* @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
|
|
2484
2619
|
* @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
|
|
@@ -2611,7 +2746,7 @@ export declare class OpenAIApi extends BaseAPI {
|
|
|
2611
2746
|
createAnswer(createAnswerRequest: CreateAnswerRequest, options?: AxiosRequestConfig): Promise<import("axios").AxiosResponse<CreateAnswerResponse, any>>;
|
|
2612
2747
|
/**
|
|
2613
2748
|
*
|
|
2614
|
-
* @summary Creates a
|
|
2749
|
+
* @summary Creates a model response for the given chat conversation.
|
|
2615
2750
|
* @param {CreateChatCompletionRequest} createChatCompletionRequest
|
|
2616
2751
|
* @param {*} [options] Override http request option.
|
|
2617
2752
|
* @throws {RequiredError}
|
|
@@ -2630,7 +2765,7 @@ export declare class OpenAIApi extends BaseAPI {
|
|
|
2630
2765
|
createClassification(createClassificationRequest: CreateClassificationRequest, options?: AxiosRequestConfig): Promise<import("axios").AxiosResponse<CreateClassificationResponse, any>>;
|
|
2631
2766
|
/**
|
|
2632
2767
|
*
|
|
2633
|
-
* @summary Creates a completion for the provided prompt and parameters
|
|
2768
|
+
* @summary Creates a completion for the provided prompt and parameters.
|
|
2634
2769
|
* @param {CreateCompletionRequest} createCompletionRequest
|
|
2635
2770
|
* @param {*} [options] Override http request option.
|
|
2636
2771
|
* @throws {RequiredError}
|
|
@@ -2734,7 +2869,7 @@ export declare class OpenAIApi extends BaseAPI {
|
|
|
2734
2869
|
/**
|
|
2735
2870
|
*
|
|
2736
2871
|
* @summary Transcribes audio into the input language.
|
|
2737
|
-
* @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
2872
|
+
* @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
2738
2873
|
* @param {string} model ID of the model to use. Only `whisper-1` is currently available.
|
|
2739
2874
|
* @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
|
|
2740
2875
|
* @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
|
|
@@ -2748,7 +2883,7 @@ export declare class OpenAIApi extends BaseAPI {
|
|
|
2748
2883
|
/**
|
|
2749
2884
|
*
|
|
2750
2885
|
* @summary Translates audio into English.
|
|
2751
|
-
* @param {File} file The audio file
|
|
2886
|
+
* @param {File} file The audio file object (not file name) to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
|
|
2752
2887
|
* @param {string} model ID of the model to use. Only `whisper-1` is currently available.
|
|
2753
2888
|
* @param {string} [prompt] An optional text to guide the model\\\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
|
|
2754
2889
|
* @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
|
package/dist/api.js
CHANGED
|
@@ -5,7 +5,7 @@
|
|
|
5
5
|
* OpenAI API
|
|
6
6
|
* APIs for sampling from and fine-tuning language models
|
|
7
7
|
*
|
|
8
|
-
* The version of the OpenAPI document: 1.
|
|
8
|
+
* The version of the OpenAPI document: 1.3.0
|
|
9
9
|
*
|
|
10
10
|
*
|
|
11
11
|
* NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
|
|
@@ -32,12 +32,14 @@ const base_1 = require("./base");
 exports.ChatCompletionRequestMessageRoleEnum = {
     System: 'system',
     User: 'user',
-    Assistant: 'assistant'
+    Assistant: 'assistant',
+    Function: 'function'
 };
 exports.ChatCompletionResponseMessageRoleEnum = {
     System: 'system',
     User: 'user',
-    Assistant: 'assistant'
+    Assistant: 'assistant',
+    Function: 'function'
 };
 exports.CreateImageRequestSizeEnum = {
     _256x256: '256x256',
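The hunk above adds a `'function'` member to both message-role enums, the enum-level half of 3.3.0's function-calling support. A minimal sketch of a chat payload using the new role — the `get_current_weather` name, its arguments, and the model ID are illustrative assumptions, not part of the library:

```typescript
// Sketch: a conversation that feeds a function result back to the model
// using the new 'function' role (names below are hypothetical).
const messages = [
  { role: 'user', content: 'What is the weather in Boston?' },
  {
    role: 'assistant',
    content: null,
    function_call: { name: 'get_current_weather', arguments: '{"location": "Boston"}' },
  },
  // The function's output goes back in a 'function'-role message:
  { role: 'function', name: 'get_current_weather', content: '{"temperature": 72, "unit": "F"}' },
];
// await openai.createChatCompletion({ model: 'gpt-3.5-turbo-0613', messages });
```

The commented-out call assumes an `OpenAIApi` instance named `openai` has already been constructed as shown in the README.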
@@ -116,7 +118,7 @@ exports.OpenAIApiAxiosParamCreator = function (configuration) {
     }),
     /**
      *
-     * @summary Creates a
+     * @summary Creates a model response for the given chat conversation.
      * @param {CreateChatCompletionRequest} createChatCompletionRequest
      * @param {*} [options] Override http request option.
      * @throws {RequiredError}
@@ -177,7 +179,7 @@ exports.OpenAIApiAxiosParamCreator = function (configuration) {
     }),
     /**
      *
-     * @summary Creates a completion for the provided prompt and parameters
+     * @summary Creates a completion for the provided prompt and parameters.
      * @param {CreateCompletionRequest} createCompletionRequest
      * @param {*} [options] Override http request option.
      * @throws {RequiredError}
@@ -543,7 +545,7 @@ exports.OpenAIApiAxiosParamCreator = function (configuration) {
     /**
      *
      * @summary Transcribes audio into the input language.
-     * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+     * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
      * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
      * @param {string} [prompt] An optional text to guide the model\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
      * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
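The reworded `file` docs above stress passing a file object (not a file name) and enumerate the accepted formats. A small pre-flight sketch — the helper name is illustrative and not part of the library:

```typescript
// Accepted upload formats per the createTranscription/createTranslation docs.
const ACCEPTED_AUDIO_FORMATS = ['mp3', 'mp4', 'mpeg', 'mpga', 'm4a', 'wav', 'webm'];

// Hypothetical helper: check the extension before opening the file and
// handing the stream object to the API.
function isSupportedAudio(filename: string): boolean {
  const ext = filename.split('.').pop()?.toLowerCase() ?? '';
  return ACCEPTED_AUDIO_FORMATS.includes(ext);
}
```

After the check, open the file and pass the object itself, e.g. `openai.createTranscription(fs.createReadStream('meeting.m4a') as any, 'whisper-1')` (assuming an initialized `openai` client).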
@@ -599,7 +601,7 @@ exports.OpenAIApiAxiosParamCreator = function (configuration) {
     /**
      *
      * @summary Translates audio into into English.
-     * @param {File} file The audio file
+     * @param {File} file The audio file object (not file name) translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
      * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
      * @param {string} [prompt] An optional text to guide the model\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
      * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -1024,7 +1026,7 @@ exports.OpenAIApiFp = function (configuration) {
     },
     /**
      *
-     * @summary Creates a
+     * @summary Creates a model response for the given chat conversation.
      * @param {CreateChatCompletionRequest} createChatCompletionRequest
      * @param {*} [options] Override http request option.
      * @throws {RequiredError}
@@ -1051,7 +1053,7 @@ exports.OpenAIApiFp = function (configuration) {
     },
     /**
      *
-     * @summary Creates a completion for the provided prompt and parameters
+     * @summary Creates a completion for the provided prompt and parameters.
      * @param {CreateCompletionRequest} createCompletionRequest
      * @param {*} [options] Override http request option.
      * @throws {RequiredError}
@@ -1195,7 +1197,7 @@ exports.OpenAIApiFp = function (configuration) {
     /**
      *
      * @summary Transcribes audio into the input language.
-     * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+     * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
      * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
      * @param {string} [prompt] An optional text to guide the model\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
      * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -1213,7 +1215,7 @@ exports.OpenAIApiFp = function (configuration) {
     /**
      *
      * @summary Translates audio into into English.
-     * @param {File} file The audio file
+     * @param {File} file The audio file object (not file name) translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
      * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
      * @param {string} [prompt] An optional text to guide the model\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
      * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -1414,7 +1416,7 @@ exports.OpenAIApiFactory = function (configuration, basePath, axios) {
     },
     /**
      *
-     * @summary Creates a
+     * @summary Creates a model response for the given chat conversation.
      * @param {CreateChatCompletionRequest} createChatCompletionRequest
      * @param {*} [options] Override http request option.
      * @throws {RequiredError}
@@ -1435,7 +1437,7 @@ exports.OpenAIApiFactory = function (configuration, basePath, axios) {
     },
     /**
      *
-     * @summary Creates a completion for the provided prompt and parameters
+     * @summary Creates a completion for the provided prompt and parameters.
      * @param {CreateCompletionRequest} createCompletionRequest
      * @param {*} [options] Override http request option.
      * @throws {RequiredError}
@@ -1549,7 +1551,7 @@ exports.OpenAIApiFactory = function (configuration, basePath, axios) {
     /**
      *
      * @summary Transcribes audio into the input language.
-     * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+     * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
      * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
      * @param {string} [prompt] An optional text to guide the model\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
      * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -1564,7 +1566,7 @@ exports.OpenAIApiFactory = function (configuration, basePath, axios) {
     /**
      *
      * @summary Translates audio into into English.
-     * @param {File} file The audio file
+     * @param {File} file The audio file object (not file name) translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
      * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
      * @param {string} [prompt] An optional text to guide the model\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
      * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -1728,7 +1730,7 @@ class OpenAIApi extends base_1.BaseAPI {
     }
     /**
      *
-     * @summary Creates a
+     * @summary Creates a model response for the given chat conversation.
      * @param {CreateChatCompletionRequest} createChatCompletionRequest
      * @param {*} [options] Override http request option.
      * @throws {RequiredError}
@@ -1751,7 +1753,7 @@ class OpenAIApi extends base_1.BaseAPI {
     }
     /**
      *
-     * @summary Creates a completion for the provided prompt and parameters
+     * @summary Creates a completion for the provided prompt and parameters.
      * @param {CreateCompletionRequest} createCompletionRequest
      * @param {*} [options] Override http request option.
      * @throws {RequiredError}
@@ -1875,7 +1877,7 @@ class OpenAIApi extends base_1.BaseAPI {
     /**
      *
      * @summary Transcribes audio into the input language.
-     * @param {File} file The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
+     * @param {File} file The audio file object (not file name) to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
      * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
      * @param {string} [prompt] An optional text to guide the model\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
      * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
@@ -1891,7 +1893,7 @@ class OpenAIApi extends base_1.BaseAPI {
     /**
      *
      * @summary Translates audio into into English.
-     * @param {File} file The audio file
+     * @param {File} file The audio file object (not file name) translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
      * @param {string} model ID of the model to use. Only `whisper-1` is currently available.
      * @param {string} [prompt] An optional text to guide the model\'s style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
      * @param {string} [responseFormat] The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
package/dist/base.d.ts
CHANGED
@@ -2,7 +2,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
package/dist/base.js
CHANGED
@@ -5,7 +5,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
package/dist/common.d.ts
CHANGED
@@ -2,7 +2,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
package/dist/common.js
CHANGED
@@ -5,7 +5,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
package/dist/configuration.d.ts
CHANGED
@@ -2,7 +2,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
package/dist/configuration.js
CHANGED
@@ -5,7 +5,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
package/dist/index.d.ts
CHANGED
@@ -2,7 +2,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
package/dist/index.js
CHANGED
@@ -5,7 +5,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
package/index.ts
CHANGED
@@ -4,7 +4,7 @@
  * OpenAI API
  * APIs for sampling from and fine-tuning language models
  *
- * The version of the OpenAPI document: 1.
+ * The version of the OpenAPI document: 1.3.0
  *
  *
  * NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).