orbai 0.0.1

data/openapi.yaml ADDED
@@ -0,0 +1,4616 @@
1
+ openapi: 3.0.0
2
+ info:
3
+ title: OpenAI API
4
+ description: The OpenAI REST API. Please see https://platform.openai.com/docs/api-reference for more details.
5
+ version: "2.0.0"
6
+ termsOfService: https://openai.com/policies/terms-of-use
7
+ contact:
8
+ name: OpenAI Support
9
+ url: https://help.openai.com/
10
+ license:
11
+ name: MIT
12
+ url: https://github.com/openai/openai-openapi/blob/master/LICENSE
13
+ servers:
14
+ - url: https://api.openai.com/v1
15
+ tags:
16
+ - name: Audio
17
+ description: Learn how to turn audio into text.
18
+ - name: Chat
19
+ description: Given a list of messages comprising a conversation, the model will return a response.
20
+ - name: Completions
21
+ description: Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
22
+ - name: Embeddings
23
+ description: Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
24
+ - name: Fine-tuning
25
+ description: Manage fine-tuning jobs to tailor a model to your specific training data.
26
+ - name: Files
27
+ description: Files are used to upload documents that can be used with features like fine-tuning.
28
+ - name: Images
29
+ description: Given a prompt and/or an input image, the model will generate a new image.
30
+ - name: Models
31
+ description: List and describe the various models available in the API.
32
+ - name: Moderations
33
+ description: Given some input text, outputs if the model classifies it as violating OpenAI's content policy.
34
+ - name: Fine-tunes
35
+ description: Manage legacy fine-tuning jobs to tailor a model to your specific training data.
36
+ - name: Edits
37
+ description: Given a prompt and an instruction, the model will return an edited version of the prompt.
38
+
39
+ paths:
40
+ # Note: When adding an endpoint, make sure you also add it in the `groups` section, at the end of this file,
41
+ # under the appropriate group
42
+ /chat/completions:
43
+ post:
44
+ operationId: createChatCompletion
45
+ tags:
46
+ - Chat
47
+ summary: Creates a model response for the given chat conversation.
48
+ requestBody:
49
+ required: true
50
+ content:
51
+ application/json:
52
+ schema:
53
+ $ref: "#/components/schemas/CreateChatCompletionRequest"
54
+ responses:
55
+ "200":
56
+ description: OK
57
+ content:
58
+ application/json:
59
+ schema:
60
+ $ref: "#/components/schemas/CreateChatCompletionResponse"
61
+
62
+ x-oaiMeta:
63
+ name: Create chat completion
64
+ group: chat
65
+ returns: |
66
+ Returns a [chat completion](/docs/api-reference/chat/object) object, or a streamed sequence of [chat completion chunk](/docs/api-reference/chat/streaming) objects if the request is streamed.
67
+ path: create
68
+ examples:
69
+ - title: No Streaming
70
+ request:
71
+ curl: |
72
+ curl https://api.openai.com/v1/chat/completions \
73
+ -H "Content-Type: application/json" \
74
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
75
+ -d '{
76
+ "model": "VAR_model_id",
77
+ "messages": [
78
+ {
79
+ "role": "system",
80
+ "content": "You are a helpful assistant."
81
+ },
82
+ {
83
+ "role": "user",
84
+ "content": "Hello!"
85
+ }
86
+ ]
87
+ }'
88
+ python: |
89
+ import os
90
+ import openai
91
+ openai.api_key = os.getenv("OPENAI_API_KEY")
92
+
93
+ completion = openai.ChatCompletion.create(
94
+ model="VAR_model_id",
95
+ messages=[
96
+ {"role": "system", "content": "You are a helpful assistant."},
97
+ {"role": "user", "content": "Hello!"}
98
+ ]
99
+ )
100
+
101
+ print(completion.choices[0].message)
102
+ node.js: |-
103
+ import OpenAI from "openai";
104
+
105
+ const openai = new OpenAI();
106
+
107
+ async function main() {
108
+ const completion = await openai.chat.completions.create({
109
+ messages: [{ role: "system", content: "You are a helpful assistant." }],
110
+ model: "VAR_model_id",
111
+ });
112
+
113
+ console.log(completion.choices[0]);
114
+ }
115
+
116
+ main();
117
+ response: &chat_completion_example |
118
+ {
119
+ "id": "chatcmpl-123",
120
+ "object": "chat.completion",
121
+ "created": 1677652288,
122
+ "model": "gpt-3.5-turbo-0613",
123
+ "choices": [{
124
+ "index": 0,
125
+ "message": {
126
+ "role": "assistant",
127
+ "content": "\n\nHello there, how may I assist you today?"
128
+ },
129
+ "finish_reason": "stop"
130
+ }],
131
+ "usage": {
132
+ "prompt_tokens": 9,
133
+ "completion_tokens": 12,
134
+ "total_tokens": 21
135
+ }
136
+ }
137
+ - title: Streaming
138
+ request:
139
+ curl: |
140
+ curl https://api.openai.com/v1/chat/completions \
141
+ -H "Content-Type: application/json" \
142
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
143
+ -d '{
144
+ "model": "VAR_model_id",
145
+ "messages": [
146
+ {
147
+ "role": "system",
148
+ "content": "You are a helpful assistant."
149
+ },
150
+ {
151
+ "role": "user",
152
+ "content": "Hello!"
153
+ }
154
+ ],
155
+ "stream": true
156
+ }'
157
+ python: |
158
+ import os
159
+ import openai
160
+ openai.api_key = os.getenv("OPENAI_API_KEY")
161
+
162
+ completion = openai.ChatCompletion.create(
163
+ model="VAR_model_id",
164
+ messages=[
165
+ {"role": "system", "content": "You are a helpful assistant."},
166
+ {"role": "user", "content": "Hello!"}
167
+ ],
168
+ stream=True
169
+ )
170
+
171
+ for chunk in completion:
172
+ print(chunk.choices[0].delta)
173
+
174
+ node.js: |-
175
+ import OpenAI from "openai";
176
+
177
+ const openai = new OpenAI();
178
+
179
+ async function main() {
180
+ const completion = await openai.chat.completions.create({
181
+ model: "VAR_model_id",
182
+ messages: [
183
+ {"role": "system", "content": "You are a helpful assistant."},
184
+ {"role": "user", "content": "Hello!"}
185
+ ],
186
+ stream: true,
187
+ });
188
+
189
+ for await (const chunk of completion) {
190
+ console.log(chunk.choices[0].delta.content);
191
+ }
192
+ }
193
+
194
+ main();
195
+ response: &chat_completion_chunk_example |
196
+ {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}
197
+
198
+ {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}
199
+
200
+ {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}
201
+
202
+ ....
203
+
204
+ {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"content":" today"},"finish_reason":null}]}
205
+
206
+ {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"content":"?"},"finish_reason":null}]}
207
+
208
+ {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}
209
+ - title: Function calling
210
+ request:
211
+ curl: |
212
+ curl https://api.openai.com/v1/chat/completions \
213
+ -H "Content-Type: application/json" \
214
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
215
+ -d '{
216
+ "model": "gpt-3.5-turbo",
217
+ "messages": [
218
+ {
219
+ "role": "user",
220
+ "content": "What is the weather like in Boston?"
221
+ }
222
+ ],
223
+ "functions": [
224
+ {
225
+ "name": "get_current_weather",
226
+ "description": "Get the current weather in a given location",
227
+ "parameters": {
228
+ "type": "object",
229
+ "properties": {
230
+ "location": {
231
+ "type": "string",
232
+ "description": "The city and state, e.g. San Francisco, CA"
233
+ },
234
+ "unit": {
235
+ "type": "string",
236
+ "enum": ["celsius", "fahrenheit"]
237
+ }
238
+ },
239
+ "required": ["location"]
240
+ }
241
+ }
242
+ ],
243
+ "function_call": "auto"
244
+ }'
245
+ python: |
246
+ import os
247
+ import openai
248
+ openai.api_key = os.getenv("OPENAI_API_KEY")
249
+
250
+ functions = [
251
+ {
252
+ "name": "get_current_weather",
253
+ "description": "Get the current weather in a given location",
254
+ "parameters": {
255
+ "type": "object",
256
+ "properties": {
257
+ "location": {
258
+ "type": "string",
259
+ "description": "The city and state, e.g. San Francisco, CA",
260
+ },
261
+ "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
262
+ },
263
+ "required": ["location"],
264
+ },
265
+ }
266
+ ]
267
+ messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]
268
+ completion = openai.ChatCompletion.create(
269
+ model="VAR_model_id",
270
+ messages=messages,
271
+ functions=functions,
272
+ function_call="auto", # auto is default, but we'll be explicit
273
+ )
274
+
275
+ print(completion)
276
+
277
+ node.js: |-
278
+ import OpenAI from "openai";
279
+
280
+ const openai = new OpenAI();
281
+
282
+ async function main() {
283
+ const messages = [{"role": "user", "content": "What's the weather like in Boston today?"}];
284
+ const functions = [
285
+ {
286
+ "name": "get_current_weather",
287
+ "description": "Get the current weather in a given location",
288
+ "parameters": {
289
+ "type": "object",
290
+ "properties": {
291
+ "location": {
292
+ "type": "string",
293
+ "description": "The city and state, e.g. San Francisco, CA",
294
+ },
295
+ "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
296
+ },
297
+ "required": ["location"],
298
+ },
299
+ }
300
+ ];
301
+
302
+ const response = await openai.chat.completions.create({
303
+ model: "gpt-3.5-turbo",
304
+ messages: messages,
305
+ functions: functions,
306
+ function_call: "auto", // auto is default, but we'll be explicit
307
+ });
308
+
309
+ console.log(response);
310
+ }
311
+
312
+ main();
313
+ response: &chat_completion_function_example |
314
+ {
315
+ "choices": [
316
+ {
317
+ "finish_reason": "function_call",
318
+ "index": 0,
319
+ "message": {
320
+ "content": null,
321
+ "function_call": {
322
+ "arguments": "{\n \"location\": \"Boston, MA\"\n}",
323
+ "name": "get_current_weather"
324
+ },
325
+ "role": "assistant"
326
+ }
327
+ }
328
+ ],
329
+ "created": 1694028367,
330
+ "model": "gpt-3.5-turbo-0613",
331
+ "object": "chat.completion",
332
+ "usage": {
333
+ "completion_tokens": 18,
334
+ "prompt_tokens": 82,
335
+ "total_tokens": 100
336
+ }
337
+ }
338
+ /completions:
339
+ post:
340
+ operationId: createCompletion
341
+ tags:
342
+ - Completions
343
+ summary: Creates a completion for the provided prompt and parameters.
344
+ requestBody:
345
+ required: true
346
+ content:
347
+ application/json:
348
+ schema:
349
+ $ref: "#/components/schemas/CreateCompletionRequest"
350
+ responses:
351
+ "200":
352
+ description: OK
353
+ content:
354
+ application/json:
355
+ schema:
356
+ $ref: "#/components/schemas/CreateCompletionResponse"
357
+ x-oaiMeta:
358
+ name: Create completion
359
+ returns: |
360
+ Returns a [completion](/docs/api-reference/completions/object) object, or a sequence of completion objects if the request is streamed.
361
+ legacy: true
362
+ examples:
363
+ - title: No streaming
364
+ request:
365
+ curl: |
366
+ curl https://api.openai.com/v1/completions \
367
+ -H "Content-Type: application/json" \
368
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
369
+ -d '{
370
+ "model": "VAR_model_id",
371
+ "prompt": "Say this is a test",
372
+ "max_tokens": 7,
373
+ "temperature": 0
374
+ }'
375
+ python: |
376
+ import os
377
+ import openai
378
+ openai.api_key = os.getenv("OPENAI_API_KEY")
379
+ openai.Completion.create(
380
+ model="VAR_model_id",
381
+ prompt="Say this is a test",
382
+ max_tokens=7,
383
+ temperature=0
384
+ )
385
+ node.js: |-
386
+ import OpenAI from "openai";
387
+
388
+ const openai = new OpenAI();
389
+
390
+ async function main() {
391
+ const completion = await openai.completions.create({
392
+ model: "VAR_model_id",
393
+ prompt: "Say this is a test.",
394
+ max_tokens: 7,
395
+ temperature: 0,
396
+ });
397
+
398
+ console.log(completion);
399
+ }
400
+ main();
401
+ response: |
402
+ {
403
+ "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
404
+ "object": "text_completion",
405
+ "created": 1589478378,
406
+ "model": "VAR_model_id",
407
+ "choices": [
408
+ {
409
+ "text": "\n\nThis is indeed a test",
410
+ "index": 0,
411
+ "logprobs": null,
412
+ "finish_reason": "length"
413
+ }
414
+ ],
415
+ "usage": {
416
+ "prompt_tokens": 5,
417
+ "completion_tokens": 7,
418
+ "total_tokens": 12
419
+ }
420
+ }
421
+ - title: Streaming
422
+ request:
423
+ curl: |
424
+ curl https://api.openai.com/v1/completions \
425
+ -H "Content-Type: application/json" \
426
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
427
+ -d '{
428
+ "model": "VAR_model_id",
429
+ "prompt": "Say this is a test",
430
+ "max_tokens": 7,
431
+ "temperature": 0,
432
+ "stream": true
433
+ }'
434
+ python: |
435
+ import os
436
+ import openai
437
+ openai.api_key = os.getenv("OPENAI_API_KEY")
438
+ for chunk in openai.Completion.create(
439
+ model="VAR_model_id",
440
+ prompt="Say this is a test",
441
+ max_tokens=7,
442
+ temperature=0,
443
+ stream=True
444
+ ):
445
+ print(chunk['choices'][0]['text'])
446
+ node.js: |-
447
+ import OpenAI from "openai";
448
+
449
+ const openai = new OpenAI();
450
+
451
+ async function main() {
452
+ const stream = await openai.completions.create({
453
+ model: "VAR_model_id",
454
+ prompt: "Say this is a test.",
455
+ stream: true,
456
+ });
457
+
458
+ for await (const chunk of stream) {
459
+ console.log(chunk.choices[0].text)
460
+ }
461
+ }
462
+ main();
463
+ response: |
464
+ {
465
+ "id": "cmpl-7iA7iJjj8V2zOkCGvWF2hAkDWBQZe",
466
+ "object": "text_completion",
467
+ "created": 1690759702,
468
+ "choices": [
469
+ {
470
+ "text": "This",
471
+ "index": 0,
472
+ "logprobs": null,
473
+ "finish_reason": null
474
+ }
475
+ ],
476
+ "model": "gpt-3.5-turbo-instruct"
477
+ }
478
+ /edits:
479
+ post:
480
+ operationId: createEdit
481
+ deprecated: true
482
+ tags:
483
+ - Edits
484
+ summary: Creates a new edit for the provided input, instruction, and parameters.
485
+ requestBody:
486
+ required: true
487
+ content:
488
+ application/json:
489
+ schema:
490
+ $ref: "#/components/schemas/CreateEditRequest"
491
+ responses:
492
+ "200":
493
+ description: OK
494
+ content:
495
+ application/json:
496
+ schema:
497
+ $ref: "#/components/schemas/CreateEditResponse"
498
+ x-oaiMeta:
499
+ name: Create edit
500
+ returns: |
501
+ Returns an [edit](/docs/api-reference/edits/object) object.
502
+ group: edits
503
+ examples:
504
+ request:
505
+ curl: |
506
+ curl https://api.openai.com/v1/edits \
507
+ -H "Content-Type: application/json" \
508
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
509
+ -d '{
510
+ "model": "VAR_model_id",
511
+ "input": "What day of the wek is it?",
512
+ "instruction": "Fix the spelling mistakes"
513
+ }'
514
+ python: |
515
+ import os
516
+ import openai
517
+ openai.api_key = os.getenv("OPENAI_API_KEY")
518
+ openai.Edit.create(
519
+ model="VAR_model_id",
520
+ input="What day of the wek is it?",
521
+ instruction="Fix the spelling mistakes"
522
+ )
523
+ node.js: |-
524
+ import OpenAI from "openai";
525
+
526
+ const openai = new OpenAI();
527
+
528
+ async function main() {
529
+ const edit = await openai.edits.create({
530
+ model: "VAR_model_id",
531
+ input: "What day of the wek is it?",
532
+ instruction: "Fix the spelling mistakes.",
533
+ });
534
+
535
+ console.log(edit);
536
+ }
537
+
538
+ main();
539
+ response: &edit_example |
540
+ {
541
+ "object": "edit",
542
+ "created": 1589478378,
543
+ "choices": [
544
+ {
545
+ "text": "What day of the week is it?",
546
+ "index": 0
547
+ }
548
+ ],
549
+ "usage": {
550
+ "prompt_tokens": 25,
551
+ "completion_tokens": 32,
552
+ "total_tokens": 57
553
+ }
554
+ }
555
+
556
+ /images/generations:
557
+ post:
558
+ operationId: createImage
559
+ tags:
560
+ - Images
561
+ summary: Creates an image given a prompt.
562
+ requestBody:
563
+ required: true
564
+ content:
565
+ application/json:
566
+ schema:
567
+ $ref: "#/components/schemas/CreateImageRequest"
568
+ responses:
569
+ "200":
570
+ description: OK
571
+ content:
572
+ application/json:
573
+ schema:
574
+ $ref: "#/components/schemas/ImagesResponse"
575
+ x-oaiMeta:
576
+ name: Create image
577
+ returns: Returns a list of [image](/docs/api-reference/images/object) objects.
578
+ examples:
579
+ request:
580
+ curl: |
581
+ curl https://api.openai.com/v1/images/generations \
582
+ -H "Content-Type: application/json" \
583
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
584
+ -d '{
585
+ "prompt": "A cute baby sea otter",
586
+ "n": 2,
587
+ "size": "1024x1024"
588
+ }'
589
+ python: |
590
+ import os
591
+ import openai
592
+ openai.api_key = os.getenv("OPENAI_API_KEY")
593
+ openai.Image.create(
594
+ prompt="A cute baby sea otter",
595
+ n=2,
596
+ size="1024x1024"
597
+ )
598
+ node.js: |-
599
+ import OpenAI from "openai";
600
+
601
+ const openai = new OpenAI();
602
+
603
+ async function main() {
604
+ const image = await openai.images.generate({ prompt: "A cute baby sea otter" });
605
+
606
+ console.log(image.data);
607
+ }
608
+ main();
609
+ response: |
610
+ {
611
+ "created": 1589478378,
612
+ "data": [
613
+ {
614
+ "url": "https://..."
615
+ },
616
+ {
617
+ "url": "https://..."
618
+ }
619
+ ]
620
+ }
621
+
622
+ /images/edits:
623
+ post:
624
+ operationId: createImageEdit
625
+ tags:
626
+ - Images
627
+ summary: Creates an edited or extended image given an original image and a prompt.
628
+ requestBody:
629
+ required: true
630
+ content:
631
+ multipart/form-data:
632
+ schema:
633
+ $ref: "#/components/schemas/CreateImageEditRequest"
634
+ responses:
635
+ "200":
636
+ description: OK
637
+ content:
638
+ application/json:
639
+ schema:
640
+ $ref: "#/components/schemas/ImagesResponse"
641
+ x-oaiMeta:
642
+ name: Create image edit
643
+ returns: Returns a list of [image](/docs/api-reference/images/object) objects.
644
+ examples:
645
+ request:
646
+ curl: |
647
+ curl https://api.openai.com/v1/images/edits \
648
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
649
+ -F image="@otter.png" \
650
+ -F mask="@mask.png" \
651
+ -F prompt="A cute baby sea otter wearing a beret" \
652
+ -F n=2 \
653
+ -F size="1024x1024"
654
+ python: |
655
+ import os
656
+ import openai
657
+ openai.api_key = os.getenv("OPENAI_API_KEY")
658
+ openai.Image.create_edit(
659
+ image=open("otter.png", "rb"),
660
+ mask=open("mask.png", "rb"),
661
+ prompt="A cute baby sea otter wearing a beret",
662
+ n=2,
663
+ size="1024x1024"
664
+ )
665
+ node.js: |-
666
+ import fs from "fs";
667
+ import OpenAI from "openai";
668
+
669
+ const openai = new OpenAI();
670
+
671
+ async function main() {
672
+ const image = await openai.images.edit({
673
+ image: fs.createReadStream("otter.png"),
674
+ mask: fs.createReadStream("mask.png"),
675
+ prompt: "A cute baby sea otter wearing a beret",
676
+ });
677
+
678
+ console.log(image.data);
679
+ }
680
+ main();
681
+ response: |
682
+ {
683
+ "created": 1589478378,
684
+ "data": [
685
+ {
686
+ "url": "https://..."
687
+ },
688
+ {
689
+ "url": "https://..."
690
+ }
691
+ ]
692
+ }
693
+
694
+ /images/variations:
695
+ post:
696
+ operationId: createImageVariation
697
+ tags:
698
+ - Images
699
+ summary: Creates a variation of a given image.
700
+ requestBody:
701
+ required: true
702
+ content:
703
+ multipart/form-data:
704
+ schema:
705
+ $ref: "#/components/schemas/CreateImageVariationRequest"
706
+ responses:
707
+ "200":
708
+ description: OK
709
+ content:
710
+ application/json:
711
+ schema:
712
+ $ref: "#/components/schemas/ImagesResponse"
713
+ x-oaiMeta:
714
+ name: Create image variation
715
+ returns: Returns a list of [image](/docs/api-reference/images/object) objects.
716
+ examples:
717
+ request:
718
+ curl: |
719
+ curl https://api.openai.com/v1/images/variations \
720
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
721
+ -F image="@otter.png" \
722
+ -F n=2 \
723
+ -F size="1024x1024"
724
+ python: |
725
+ import os
726
+ import openai
727
+ openai.api_key = os.getenv("OPENAI_API_KEY")
728
+ openai.Image.create_variation(
729
+ image=open("otter.png", "rb"),
730
+ n=2,
731
+ size="1024x1024"
732
+ )
733
+ node.js: |-
734
+ import fs from "fs";
735
+ import OpenAI from "openai";
736
+
737
+ const openai = new OpenAI();
738
+
739
+ async function main() {
740
+ const image = await openai.images.createVariation({
741
+ image: fs.createReadStream("otter.png"),
742
+ });
743
+
744
+ console.log(image.data);
745
+ }
746
+ main();
747
+ response: |
748
+ {
749
+ "created": 1589478378,
750
+ "data": [
751
+ {
752
+ "url": "https://..."
753
+ },
754
+ {
755
+ "url": "https://..."
756
+ }
757
+ ]
758
+ }
759
+
760
+ /embeddings:
761
+ post:
762
+ operationId: createEmbedding
763
+ tags:
764
+ - Embeddings
765
+ summary: Creates an embedding vector representing the input text.
766
+ requestBody:
767
+ required: true
768
+ content:
769
+ application/json:
770
+ schema:
771
+ $ref: "#/components/schemas/CreateEmbeddingRequest"
772
+ responses:
773
+ "200":
774
+ description: OK
775
+ content:
776
+ application/json:
777
+ schema:
778
+ $ref: "#/components/schemas/CreateEmbeddingResponse"
779
+ x-oaiMeta:
780
+ name: Create embeddings
781
+ returns: A list of [embedding](/docs/api-reference/embeddings/object) objects.
782
+ examples:
783
+ request:
784
+ curl: |
785
+ curl https://api.openai.com/v1/embeddings \
786
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
787
+ -H "Content-Type: application/json" \
788
+ -d '{
789
+ "input": "The food was delicious and the waiter...",
790
+ "model": "text-embedding-ada-002",
791
+ "encoding_format": "float"
792
+ }'
793
+ python: |
794
+ import os
795
+ import openai
796
+ openai.api_key = os.getenv("OPENAI_API_KEY")
797
+ openai.Embedding.create(
798
+ model="text-embedding-ada-002",
799
+ input="The food was delicious and the waiter...",
800
+ encoding_format="float"
801
+ )
802
+ node.js: |-
803
+ import OpenAI from "openai";
804
+
805
+ const openai = new OpenAI();
806
+
807
+ async function main() {
808
+ const embedding = await openai.embeddings.create({
809
+ model: "text-embedding-ada-002",
810
+ input: "The quick brown fox jumped over the lazy dog",
811
+ encoding_format: "float",
812
+ });
813
+
814
+ console.log(embedding);
815
+ }
816
+
817
+ main();
818
+ response: |
819
+ {
820
+ "object": "list",
821
+ "data": [
822
+ {
823
+ "object": "embedding",
824
+ "embedding": [
825
+ 0.0023064255,
826
+ -0.009327292,
827
+ .... (1536 floats total for ada-002)
828
+ -0.0028842222,
829
+ ],
830
+ "index": 0
831
+ }
832
+ ],
833
+ "model": "text-embedding-ada-002",
834
+ "usage": {
835
+ "prompt_tokens": 8,
836
+ "total_tokens": 8
837
+ }
838
+ }
839
+
840
+ /audio/transcriptions:
841
+ post:
842
+ operationId: createTranscription
843
+ tags:
844
+ - Audio
845
+ summary: Transcribes audio into the input language.
846
+ requestBody:
847
+ required: true
848
+ content:
849
+ multipart/form-data:
850
+ schema:
851
+ $ref: "#/components/schemas/CreateTranscriptionRequest"
852
+ responses:
853
+ "200":
854
+ description: OK
855
+ content:
856
+ application/json:
857
+ schema:
858
+ $ref: "#/components/schemas/CreateTranscriptionResponse"
859
+ x-oaiMeta:
860
+ name: Create transcription
861
+ returns: The transcribed text.
862
+ examples:
863
+ request:
864
+ curl: |
865
+ curl https://api.openai.com/v1/audio/transcriptions \
866
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
867
+ -H "Content-Type: multipart/form-data" \
868
+ -F file="@/path/to/file/audio.mp3" \
869
+ -F model="whisper-1"
870
+ python: |
871
+ import os
872
+ import openai
873
+ openai.api_key = os.getenv("OPENAI_API_KEY")
874
+ audio_file = open("audio.mp3", "rb")
875
+ transcript = openai.Audio.transcribe("whisper-1", audio_file)
876
+ node: |-
877
+ import fs from "fs";
878
+ import OpenAI from "openai";
879
+
880
+ const openai = new OpenAI();
881
+
882
+ async function main() {
883
+ const transcription = await openai.audio.transcriptions.create({
884
+ file: fs.createReadStream("audio.mp3"),
885
+ model: "whisper-1",
886
+ });
887
+
888
+ console.log(transcription.text);
889
+ }
890
+ main();
891
+ response: |
892
+ {
893
+ "text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger. This is a place where you can get to do that."
894
+ }
895
+
896
+ /audio/translations:
897
+ post:
898
+ operationId: createTranslation
899
+ tags:
900
+ - Audio
901
+ summary: Translates audio into English.
902
+ requestBody:
903
+ required: true
904
+ content:
905
+ multipart/form-data:
906
+ schema:
907
+ $ref: "#/components/schemas/CreateTranslationRequest"
908
+ responses:
909
+ "200":
910
+ description: OK
911
+ content:
912
+ application/json:
913
+ schema:
914
+ $ref: "#/components/schemas/CreateTranslationResponse"
915
+ x-oaiMeta:
916
+ name: Create translation
917
+ returns: The translated text.
918
+ examples:
919
+ request:
920
+ curl: |
921
+ curl https://api.openai.com/v1/audio/translations \
922
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
923
+ -H "Content-Type: multipart/form-data" \
924
+ -F file="@/path/to/file/german.m4a" \
925
+ -F model="whisper-1"
926
+ python: |
927
+ import os
928
+ import openai
929
+ openai.api_key = os.getenv("OPENAI_API_KEY")
930
+ audio_file = open("german.m4a", "rb")
931
+ transcript = openai.Audio.translate("whisper-1", audio_file)
932
+ node: |
933
+ const fs = require("fs");
+ const { Configuration, OpenAIApi } = require("openai");
934
+ const configuration = new Configuration({
935
+ apiKey: process.env.OPENAI_API_KEY,
936
+ });
937
+ const openai = new OpenAIApi(configuration);
938
+ const resp = await openai.createTranslation(
939
+ fs.createReadStream("german.m4a"),
940
+ "whisper-1"
941
+ );
942
+ response: |
943
+ {
944
+ "text": "Hello, my name is Wolfgang and I come from Germany. Where are you heading today?"
945
+ }
946
+
947
+ /files:
948
+ get:
949
+ operationId: listFiles
950
+ tags:
951
+ - Files
952
+ summary: Returns a list of files that belong to the user's organization.
953
+ responses:
954
+ "200":
955
+ description: OK
956
+ content:
957
+ application/json:
958
+ schema:
959
+ $ref: "#/components/schemas/ListFilesResponse"
960
+ x-oaiMeta:
961
+ name: List files
962
+ returns: A list of [file](/docs/api-reference/files/object) objects.
963
+ examples:
964
+ request:
965
+ curl: |
966
+ curl https://api.openai.com/v1/files \
967
+ -H "Authorization: Bearer $OPENAI_API_KEY"
968
+ python: |
969
+ import os
970
+ import openai
971
+ openai.api_key = os.getenv("OPENAI_API_KEY")
972
+ openai.File.list()
973
+ node.js: |-
974
+ import OpenAI from "openai";
975
+
976
+ const openai = new OpenAI();
977
+
978
+ async function main() {
979
+ const list = await openai.files.list();
980
+
981
+ for await (const file of list) {
982
+ console.log(file);
983
+ }
984
+ }
985
+
986
+ main();
987
+ response: |
988
+ {
989
+ "data": [
990
+ {
991
+ "id": "file-abc123",
992
+ "object": "file",
993
+ "bytes": 175,
994
+ "created_at": 1613677385,
995
+ "filename": "train.jsonl",
996
+ "purpose": "search"
997
+ },
998
+ {
999
+ "id": "file-abc123",
1000
+ "object": "file",
1001
+ "bytes": 140,
1002
+ "created_at": 1613779121,
1003
+ "filename": "puppy.jsonl",
1004
+ "purpose": "search"
1005
+ }
1006
+ ],
1007
+ "object": "list"
1008
+ }
1009
+ post:
1010
+ operationId: createFile
1011
+ tags:
1012
+ - Files
1013
+ summary: |
1014
+ Upload a file that can be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please [contact us](https://help.openai.com/) if you need to increase the storage limit.
1015
+ requestBody:
1016
+ required: true
1017
+ content:
1018
+ multipart/form-data:
1019
+ schema:
1020
+ $ref: "#/components/schemas/CreateFileRequest"
1021
+ responses:
1022
+ "200":
1023
+ description: OK
1024
+ content:
1025
+ application/json:
1026
+ schema:
1027
+ $ref: "#/components/schemas/OpenAIFile"
1028
+ x-oaiMeta:
1029
+ name: Upload file
1030
+ returns: The uploaded [file](/docs/api-reference/files/object) object.
1031
+ examples:
1032
+ request:
1033
+ curl: |
1034
+ curl https://api.openai.com/v1/files \
1035
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
1036
+ -F purpose="fine-tune" \
1037
+ -F file="@mydata.jsonl"
1038
+ python: |
1039
+ import os
1040
+ import openai
1041
+ openai.api_key = os.getenv("OPENAI_API_KEY")
1042
+ openai.File.create(
1043
+ file=open("mydata.jsonl", "rb"),
1044
+ purpose='fine-tune'
1045
+ )
1046
+ node.js: |-
1047
+ import fs from "fs";
1048
+ import OpenAI from "openai";
1049
+
1050
+ const openai = new OpenAI();
1051
+
1052
+ async function main() {
1053
+ const file = await openai.files.create({
1054
+ file: fs.createReadStream("mydata.jsonl"),
1055
+ purpose: "fine-tune",
1056
+ });
1057
+
1058
+ console.log(file);
1059
+ }
1060
+
1061
+ main();
1062
+ response: |
1063
+ {
1064
+ "id": "file-abc123",
1065
+ "object": "file",
1066
+ "bytes": 140,
1067
+ "created_at": 1613779121,
1068
+ "filename": "mydata.jsonl",
1069
+ "purpose": "fine-tune",
1070
+ "status": "uploaded" | "processed" | "pending" | "error"
1071
+ }
1072
+
1073
+ /files/{file_id}:
1074
+ delete:
1075
+ operationId: deleteFile
1076
+ tags:
1077
+ - Files
1078
+ summary: Delete a file.
1079
+ parameters:
1080
+ - in: path
1081
+ name: file_id
1082
+ required: true
1083
+ schema:
1084
+ type: string
1085
+ description: The ID of the file to use for this request.
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/DeleteFileResponse"
+ x-oaiMeta:
+ name: Delete file
+ returns: Deletion status.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/files/file-abc123 \
+ -X DELETE \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.File.delete("file-abc123")
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const file = await openai.files.del("file-abc123");
+
+ console.log(file);
+ }
+
+ main();
+ response: |
+ {
+ "id": "file-abc123",
+ "object": "file",
+ "deleted": true
+ }
+ get:
+ operationId: retrieveFile
+ tags:
+ - Files
+ summary: Returns information about a specific file.
+ parameters:
+ - in: path
+ name: file_id
+ required: true
+ schema:
+ type: string
+ description: The ID of the file to use for this request.
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/OpenAIFile"
+ x-oaiMeta:
+ name: Retrieve file
+ returns: The [file](/docs/api-reference/files/object) object matching the specified ID.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/files/file-abc123 \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.File.retrieve("file-abc123")
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const file = await openai.files.retrieve("file-abc123");
+
+ console.log(file);
+ }
+
+ main();
+ response: |
+ {
+ "id": "file-abc123",
+ "object": "file",
+ "bytes": 140,
+ "created_at": 1613779657,
+ "filename": "mydata.jsonl",
+ "purpose": "fine-tune"
+ }
+
+ /files/{file_id}/content:
+ get:
+ operationId: downloadFile
+ tags:
+ - Files
+ summary: Returns the contents of the specified file.
+ parameters:
+ - in: path
+ name: file_id
+ required: true
+ schema:
+ type: string
+ description: The ID of the file to use for this request.
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ type: string
+ x-oaiMeta:
+ name: Retrieve file content
+ returns: The file content.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/files/file-abc123/content \
+ -H "Authorization: Bearer $OPENAI_API_KEY" > file.jsonl
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ content = openai.File.download("file-abc123")
+ node.js: |
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const file = await openai.files.retrieveContent("file-abc123");
+
+ console.log(file);
+ }
+
+ main();
+
+ /fine_tuning/jobs:
+ post:
+ operationId: createFineTuningJob
+ tags:
+ - Fine-tuning
+ summary: |
+ Creates a job that fine-tunes a specified model from a given dataset.
+
+ Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.
+
+ [Learn more about fine-tuning](/docs/guides/fine-tuning)
+ requestBody:
+ required: true
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/CreateFineTuningJobRequest"
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/FineTuningJob"
+ x-oaiMeta:
+ name: Create fine-tuning job
+ returns: A [fine-tuning.job](/docs/api-reference/fine-tuning/object) object.
+ examples:
+ - title: No hyperparameters
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine_tuning/jobs \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
+ -d '{
+ "training_file": "file-abc123",
+ "model": "gpt-3.5-turbo"
+ }'
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTuningJob.create(training_file="file-abc123", model="gpt-3.5-turbo")
+ node.js: |
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const fineTune = await openai.fineTuning.jobs.create({
+ training_file: "file-abc123"
+ });
+
+ console.log(fineTune);
+ }
+
+ main();
+ response: |
+ {
+ "object": "fine_tuning.job",
+ "id": "ftjob-abc123",
+ "model": "gpt-3.5-turbo-0613",
+ "created_at": 1614807352,
+ "fine_tuned_model": null,
+ "organization_id": "org-123",
+ "result_files": [],
+ "status": "queued",
+ "validation_file": null,
+ "training_file": "file-abc123"
+ }
+ - title: Hyperparameters
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine_tuning/jobs \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
+ -d '{
+ "training_file": "file-abc123",
+ "model": "gpt-3.5-turbo",
+ "hyperparameters": {
+ "n_epochs": 2
+ }
+ }'
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTuningJob.create(training_file="file-abc123", model="gpt-3.5-turbo", hyperparameters={"n_epochs": 2})
+ node.js: |
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const fineTune = await openai.fineTuning.jobs.create({
+ training_file: "file-abc123",
+ model: "gpt-3.5-turbo",
+ hyperparameters: { n_epochs: 2 }
+ });
+
+ console.log(fineTune);
+ }
+
+ main();
+ response: |
+ {
+ "object": "fine_tuning.job",
+ "id": "ftjob-abc123",
+ "model": "gpt-3.5-turbo-0613",
+ "created_at": 1614807352,
+ "fine_tuned_model": null,
+ "organization_id": "org-123",
+ "result_files": [],
+ "status": "queued",
+ "validation_file": null,
+ "training_file": "file-abc123",
+ "hyperparameters": {"n_epochs": 2}
+ }
+ - title: Validation file
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine_tuning/jobs \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
+ -d '{
+ "training_file": "file-abc123",
+ "validation_file": "file-abc123",
+ "model": "gpt-3.5-turbo"
+ }'
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTuningJob.create(training_file="file-abc123", validation_file="file-abc123", model="gpt-3.5-turbo")
+ node.js: |
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const fineTune = await openai.fineTuning.jobs.create({
+ training_file: "file-abc123",
+ validation_file: "file-abc123"
+ });
+
+ console.log(fineTune);
+ }
+
+ main();
+ response: |
+ {
+ "object": "fine_tuning.job",
+ "id": "ftjob-abc123",
+ "model": "gpt-3.5-turbo-0613",
+ "created_at": 1614807352,
+ "fine_tuned_model": null,
+ "organization_id": "org-123",
+ "result_files": [],
+ "status": "queued",
+ "validation_file": "file-abc123",
+ "training_file": "file-abc123"
+ }
+ get:
+ operationId: listPaginatedFineTuningJobs
+ tags:
+ - Fine-tuning
+ summary: |
+ List your organization's fine-tuning jobs
+ parameters:
+ - name: after
+ in: query
+ description: Identifier for the last job from the previous pagination request.
+ required: false
+ schema:
+ type: string
+ - name: limit
+ in: query
+ description: Number of fine-tuning jobs to retrieve.
+ required: false
+ schema:
+ type: integer
+ default: 20
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/ListPaginatedFineTuningJobsResponse"
+ x-oaiMeta:
+ name: List fine-tuning jobs
+ returns: A list of paginated [fine-tuning job](/docs/api-reference/fine-tuning/object) objects.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine_tuning/jobs?limit=2 \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTuningJob.list()
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const list = await openai.fineTuning.jobs.list();
+
+ for await (const fineTune of list) {
+ console.log(fineTune);
+ }
+ }
+
+ main();
+ response: |
+ {
+ "object": "list",
+ "data": [
+ {
+ "object": "fine_tuning.job",
+ "id": "ftjob-abc123",
+ "model": "gpt-3.5-turbo-0613",
+ "created_at": 1689813489,
+ "fine_tuned_model": null,
+ "status": "queued",
+ "training_file": "file-abc123"
+ },
+ { ... },
+ { ... }
+ ],
+ "has_more": true
+ }
+ /fine_tuning/jobs/{fine_tuning_job_id}:
+ get:
+ operationId: retrieveFineTuningJob
+ tags:
+ - Fine-tuning
+ summary: |
+ Get info about a fine-tuning job.
+
+ [Learn more about fine-tuning](/docs/guides/fine-tuning)
+ parameters:
+ - in: path
+ name: fine_tuning_job_id
+ required: true
+ schema:
+ type: string
+ example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
+ description: |
+ The ID of the fine-tuning job.
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/FineTuningJob"
+ x-oaiMeta:
+ name: Retrieve fine-tuning job
+ returns: The [fine-tuning](/docs/api-reference/fine-tuning/object) object with the given ID.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine_tuning/jobs/ft-AF1WoRqd3aJAHsqc9NY7iL8F \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTuningJob.retrieve("ftjob-abc123")
+ node.js: |
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const fineTune = await openai.fineTuning.jobs.retrieve("ftjob-abc123");
+
+ console.log(fineTune);
+ }
+
+ main();
+ response: &fine_tuning_example |
+ {
+ "object": "fine_tuning.job",
+ "id": "ftjob-abc123",
+ "model": "davinci-002",
+ "created_at": 1692661014,
+ "finished_at": 1692661190,
+ "fine_tuned_model": "ft:davinci-002:my-org:custom_suffix:7q8mpxmy",
+ "organization_id": "org-123",
+ "result_files": [
+ "file-abc123"
+ ],
+ "status": "succeeded",
+ "validation_file": null,
+ "training_file": "file-abc123",
+ "hyperparameters": {
+ "n_epochs": 4
+ },
+ "trained_tokens": 5768
+ }
+ /fine_tuning/jobs/{fine_tuning_job_id}/events:
+ get:
+ operationId: listFineTuningEvents
+ tags:
+ - Fine-tuning
+ summary: |
+ Get status updates for a fine-tuning job.
+ parameters:
+ - in: path
+ name: fine_tuning_job_id
+ required: true
+ schema:
+ type: string
+ example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
+ description: |
+ The ID of the fine-tuning job to get events for.
+ - name: after
+ in: query
+ description: Identifier for the last event from the previous pagination request.
+ required: false
+ schema:
+ type: string
+ - name: limit
+ in: query
+ description: Number of events to retrieve.
+ required: false
+ schema:
+ type: integer
+ default: 20
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/ListFineTuningJobEventsResponse"
+ x-oaiMeta:
+ name: List fine-tuning events
+ returns: A list of fine-tuning event objects.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine_tuning/jobs/ftjob-abc123/events \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTuningJob.list_events(id="ftjob-abc123", limit=2)
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const list = await openai.fineTuning.jobs.listEvents("ftjob-abc123", { limit: 2 });
+
+ for await (const fineTune of list) {
+ console.log(fineTune);
+ }
+ }
+
+ main();
+ response: |
+ {
+ "object": "list",
+ "data": [
+ {
+ "object": "fine_tuning.job.event",
+ "id": "ft-event-ddTJfwuMVpfLXseO0Am0Gqjm",
+ "created_at": 1692407401,
+ "level": "info",
+ "message": "Fine tuning job successfully completed",
+ "data": null,
+ "type": "message"
+ },
+ {
+ "object": "fine_tuning.job.event",
+ "id": "ft-event-tyiGuB72evQncpH87xe505Sv",
+ "created_at": 1692407400,
+ "level": "info",
+ "message": "New fine-tuned model created: ft:gpt-3.5-turbo:openai::7p4lURel",
+ "data": null,
+ "type": "message"
+ }
+ ],
+ "has_more": true
+ }
+
+ /fine_tuning/jobs/{fine_tuning_job_id}/cancel:
+ post:
+ operationId: cancelFineTuningJob
+ tags:
+ - Fine-tuning
+ summary: |
+ Immediately cancel a fine-tune job.
+ parameters:
+ - in: path
+ name: fine_tuning_job_id
+ required: true
+ schema:
+ type: string
+ example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
+ description: |
+ The ID of the fine-tuning job to cancel.
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/FineTuningJob"
+ x-oaiMeta:
+ name: Cancel fine-tuning
+ returns: The cancelled [fine-tuning](/docs/api-reference/fine-tuning/object) object.
+ examples:
+ request:
+ curl: |
+ curl -X POST https://api.openai.com/v1/fine_tuning/jobs/ftjob-abc123/cancel \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTuningJob.cancel("ftjob-abc123")
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const fineTune = await openai.fineTuning.jobs.cancel("ftjob-abc123");
+
+ console.log(fineTune);
+ }
+ main();
+ response: |
+ {
+ "object": "fine_tuning.job",
+ "id": "ftjob-abc123",
+ "model": "gpt-3.5-turbo-0613",
+ "created_at": 1689376978,
+ "fine_tuned_model": null,
+ "organization_id": "org-123",
+ "result_files": [],
+ "hyperparameters": {
+ "n_epochs": "auto"
+ },
+ "status": "cancelled",
+ "validation_file": "file-abc123",
+ "training_file": "file-abc123"
+ }
+
+ /fine-tunes:
+ post:
+ operationId: createFineTune
+ deprecated: true
+ tags:
+ - Fine-tunes
+ summary: |
+ Creates a job that fine-tunes a specified model from a given dataset.
+
+ Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.
+
+ [Learn more about fine-tuning](/docs/guides/legacy-fine-tuning)
+ requestBody:
+ required: true
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/CreateFineTuneRequest"
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/FineTune"
+ x-oaiMeta:
+ name: Create fine-tune
+ returns: A [fine-tune](/docs/api-reference/fine-tunes/object) object.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine-tunes \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
+ -d '{
+ "training_file": "file-abc123"
+ }'
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTune.create(training_file="file-abc123")
+ node.js: |
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const fineTune = await openai.fineTunes.create({
+ training_file: "file-abc123"
+ });
+
+ console.log(fineTune);
+ }
+
+ main();
+ response: |
+ {
+ "id": "ft-AF1WoRqd3aJAHsqc9NY7iL8F",
+ "object": "fine-tune",
+ "model": "curie",
+ "created_at": 1614807352,
+ "events": [
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807352,
+ "level": "info",
+ "message": "Job enqueued. Waiting for jobs ahead to complete. Queue number: 0."
+ }
+ ],
+ "fine_tuned_model": null,
+ "hyperparams": {
+ "batch_size": 4,
+ "learning_rate_multiplier": 0.1,
+ "n_epochs": 4,
+ "prompt_loss_weight": 0.1
+ },
+ "organization_id": "org-123",
+ "result_files": [],
+ "status": "pending",
+ "validation_files": [],
+ "training_files": [
+ {
+ "id": "file-abc123",
+ "object": "file",
+ "bytes": 1547276,
+ "created_at": 1610062281,
+ "filename": "my-data-train.jsonl",
+ "purpose": "fine-tune-train"
+ }
+ ],
+ "updated_at": 1614807352
+ }
+ get:
+ operationId: listFineTunes
+ deprecated: true
+ tags:
+ - Fine-tunes
+ summary: |
+ List your organization's fine-tuning jobs
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/ListFineTunesResponse"
+ x-oaiMeta:
+ name: List fine-tunes
+ returns: A list of [fine-tune](/docs/api-reference/fine-tunes/object) objects.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine-tunes \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTune.list()
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const list = await openai.fineTunes.list();
+
+ for await (const fineTune of list) {
+ console.log(fineTune);
+ }
+ }
+
+ main();
+ response: |
+ {
+ "object": "list",
+ "data": [
+ {
+ "id": "ft-AF1WoRqd3aJAHsqc9NY7iL8F",
+ "object": "fine-tune",
+ "model": "curie",
+ "created_at": 1614807352,
+ "fine_tuned_model": null,
+ "hyperparams": { ... },
+ "organization_id": "org-123",
+ "result_files": [],
+ "status": "pending",
+ "validation_files": [],
+ "training_files": [ { ... } ],
+ "updated_at": 1614807352
+ },
+ { ... },
+ { ... }
+ ]
+ }
+
+ /fine-tunes/{fine_tune_id}:
+ get:
+ operationId: retrieveFineTune
+ deprecated: true
+ tags:
+ - Fine-tunes
+ summary: |
+ Gets info about the fine-tune job.
+
+ [Learn more about fine-tuning](/docs/guides/legacy-fine-tuning)
+ parameters:
+ - in: path
+ name: fine_tune_id
+ required: true
+ schema:
+ type: string
+ example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
+ description: |
+ The ID of the fine-tune job
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/FineTune"
+ x-oaiMeta:
+ name: Retrieve fine-tune
+ returns: The [fine-tune](/docs/api-reference/fine-tunes/object) object with the given ID.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine-tunes/ft-AF1WoRqd3aJAHsqc9NY7iL8F \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTune.retrieve(id="ft-AF1WoRqd3aJAHsqc9NY7iL8F")
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const fineTune = await openai.fineTunes.retrieve("ft-AF1WoRqd3aJAHsqc9NY7iL8F");
+
+ console.log(fineTune);
+ }
+
+ main();
+ response: &fine_tune_example |
+ {
+ "id": "ft-AF1WoRqd3aJAHsqc9NY7iL8F",
+ "object": "fine-tune",
+ "model": "curie",
+ "created_at": 1614807352,
+ "events": [
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807352,
+ "level": "info",
+ "message": "Job enqueued. Waiting for jobs ahead to complete. Queue number: 0."
+ },
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807356,
+ "level": "info",
+ "message": "Job started."
+ },
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807861,
+ "level": "info",
+ "message": "Uploaded snapshot: curie:ft-acmeco-2021-03-03-21-44-20."
+ },
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807864,
+ "level": "info",
+ "message": "Uploaded result files: file-abc123."
+ },
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807864,
+ "level": "info",
+ "message": "Job succeeded."
+ }
+ ],
+ "fine_tuned_model": "curie:ft-acmeco-2021-03-03-21-44-20",
+ "hyperparams": {
+ "batch_size": 4,
+ "learning_rate_multiplier": 0.1,
+ "n_epochs": 4,
+ "prompt_loss_weight": 0.1
+ },
+ "organization_id": "org-123",
+ "result_files": [
+ {
+ "id": "file-abc123",
+ "object": "file",
+ "bytes": 81509,
+ "created_at": 1614807863,
+ "filename": "compiled_results.csv",
+ "purpose": "fine-tune-results"
+ }
+ ],
+ "status": "succeeded",
+ "validation_files": [],
+ "training_files": [
+ {
+ "id": "file-abc123",
+ "object": "file",
+ "bytes": 1547276,
+ "created_at": 1610062281,
+ "filename": "my-data-train.jsonl",
+ "purpose": "fine-tune-train"
+ }
+ ],
+ "updated_at": 1614807865
+ }
+
+ /fine-tunes/{fine_tune_id}/cancel:
+ post:
+ operationId: cancelFineTune
+ deprecated: true
+ tags:
+ - Fine-tunes
+ summary: |
+ Immediately cancel a fine-tune job.
+ parameters:
+ - in: path
+ name: fine_tune_id
+ required: true
+ schema:
+ type: string
+ example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
+ description: |
+ The ID of the fine-tune job to cancel
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/FineTune"
+ x-oaiMeta:
+ name: Cancel fine-tune
+ returns: The cancelled [fine-tune](/docs/api-reference/fine-tunes/object) object.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine-tunes/ft-AF1WoRqd3aJAHsqc9NY7iL8F/cancel \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTune.cancel(id="ft-AF1WoRqd3aJAHsqc9NY7iL8F")
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const fineTune = await openai.fineTunes.cancel("ft-AF1WoRqd3aJAHsqc9NY7iL8F");
+
+ console.log(fineTune);
+ }
+ main();
+ response: |
+ {
+ "id": "ft-xhrpBbvVUzYGo8oUO1FY4nI7",
+ "object": "fine-tune",
+ "model": "curie",
+ "created_at": 1614807770,
+ "events": [ { ... } ],
+ "fine_tuned_model": null,
+ "hyperparams": { ... },
+ "organization_id": "org-123",
+ "result_files": [],
+ "status": "cancelled",
+ "validation_files": [],
+ "training_files": [
+ {
+ "id": "file-abc123",
+ "object": "file",
+ "bytes": 1547276,
+ "created_at": 1610062281,
+ "filename": "my-data-train.jsonl",
+ "purpose": "fine-tune-train"
+ }
+ ],
+ "updated_at": 1614807789
+ }
+
+ /fine-tunes/{fine_tune_id}/events:
+ get:
+ operationId: listFineTuneEvents
+ deprecated: true
+ tags:
+ - Fine-tunes
+ summary: |
+ Get fine-grained status updates for a fine-tune job.
+ parameters:
+ - in: path
+ name: fine_tune_id
+ required: true
+ schema:
+ type: string
+ example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
+ description: |
+ The ID of the fine-tune job to get events for.
+ - in: query
+ name: stream
+ required: false
+ schema:
+ type: boolean
+ default: false
+ description: |
+ Whether to stream events for the fine-tune job. If set to true,
+ events will be sent as data-only
+ [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
+ as they become available. The stream will terminate with a
+ `data: [DONE]` message when the job is finished (succeeded, cancelled,
+ or failed).
+
+ If set to false, only events generated so far will be returned.
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/ListFineTuneEventsResponse"
+ x-oaiMeta:
+ name: List fine-tune events
+ returns: A list of fine-tune event objects.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/fine-tunes/ft-AF1WoRqd3aJAHsqc9NY7iL8F/events \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.FineTune.list_events(id="ft-AF1WoRqd3aJAHsqc9NY7iL8F")
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const fineTune = await openai.fineTunes.listEvents("ft-AF1WoRqd3aJAHsqc9NY7iL8F");
+
+ console.log(fineTune);
+ }
+ main();
+ response: |
+ {
+ "object": "list",
+ "data": [
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807352,
+ "level": "info",
+ "message": "Job enqueued. Waiting for jobs ahead to complete. Queue number: 0."
+ },
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807356,
+ "level": "info",
+ "message": "Job started."
+ },
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807861,
+ "level": "info",
+ "message": "Uploaded snapshot: curie:ft-acmeco-2021-03-03-21-44-20."
+ },
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807864,
+ "level": "info",
+ "message": "Uploaded result files: file-abc123"
+ },
+ {
+ "object": "fine-tune-event",
+ "created_at": 1614807864,
+ "level": "info",
+ "message": "Job succeeded."
+ }
+ ]
+ }
+
+ /models:
+ get:
+ operationId: listModels
+ tags:
+ - Models
+ summary: Lists the currently available models, and provides basic information about each one such as the owner and availability.
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/ListModelsResponse"
+ x-oaiMeta:
+ name: List models
+ returns: A list of [model](/docs/api-reference/models/object) objects.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/models \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.Model.list()
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const list = await openai.models.list();
+
+ for await (const model of list) {
+ console.log(model);
+ }
+ }
+ main();
+ response: |
+ {
+ "object": "list",
+ "data": [
+ {
+ "id": "model-id-0",
+ "object": "model",
+ "created": 1686935002,
+ "owned_by": "organization-owner"
+ },
+ {
+ "id": "model-id-1",
+ "object": "model",
+ "created": 1686935002,
+ "owned_by": "organization-owner"
+ },
+ {
+ "id": "model-id-2",
+ "object": "model",
+ "created": 1686935002,
+ "owned_by": "openai"
+ }
+ ]
+ }
+
+ /models/{model}:
+ get:
+ operationId: retrieveModel
+ tags:
+ - Models
+ summary: Retrieves a model instance, providing basic information about the model such as the owner and permissioning.
+ parameters:
+ - in: path
+ name: model
+ required: true
+ schema:
+ type: string
+ # ideally this will be an actual ID, so this will always work from browser
+ example: gpt-3.5-turbo
+ description: The ID of the model to use for this request
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/Model"
+ x-oaiMeta:
+ name: Retrieve model
+ returns: The [model](/docs/api-reference/models/object) object matching the specified ID.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/models/VAR_model_id \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.Model.retrieve("VAR_model_id")
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const model = await openai.models.retrieve("gpt-3.5-turbo");
+
+ console.log(model);
+ }
+
+ main();
+ response: &retrieve_model_response |
+ {
+ "id": "VAR_model_id",
+ "object": "model",
+ "created": 1686935002,
+ "owned_by": "openai"
+ }
+ delete:
+ operationId: deleteModel
+ tags:
+ - Models
+ summary: Delete a fine-tuned model. You must have the Owner role in your organization to delete a model.
+ parameters:
+ - in: path
+ name: model
+ required: true
+ schema:
+ type: string
+ example: ft:gpt-3.5-turbo:acemeco:suffix:abc123
+ description: The model to delete
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/DeleteModelResponse"
+ x-oaiMeta:
+ name: Delete fine-tune model
+ returns: Deletion status.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/models/ft:gpt-3.5-turbo:acemeco:suffix:abc123 \
+ -X DELETE \
+ -H "Authorization: Bearer $OPENAI_API_KEY"
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.Model.delete("ft:gpt-3.5-turbo:acemeco:suffix:abc123")
+ node.js: |-
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const model = await openai.models.del("ft:gpt-3.5-turbo:acemeco:suffix:abc123");
+
+ console.log(model);
+ }
+ main();
+ response: |
+ {
+ "id": "ft:gpt-3.5-turbo:acemeco:suffix:abc123",
+ "object": "model",
+ "deleted": true
+ }
+
+ /moderations:
+ post:
+ operationId: createModeration
+ tags:
+ - Moderations
+ summary: Classifies if text violates OpenAI's Content Policy
+ requestBody:
+ required: true
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/CreateModerationRequest"
+ responses:
+ "200":
+ description: OK
+ content:
+ application/json:
+ schema:
+ $ref: "#/components/schemas/CreateModerationResponse"
+ x-oaiMeta:
+ name: Create moderation
+ returns: A [moderation](/docs/api-reference/moderations/object) object.
+ examples:
+ request:
+ curl: |
+ curl https://api.openai.com/v1/moderations \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
+ -d '{
+ "input": "I want to kill them."
+ }'
+ python: |
+ import os
+ import openai
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+ openai.Moderation.create(
+ input="I want to kill them.",
+ )
+ node.js: |
+ import OpenAI from "openai";
+
+ const openai = new OpenAI();
+
+ async function main() {
+ const moderation = await openai.moderations.create({ input: "I want to kill them." });
+
+ console.log(moderation);
+ }
+ main();
+ response: &moderation_example |
+ {
+ "id": "modr-XXXXX",
+ "model": "text-moderation-005",
+ "results": [
+ {
+ "flagged": true,
+ "categories": {
+ "sexual": false,
+ "hate": false,
+ "harassment": false,
+ "self-harm": false,
+ "sexual/minors": false,
+ "hate/threatening": false,
+ "violence/graphic": false,
+ "self-harm/intent": false,
+ "self-harm/instructions": false,
+ "harassment/threatening": true,
+ "violence": true
+ },
+ "category_scores": {
+ "sexual": 1.2282071e-06,
+ "hate": 0.010696256,
+ "harassment": 0.29842457,
+ "self-harm": 1.5236925e-08,
+ "sexual/minors": 5.7246268e-08,
+ "hate/threatening": 0.0060676364,
+ "violence/graphic": 4.435014e-06,
+ "self-harm/intent": 8.098441e-10,
+ "self-harm/instructions": 2.8498655e-11,
+ "harassment/threatening": 0.63055265,
+ "violence": 0.99011886
+ }
+ }
+ ]
+ }
+
+ components:
+ securitySchemes:
+ ApiKeyAuth:
+ type: http
+ scheme: "bearer"
+
+ schemas:
+ Error:
+ type: object
+ properties:
+ code:
+ type: string
+ nullable: true
+ message:
+ type: string
+ nullable: false
+ param:
+ type: string
+ nullable: true
+ type:
+ type: string
+ nullable: false
+ required:
+ - type
+ - message
+ - param
+ - code
+
+ ErrorResponse:
+ type: object
+ properties:
+ error:
+ $ref: "#/components/schemas/Error"
+ required:
+ - error
+
+ ListModelsResponse:
+ type: object
+ properties:
+ object:
+ type: string
+ data:
+ type: array
+ items:
+ $ref: "#/components/schemas/Model"
+ required:
+ - object
+ - data
+
+ DeleteModelResponse:
+ type: object
+ properties:
+ id:
+ type: string
+ deleted:
+ type: boolean
+ object:
+ type: string
+ required:
+ - id
+ - object
+ - deleted
+
+
+ CreateCompletionRequest:
+ type: object
+ properties:
+ model:
+ description: &model_description |
+ ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
+ anyOf:
+ - type: string
+ - type: string
+ enum:
+ [
+ "babbage-002",
+ "davinci-002",
+ "gpt-3.5-turbo-instruct",
+ "text-davinci-003",
+ "text-davinci-002",
+ "text-davinci-001",
+ "code-davinci-002",
+ "text-curie-001",
+ "text-babbage-001",
+ "text-ada-001",
+ ]
+ x-oaiTypeLabel: string
+ prompt:
+ description: &completions_prompt_description |
+ The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
+
+ Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.
+ default: "<|endoftext|>"
+ nullable: true
+ oneOf:
+ - type: string
+ default: ""
+ example: "This is a test."
+ - type: array
+ items:
+ type: string
+ default: ""
+ example: "This is a test."
+ - type: array
+ minItems: 1
+ items:
+ type: integer
+ example: "[1212, 318, 257, 1332, 13]"
+ - type: array
+ minItems: 1
+ items:
+ type: array
+ minItems: 1
+ items:
+ type: integer
+ example: "[[1212, 318, 257, 1332, 13]]"
+ best_of:
+ type: integer
+ default: 1
+ minimum: 0
+ maximum: 20
+ nullable: true
+ description: &completions_best_of_description |
+ Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed.
+
+ When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return – `best_of` must be greater than `n`.
+
+ **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.
+ echo:
+ type: boolean
+ default: false
+ nullable: true
+ description: &completions_echo_description >
+ Echo back the prompt in addition to the completion
+ frequency_penalty:
+ type: number
+ default: 0
+ minimum: -2
+ maximum: 2
+ nullable: true
+ description: &completions_frequency_penalty_description |
+ Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
+
+ [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)
+ logit_bias: &completions_logit_bias
+ type: object
+ x-oaiTypeLabel: map
+ default: null
+ nullable: true
+ additionalProperties:
+ type: integer
+ description: &completions_logit_bias_description |
+ Modify the likelihood of specified tokens appearing in the completion.
+
+ Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
+
+ As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
+ logprobs: &completions_logprobs_configuration
+ type: integer
+ minimum: 0
+ maximum: 5
+ default: null
+ nullable: true
+ description: &completions_logprobs_description |
+ Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response.
+
+ The maximum value for `logprobs` is 5.
+ max_tokens:
+ type: integer
+ minimum: 0
+ default: 16
+ example: 16
+ nullable: true
+ description: &completions_max_tokens_description |
+ The maximum number of [tokens](/tokenizer) to generate in the completion.
+
+ The token count of your prompt plus `max_tokens` cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
+ n:
+ type: integer
+ minimum: 1
+ maximum: 128
+ default: 1
+ example: 1
+ nullable: true
+ description: &completions_completions_description |
+ How many completions to generate for each prompt.
+
+ **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.
+ presence_penalty:
+ type: number
+ default: 0
+ minimum: -2
+ maximum: 2
+ nullable: true
+ description: &completions_presence_penalty_description |
+ Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
+
+ [See more information about frequency and presence penalties.](/docs/guides/gpt/parameter-details)
+ stop:
+ description: &completions_stop_description >
+ Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
+ default: null
+ nullable: true
+ oneOf:
+ - type: string
+ default: <|endoftext|>
+ example: "\n"
+ nullable: true
+ - type: array
+ minItems: 1
+ maxItems: 4
+ items:
+ type: string
+ example: '["\n"]'
+ stream:
+ description: >
+ Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
+ as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
+ type: boolean
+ nullable: true
+ default: false
+ suffix:
+ description: The suffix that comes after a completion of inserted text.
+ default: null
+ nullable: true
+ type: string
+ example: "test."
+ temperature:
+ type: number
+ minimum: 0
+ maximum: 2
+ default: 1
+ example: 1
+ nullable: true
+ description: &completions_temperature_description |
+ What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
+
+ We generally recommend altering this or `top_p` but not both.
+ top_p:
+ type: number
+ minimum: 0
+ maximum: 1
+ default: 1
+ example: 1
+ nullable: true
+ description: &completions_top_p_description |
+ An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
+
+ We generally recommend altering this or `temperature` but not both.
+ user: &end_user_param_configuration
+ type: string
+ example: user-1234
+ description: |
+ A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
+ required:
+ - model
+ - prompt
+
+ CreateCompletionResponse:
+ type: object
+ description: |
+ Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).
+ properties:
+ id:
+ type: string
+ description: A unique identifier for the completion.
+ choices:
+ type: array
+ description: The list of completion choices the model generated for the input prompt.
+ items:
+ type: object
+ required:
+ - finish_reason
+ - index
+ - logprobs
+ - text
+ properties:
+ finish_reason:
+ type: string
+ description: &completion_finish_reason_description |
+ The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence,
+ `length` if the maximum number of tokens specified in the request was reached,
+ or `content_filter` if content was omitted due to a flag from our content filters.
+ enum: ["stop", "length", "content_filter"]
+ index:
+ type: integer
+ logprobs:
+ type: object
+ nullable: true
+ properties:
+ text_offset:
+ type: array
+ items:
+ type: integer
+ token_logprobs:
+ type: array
+ items:
+ type: number
+ tokens:
+ type: array
+ items:
+ type: string
+ top_logprobs:
+ type: array
+ items:
+ type: object
+ additionalProperties:
+ type: number
+ text:
+ type: string
+ created:
+ type: integer
+ description: The Unix timestamp (in seconds) of when the completion was created.
+ model:
+ type: string
+ description: The model used for completion.
+ object:
+ type: string
+ description: The object type, which is always "text_completion"
+ usage:
+ $ref: "#/components/schemas/CompletionUsage"
+ required:
+ - id
+ - object
+ - created
+ - model
+ - choices
+ x-oaiMeta:
+ name: The completion object
+ legacy: true
+ example: |
+ {
+ "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
+ "object": "text_completion",
+ "created": 1589478378,
+ "model": "gpt-3.5-turbo",
+ "choices": [
+ {
+ "text": "\n\nThis is indeed a test",
+ "index": 0,
+ "logprobs": null,
+ "finish_reason": "length"
+ }
+ ],
+ "usage": {
+ "prompt_tokens": 5,
+ "completion_tokens": 7,
+ "total_tokens": 12
+ }
+ }
+
+ ChatCompletionRequestMessage:
+ type: object
+ properties:
+ content:
+ type: string
+ nullable: true
+ description: The contents of the message. `content` is required for all messages, and may be null for assistant messages with function calls.
+ function_call:
+ type: object
+ description: The name and arguments of a function that should be called, as generated by the model.
+ properties:
+ arguments:
+ type: string
+ description: The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
+ name:
+ type: string
+ description: The name of the function to call.
+ required:
+ - arguments
+ - name
+ name:
+ type: string
+ description: The name of the author of this message. `name` is required if role is `function`, and it should be the name of the function whose response is in the `content`. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.
+ role:
+ type: string
+ enum: ["system", "user", "assistant", "function"]
+ description: The role of the message's author. One of `system`, `user`, `assistant`, or `function`.
+ required:
+ - content
+ - role
+
+
+ type: object
+ description: "The parameters the function accepts, described as a JSON Schema object. See the [guide](/docs/guides/gpt/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.\n\nTo describe a function that accepts no parameters, provide the value `{\"type\": \"object\", \"properties\": {}}`."
+ additionalProperties: true
+
+ ChatCompletionFunctions:
+ type: object
+ properties:
+ description:
+ type: string
+ description: A description of what the function does, used by the model to choose when and how to call the function.
+ name:
+ type: string
+ description: The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
+ parameters:
+ $ref: "#/components/schemas/ChatCompletionFunctionParameters"
+ required:
+ - name
+ - parameters
+
+ ChatCompletionFunctionCallOption:
+ type: object
+ properties:
+ name:
+ type: string
+ description: The name of the function to call.
+ required:
+ - name
+
+ ChatCompletionResponseMessage:
+ type: object
+ description: A chat completion message generated by the model.
+ properties:
+ content:
+ type: string
+ description: The contents of the message.
+ nullable: true
+ function_call:
+ type: object
+ description: The name and arguments of a function that should be called, as generated by the model.
+ properties:
+ arguments:
+ type: string
+ description: The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
+ name:
+ type: string
+ description: The name of the function to call.
+ required:
+ - name
+ - arguments
+ role:
+ type: string
+ enum: ["system", "user", "assistant", "function"]
+ description: The role of the author of this message.
+ required:
+ - role
+ - content
+
+ ChatCompletionStreamResponseDelta:
+ type: object
+ description: A chat completion delta generated by streamed model responses.
+ properties:
+ content:
+ type: string
+ description: The contents of the chunk message.
+ nullable: true
+ function_call:
+ type: object
+ description: The name and arguments of a function that should be called, as generated by the model.
+ properties:
+ arguments:
+ type: string
+ description: The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
+ name:
+ type: string
+ description: The name of the function to call.
+ role:
+ type: string
+ enum: ["system", "user", "assistant", "function"]
+ description: The role of the author of this message.
+
+ CreateChatCompletionRequest:
+ type: object
+ properties:
+ messages:
+ description: A list of messages comprising the conversation so far. [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
+ type: array
+ minItems: 1
+ items:
+ $ref: "#/components/schemas/ChatCompletionRequestMessage"
+ model:
+ description: ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
+ example: "gpt-3.5-turbo"
+ anyOf:
+ - type: string
+ - type: string
+ enum:
+ [
+ "gpt-4",
+ "gpt-4-0314",
+ "gpt-4-0613",
+ "gpt-4-32k",
+ "gpt-4-32k-0314",
+ "gpt-4-32k-0613",
+ "gpt-3.5-turbo",
+ "gpt-3.5-turbo-16k",
+ "gpt-3.5-turbo-0301",
+ "gpt-3.5-turbo-0613",
+ "gpt-3.5-turbo-16k-0613",
+ ]
+ x-oaiTypeLabel: string
+ frequency_penalty:
+ type: number
+ default: 0
+ minimum: -2
+ maximum: 2
+ nullable: true
+ description: *completions_frequency_penalty_description
+ function_call:
+ description: >
+ Controls how the model calls functions. "none" means the model will not call a function and instead generates a message. "auto" means the model can pick between generating a message or calling a function. Specifying a particular function via `{"name": "my_function"}` forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present.
+ oneOf:
+ - type: string
+ enum: [none, auto]
+ - $ref: "#/components/schemas/ChatCompletionFunctionCallOption"
+ functions:
+ description: A list of functions the model may generate JSON inputs for.
+ type: array
+ minItems: 1
+ maxItems: 128
+ items:
+ $ref: "#/components/schemas/ChatCompletionFunctions"
+ logit_bias:
+ type: object
+ x-oaiTypeLabel: map
+ default: null
+ nullable: true
+ additionalProperties:
+ type: integer
+ description: |
+ Modify the likelihood of specified tokens appearing in the completion.
+
+ Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
+ max_tokens:
+ description: |
+ The maximum number of [tokens](/tokenizer) to generate in the chat completion.
+
+ The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
+ default: inf
+ type: integer
+ nullable: true
+ n:
+ type: integer
+ minimum: 1
+ maximum: 128
+ default: 1
+ example: 1
+ nullable: true
+ description: How many chat completion choices to generate for each input message.
+ presence_penalty:
+ type: number
+ default: 0
+ minimum: -2
+ maximum: 2
+ nullable: true
+ description: *completions_presence_penalty_description
+ stop:
+ description: |
+ Up to 4 sequences where the API will stop generating further tokens.
+ default: null
+ oneOf:
+ - type: string
+ nullable: true
+ - type: array
+ minItems: 1
+ maxItems: 4
+ items:
+ type: string
+ stream:
+ description: >
+ If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
+ as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
+ type: boolean
+ nullable: true
+ default: false
+ temperature:
+ type: number
+ minimum: 0
+ maximum: 2
+ default: 1
+ example: 1
+ nullable: true
+ description: *completions_temperature_description
+ top_p:
+ type: number
+ minimum: 0
+ maximum: 1
+ default: 1
+ example: 1
+ nullable: true
+ description: *completions_top_p_description
+ user: *end_user_param_configuration
+ required:
+ - model
+ - messages
+
+ CreateChatCompletionResponse:
+ type: object
+ description: Represents a chat completion response returned by the model, based on the provided input.
+ properties:
+ id:
+ type: string
+ description: A unique identifier for the chat completion.
+ choices:
+ type: array
+ description: A list of chat completion choices. Can be more than one if `n` is greater than 1.
+ items:
+ type: object
+ required:
+ - finish_reason
+ - index
+ - message
+ properties:
+ finish_reason:
+ type: string
+ description: &chat_completion_finish_reason_description |
+ The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence,
+ `length` if the maximum number of tokens specified in the request was reached,
+ `content_filter` if content was omitted due to a flag from our content filters,
+ or `function_call` if the model called a function.
+ enum: ["stop", "length", "function_call", "content_filter"]
+ index:
+ type: integer
+ description: The index of the choice in the list of choices.
+ message:
+ $ref: "#/components/schemas/ChatCompletionResponseMessage"
+ created:
+ type: integer
+ description: The Unix timestamp (in seconds) of when the chat completion was created.
+ model:
+ type: string
+ description: The model used for the chat completion.
+ object:
+ type: string
+ description: The object type, which is always `chat.completion`.
+ usage:
+ $ref: "#/components/schemas/CompletionUsage"
+ required:
+ - choices
+ - created
+ - id
+ - model
+ - object
+ x-oaiMeta:
+ name: The chat completion object
+ group: chat
+ example: *chat_completion_example
+
+ CreateChatCompletionFunctionResponse:
+ type: object
+ description: Represents a chat completion response returned by the model, based on the provided input.
+ properties:
+ id:
+ type: string
+ description: A unique identifier for the chat completion.
+ choices:
+ type: array
+ description: A list of chat completion choices. Can be more than one if `n` is greater than 1.
+ items:
+ type: object
+ required:
+ - finish_reason
+ - index
+ - message
+ properties:
+ finish_reason:
+ type: string
+ description:
+ &chat_completion_function_finish_reason_description |
+ The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, or `function_call` if the model called a function.
+ enum: ["stop", "length", "function_call", "content_filter"]
+ index:
+ type: integer
+ description: The index of the choice in the list of choices.
+ message:
+ $ref: "#/components/schemas/ChatCompletionResponseMessage"
+ created:
+ type: integer
+ description: The Unix timestamp (in seconds) of when the chat completion was created.
+ model:
+ type: string
+ description: The model used for the chat completion.
+ object:
+ type: string
+ description: The object type, which is always `chat.completion`.
+ usage:
+ $ref: "#/components/schemas/CompletionUsage"
+ required:
+ - choices
+ - created
+ - id
+ - model
+ - object
+ x-oaiMeta:
+ name: The chat completion object
+ group: chat
+ example: *chat_completion_function_example
+
+ ListPaginatedFineTuningJobsResponse:
+ type: object
+ properties:
+ data:
+ type: array
+ items:
+ $ref: "#/components/schemas/FineTuningJob"
+ has_more:
+ type: boolean
+ object:
+ type: string
+ required:
+ - object
+ - data
+ - has_more
+
+ CreateChatCompletionStreamResponse:
+ type: object
+ description: Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
+ properties:
+ id:
+ type: string
+ description: A unique identifier for the chat completion. Each chunk has the same ID.
+ choices:
+ type: array
+ description: A list of chat completion choices. Can be more than one if `n` is greater than 1.
+ items:
+ type: object
+ required:
+ - delta
+ - finish_reason
+ - index
+ properties:
+ delta:
+ $ref: "#/components/schemas/ChatCompletionStreamResponseDelta"
+ finish_reason:
+ type: string
+ description: *chat_completion_finish_reason_description
+ enum: ["stop", "length", "function_call", "content_filter"]
+ nullable: true
+ index:
+ type: integer
+ description: The index of the choice in the list of choices.
+ created:
+ type: integer
+ description: The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.
+ model:
+ type: string
+ description: The model used to generate the completion.
+ object:
+ type: string
+ description: The object type, which is always `chat.completion.chunk`.
+ required:
+ - choices
+ - created
+ - id
+ - model
+ - object
+ x-oaiMeta:
+ name: The chat completion chunk object
+ group: chat
+ example: *chat_completion_chunk_example
+
+ CreateEditRequest:
+ type: object
+ properties:
+ instruction:
+ description: The instruction that tells the model how to edit the prompt.
+ type: string
+ example: "Fix the spelling mistakes."
+ model:
+ description: ID of the model to use. You can use the `text-davinci-edit-001` or `code-davinci-edit-001` model with this endpoint.
+ example: "text-davinci-edit-001"
+ anyOf:
+ - type: string
+ - type: string
+ enum: ["text-davinci-edit-001", "code-davinci-edit-001"]
+ x-oaiTypeLabel: string
+ input:
+ description: The input text to use as a starting point for the edit.
+ type: string
+ default: ""
+ nullable: true
+ example: "What day of the wek is it?"
+ n:
+ type: integer
+ minimum: 1
+ maximum: 20
+ default: 1
+ example: 1
+ nullable: true
+ description: How many edits to generate for the input and instruction.
+ temperature:
+ type: number
+ minimum: 0
+ maximum: 2
+ default: 1
+ example: 1
+ nullable: true
+ description: *completions_temperature_description
+ top_p:
+ type: number
+ minimum: 0
+ maximum: 1
+ default: 1
+ example: 1
+ nullable: true
+ description: *completions_top_p_description
+ required:
+ - model
+ - instruction
+
+ CreateEditResponse:
+ type: object
+ title: Edit
+ deprecated: true
+ properties:
+ choices:
+ type: array
+ description: A list of edit choices. Can be more than one if `n` is greater than 1.
+ items:
+ type: object
+ required:
+ - text
+ - index
+ - finish_reason
+ properties:
+ finish_reason:
+ type: string
+ description: *completion_finish_reason_description
+ enum: ["stop", "length"]
+ index:
+ type: integer
+ description: The index of the choice in the list of choices.
+ text:
+ type: string
+ description: The edited result.
+ object:
+ type: string
+ description: The object type, which is always `edit`.
+ created:
+ type: integer
+ description: The Unix timestamp (in seconds) of when the edit was created.
+ usage:
+ $ref: "#/components/schemas/CompletionUsage"
+ required:
+ - object
+ - created
+ - choices
+ - usage
+ x-oaiMeta:
+ name: The edit object
+ example: *edit_example
+
3234
+ CreateImageRequest:
3235
+ type: object
3236
+ properties:
3237
+ prompt:
3238
+ description: A text description of the desired image(s). The maximum length is 1000 characters.
3239
+ type: string
3240
+ example: "A cute baby sea otter"
3241
+ n: &images_n
3242
+ type: integer
3243
+ minimum: 1
3244
+ maximum: 10
3245
+ default: 1
3246
+ example: 1
3247
+ nullable: true
3248
+ description: The number of images to generate. Must be between 1 and 10.
3249
+ response_format: &images_response_format
3250
+ type: string
3251
+ enum: ["url", "b64_json"]
3252
+ default: "url"
3253
+ example: "url"
3254
+ nullable: true
3255
+ description: The format in which the generated images are returned. Must be one of `url` or `b64_json`.
3256
+ size: &images_size
3257
+ type: string
3258
+ enum: ["256x256", "512x512", "1024x1024"]
3259
+ default: "1024x1024"
3260
+ example: "1024x1024"
3261
+ nullable: true
3262
+ description: The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`.
3263
+ user: *end_user_param_configuration
3264
+ required:
3265
+ - prompt
3266
+
3267
+ ImagesResponse:
3268
+ properties:
3269
+ created:
3270
+ type: integer
3271
+ data:
3272
+ type: array
3273
+ items:
3274
+ $ref: "#/components/schemas/Image"
3275
+ required:
3276
+ - created
3277
+ - data
3278
+
3279
+ Image:
3280
+ type: object
3281
+ description: Represents the URL or the content of an image generated by the OpenAI API.
3282
+ properties:
3283
+ b64_json:
3284
+ type: string
3285
+ description: The base64-encoded JSON of the generated image, if `response_format` is `b64_json`.
3286
+ url:
3287
+ type: string
3288
+ description: The URL of the generated image, if `response_format` is `url` (default).
3289
+ x-oaiMeta:
3290
+ name: The image object
3291
+ example: |
3292
+ {
3293
+ "url": "..."
3294
+ }
3295
+
3296
+ CreateImageEditRequest:
3297
+ type: object
3298
+ properties:
3299
+ image:
3300
+ description: The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.
3301
+ type: string
3302
+ format: binary
3303
+ prompt:
3304
+ description: A text description of the desired image(s). The maximum length is 1000 characters.
3305
+ type: string
3306
+ example: "A cute baby sea otter wearing a beret"
3307
+ mask:
3308
+ description: An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where `image` should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as `image`.
3309
+ type: string
3310
+ format: binary
3311
+ n: *images_n
3312
+ size: *images_size
3313
+ response_format: *images_response_format
3314
+ user: *end_user_param_configuration
3315
+ required:
3316
+ - prompt
3317
+ - image
3318
+
3319
+ CreateImageVariationRequest:
3320
+ type: object
3321
+ properties:
3322
+ image:
3323
+ description: The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.
3324
+ type: string
3325
+ format: binary
3326
+ n: *images_n
3327
+ response_format: *images_response_format
3328
+ size: *images_size
3329
+ user: *end_user_param_configuration
3330
+ required:
3331
+ - image
3332
+
3333
+ CreateModerationRequest:
3334
+ type: object
3335
+ properties:
3336
+ input:
3337
+ description: The input text to classify
3338
+ oneOf:
3339
+ - type: string
3340
+ default: ""
3341
+ example: "I want to kill them."
3342
+ - type: array
3343
+ items:
3344
+ type: string
3345
+ default: ""
3346
+ example: "I want to kill them."
3347
+ model:
3348
+ description: |
3349
+ Two content moderation models are available: `text-moderation-stable` and `text-moderation-latest`.
3350
+
3351
+ The default is `text-moderation-latest`, which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use `text-moderation-stable`, we will provide advance notice before updating the model. Accuracy of `text-moderation-stable` may be slightly lower than for `text-moderation-latest`.
3352
+ nullable: false
3353
+ default: "text-moderation-latest"
3354
+ example: "text-moderation-stable"
3355
+ anyOf:
3356
+ - type: string
3357
+ - type: string
3358
+ enum: ["text-moderation-latest", "text-moderation-stable"]
3359
+ x-oaiTypeLabel: string
3360
+ required:
3361
+ - input
3362
+
3363
+ CreateModerationResponse:
3364
+ type: object
3365
+ description: Represents a policy-compliance report from OpenAI's content moderation model for a given input.
3366
+ properties:
3367
+ id:
3368
+ type: string
3369
+ description: The unique identifier for the moderation request.
3370
+ model:
3371
+ type: string
3372
+ description: The model used to generate the moderation results.
3373
+ results:
3374
+ type: array
3375
+ description: A list of moderation objects.
3376
+ items:
3377
+ type: object
3378
+ properties:
3379
+ flagged:
3380
+ type: boolean
3381
+ description: Whether the content violates [OpenAI's usage policies](/policies/usage-policies).
3382
+ categories:
3383
+ type: object
3384
+ description: A list of the categories, and whether they are flagged or not.
3385
+ properties:
3386
+ hate:
3387
+ type: boolean
3388
+ description: Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Hateful content aimed at non-protected groups (e.g., chess players) is harassment.
3389
+ hate/threatening:
3390
+ type: boolean
3391
+ description: Hateful content that also includes violence or serious harm towards the targeted group based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
3392
+ harassment:
3393
+ type: boolean
3394
+ description: Content that expresses, incites, or promotes harassing language towards any target.
3395
+ harassment/threatening:
3396
+ type: boolean
3397
+ description: Harassment content that also includes violence or serious harm towards any target.
3398
+ self-harm:
3399
+ type: boolean
3400
+ description: Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
3401
+ self-harm/intent:
3402
+ type: boolean
3403
+ description: Content where the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.
3404
+ self-harm/instructions:
3405
+ type: boolean
3406
+ description: Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.
3407
+ sexual:
3408
+ type: boolean
3409
+ description: Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
3410
+ sexual/minors:
3411
+ type: boolean
3412
+ description: Sexual content that includes an individual who is under 18 years old.
3413
+ violence:
3414
+ type: boolean
3415
+ description: Content that depicts death, violence, or physical injury.
3416
+ violence/graphic:
3417
+ type: boolean
3418
+ description: Content that depicts death, violence, or physical injury in graphic detail.
3419
+ required:
3420
+ - hate
3421
+ - hate/threatening
3422
+ - harassment
3423
+ - harassment/threatening
3424
+ - self-harm
3425
+ - self-harm/intent
3426
+ - self-harm/instructions
3427
+ - sexual
3428
+ - sexual/minors
3429
+ - violence
3430
+ - violence/graphic
3431
+ category_scores:
3432
+ type: object
3433
+ description: A list of the categories along with their scores as predicted by the model.
3434
+ properties:
3435
+ hate:
3436
+ type: number
3437
+ description: The score for the category 'hate'.
3438
+ hate/threatening:
3439
+ type: number
3440
+ description: The score for the category 'hate/threatening'.
3441
+ harassment:
3442
+ type: number
3443
+ description: The score for the category 'harassment'.
3444
+ harassment/threatening:
3445
+ type: number
3446
+ description: The score for the category 'harassment/threatening'.
3447
+ self-harm:
3448
+ type: number
3449
+ description: The score for the category 'self-harm'.
3450
+ self-harm/intent:
3451
+ type: number
3452
+ description: The score for the category 'self-harm/intent'.
3453
+ self-harm/instructions:
3454
+ type: number
3455
+ description: The score for the category 'self-harm/instructions'.
3456
+ sexual:
3457
+ type: number
3458
+ description: The score for the category 'sexual'.
3459
+ sexual/minors:
3460
+ type: number
3461
+ description: The score for the category 'sexual/minors'.
3462
+ violence:
3463
+ type: number
3464
+ description: The score for the category 'violence'.
3465
+ violence/graphic:
3466
+ type: number
3467
+ description: The score for the category 'violence/graphic'.
3468
+ required:
3469
+ - hate
3470
+ - hate/threatening
3471
+ - harassment
3472
+ - harassment/threatening
3473
+ - self-harm
3474
+ - self-harm/intent
3475
+ - self-harm/instructions
3476
+ - sexual
3477
+ - sexual/minors
3478
+ - violence
3479
+ - violence/graphic
3480
+ required:
3481
+ - flagged
3482
+ - categories
3483
+ - category_scores
3484
+ required:
3485
+ - id
3486
+ - model
3487
+ - results
3488
+ x-oaiMeta:
3489
+ name: The moderation object
3490
+ example: *moderation_example
3491
+
3492
+ ListFilesResponse:
3493
+ type: object
3494
+ properties:
3495
+ data:
3496
+ type: array
3497
+ items:
3498
+ $ref: "#/components/schemas/OpenAIFile"
3499
+ object:
3500
+ type: string
3501
+ required:
3502
+ - object
3503
+ - data
3504
+
3505
+ CreateFileRequest:
3506
+ type: object
3507
+ additionalProperties: false
3508
+ properties:
3509
+ file:
3510
+ description: |
3511
+ The file object (not file name) to be uploaded.
3512
+
3513
+ If the `purpose` is set to "fine-tune", the file will be used for fine-tuning.
3514
+ type: string
3515
+ format: binary
3516
+ purpose:
3517
+ description: |
3518
+ The intended purpose of the uploaded file.
3519
+
3520
+ Use "fine-tune" for [fine-tuning](/docs/api-reference/fine-tuning). This allows us to validate the format of the uploaded file is correct for fine-tuning.
3521
+ type: string
3522
+ required:
3523
+ - file
3524
+ - purpose
3525
+
3526
+ DeleteFileResponse:
3527
+ type: object
3528
+ properties:
3529
+ id:
3530
+ type: string
3531
+ object:
3532
+ type: string
3533
+ deleted:
3534
+ type: boolean
3535
+ required:
3536
+ - id
3537
+ - object
3538
+ - deleted
3539
+
3540
+ CreateFineTuningJobRequest:
3541
+ type: object
3542
+ properties:
3543
+ model:
3544
+ description: |
3545
+ The name of the model to fine-tune. You can select one of the
3546
+ [supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned).
3547
+ example: "gpt-3.5-turbo"
3548
+ anyOf:
3549
+ - type: string
3550
+ - type: string
3551
+ enum: ["babbage-002", "davinci-002", "gpt-3.5-turbo"]
3552
+ x-oaiTypeLabel: string
3553
+ training_file:
3554
+ description: |
3555
+ The ID of an uploaded file that contains training data.
3556
+
3557
+ See [upload file](/docs/api-reference/files/upload) for how to upload a file.
3558
+
3559
+ Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose `fine-tune`.
3560
+
3561
+ See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
3562
+ type: string
3563
+ example: "file-abc123"
3564
+ hyperparameters:
3565
+ type: object
3566
+ description: The hyperparameters used for the fine-tuning job.
3567
+ properties:
3568
+ n_epochs:
3569
+ description: |
3570
+ The number of epochs to train the model for. An epoch refers to one
3571
+ full cycle through the training dataset.
3572
+ oneOf:
3573
+ - type: string
3574
+ enum: [auto]
3575
+ - type: integer
3576
+ minimum: 1
3577
+ maximum: 50
3578
+ default: auto
3579
+ suffix:
3580
+ description: |
3581
+ A string of up to 18 characters that will be added to your fine-tuned model name.
3582
+
3583
+ For example, a `suffix` of "custom-model-name" would produce a model name like `ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel`.
3584
+ type: string
3585
+ minLength: 1
3586
+ maxLength: 40
3587
+ default: null
3588
+ nullable: true
3589
+ validation_file:
3590
+ description: |
3591
+ The ID of an uploaded file that contains validation data.
3592
+
3593
+ If you provide this file, the data is used to generate validation
3594
+ metrics periodically during fine-tuning. These metrics can be viewed in
3595
+ the fine-tuning results file.
3596
+ The same data should not be present in both train and validation files.
3597
+
3598
+ Your dataset must be formatted as a JSONL file. You must upload your file with the purpose `fine-tune`.
3599
+
3600
+ See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
3601
+ type: string
3602
+ nullable: true
3603
+ example: "file-abc123"
3604
+ required:
3605
+ - model
3606
+ - training_file
3607
+
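+ # Illustrative request body matching CreateFineTuningJobRequest (a sketch,
+ # not part of the upstream spec; the file ID and suffix are hypothetical):
+ # {
+ #   "model": "gpt-3.5-turbo",
+ #   "training_file": "file-abc123",
+ #   "hyperparameters": { "n_epochs": "auto" },
+ #   "suffix": "custom-model-name"
+ # }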
3608
+ ListFineTuningJobEventsResponse:
3609
+ type: object
3610
+ properties:
3611
+ data:
3612
+ type: array
3613
+ items:
3614
+ $ref: "#/components/schemas/FineTuningJobEvent"
3615
+ object:
3616
+ type: string
3617
+ required:
3618
+ - object
3619
+ - data
3620
+
3621
+ CreateFineTuneRequest:
3622
+ type: object
3623
+ properties:
3624
+ training_file:
3625
+ description: |
3626
+ The ID of an uploaded file that contains training data.
3627
+
3628
+ See [upload file](/docs/api-reference/files/upload) for how to upload a file.
3629
+
3630
+ Your dataset must be formatted as a JSONL file, where each training
3631
+ example is a JSON object with the keys "prompt" and "completion".
3632
+ Additionally, you must upload your file with the purpose `fine-tune`.
3633
+
3634
+ See the [fine-tuning guide](/docs/guides/legacy-fine-tuning/creating-training-data) for more details.
3635
+ type: string
3636
+ example: "file-abc123"
3637
+ batch_size:
3638
+ description: |
3639
+ The batch size to use for training. The batch size is the number of
3640
+ training examples used to train a single forward and backward pass.
3641
+
3642
+ By default, the batch size will be dynamically configured to be
3643
+ ~0.2% of the number of examples in the training set, capped at 256 -
3644
+ in general, we've found that larger batch sizes tend to work better
3645
+ for larger datasets.
3646
+ default: null
3647
+ type: integer
3648
+ nullable: true
3649
+ classification_betas:
3650
+ description: |
3651
+ If this is provided, we calculate F-beta scores at the specified
3652
+ beta values. The F-beta score is a generalization of F-1 score.
3653
+ This is only used for binary classification.
3654
+
3655
+ With a beta of 1 (i.e. the F-1 score), precision and recall are
3656
+ given the same weight. A larger beta score puts more weight on
3657
+ recall and less on precision. A smaller beta score puts more weight
3658
+ on precision and less on recall.
3659
+ type: array
3660
+ items:
3661
+ type: number
3662
+ example: [0.6, 1, 1.5, 2]
3663
+ default: null
3664
+ nullable: true
3665
+ classification_n_classes:
3666
+ description: |
3667
+ The number of classes in a classification task.
3668
+
3669
+ This parameter is required for multiclass classification.
3670
+ type: integer
3671
+ default: null
3672
+ nullable: true
3673
+ classification_positive_class:
3674
+ description: |
3675
+ The positive class in binary classification.
3676
+
3677
+ This parameter is needed to generate precision, recall, and F1
3678
+ metrics when doing binary classification.
3679
+ type: string
3680
+ default: null
3681
+ nullable: true
3682
+ compute_classification_metrics:
3683
+ description: |
3684
+ If set, we calculate classification-specific metrics such as accuracy
3685
+ and F-1 score using the validation set at the end of every epoch.
3686
+ These metrics can be viewed in the [results file](/docs/guides/legacy-fine-tuning/analyzing-your-fine-tuned-model).
3687
+
3688
+ In order to compute classification metrics, you must provide a
3689
+ `validation_file`. Additionally, you must
3690
+ specify `classification_n_classes` for multiclass classification or
3691
+ `classification_positive_class` for binary classification.
3692
+ type: boolean
3693
+ default: false
3694
+ nullable: true
3695
+ hyperparameters:
3696
+ type: object
3697
+ description: The hyperparameters used for the fine-tuning job.
3698
+ properties:
3699
+ n_epochs:
3700
+ description: |
3701
+ The number of epochs to train the model for. An epoch refers to one
3702
+ full cycle through the training dataset.
3703
+ oneOf:
3704
+ - type: string
3705
+ enum: [auto]
3706
+ - type: integer
3707
+ minimum: 1
3708
+ maximum: 50
3709
+ default: auto
3710
+ learning_rate_multiplier:
3711
+ description: |
3712
+ The learning rate multiplier to use for training.
3713
+ The fine-tuning learning rate is the original learning rate used for
3714
+ pretraining multiplied by this value.
3715
+
3716
+ By default, the learning rate multiplier is 0.05, 0.1, or 0.2
3717
+ depending on final `batch_size` (larger learning rates tend to
3718
+ perform better with larger batch sizes). We recommend experimenting
3719
+ with values in the range 0.02 to 0.2 to see what produces the best
3720
+ results.
3721
+ default: null
3722
+ type: number
3723
+ nullable: true
3724
+ model:
3725
+ description: |
3726
+ The name of the base model to fine-tune. You can select one of "ada",
3727
+ "babbage", "curie", "davinci", or a fine-tuned model created after 2022-04-21 and before 2023-08-22.
3728
+ To learn more about these models, see the
3729
+ [Models](/docs/models) documentation.
3730
+ default: "curie"
3731
+ example: "curie"
3732
+ nullable: true
3733
+ anyOf:
3734
+ - type: string
3735
+ - type: string
3736
+ enum: ["ada", "babbage", "curie", "davinci"]
3737
+ x-oaiTypeLabel: string
3738
+ prompt_loss_weight:
3739
+ description: |
3740
+ The weight to use for loss on the prompt tokens. This controls how
3741
+ much the model tries to learn to generate the prompt (as compared
3742
+ to the completion which always has a weight of 1.0), and can add
3743
+ a stabilizing effect to training when completions are short.
3744
+
3745
+ If prompts are extremely long (relative to completions), it may make
3746
+ sense to reduce this weight so as to avoid over-prioritizing
3747
+ learning the prompt.
3748
+ default: 0.01
3749
+ type: number
3750
+ nullable: true
3751
+ suffix:
3752
+ description: |
3753
+ A string of up to 40 characters that will be added to your fine-tuned model name.
3754
+
3755
+ For example, a `suffix` of "custom-model-name" would produce a model name like `ada:ft-your-org:custom-model-name-2022-02-15-04-21-04`.
3756
+ type: string
3757
+ minLength: 1
3758
+ maxLength: 40
3759
+ default: null
3760
+ nullable: true
3761
+ validation_file:
3762
+ description: |
3763
+ The ID of an uploaded file that contains validation data.
3764
+
3765
+ If you provide this file, the data is used to generate validation
3766
+ metrics periodically during fine-tuning. These metrics can be viewed in
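+ # Illustrative request body matching CreateImageRequest (a sketch, not part
+ # of the upstream spec):
+ # {
+ #   "prompt": "A cute baby sea otter",
+ #   "n": 1,
+ #   "size": "1024x1024",
+ #   "response_format": "url"
+ # }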
3767
+ the [fine-tuning results file](/docs/guides/legacy-fine-tuning/analyzing-your-fine-tuned-model).
3768
+ Your train and validation data should be mutually exclusive.
3769
+
3770
+ Your dataset must be formatted as a JSONL file, where each validation
3771
+ example is a JSON object with the keys "prompt" and "completion".
3772
+ Additionally, you must upload your file with the purpose `fine-tune`.
3773
+
3774
+ See the [fine-tuning guide](/docs/guides/legacy-fine-tuning/creating-training-data) for more details.
3775
+ type: string
3776
+ nullable: true
3777
+ example: "file-abc123"
3778
+ required:
3779
+ - training_file
3780
+
3781
+ ListFineTunesResponse:
3782
+ type: object
3783
+ properties:
3784
+ data:
3785
+ type: array
3786
+ items:
3787
+ $ref: "#/components/schemas/FineTune"
3788
+ object:
3789
+ type: string
3790
+ required:
3791
+ - object
3792
+ - data
3793
+
3794
+ ListFineTuneEventsResponse:
3795
+ type: object
3796
+ properties:
3797
+ data:
3798
+ type: array
3799
+ items:
3800
+ $ref: "#/components/schemas/FineTuneEvent"
3801
+ object:
3802
+ type: string
3803
+ required:
3804
+ - object
3805
+ - data
3806
+
3807
+ CreateEmbeddingRequest:
3808
+ type: object
3809
+ additionalProperties: false
3810
+ properties:
3811
+ input:
3812
+ description: |
3813
+ Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for `text-embedding-ada-002`) and cannot be an empty string. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
3814
+ example: "The quick brown fox jumped over the lazy dog"
3815
+ oneOf:
3816
+ - type: string
3817
+ default: ""
3818
+ example: "This is a test."
3819
+ - type: array
3820
+ minItems: 1
3821
+ items:
3822
+ type: string
3823
+ default: ""
3824
+ example: "This is a test."
3825
+ - type: array
3826
+ minItems: 1
3827
+ items:
3828
+ type: integer
3829
+ example: "[1212, 318, 257, 1332, 13]"
3830
+ - type: array
3831
+ minItems: 1
3832
+ items:
3833
+ type: array
3834
+ minItems: 1
3835
+ items:
3836
+ type: integer
3837
+ example: "[[1212, 318, 257, 1332, 13]]"
3838
+ model:
3839
+ description: *model_description
3840
+ example: "text-embedding-ada-002"
3841
+ anyOf:
3842
+ - type: string
3843
+ - type: string
3844
+ enum: ["text-embedding-ada-002"]
3845
+ x-oaiTypeLabel: string
3846
+ encoding_format:
3847
+ description: "The format to return the embeddings in. Can be either `float` or [`base64`](https://pypi.org/project/pybase64/)."
3848
+ example: "float"
3849
+ default: "float"
3850
+ type: string
3851
+ enum: ["float", "base64"]
3852
+ user: *end_user_param_configuration
3853
+ required:
3854
+ - model
3855
+ - input
3856
+
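+ # Illustrative request body matching CreateEmbeddingRequest (a sketch, not
+ # part of the upstream spec):
+ # {
+ #   "model": "text-embedding-ada-002",
+ #   "input": "The quick brown fox jumped over the lazy dog",
+ #   "encoding_format": "float"
+ # }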
3857
+ CreateEmbeddingResponse:
3858
+ type: object
3859
+ properties:
3860
+ data:
3861
+ type: array
3862
+ description: The list of embeddings generated by the model.
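+ # Illustrative request body matching CreateModerationRequest (a sketch, not
+ # part of the upstream spec):
+ # {
+ #   "input": "I want to kill them.",
+ #   "model": "text-moderation-latest"
+ # }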
3863
+ items:
3864
+ $ref: "#/components/schemas/Embedding"
3865
+ model:
3866
+ type: string
3867
+ description: The name of the model used to generate the embedding.
3868
+ object:
3869
+ type: string
3870
+ description: The object type, which is always "embedding".
3871
+ usage:
3872
+ type: object
3873
+ description: The usage information for the request.
3874
+ properties:
3875
+ prompt_tokens:
3876
+ type: integer
3877
+ description: The number of tokens used by the prompt.
3878
+ total_tokens:
3879
+ type: integer
3880
+ description: The total number of tokens used by the request.
3881
+ required:
3882
+ - prompt_tokens
3883
+ - total_tokens
3884
+ required:
3885
+ - object
3886
+ - model
3887
+ - data
3888
+ - usage
3889
+
3890
+ CreateTranscriptionRequest:
3891
+ type: object
3892
+ additionalProperties: false
3893
+ properties:
3894
+ file:
3895
+ description: |
3896
+ The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
3897
+ type: string
3898
+ x-oaiTypeLabel: file
3899
+ format: binary
3900
+ model:
3901
+ description: |
3902
+ ID of the model to use. Only `whisper-1` is currently available.
3903
+ example: whisper-1
3904
+ anyOf:
3905
+ - type: string
3906
+ - type: string
3907
+ enum: ["whisper-1"]
3908
+ x-oaiTypeLabel: string
3909
+ language:
3910
+ description: |
3911
+ The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
3912
+ type: string
3913
+ prompt:
3914
+ description: |
3915
+ An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
3916
+ type: string
3917
+ response_format:
3918
+ description: |
3919
+ The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
3920
+ type: string
3921
+ enum:
3922
+ - json
3923
+ - text
3924
+ - srt
3925
+ - verbose_json
3926
+ - vtt
3927
+ default: json
3928
+ temperature:
3929
+ description: |
3930
+ The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
3931
+ type: number
3932
+ default: 0
3933
+ required:
3934
+ - file
3935
+ - model
3936
+
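+ # Illustrative multipart form fields matching CreateTranscriptionRequest
+ # (a sketch, not part of the upstream spec; the filename is hypothetical):
+ #   file=@audio.mp3
+ #   model=whisper-1
+ #   response_format=json
+ #   temperature=0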
3937
+ # Note: This does not currently support the non-default response format types.
3938
+ CreateTranscriptionResponse:
3939
+ type: object
3940
+ properties:
3941
+ text:
3942
+ type: string
3943
+ required:
3944
+ - text
3945
+
3946
+ CreateTranslationRequest:
3947
+ type: object
3948
+ additionalProperties: false
3949
+ properties:
3950
+ file:
3951
+ description: |
3952
+ The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
3953
+ type: string
3954
+ x-oaiTypeLabel: file
3955
+ format: binary
3956
+ model:
3957
+ description: |
3958
+ ID of the model to use. Only `whisper-1` is currently available.
3959
+ example: whisper-1
3960
+ anyOf:
3961
+ - type: string
3962
+ - type: string
3963
+ enum: ["whisper-1"]
3964
+ x-oaiTypeLabel: string
3965
+ prompt:
3966
+ description: |
3967
+ An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
3968
+ type: string
3969
+ response_format:
3970
+ description: |
3971
+ The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.
3972
+ type: string
3973
+ default: json
3974
+ temperature:
3975
+ description: |
3976
+ The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
3977
+ type: number
3978
+ default: 0
3979
+ required:
3980
+ - file
3981
+ - model
3982
+
3983
+ # Note: This does not currently support the non-default response format types.
3984
+ CreateTranslationResponse:
3985
+ type: object
3986
+ properties:
3987
+ text:
3988
+ type: string
3989
+ required:
3990
+ - text
3991
+
3992
+ Model:
3993
+ title: Model
3994
+ description: Describes an OpenAI model offering that can be used with the API.
3995
+ properties:
3996
+ id:
3997
+ type: string
3998
+ description: The model identifier, which can be referenced in the API endpoints.
3999
+ created:
4000
+ type: integer
4001
+ description: The Unix timestamp (in seconds) when the model was created.
4002
+ object:
4003
+ type: string
4004
+ description: The object type, which is always "model".
4005
+ owned_by:
4006
+ type: string
4007
+ description: The organization that owns the model.
4008
+ required:
4009
+ - id
4010
+ - object
4011
+ - created
4012
+ - owned_by
4013
+ x-oaiMeta:
4014
+ name: The model object
4015
+ example: *retrieve_model_response
4016
+
4017
+ OpenAIFile:
4018
+ title: OpenAIFile
4019
+ description: |
4020
+ The `File` object represents a document that has been uploaded to OpenAI.
4021
+ properties:
4022
+ id:
4023
+ type: string
4024
+ description: The file identifier, which can be referenced in the API endpoints.
4025
+ bytes:
4026
+ type: integer
4027
+ description: The size of the file in bytes.
4028
+ created_at:
4029
+ type: integer
4030
+ description: The Unix timestamp (in seconds) for when the file was created.
4031
+ filename:
4032
+ type: string
4033
+ description: The name of the file.
4034
+ object:
4035
+ type: string
4036
+ description: The object type, which is always "file".
4037
+ purpose:
4038
+ type: string
4039
+ description: The intended purpose of the file. Currently, only "fine-tune" is supported.
4040
+ status:
4041
+ type: string
4042
+ description: The current status of the file, which can be either `uploaded`, `processed`, `pending`, `error`, `deleting` or `deleted`.
4043
+ status_details:
4044
+ type: string
4045
+ nullable: true
4046
+ description: |
4047
+ Additional details about the status of the file. If the file is in the `error` state, this will include a message describing the error.
4048
+ required:
4049
+ - id
4050
+ - object
4051
+ - bytes
4052
+ - created_at
4053
+ - filename
4054
+ - purpose
4055
+ - status
4056
+ x-oaiMeta:
4057
+ name: The file object
4058
+ example: |
4059
+ {
4060
+ "id": "file-abc123",
4061
+ "object": "file",
4062
+ "bytes": 120000,
4063
+ "created_at": 1677610602,
4064
+ "filename": "my_file.jsonl",
4065
+ "purpose": "fine-tune",
4066
+ "status": "uploaded",
4067
+ "status_details": null
4068
+ }
4069
+ Embedding:
4070
+ type: object
4071
+ description: |
4072
+ Represents an embedding vector returned by embedding endpoint.
4073
+ properties:
4074
+ index:
4075
+ type: integer
4076
+ description: The index of the embedding in the list of embeddings.
4077
+ embedding:
4078
+ type: array
4079
+ description: |
4080
+ The embedding vector, which is a list of floats. The length of the vector depends on the model as listed in the [embedding guide](/docs/guides/embeddings).
4081
+ items:
4082
+ type: number
4083
+ object:
4084
+ type: string
4085
+ description: The object type, which is always "embedding".
4086
+ required:
4087
+ - index
4088
+ - object
4089
+ - embedding
4090
+ x-oaiMeta:
4091
+ name: The embedding object
4092
+ example: |
4093
+ {
4094
+ "object": "embedding",
4095
+ "embedding": [
4096
+ 0.0023064255,
4097
+ -0.009327292,
4098
+              .... (1536 floats total for ada-002)
+              -0.0028842222,
+            ],
+            "index": 0
+          }
+
+    FineTuningJob:
+      type: object
+      title: FineTuningJob
+      description: |
+        The `fine_tuning.job` object represents a fine-tuning job that has been created through the API.
+      properties:
+        id:
+          type: string
+          description: The object identifier, which can be referenced in the API endpoints.
+        created_at:
+          type: integer
+          description: The Unix timestamp (in seconds) for when the fine-tuning job was created.
+        error:
+          type: object
+          nullable: true
+          description: For fine-tuning jobs that have `failed`, this will contain more information on the cause of the failure.
+          properties:
+            code:
+              type: string
+              description: A machine-readable error code.
+            message:
+              type: string
+              description: A human-readable error message.
+            param:
+              type: string
+              description: The parameter that was invalid, usually `training_file` or `validation_file`. This field will be null if the failure was not parameter-specific.
+              nullable: true
+          required:
+            - code
+            - message
+            - param
+        fine_tuned_model:
+          type: string
+          nullable: true
+          description: The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.
+        finished_at:
+          type: integer
+          nullable: true
+          description: The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.
+        hyperparameters:
+          type: object
+          description: The hyperparameters used for the fine-tuning job. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
+          properties:
+            n_epochs:
+              oneOf:
+                - type: string
+                  enum: [auto]
+                - type: integer
+                  minimum: 1
+                  maximum: 50
+              default: auto
+              description: |
+                The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
+
+                "auto" decides the optimal number of epochs based on the size of the dataset. If setting the number manually, we support any number between 1 and 50 epochs.
+          required:
+            - n_epochs
+        model:
+          type: string
+          description: The base model that is being fine-tuned.
+        object:
+          type: string
+          description: The object type, which is always "fine_tuning.job".
+        organization_id:
+          type: string
+          description: The organization that owns the fine-tuning job.
+        result_files:
+          type: array
+          description: The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the [Files API](/docs/api-reference/files/retrieve-contents).
+          items:
+            type: string
+            example: file-abc123
+        status:
+          type: string
+          description: The current status of the fine-tuning job, which can be either `validating_files`, `queued`, `running`, `succeeded`, `failed`, or `cancelled`.
+        trained_tokens:
+          type: integer
+          nullable: true
+          description: The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.
+        training_file:
+          type: string
+          description: The file ID used for training. You can retrieve the training data with the [Files API](/docs/api-reference/files/retrieve-contents).
+        validation_file:
+          type: string
+          nullable: true
+          description: The file ID used for validation. You can retrieve the validation results with the [Files API](/docs/api-reference/files/retrieve-contents).
+      required:
+        - created_at
+        - error
+        - finished_at
+        - fine_tuned_model
+        - hyperparameters
+        - id
+        - model
+        - object
+        - organization_id
+        - result_files
+        - status
+        - trained_tokens
+        - training_file
+        - validation_file
+      x-oaiMeta:
+        name: The fine-tuning job object
+        example: *fine_tuning_example
+
+    FineTuningJobEvent:
+      type: object
+      description: Fine-tuning job event object
+      properties:
+        id:
+          type: string
+        created_at:
+          type: integer
+        level:
+          type: string
+          enum: ["info", "warn", "error"]
+        message:
+          type: string
+        object:
+          type: string
+      required:
+        - id
+        - object
+        - created_at
+        - level
+        - message
+      x-oaiMeta:
+        name: The fine-tuning job event object
+        example: |
+          {
+            "object": "event",
+            "id": "ftevent-abc123",
+            "created_at": 1677610602,
+            "level": "info",
+            "message": "Created fine-tuning job"
+          }
+
+    FineTune:
+      type: object
+      deprecated: true
+      description: |
+        The `FineTune` object represents a legacy fine-tune job that has been created through the API.
+      properties:
+        id:
+          type: string
+          description: The object identifier, which can be referenced in the API endpoints.
+        created_at:
+          type: integer
+          description: The Unix timestamp (in seconds) for when the fine-tuning job was created.
+        events:
+          type: array
+          description: The list of events that have been observed in the lifecycle of the FineTune job.
+          items:
+            $ref: "#/components/schemas/FineTuneEvent"
+        fine_tuned_model:
+          type: string
+          nullable: true
+          description: The name of the fine-tuned model that is being created.
+        hyperparams:
+          type: object
+          description: The hyperparameters used for the fine-tuning job. See the [fine-tuning guide](/docs/guides/legacy-fine-tuning/hyperparameters) for more details.
+          properties:
+            batch_size:
+              type: integer
+              description: |
+                The batch size to use for training. The batch size is the number of
+                training examples used to train a single forward and backward pass.
+            classification_n_classes:
+              type: integer
+              description: |
+                The number of classes to use for computing classification metrics.
+            classification_positive_class:
+              type: string
+              description: |
+                The positive class to use for computing classification metrics.
+            compute_classification_metrics:
+              type: boolean
+              description: |
+                The classification metrics to compute using the validation dataset at the end of every epoch.
+            learning_rate_multiplier:
+              type: number
+              description: |
+                The learning rate multiplier to use for training.
+            n_epochs:
+              type: integer
+              description: |
+                The number of epochs to train the model for. An epoch refers to one
+                full cycle through the training dataset.
+            prompt_loss_weight:
+              type: number
+              description: |
+                The weight to use for loss on the prompt tokens.
+          required:
+            - batch_size
+            - learning_rate_multiplier
+            - n_epochs
+            - prompt_loss_weight
+        model:
+          type: string
+          description: The base model that is being fine-tuned.
+        object:
+          type: string
+          description: The object type, which is always "fine-tune".
+        organization_id:
+          type: string
+          description: The organization that owns the fine-tuning job.
+        result_files:
+          type: array
+          description: The compiled results files for the fine-tuning job.
+          items:
+            $ref: "#/components/schemas/OpenAIFile"
+        status:
+          type: string
+          description: The current status of the fine-tuning job, which can be either `created`, `running`, `succeeded`, `failed`, or `cancelled`.
+        training_files:
+          type: array
+          description: The list of files used for training.
+          items:
+            $ref: "#/components/schemas/OpenAIFile"
+        updated_at:
+          type: integer
+          description: The Unix timestamp (in seconds) for when the fine-tuning job was last updated.
+        validation_files:
+          type: array
+          description: The list of files used for validation.
+          items:
+            $ref: "#/components/schemas/OpenAIFile"
+      required:
+        - created_at
+        - fine_tuned_model
+        - hyperparams
+        - id
+        - model
+        - object
+        - organization_id
+        - result_files
+        - status
+        - training_files
+        - updated_at
+        - validation_files
+      x-oaiMeta:
+        name: The fine-tune object
+        example: *fine_tune_example
+
+    FineTuneEvent:
+      type: object
+      deprecated: true
+      description: Fine-tune event object
+      properties:
+        created_at:
+          type: integer
+        level:
+          type: string
+        message:
+          type: string
+        object:
+          type: string
+      required:
+        - object
+        - created_at
+        - level
+        - message
+      x-oaiMeta:
+        name: The fine-tune event object
+        example: |
+          {
+            "object": "event",
+            "created_at": 1677610602,
+            "level": "info",
+            "message": "Created fine-tune job"
+          }
+
+    CompletionUsage:
+      type: object
+      description: Usage statistics for the completion request.
+      properties:
+        completion_tokens:
+          type: integer
+          description: Number of tokens in the generated completion.
+        prompt_tokens:
+          type: integer
+          description: Number of tokens in the prompt.
+        total_tokens:
+          type: integer
+          description: Total number of tokens used in the request (prompt + completion).
+      required:
+        - prompt_tokens
+        - completion_tokens
+        - total_tokens
+
+security:
+  - ApiKeyAuth: []
+
+x-oaiMeta:
+  groups:
+    # > General Notes
+    # The `groups` section is used to generate the API reference pages and navigation, in the same
+    # order listed below. Additionally, each `group` can have a list of `sections`, each of which
+    # will become a navigation subroute and subsection under the group. Each section has:
+    #  - `type`: Currently, either an `endpoint` or `object`, depending on how the section needs to
+    #            be rendered
+    #  - `key`: The reference key that can be used to look up the section definition
+    #  - `path`: The path (url) of the section, which is used to generate the navigation link.
+    #
+    # > Each `object` section maps to a schema component, and the following fields are read for rendering:
+    #  - `x-oaiMeta.name`: The name of the object, which will become the section title
+    #  - `x-oaiMeta.example`: The example object, which will be used to generate the example sample (always JSON)
+    #  - `description`: The description of the object, which will be used to generate the section description
+    #
+    # > Each `endpoint` section maps to an operation path, and the following fields are read for rendering:
+    #  - `x-oaiMeta.name`: The name of the endpoint, which will become the section title
+    #  - `x-oaiMeta.examples`: The endpoint examples, which can be an object (meaning a single variation,
+    #                          as for most endpoints) or an array of objects (meaning multiple variations,
+    #                          e.g. the chat completion and completion endpoints, which have streamed and
+    #                          non-streamed examples).
+    #  - `x-oaiMeta.returns`: Text describing what the endpoint returns.
+    #  - `summary`: The summary of the endpoint, which will be used to generate the section description
+    - id: audio
+      title: Audio
+      description: |
+        Learn how to turn audio into text.
+
+        Related guide: [Speech to text](/docs/guides/speech-to-text)
+      sections:
+        - type: endpoint
+          key: createTranscription
+          path: createTranscription
+        - type: endpoint
+          key: createTranslation
+          path: createTranslation
+    - id: chat
+      title: Chat
+      description: |
+        Given a list of messages comprising a conversation, the model will return a response.
+
+        Related guide: [Chat completions](/docs/guides/gpt)
+      sections:
+        - type: object
+          key: CreateChatCompletionResponse
+          path: object
+        - type: object
+          key: CreateChatCompletionStreamResponse
+          path: streaming
+        - type: endpoint
+          key: createChatCompletion
+          path: create
+    - id: completions
+      title: Completions
+      legacy: true
+      description: |
+        Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position. We recommend most users use our Chat completions API. [Learn more](/docs/deprecations/2023-07-06-gpt-and-embeddings)
+
+        Related guide: [Legacy Completions](/docs/guides/gpt/completions-api)
+      sections:
+        - type: object
+          key: CreateCompletionResponse
+          path: object
+        - type: endpoint
+          key: createCompletion
+          path: create
+    - id: embeddings
+      title: Embeddings
+      description: |
+        Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
+
+        Related guide: [Embeddings](/docs/guides/embeddings)
+      sections:
+        - type: object
+          key: Embedding
+          path: object
+        - type: endpoint
+          key: createEmbedding
+          path: create
+    - id: fine-tuning
+      title: Fine-tuning
+      description: |
+        Manage fine-tuning jobs to tailor a model to your specific training data.
+
+        Related guide: [Fine-tune models](/docs/guides/fine-tuning)
+      sections:
+        - type: object
+          key: FineTuningJob
+          path: object
+        - type: endpoint
+          key: createFineTuningJob
+          path: create
+        - type: endpoint
+          key: listPaginatedFineTuningJobs
+          path: list
+        - type: endpoint
+          key: retrieveFineTuningJob
+          path: retrieve
+        - type: endpoint
+          key: cancelFineTuningJob
+          path: cancel
+        - type: object
+          key: FineTuningJobEvent
+          path: event-object
+        - type: endpoint
+          key: listFineTuningEvents
+          path: list-events
+    - id: files
+      title: Files
+      description: |
+        Files are used to upload documents that can be used with features like [fine-tuning](/docs/api-reference/fine-tuning).
+      sections:
+        - type: object
+          key: OpenAIFile
+          path: object
+        - type: endpoint
+          key: listFiles
+          path: list
+        - type: endpoint
+          key: createFile
+          path: create
+        - type: endpoint
+          key: deleteFile
+          path: delete
+        - type: endpoint
+          key: retrieveFile
+          path: retrieve
+        - type: endpoint
+          key: downloadFile
+          path: retrieve-contents
+    - id: images
+      title: Images
+      description: |
+        Given a prompt and/or an input image, the model will generate a new image.
+
+        Related guide: [Image generation](/docs/guides/images)
+      sections:
+        - type: object
+          key: Image
+          path: object
+        - type: endpoint
+          key: createImage
+          path: create
+        - type: endpoint
+          key: createImageEdit
+          path: createEdit
+        - type: endpoint
+          key: createImageVariation
+          path: createVariation
+    - id: models
+      title: Models
+      description: |
+        List and describe the various models available in the API. You can refer to the [Models](/docs/models) documentation to understand what models are available and the differences between them.
+      sections:
+        - type: object
+          key: Model
+          path: object
+        - type: endpoint
+          key: listModels
+          path: list
+        - type: endpoint
+          key: retrieveModel
+          path: retrieve
+        - type: endpoint
+          key: deleteModel
+          path: delete
+    - id: moderations
+      title: Moderations
+      description: |
+        Given some input text, outputs whether the model classifies it as violating OpenAI's content policy.
+
+        Related guide: [Moderations](/docs/guides/moderation)
+      sections:
+        - type: object
+          key: CreateModerationResponse
+          path: object
+        - type: endpoint
+          key: createModeration
+          path: create
+    - id: fine-tunes
+      title: Fine-tunes
+      deprecated: true
+      description: |
+        Manage legacy fine-tuning jobs to tailor a model to your specific training data.
+
+        We recommend transitioning to the updated [fine-tuning API](/docs/guides/fine-tuning).
+      sections:
+        - type: object
+          key: FineTune
+          path: object
+        - type: endpoint
+          key: createFineTune
+          path: create
+        - type: endpoint
+          key: listFineTunes
+          path: list
+        - type: endpoint
+          key: retrieveFineTune
+          path: retrieve
+        - type: endpoint
+          key: cancelFineTune
+          path: cancel
+        - type: object
+          key: FineTuneEvent
+          path: event-object
+        - type: endpoint
+          key: listFineTuneEvents
+          path: list-events
+    - id: edits
+      title: Edits
+      deprecated: true
+      description: |
+        Given a prompt and an instruction, the model will return an edited version of the prompt.
+      sections:
+        - type: object
+          key: CreateEditResponse
+          path: object
+        - type: endpoint
+          key: createEdit
+          path: create