ruby-openai 4.0.0 → 6.3.1

data/README.md CHANGED
@@ -1,14 +1,14 @@
  # Ruby OpenAI
 
- [![Gem Version](https://badge.fury.io/rb/ruby-openai.svg)](https://badge.fury.io/rb/ruby-openai)
+ [![Gem Version](https://img.shields.io/gem/v/ruby-openai.svg)](https://rubygems.org/gems/ruby-openai)
  [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/alexrudall/ruby-openai/blob/main/LICENSE.txt)
  [![CircleCI Build Status](https://circleci.com/gh/alexrudall/ruby-openai.svg?style=shield)](https://circleci.com/gh/alexrudall/ruby-openai)
 
- Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖❤️
+ Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖🩵
 
  Stream text with GPT-4, transcribe and translate audio with Whisper, or create images with DALL·E...
 
- [Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD)
+ [🚢 Hire me](https://peaceterms.com?utm_source=ruby-openai&utm_medium=readme&utm_id=26072023) | [🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 Twitter](https://twitter.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney)
 
  ### Bundler
 
@@ -20,13 +20,17 @@ gem "ruby-openai"
 
  And then execute:
 
+ ```bash
  $ bundle install
+ ```
 
  ### Gem install
 
  Or install with:
 
+ ```bash
  $ gem install ruby-openai
+ ```
 
  and require with:
 
@@ -36,8 +40,8 @@ require "openai"
 
  ## Usage
 
- - Get your API key from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys)
- - If you belong to multiple organizations, you can get your Organization ID from [https://beta.openai.com/account/org-settings](https://beta.openai.com/account/org-settings)
+ - Get your API key from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys)
+ - If you belong to multiple organizations, you can get your Organization ID from [https://platform.openai.com/account/org-settings](https://platform.openai.com/account/org-settings)
 
  ### Quickstart
 
@@ -64,15 +68,27 @@ Then you can create a client like this:
  client = OpenAI::Client.new
  ```
 
+ You can still override the config defaults when making new clients; any options not included will fall back to any global config set with OpenAI.configure. E.g. in this example the organization_id, request_timeout, etc. will fall back to any set globally using OpenAI.configure, with only the access_token overridden:
+
+ ```ruby
+ client = OpenAI::Client.new(access_token: "access_token_goes_here")
+ ```
+
  #### Custom timeout or base URI
 
- The default timeout for any request using this library is 120 seconds. You can change that by passing a number of seconds to `request_timeout` when initializing the client. You can also change the base URI used for all requests, e.g. to use observability tools like [Helicone](https://docs.helicone.ai/quickstart/integrate-in-one-line-of-code):
+ The default timeout for any request using this library is 120 seconds. You can change that by passing a number of seconds to `request_timeout` when initializing the client. You can also change the base URI used for all requests, e.g. to use observability tools like [Helicone](https://docs.helicone.ai/quickstart/integrate-in-one-line-of-code), and add arbitrary other headers, e.g. for [openai-caching-proxy-worker](https://github.com/6/openai-caching-proxy-worker):
 
  ```ruby
  client = OpenAI::Client.new(
    access_token: "access_token_goes_here",
    uri_base: "https://oai.hconeai.com/",
-   request_timeout: 240
+   request_timeout: 240,
+   extra_headers: {
+     "X-Proxy-TTL" => "43200", # For https://github.com/6/openai-caching-proxy-worker#specifying-a-cache-ttl
+     "X-Proxy-Refresh": "true", # For https://github.com/6/openai-caching-proxy-worker#refreshing-the-cache
+     "Helicone-Auth": "Bearer HELICONE_API_KEY", # For https://docs.helicone.ai/getting-started/integration-method/openai-proxy
+     "helicone-stream-force-format" => "true", # Use this with Helicone otherwise streaming drops chunks # https://github.com/alexrudall/ruby-openai/issues/251
+   }
  )
  ```
 
@@ -84,9 +100,51 @@ OpenAI.configure do |config|
    config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional
    config.uri_base = "https://oai.hconeai.com/" # Optional
    config.request_timeout = 240 # Optional
+   config.extra_headers = {
+     "X-Proxy-TTL" => "43200", # For https://github.com/6/openai-caching-proxy-worker#specifying-a-cache-ttl
+     "X-Proxy-Refresh": "true", # For https://github.com/6/openai-caching-proxy-worker#refreshing-the-cache
+     "Helicone-Auth": "Bearer HELICONE_API_KEY" # For https://docs.helicone.ai/getting-started/integration-method/openai-proxy
+   } # Optional
  end
  ```
 
+ #### Verbose Logging
+
+ You can pass [Faraday middleware](https://lostisland.github.io/faraday/#/middleware/index) to the client in a block, e.g. to enable verbose logging with Ruby's [Logger](https://ruby-doc.org/3.2.2/stdlibs/logger/Logger.html):
+
+ ```ruby
+ client = OpenAI::Client.new do |f|
+   f.response :logger, Logger.new($stdout), bodies: true
+ end
+ ```
+
+ #### Azure
+
+ To use the [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/) API, you can configure the gem like this:
+
+ ```ruby
+ OpenAI.configure do |config|
+   config.access_token = ENV.fetch("AZURE_OPENAI_API_KEY")
+   config.uri_base = ENV.fetch("AZURE_OPENAI_URI")
+   config.api_type = :azure
+   config.api_version = "2023-03-15-preview"
+ end
+ ```
+
+ where `AZURE_OPENAI_URI` is e.g. `https://custom-domain.openai.azure.com/openai/deployments/gpt-35-turbo`
+
+ ### Counting Tokens
+
+ OpenAI parses prompt text into [tokens](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them), which are words or portions of words. (These tokens are unrelated to your API access_token.) Counting tokens can help you estimate your [costs](https://openai.com/pricing). It can also help you ensure your prompt text size is within the max-token limits of your model's context window, and choose an appropriate [`max_tokens`](https://platform.openai.com/docs/api-reference/chat/create#chat/create-max_tokens) completion parameter so your response will fit as well.
+
+ To estimate the token-count of your text:
+
+ ```ruby
+ OpenAI.rough_token_count("Your text")
+ ```
+
+ If you need a more accurate count, try [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby).
+
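For a more accurate count, here is a minimal sketch using tiktoken_ruby; the `Tiktoken.encoding_for_model` call follows that gem's README, and the model name is illustrative:

```ruby
require "tiktoken_ruby"

# Encode the text with the tokeniser the target model uses,
# then count the resulting token ids.
enc = Tiktoken.encoding_for_model("gpt-3.5-turbo")
enc.encode("The food was delicious and the waiter...").length
# => exact token count for that model's tokeniser
```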
  ### Models
 
  There are different models that can be used to generate text. For a full list and to retrieve information about a single model:
@@ -99,7 +157,7 @@ client.models.retrieve(id: "text-ada-001")
  #### Examples
 
  - [GPT-4 (limited beta)](https://platform.openai.com/docs/models/gpt-4)
-   - gpt-4
+   - gpt-4 (uses current version)
    - gpt-4-0314
    - gpt-4-32k
  - [GPT-3.5](https://platform.openai.com/docs/models/gpt-3-5)
@@ -111,9 +169,9 @@ client.models.retrieve(id: "text-ada-001")
    - text-babbage-001
    - text-curie-001
 
- ### ChatGPT
+ ### Chat
 
- ChatGPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
+ GPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
 
  ```ruby
  response = client.chat(
@@ -126,9 +184,11 @@ puts response.dig("choices", 0, "message", "content")
  # => "Hello! How may I assist you today?"
  ```
 
- ### Streaming ChatGPT
+ #### Streaming Chat
+
+ [Quick guide to streaming Chat with Rails 7 and Hotwire](https://gist.github.com/alexrudall/cb5ee1e109353ef358adb4e66631799d)
 
- You can stream from the API in realtime, which can be much faster and used to create a more engaging user experience. Pass a [Proc](https://ruby-doc.org/core-2.6/Proc.html) to the `stream` parameter to receive the stream of text chunks as they are generated. Each time one or more chunks is received, the Proc will be called once with each chunk, parsed as a Hash. If OpenAI returns an error, `ruby-openai` will pass that to your proc as a Hash.
+ You can stream from the API in realtime, which can be much faster and used to create a more engaging user experience. Pass a [Proc](https://ruby-doc.org/core-2.6/Proc.html) (or any object with a `#call` method) to the `stream` parameter to receive the stream of completion chunks as they are generated. Each time one or more chunks is received, the proc will be called once with each chunk, parsed as a Hash. If OpenAI returns an error, `ruby-openai` will raise a Faraday error.
 
  ```ruby
  client.chat(
@@ -143,19 +203,139 @@ client.chat(
  # => "Anna is a young woman in her mid-twenties, with wavy chestnut hair that falls to her shoulders..."
  ```
 
- ### Completions
+ Note: OpenAI currently does not report token usage for streaming responses. To count tokens while streaming, try `OpenAI.rough_token_count` or [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby). We think that each call to the stream proc corresponds to a single token, so you can also try counting the number of calls to the proc to get the completion token count.
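As a rough illustration of that proc-counting approach (an estimate only, reusing the `stream:` interface shown above):

```ruby
# Count calls to the stream proc as a proxy for completion tokens.
calls = 0
client.chat(
  parameters: {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Describe a character called Anna!" }],
    stream: proc do |chunk, _bytesize|
      calls += 1
      print chunk.dig("choices", 0, "delta", "content")
    end
  }
)
puts "\n~#{calls} completion tokens"
```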
 
- Hit the OpenAI API for a completion using other GPT-3 models:
+ #### Vision
+
+ You can use the GPT-4 Vision model to generate a description of an image:
 
  ```ruby
- response = client.completions(
+ messages = [
+   { "type": "text", "text": "What’s in this image?"},
+   { "type": "image_url",
+     "image_url": {
+       "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
+     },
+   }
+ ]
+ response = client.chat(
    parameters: {
-     model: "text-davinci-001",
-     prompt: "Once upon a time",
-     max_tokens: 5
+     model: "gpt-4-vision-preview", # Required.
+     messages: [{ role: "user", content: messages}], # Required.
    })
- puts response["choices"].map { |c| c["text"] }
- # => [", there lived a great"]
+ puts response.dig("choices", 0, "message", "content")
+ # => "The image depicts a serene natural landscape featuring a long wooden boardwalk extending straight ahead"
+ ```
+
+ #### JSON Mode
+
+ You can set the response_format to ask for responses in JSON (at least for `gpt-3.5-turbo-1106`):
+
+ ```ruby
+ response = client.chat(
+   parameters: {
+     model: "gpt-3.5-turbo-1106",
+     response_format: { type: "json_object" },
+     messages: [{ role: "user", content: "Hello! Give me some JSON please."}],
+     temperature: 0.7,
+   })
+ puts response.dig("choices", 0, "message", "content")
+ {
+   "name": "John",
+   "age": 30,
+   "city": "New York",
+   "hobbies": ["reading", "traveling", "hiking"],
+   "isStudent": false
+ }
+ ```
+
+ You can stream it as well!
+
+ ```ruby
+ response = client.chat(
+   parameters: {
+     model: "gpt-3.5-turbo-1106",
+     messages: [{ role: "user", content: "Can I have some JSON please?"}],
+     response_format: { type: "json_object" },
+     stream: proc do |chunk, _bytesize|
+       print chunk.dig("choices", 0, "delta", "content")
+     end
+   })
+ {
+   "message": "Sure, please let me know what specific JSON data you are looking for.",
+   "JSON_data": {
+     "example_1": {
+       "key_1": "value_1",
+       "key_2": "value_2",
+       "key_3": "value_3"
+     },
+     "example_2": {
+       "key_4": "value_4",
+       "key_5": "value_5",
+       "key_6": "value_6"
+     }
+   }
+ }
+ ```
+
+ ### Functions
+
+ You can describe and pass in functions and the model will intelligently choose to output a JSON object containing arguments to call them. For example, if you want the model to use your method `get_current_weather` to get the current weather in a given location:
+
+ ```ruby
+ def get_current_weather(location:, unit: "fahrenheit")
+   # use a weather api to fetch weather
+ end
+
+ response =
+   client.chat(
+     parameters: {
+       model: "gpt-3.5-turbo-0613",
+       messages: [
+         {
+           "role": "user",
+           "content": "What is the weather like in San Francisco?",
+         },
+       ],
+       functions: [
+         {
+           name: "get_current_weather",
+           description: "Get the current weather in a given location",
+           parameters: {
+             type: :object,
+             properties: {
+               location: {
+                 type: :string,
+                 description: "The city and state, e.g. San Francisco, CA",
+               },
+               unit: {
+                 type: "string",
+                 enum: %w[celsius fahrenheit],
+               },
+             },
+             required: ["location"],
+           },
+         },
+       ],
+     },
+   )
+
+ message = response.dig("choices", 0, "message")
+
+ if message["role"] == "assistant" && message["function_call"]
+   function_name = message.dig("function_call", "name")
+   args =
+     JSON.parse(
+       message.dig("function_call", "arguments"),
+       { symbolize_names: true },
+     )
+
+   case function_name
+   when "get_current_weather"
+     get_current_weather(**args)
+   end
+ end
+ # => "The weather is nice 🌞"
  ```
 
  ### Edits
@@ -179,12 +359,15 @@ puts response.dig("choices", 0, "text")
  You can use the embeddings endpoint to get a vector of numbers representing an input. You can then compare these vectors for different inputs to efficiently check how similar the inputs are.
 
  ```ruby
- client.embeddings(
+ response = client.embeddings(
    parameters: {
-     model: "babbage-similarity",
+     model: "text-embedding-ada-002",
      input: "The food was delicious and the waiter..."
    }
  )
+
+ puts response.dig("data", 0, "embedding")
+ # => Vector representation of your embedding
  ```
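To compare two embeddings, one standard measure is cosine similarity; here is a minimal pure-Ruby sketch, where `embedding_one` and `embedding_two` are illustrative names for vectors taken from two embeddings responses as above:

```ruby
# Cosine similarity between two embedding vectors (arrays of Floats):
# close to 1.0 for similar inputs, near 0.0 for unrelated ones.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

cosine_similarity(embedding_one, embedding_two)
```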
 
  ### Files
@@ -206,22 +389,22 @@ client.files.content(id: "file-123")
  client.files.delete(id: "file-123")
  ```
 
- ### Fine-tunes
+ ### Finetunes
 
  Upload your fine-tuning data in a `.jsonl` file as above and get its ID:
 
  ```ruby
- response = client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
+ response = client.files.upload(parameters: { file: "path/to/sarcasm.jsonl", purpose: "fine-tune" })
  file_id = JSON.parse(response.body)["id"]
  ```
 
- You can then use this file ID to create a fine-tune model:
+ You can then use this file ID to create a fine-tuning job:
 
  ```ruby
  response = client.finetunes.create(
    parameters: {
      training_file: file_id,
-     model: "ada"
+     model: "gpt-3.5-turbo-0613"
    })
  fine_tune_id = response["id"]
  ```
@@ -252,10 +435,10 @@ response = client.completions(
  response.dig("choices", 0, "text")
  ```
 
- You can delete the fine-tuned model when you are done with it:
+ You can also capture the events for a job:
 
- ```ruby
- client.finetunes.delete(fine_tuned_model: fine_tuned_model)
+ ```ruby
+ client.finetunes.list_events(id: fine_tune_id)
  ```
 
  ### Image Generation
@@ -315,7 +498,7 @@ Whisper is a speech to text model that can be used to generate text based on aud
  The translations API takes as input the audio file in any of the supported languages and transcribes the audio into English.
 
  ```ruby
- response = client.translate(
+ response = client.audio.translate(
    parameters: {
      model: "whisper-1",
      file: File.open("path_to_file", "rb"),
@@ -329,7 +512,7 @@ puts response["text"]
  The transcriptions API takes as input the audio file you want to transcribe and returns the text in the desired output file format.
 
  ```ruby
- response = client.transcribe(
+ response = client.audio.transcribe(
    parameters: {
      model: "whisper-1",
      file: File.open("path_to_file", "rb"),
@@ -338,6 +521,34 @@ puts response["text"]
  # => "Transcription of the text"
  ```
 
+ #### Speech
+
+ The speech API takes as input the text and a voice and returns the content of an audio file you can listen to.
+
+ ```ruby
+ response = client.audio.speech(
+   parameters: {
+     model: "tts-1",
+     input: "This is a speech test!",
+     voice: "alloy"
+   }
+ )
+ File.binwrite('demo.mp3', response)
+ # => mp3 file that plays: "This is a speech test!"
+ ```
+
+ ### Errors
+
+ HTTP errors can be caught like this:
+
+ ```ruby
+ begin
+   OpenAI::Client.new.models.retrieve(id: "text-ada-001")
+ rescue Faraday::Error => e
+   raise "Got a Faraday error: #{e}"
+ end
+ ```
+
  ## Development
 
  After checking out the repo, run `bin/setup` to install dependencies. You can run `bin/console` for an interactive prompt that will allow you to experiment.
@@ -0,0 +1,27 @@
+ module OpenAI
+   class Assistants
+     def initialize(client:)
+       @client = client.beta(assistants: "v1")
+     end
+
+     def list
+       @client.get(path: "/assistants")
+     end
+
+     def retrieve(id:)
+       @client.get(path: "/assistants/#{id}")
+     end
+
+     def create(parameters: {})
+       @client.json_post(path: "/assistants", parameters: parameters)
+     end
+
+     def modify(id:, parameters: {})
+       @client.json_post(path: "/assistants/#{id}", parameters: parameters)
+     end
+
+     def delete(id:)
+       @client.delete(path: "/assistants/#{id}")
+     end
+   end
+ end
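A minimal usage sketch for the new Assistants class above; the `model` and `name` parameters follow OpenAI's Assistants API, and the values are illustrative:

```ruby
client = OpenAI::Client.new
response = client.assistants.create(
  parameters: {
    model: "gpt-4",      # which model the assistant should answer with
    name: "Math Tutor"   # illustrative display name
  }
)
assistant_id = response["id"]
```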
@@ -0,0 +1,19 @@
+ module OpenAI
+   class Audio
+     def initialize(client:)
+       @client = client
+     end
+
+     def transcribe(parameters: {})
+       @client.multipart_post(path: "/audio/transcriptions", parameters: parameters)
+     end
+
+     def translate(parameters: {})
+       @client.multipart_post(path: "/audio/translations", parameters: parameters)
+     end
+
+     def speech(parameters: {})
+       @client.json_post(path: "/audio/speech", parameters: parameters)
+     end
+   end
+ end
data/lib/openai/client.rb CHANGED
@@ -1,56 +1,91 @@
  module OpenAI
    class Client
-     extend OpenAI::HTTP
+     include OpenAI::HTTP
 
-     def initialize(access_token: nil, organization_id: nil, uri_base: nil, request_timeout: nil)
-       OpenAI.configuration.access_token = access_token if access_token
-       OpenAI.configuration.organization_id = organization_id if organization_id
-       OpenAI.configuration.uri_base = uri_base if uri_base
-       OpenAI.configuration.request_timeout = request_timeout if request_timeout
-     end
+     CONFIG_KEYS = %i[
+       api_type
+       api_version
+       access_token
+       organization_id
+       uri_base
+       request_timeout
+       extra_headers
+     ].freeze
+     attr_reader *CONFIG_KEYS, :faraday_middleware
 
-     def chat(parameters: {})
-       OpenAI::Client.json_post(path: "/chat/completions", parameters: parameters)
+     def initialize(config = {}, &faraday_middleware)
+       CONFIG_KEYS.each do |key|
+         # Set instance variables like api_type & access_token. Fall back to global config
+         # if not present.
+         instance_variable_set("@#{key}", config[key] || OpenAI.configuration.send(key))
+       end
+       @faraday_middleware = faraday_middleware
      end
 
-     def completions(parameters: {})
-       OpenAI::Client.json_post(path: "/completions", parameters: parameters)
+     def chat(parameters: {})
+       json_post(path: "/chat/completions", parameters: parameters)
      end
 
      def edits(parameters: {})
-       OpenAI::Client.json_post(path: "/edits", parameters: parameters)
+       json_post(path: "/edits", parameters: parameters)
      end
 
      def embeddings(parameters: {})
-       OpenAI::Client.json_post(path: "/embeddings", parameters: parameters)
+       json_post(path: "/embeddings", parameters: parameters)
+     end
+
+     def audio
+       @audio ||= OpenAI::Audio.new(client: self)
      end
 
      def files
-       @files ||= OpenAI::Files.new
+       @files ||= OpenAI::Files.new(client: self)
      end
 
      def finetunes
-       @finetunes ||= OpenAI::Finetunes.new
+       @finetunes ||= OpenAI::Finetunes.new(client: self)
      end
 
      def images
-       @images ||= OpenAI::Images.new
+       @images ||= OpenAI::Images.new(client: self)
      end
 
      def models
-       @models ||= OpenAI::Models.new
+       @models ||= OpenAI::Models.new(client: self)
+     end
+
+     def assistants
+       @assistants ||= OpenAI::Assistants.new(client: self)
+     end
+
+     def threads
+       @threads ||= OpenAI::Threads.new(client: self)
+     end
+
+     def messages
+       @messages ||= OpenAI::Messages.new(client: self)
+     end
+
+     def runs
+       @runs ||= OpenAI::Runs.new(client: self)
+     end
+
+     def run_steps
+       @run_steps ||= OpenAI::RunSteps.new(client: self)
      end
 
      def moderations(parameters: {})
-       OpenAI::Client.json_post(path: "/moderations", parameters: parameters)
+       json_post(path: "/moderations", parameters: parameters)
      end
 
-     def transcribe(parameters: {})
-       OpenAI::Client.multipart_post(path: "/audio/transcriptions", parameters: parameters)
+     def azure?
+       @api_type&.to_sym == :azure
      end
 
-     def translate(parameters: {})
-       OpenAI::Client.multipart_post(path: "/audio/translations", parameters: parameters)
+     def beta(apis)
+       dup.tap do |client|
+         client.add_headers("OpenAI-Beta": apis.map { |k, v| "#{k}=#{v}" }.join(";"))
+       end
      end
    end
  end
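For reference, the new `beta` helper returns a duplicate client whose requests carry an `OpenAI-Beta` header; the Assistants class above uses it like this:

```ruby
# Subsequent requests on this client send "OpenAI-Beta: assistants=v1".
beta_client = client.beta(assistants: "v1")
```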
@@ -5,5 +5,6 @@ module Ruby
    Error = ::OpenAI::Error
    ConfigurationError = ::OpenAI::ConfigurationError
    Configuration = ::OpenAI::Configuration
+   MiddlewareErrors = ::OpenAI::MiddlewareErrors
  end
 end
data/lib/openai/files.rb CHANGED
@@ -1,33 +1,32 @@
  module OpenAI
    class Files
-     def initialize(access_token: nil, organization_id: nil)
-       OpenAI.configuration.access_token = access_token if access_token
-       OpenAI.configuration.organization_id = organization_id if organization_id
+     def initialize(client:)
+       @client = client
      end
 
      def list
-       OpenAI::Client.get(path: "/files")
+       @client.get(path: "/files")
      end
 
      def upload(parameters: {})
-       validate(file: parameters[:file])
+       validate(file: parameters[:file]) if parameters[:file].include?(".jsonl")
 
-       OpenAI::Client.multipart_post(
+       @client.multipart_post(
          path: "/files",
          parameters: parameters.merge(file: File.open(parameters[:file]))
        )
      end
 
      def retrieve(id:)
-       OpenAI::Client.get(path: "/files/#{id}")
+       @client.get(path: "/files/#{id}")
      end
 
      def content(id:)
-       OpenAI::Client.get(path: "/files/#{id}/content")
+       @client.get(path: "/files/#{id}/content")
      end
 
      def delete(id:)
-       OpenAI::Client.delete(path: "/files/#{id}")
+       @client.delete(path: "/files/#{id}")
      end
 
      private
@@ -1,36 +1,27 @@
  module OpenAI
    class Finetunes
-     def initialize(access_token: nil, organization_id: nil)
-       OpenAI.configuration.access_token = access_token if access_token
-       OpenAI.configuration.organization_id = organization_id if organization_id
+     def initialize(client:)
+       @client = client
      end
 
      def list
-       OpenAI::Client.get(path: "/fine-tunes")
+       @client.get(path: "/fine_tuning/jobs")
      end
 
      def create(parameters: {})
-       OpenAI::Client.json_post(path: "/fine-tunes", parameters: parameters)
+       @client.json_post(path: "/fine_tuning/jobs", parameters: parameters)
      end
 
      def retrieve(id:)
-       OpenAI::Client.get(path: "/fine-tunes/#{id}")
+       @client.get(path: "/fine_tuning/jobs/#{id}")
      end
 
      def cancel(id:)
-       OpenAI::Client.multipart_post(path: "/fine-tunes/#{id}/cancel")
+       @client.json_post(path: "/fine_tuning/jobs/#{id}/cancel", parameters: {})
      end
 
-     def events(id:)
-       OpenAI::Client.get(path: "/fine-tunes/#{id}/events")
-     end
-
-     def delete(fine_tuned_model:)
-       if fine_tuned_model.start_with?("ft-")
-         raise ArgumentError, "Please give a fine_tuned_model name, not a fine-tune ID"
-       end
-
-       OpenAI::Client.delete(path: "/models/#{fine_tuned_model}")
+     def list_events(id:)
+       @client.get(path: "/fine_tuning/jobs/#{id}/events")
      end
    end
  end