ruby-openai 7.3.1 → 7.4.0
- checksums.yaml +4 -4
- data/.gitignore +3 -0
- data/CHANGELOG.md +7 -0
- data/Gemfile +1 -1
- data/Gemfile.lock +8 -10
- data/README.md +447 -265
- data/lib/openai/client.rb +19 -11
- data/lib/openai/compatibility.rb +1 -0
- data/lib/openai/usage.rb +70 -0
- data/lib/openai/version.rb +1 -1
- data/lib/openai.rb +4 -0
- metadata +3 -2
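
The headline change in this release is the new Usage client (`data/lib/openai/usage.rb`) and the `admin_token` configuration it relies on. Below is a minimal sketch of how the two fit together, pieced together from the README additions shown further down; treat it as illustrative rather than authoritative.

```ruby
require "openai"

# Admin endpoints such as Usage need an organization admin key,
# created at https://platform.openai.com/settings/organization/admin-keys.
OpenAI.configure do |config|
  config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
  config.admin_token  = ENV.fetch("OPENAI_ADMIN_TOKEN") # New in 7.4.0
end

client = OpenAI::Client.new

# New in 7.4.0: organization cost buckets for the last day.
one_day_ago = Time.now.to_i - 86_400
costs = client.usage.costs(parameters: { start_time: one_day_ago })
costs["data"].each do |bucket|
  bucket["results"].each do |result|
    puts "#{Time.at(bucket['start_time']).to_date}: $#{result.dig('amount', 'value')}"
  end
end
```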
data/README.md
CHANGED
@@ -1,14 +1,17 @@
 # Ruby OpenAI
-
 [![Gem Version](https://img.shields.io/gem/v/ruby-openai.svg)](https://rubygems.org/gems/ruby-openai)
 [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/alexrudall/ruby-openai/blob/main/LICENSE.txt)
 [![CircleCI Build Status](https://circleci.com/gh/alexrudall/ruby-openai.svg?style=shield)](https://circleci.com/gh/alexrudall/ruby-openai)

 Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖❤️

-Stream text with GPT-
+Stream text with GPT-4, transcribe and translate audio with Whisper, or create images with DALL·E...
+
+💥 Click [subscribe now](https://mailchi.mp/8c7b574726a9/ruby-openai) to hear first about new releases in the Rails AI newsletter!

-[
+[![RailsAI Newsletter](https://github.com/user-attachments/assets/737cbb99-6029-42b8-9f22-a106725a4b1f)](https://mailchi.mp/8c7b574726a9/ruby-openai)
+
+[🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 X](https://x.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney)

 ## Contents

@@ -17,7 +20,7 @@ Stream text with GPT-4o, transcribe and translate audio with Whisper, or create
 - [Installation](#installation)
 - [Bundler](#bundler)
 - [Gem install](#gem-install)
-- [
+- [How to use](#how-to-use)
 - [Quickstart](#quickstart)
 - [With Config](#with-config)
 - [Custom timeout or base URI](#custom-timeout-or-base-uri)
@@ -49,6 +52,7 @@ Stream text with GPT-4o, transcribe and translate audio with Whisper, or create
 - [Threads and Messages](#threads-and-messages)
 - [Runs](#runs)
 - [Create and Run](#create-and-run)
+- [Vision in a thread](#vision-in-a-thread)
 - [Runs involving function tools](#runs-involving-function-tools)
 - [Exploring chunks used in File Search](#exploring-chunks-used-in-file-search)
 - [Image Generation](#image-generation)
@@ -61,6 +65,7 @@ Stream text with GPT-4o, transcribe and translate audio with Whisper, or create
 - [Translate](#translate)
 - [Transcribe](#transcribe)
 - [Speech](#speech)
+- [Usage](#usage)
 - [Errors](#errors-1)
 - [Development](#development)
 - [Release](#release)
@@ -98,7 +103,7 @@ and require with:
 require "openai"
 ```

-##
+## How to use

 - Get your API key from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys)
 - If you belong to multiple organizations, you can get your Organization ID from [https://platform.openai.com/account/org-settings](https://platform.openai.com/account/org-settings)
@@ -121,6 +126,7 @@ For a more robust setup, you can configure the gem with your API keys, for examp
 ```ruby
 OpenAI.configure do |config|
   config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
+  config.admin_token = ENV.fetch("OPENAI_ADMIN_TOKEN") # Optional, used for admin endpoints, created here: https://platform.openai.com/settings/organization/admin-keys
   config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional
   config.log_errors = true # Highly recommended in development, so you can see what errors OpenAI is returning. Not recommended in production because it could leak private data to your logs.
 end
@@ -132,10 +138,10 @@ Then you can create a client like this:
 client = OpenAI::Client.new
 ```

-You can still override the config defaults when making new clients; any options not included will fall back to any global config set with OpenAI.configure. e.g. in this example the organization_id, request_timeout, etc. will fallback to any set globally using OpenAI.configure, with only the access_token overridden:
+You can still override the config defaults when making new clients; any options not included will fall back to any global config set with OpenAI.configure. e.g. in this example the organization_id, request_timeout, etc. will fallback to any set globally using OpenAI.configure, with only the access_token and admin_token overridden:

 ```ruby
-client = OpenAI::Client.new(access_token: "access_token_goes_here")
+client = OpenAI::Client.new(access_token: "access_token_goes_here", admin_token: "admin_token_goes_here")
 ```

 #### Custom timeout or base URI
@@ -146,15 +152,15 @@ client = OpenAI::Client.new(access_token: "access_token_goes_here")

 ```ruby
 client = OpenAI::Client.new(
-
-
-
-
-
-
-
-
-
+  access_token: "access_token_goes_here",
+  uri_base: "https://oai.hconeai.com/",
+  request_timeout: 240,
+  extra_headers: {
+    "X-Proxy-TTL" => "43200", # For https://github.com/6/openai-caching-proxy-worker#specifying-a-cache-ttl
+    "X-Proxy-Refresh": "true", # For https://github.com/6/openai-caching-proxy-worker#refreshing-the-cache
+    "Helicone-Auth": "Bearer HELICONE_API_KEY", # For https://docs.helicone.ai/getting-started/integration-method/openai-proxy
+    "helicone-stream-force-format" => "true", # Use this with Helicone otherwise streaming drops chunks # https://github.com/alexrudall/ruby-openai/issues/251
+  }
 )
 ```

@@ -162,16 +168,17 @@ or when configuring the gem:

 ```ruby
 OpenAI.configure do |config|
-
-
-
-
-
-
-
-
-
-
+  config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
+  config.admin_token = ENV.fetch("OPENAI_ADMIN_TOKEN") # Optional, used for admin endpoints, created here: https://platform.openai.com/settings/organization/admin-keys
+  config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional
+  config.log_errors = true # Optional
+  config.uri_base = "https://oai.hconeai.com/" # Optional
+  config.request_timeout = 240 # Optional
+  config.extra_headers = {
+    "X-Proxy-TTL" => "43200", # For https://github.com/6/openai-caching-proxy-worker#specifying-a-cache-ttl
+    "X-Proxy-Refresh": "true", # For https://github.com/6/openai-caching-proxy-worker#refreshing-the-cache
+    "Helicone-Auth": "Bearer HELICONE_API_KEY" # For https://docs.helicone.ai/getting-started/integration-method/openai-proxy
+  } # Optional
 end
 ```

@@ -193,7 +200,7 @@ By default, `ruby-openai` does not log any `Faraday::Error`s encountered while e
 If you would like to enable this functionality, you can set `log_errors` to `true` when configuring the client:

 ```ruby
-
+client = OpenAI::Client.new(log_errors: true)
 ```

 ##### Faraday middleware
@@ -201,9 +208,9 @@ If you would like to enable this functionality, you can set `log_errors` to `tru
 You can pass [Faraday middleware](https://lostisland.github.io/faraday/#/middleware/index) to the client in a block, eg. to enable verbose logging with Ruby's [Logger](https://ruby-doc.org/3.2.2/stdlibs/logger/Logger.html):

 ```ruby
-
-
-
+client = OpenAI::Client.new do |f|
+  f.response :logger, Logger.new($stdout), bodies: true
+end
 ```

 #### Azure
@@ -211,12 +218,12 @@ You can pass [Faraday middleware](https://lostisland.github.io/faraday/#/middlew
 To use the [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/) API, you can configure the gem like this:

 ```ruby
-
-
-
-
-
-
+OpenAI.configure do |config|
+  config.access_token = ENV.fetch("AZURE_OPENAI_API_KEY")
+  config.uri_base = ENV.fetch("AZURE_OPENAI_URI")
+  config.api_type = :azure
+  config.api_version = "2023-03-15-preview"
+end
 ```

 where `AZURE_OPENAI_URI` is e.g. `https://custom-domain.openai.azure.com/openai/deployments/gpt-35-turbo`
@@ -241,14 +248,15 @@ client = OpenAI::Client.new(
 )

 client.chat(
-
-
-
-
-
-
-
-
+  parameters: {
+    model: "llama3", # Required.
+    messages: [{ role: "user", content: "Hello!"}], # Required.
+    temperature: 0.7,
+    stream: proc do |chunk, _bytesize|
+      print chunk.dig("choices", 0, "delta", "content")
+    end
+  }
+)

 # => Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
 ```
@@ -258,20 +266,21 @@ client.chat(
 [Groq API Chat](https://console.groq.com/docs/quickstart) is broadly compatible with the OpenAI API, with a [few minor differences](https://console.groq.com/docs/openai). Get an access token from [here](https://console.groq.com/keys), then:

 ```ruby
-
-
-
-
+client = OpenAI::Client.new(
+  access_token: "groq_access_token_goes_here",
+  uri_base: "https://api.groq.com/openai"
+)

-
-
-
-
-
-
-
-
-
+client.chat(
+  parameters: {
+    model: "llama3-8b-8192", # Required.
+    messages: [{ role: "user", content: "Hello!"}], # Required.
+    temperature: 0.7,
+    stream: proc do |chunk, _bytesize|
+      print chunk.dig("choices", 0, "delta", "content")
+    end
+  }
+)
 ```

 ### Counting Tokens
@@ -301,11 +310,12 @@ GPT is a model that can be used to generate text in a conversational style. You

 ```ruby
 response = client.chat(
-
-
-
-
-
+  parameters: {
+    model: "gpt-4o", # Required.
+    messages: [{ role: "user", content: "Hello!"}], # Required.
+    temperature: 0.7,
+  }
+)
 puts response.dig("choices", 0, "message", "content")
 # => "Hello! How may I assist you today?"
 ```
@@ -318,14 +328,15 @@ You can stream from the API in realtime, which can be much faster and used to cr

 ```ruby
 client.chat(
-
-
-
-
-
-
-
-
+  parameters: {
+    model: "gpt-4o", # Required.
+    messages: [{ role: "user", content: "Describe a character called Anna!"}], # Required.
+    temperature: 0.7,
+    stream: proc do |chunk, _bytesize|
+      print chunk.dig("choices", 0, "delta", "content")
+    end
+  }
+)
 # => "Anna is a young woman in her mid-twenties, with wavy chestnut hair that falls to her shoulders..."
 ```

@@ -334,12 +345,13 @@ Note: In order to get usage information, you can provide the [`stream_options` p
 ```ruby
 stream_proc = proc { |chunk, _bytesize| puts "--------------"; puts chunk.inspect; }
 client.chat(
-
-
-
-
-
-
+  parameters: {
+    model: "gpt-4o",
+    stream: stream_proc,
+    stream_options: { include_usage: true },
+    messages: [{ role: "user", content: "Hello!"}],
+  }
+)
 # => --------------
 # => {"id"=>"chatcmpl-7bbq05PiZqlHxjV1j7OHnKKDURKaf", "object"=>"chat.completion.chunk", "created"=>1718750612, "model"=>"gpt-4o-2024-05-13", "system_fingerprint"=>"fp_9cb5d38cf7", "choices"=>[{"index"=>0, "delta"=>{"role"=>"assistant", "content"=>""}, "logprobs"=>nil, "finish_reason"=>nil}], "usage"=>nil}
 # => --------------
@@ -366,10 +378,11 @@ messages = [
   }
 ]
 response = client.chat(
-
-
-
-
+  parameters: {
+    model: "gpt-4-vision-preview", # Required.
+    messages: [{ role: "user", content: messages}], # Required.
+  }
+)
 puts response.dig("choices", 0, "message", "content")
 # => "The image depicts a serene natural landscape featuring a long wooden boardwalk extending straight ahead"
 ```
@@ -379,21 +392,22 @@ puts response.dig("choices", 0, "message", "content")
 You can set the response_format to ask for responses in JSON:

 ```ruby
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+response = client.chat(
+  parameters: {
+    model: "gpt-4o",
+    response_format: { type: "json_object" },
+    messages: [{ role: "user", content: "Hello! Give me some JSON please."}],
+    temperature: 0.7,
+  })
+puts response.dig("choices", 0, "message", "content")
+# =>
+# {
+#   "name": "John",
+#   "age": 30,
+#   "city": "New York",
+#   "hobbies": ["reading", "traveling", "hiking"],
+#   "isStudent": false
+# }
 ```

 You can stream it as well!
@@ -403,26 +417,28 @@ You can stream it as well!
   parameters: {
     model: "gpt-4o",
     messages: [{ role: "user", content: "Can I have some JSON please?"}],
-
-
-
-
-})
-{
-"message": "Sure, please let me know what specific JSON data you are looking for.",
-"JSON_data": {
-"example_1": {
-"key_1": "value_1",
-"key_2": "value_2",
-"key_3": "value_3"
-},
-"example_2": {
-"key_4": "value_4",
-"key_5": "value_5",
-"key_6": "value_6"
-}
+    response_format: { type: "json_object" },
+    stream: proc do |chunk, _bytesize|
+      print chunk.dig("choices", 0, "delta", "content")
+    end
   }
-
+)
+# =>
+# {
+#   "message": "Sure, please let me know what specific JSON data you are looking for.",
+#   "JSON_data": {
+#     "example_1": {
+#       "key_1": "value_1",
+#       "key_2": "value_2",
+#       "key_3": "value_3"
+#     },
+#     "example_2": {
+#       "key_4": "value_4",
+#       "key_5": "value_5",
+#       "key_6": "value_6"
+#     }
+#   }
+# }
 ```

 ### Functions
@@ -430,7 +446,6 @@ You can stream it as well!
 You can describe and pass in functions and the model will intelligently choose to output a JSON object containing arguments to call them - eg., to use your method `get_current_weather` to get the weather in a given location. Note that tool_choice is optional, but if you exclude it, the model will choose whether to use the function or not ([see here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tool_choice)).

 ```ruby
-
 def get_current_weather(location:, unit: "fahrenheit")
   # Here you could use a weather api to fetch the weather.
   "The weather in #{location} is nice 🌞 #{unit}"
@@ -471,8 +486,9 @@ response =
         },
       }
     ],
-
-
+    # Optional, defaults to "auto"
+    # Can also put "none" or specific functions, see docs
+    tool_choice: "required"
   },
 )

@@ -486,12 +502,13 @@ if message["role"] == "assistant" && message["tool_calls"]
       tool_call.dig("function", "arguments"),
       { symbolize_names: true },
     )
-    function_response =
+    function_response =
+      case function_name
       when "get_current_weather"
         get_current_weather(**function_args) # => "The weather is nice 🌞"
       else
         # decide how to handle
-
+      end

     # For a subsequent message with the role "tool", OpenAI requires the preceding message to have a tool_calls argument.
     messages << message
@@ -508,7 +525,8 @@ if message["role"] == "assistant" && message["tool_calls"]
     parameters: {
       model: "gpt-4o",
       messages: messages
-
+    }
+  )

 puts second_response.dig("choices", 0, "message", "content")

@@ -524,11 +542,12 @@ Hit the OpenAI API for a completion using other GPT-3 models:

 ```ruby
 response = client.completions(
-
-
-
-
-
+  parameters: {
+    model: "gpt-4o",
+    prompt: "Once upon a time",
+    max_tokens: 5
+  }
+)
 puts response["choices"].map { |c| c["text"] }
 # => [", there lived a great"]
 ```
@@ -539,10 +558,10 @@ You can use the embeddings endpoint to get a vector of numbers representing an i

 ```ruby
 response = client.embeddings(
-
-
-
-
+  parameters: {
+    model: "text-embedding-ada-002",
+    input: "The food was delicious and the waiter..."
+  }
 )

 puts response.dig("data", 0, "embedding")
@@ -688,9 +707,9 @@ You can then use this file ID to create a fine tuning job:

 ```ruby
 response = client.finetunes.create(
-
-
-
+  parameters: {
+    training_file: file_id,
+    model: "gpt-4o"
 })
 fine_tune_id = response["id"]
 ```
@@ -713,17 +732,17 @@ This fine-tuned model name can then be used in chat completions:

 ```ruby
 response = client.chat(
-
-
-
-
+  parameters: {
+    model: fine_tuned_model,
+    messages: [{ role: "user", content: "I love Mondays!" }]
+  }
 )
 response.dig("choices", 0, "message", "content")
 ```

 You can also capture the events for a job:

-```
+```ruby
 client.finetunes.list_events(id: fine_tune_id)
 ```

@@ -868,25 +887,26 @@ To create a new assistant:

 ```ruby
 response = client.assistants.create(
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+  parameters: {
+    model: "gpt-4o",
+    name: "OpenAI-Ruby test assistant",
+    description: nil,
+    instructions: "You are a Ruby dev bot. When asked a question, write and run Ruby code to answer the question",
+    tools: [
+      { type: "code_interpreter" },
+      { type: "file_search" }
+    ],
+    tool_resources: {
+      code_interpreter: {
+        file_ids: [] # See Files section above for how to upload files
+      },
+      file_search: {
+        vector_store_ids: [] # See Vector Stores section above for how to add vector stores
+      }
+    },
+    "metadata": { my_internal_version_id: "1.0.0" }
+  }
+)
 assistant_id = response["id"]
 ```

@@ -906,16 +926,17 @@ You can modify an existing assistant using the assistant's id (see [API document

 ```ruby
 response = client.assistants.modify(
-
-
-
-
-
+  id: assistant_id,
+  parameters: {
+    name: "Modified Test Assistant for OpenAI-Ruby",
+    metadata: { my_internal_version_id: '1.0.1' }
+  }
+)
 ```

 You can delete assistants:

-```
+```ruby
 client.assistants.delete(id: assistant_id)
 ```

@@ -931,11 +952,12 @@ thread_id = response["id"]

 # Add initial message from user (see https://platform.openai.com/docs/api-reference/messages/createMessage)
 message_id = client.messages.create(
-
-
-
-
-
+  thread_id: thread_id,
+  parameters: {
+    role: "user", # Required for manually created messages
+    content: "Can you help me write an API library to interact with the OpenAI API please?"
+  }
+)["id"]

 # Retrieve individual message
 message = client.messages.retrieve(thread_id: thread_id, id: message_id)
@@ -959,32 +981,38 @@ To submit a thread to be evaluated with the model of an assistant, create a `Run

 ```ruby
 # Create run (will use instruction/model/tools from Assistant's definition)
-response = client.runs.create(
-
-
-
-
-
+response = client.runs.create(
+  thread_id: thread_id,
+  parameters: {
+    assistant_id: assistant_id,
+    max_prompt_tokens: 256,
+    max_completion_tokens: 16
+  }
+)
 run_id = response['id']
 ```

 You can stream the message chunks as they come through:

 ```ruby
-client.runs.create(
-
-
-
-
-
-
-
-
+client.runs.create(
+  thread_id: thread_id,
+  parameters: {
+    assistant_id: assistant_id,
+    max_prompt_tokens: 256,
+    max_completion_tokens: 16,
+    stream: proc do |chunk, _bytesize|
+      if chunk["object"] == "thread.message.delta"
+        print chunk.dig("delta", "content", 0, "text", "value")
+      end
+    end
+  }
+)
 ```

 To get the status of a Run:

-```
+```ruby
 response = client.runs.retrieve(id: run_id, thread_id: thread_id)
 status = response['status']
 ```
@@ -993,23 +1021,23 @@ The `status` response can include the following strings `queued`, `in_progress`,

 ```ruby
 while true do
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+  response = client.runs.retrieve(id: run_id, thread_id: thread_id)
+  status = response['status']
+
+  case status
+  when 'queued', 'in_progress', 'cancelling'
+    puts 'Sleeping'
+    sleep 1 # Wait one second and poll again
+  when 'completed'
+    break # Exit loop and report result to user
+  when 'requires_action'
+    # Handle tool calls (see below)
+  when 'cancelled', 'failed', 'expired'
+    puts response['last_error'].inspect
+    break # or `exit`
+  else
+    puts "Unknown status response: #{status}"
+  end
 end
 ```

@@ -1021,30 +1049,30 @@ messages = client.messages.list(thread_id: thread_id, parameters: { order: 'asc'

 # Alternatively retrieve the `run steps` for the run which link to the messages:
 run_steps = client.run_steps.list(thread_id: thread_id, run_id: run_id, parameters: { order: 'asc' })
-new_message_ids = run_steps['data'].filter_map
+new_message_ids = run_steps['data'].filter_map do |step|
   if step['type'] == 'message_creation'
     step.dig('step_details', "message_creation", "message_id")
   end # Ignore tool calls, because they don't create new messages.
-
+end

 # Retrieve the individual messages
-new_messages = new_message_ids.map
+new_messages = new_message_ids.map do |msg_id|
   client.messages.retrieve(id: msg_id, thread_id: thread_id)
-
+end

 # Find the actual response text in the content array of the messages
-new_messages.each
-
-
-
-
-
-
-
-
-
-
-
+new_messages.each do |msg|
+  msg['content'].each do |content_item|
+    case content_item['type']
+    when 'text'
+      puts content_item.dig('text', 'value')
+      # Also handle annotations
+    when 'image_file'
+      # Use File endpoint to retrieve file contents via id
+      id = content_item.dig('image_file', 'file_id')
+    end
+  end
+end
 ```

 You can also update the metadata on messages, including messages that come from the assistant.
@@ -1053,7 +1081,11 @@ You can also update the metadata on messages, including messages that come from
 metadata = {
   user_id: "abc123"
 }
-message = client.messages.modify(
+message = client.messages.modify(
+  id: message_id,
+  thread_id: thread_id,
+  parameters: { metadata: metadata },
+)
 ```

 At any time you can list all runs which have been performed on a particular thread or are currently running:
@@ -1072,41 +1104,117 @@ run_id = response['id']
 thread_id = response['thread_id']
 ```

+#### Vision in a thread
+
+You can include images in a thread and they will be described & read by the LLM. In this example I'm using [this file](https://upload.wikimedia.org/wikipedia/commons/7/70/Example.png):
+
+```ruby
+require "openai"
+
+# Make a client
+client = OpenAI::Client.new(
+  access_token: "access_token_goes_here",
+  log_errors: true # Don't log errors in production.
+)
+
+# Upload image as a file
+file_id = client.files.upload(
+  parameters: {
+    file: "path/to/example.png",
+    purpose: "assistants",
+  }
+)["id"]
+
+# Create assistant (You could also use an existing one here)
+assistant_id = client.assistants.create(
+  parameters: {
+    model: "gpt-4o",
+    name: "Image reader",
+    instructions: "You are an image describer. You describe the contents of images.",
+  }
+)["id"]
+
+# Create thread
+thread_id = client.threads.create["id"]
+
+# Add image in message
+client.messages.create(
+  thread_id: thread_id,
+  parameters: {
+    role: "user", # Required for manually created messages
+    content: [
+      {
+        "type": "text",
+        "text": "What's in this image?"
+      },
+      {
+        "type": "image_file",
+        "image_file": { "file_id": file_id }
+      }
+    ]
+  }
+)
+
+# Run thread
+run_id = client.runs.create(
+  thread_id: thread_id,
+  parameters: { assistant_id: assistant_id }
+)["id"]
+
+# Wait until run in complete
+status = nil
+until status == "completed" do
+  sleep(0.1)
+  status = client.runs.retrieve(id: run_id, thread_id: thread_id)['status']
+end
+
+# Get the response
+messages = client.messages.list(thread_id: thread_id, parameters: { order: 'asc' })
+messages.dig("data", -1, "content", 0, "text", "value")
+=> "The image contains a placeholder graphic with a tilted, stylized representation of a postage stamp in the top part, which includes an abstract landscape with hills and a sun. Below the stamp, in the middle of the image, there is italicized text in a light golden color that reads, \"This is just an example.\" The background is a light pastel shade, and a yellow border frames the entire image."
+```
+
 #### Runs involving function tools

 In case you are allowing the assistant to access `function` tools (they are defined in the same way as functions during chat completion), you might get a status code of `requires_action` when the assistant wants you to evaluate one or more function tools:

 ```ruby
 def get_current_weather(location:, unit: "celsius")
-
-
-
-
-
-
+  # Your function code goes here
+  if location =~ /San Francisco/i
+    return unit == "celsius" ? "The weather is nice 🌞 at 27°C" : "The weather is nice 🌞 at 80°F"
+  else
+    return unit == "celsius" ? "The weather is icy 🥶 at -5°C" : "The weather is icy 🥶 at 23°F"
+  end
 end

 if status == 'requires_action'
+  tools_to_call = response.dig('required_action', 'submit_tool_outputs', 'tool_calls')

-
-
-
-
-
-
-
-{ symbolize_names: true },
-)
+  my_tool_outputs = tools_to_call.map { |tool|
+    # Call the functions based on the tool's name
+    function_name = tool.dig('function', 'name')
+    arguments = JSON.parse(
+      tool.dig("function", "arguments"),
+      { symbolize_names: true },
+    )

-
-
-
-
+    tool_output = case function_name
+    when "get_current_weather"
+      get_current_weather(**arguments)
+    end

-
+    {
+      tool_call_id: tool['id'],
+      output: tool_output,
     }
+  }

-
+  client.runs.submit_tool_outputs(
+    thread_id: thread_id,
+    run_id: run_id,
+    parameters: { tool_outputs: my_tool_outputs }
+  )
 end
 ```

@@ -1122,13 +1230,13 @@ An example spec can be found [here](https://github.com/alexrudall/ruby-openai/bl

 Here's how to get the chunks used in a file search. In this example I'm using [this file](https://css4.pub/2015/textbook/somatosensory.pdf):

-```
+```ruby
 require "openai"

 # Make a client
 client = OpenAI::Client.new(
   access_token: "access_token_goes_here",
-log_errors: true # Don't
+  log_errors: true # Don't log errors in production.
 )

 # Upload your file(s)
@@ -1228,7 +1336,12 @@ Generate images using DALL·E 2 or DALL·E 3!
 For DALL·E 2 the size of any generated images must be one of `256x256`, `512x512` or `1024x1024` - if not specified the image will default to `1024x1024`.

 ```ruby
-response = client.images.generate(
+response = client.images.generate(
+  parameters: {
+    prompt: "A baby sea otter cooking pasta wearing a hat of some sort",
+    size: "256x256",
+  }
+)
 puts response.dig("data", 0, "url")
 # => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
 ```
@@ -1240,7 +1353,14 @@ puts response.dig("data", 0, "url")
 For DALL·E 3 the size of any generated images must be one of `1024x1024`, `1024x1792` or `1792x1024`. Additionally the quality of the image can be specified to either `standard` or `hd`.

 ```ruby
-response = client.images.generate(
+response = client.images.generate(
+  parameters: {
+    prompt: "A springer spaniel cooking pasta wearing a hat of some sort",
+    model: "dall-e-3",
+    size: "1024x1792",
+    quality: "standard",
+  }
+)
 puts response.dig("data", 0, "url")
 # => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
 ```
@@ -1252,7 +1372,13 @@ puts response.dig("data", 0, "url")
 Fill in the transparent part of an image, or upload a mask with transparent sections to indicate the parts of an image that can be changed according to your prompt...

 ```ruby
-response = client.images.edit(
+response = client.images.edit(
+  parameters: {
+    prompt: "A solid red Ruby on a blue background",
+    image: "image.png",
+    mask: "mask.png",
+  }
+)
 puts response.dig("data", 0, "url")
 # => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
 ```
@@ -1292,10 +1418,11 @@ The translations API takes as input the audio file in any of the supported langu

 ```ruby
 response = client.audio.translate(
-
-
-
-
+  parameters: {
+    model: "whisper-1",
+    file: File.open("path_to_file", "rb"),
+  }
+)
 puts response["text"]
 # => "Translation of the text"
 ```
@@ -1308,11 +1435,12 @@ You can pass the language of the audio file to improve transcription quality. Su

 ```ruby
 response = client.audio.transcribe(
-
-
-
-
-
+  parameters: {
+    model: "whisper-1",
+    file: File.open("path_to_file", "rb"),
+    language: "en", # Optional
+  }
+)
 puts response["text"]
 # => "Transcription of the text"
 ```
@@ -1328,23 +1456,81 @@ response = client.audio.speech(
     input: "This is a speech test!",
     voice: "alloy",
     response_format: "mp3", # Optional
-speed: 1.0 # Optional
+    speed: 1.0, # Optional
   }
 )
 File.binwrite('demo.mp3', response)
 # => mp3 file that plays: "This is a speech test!"
 ```

-###
+### Usage
+The Usage API provides information about the cost of various OpenAI services within your organization.
+To use Admin APIs like Usage, you need to set an OPENAI_ADMIN_TOKEN, which can be generated [here](https://platform.openai.com/settings/organization/admin-keys).

-
+```ruby
+OpenAI.configure do |config|
+  config.admin_token = ENV.fetch("OPENAI_ADMIN_TOKEN")
+end
+
+# or

+client = OpenAI::Client.new(admin_token: "123abc")
 ```
-
-
-
-
+
+You can retrieve usage data for different endpoints and time periods:
+
+```ruby
+one_day_ago = Time.now.to_i - 86_400
+
+# Retrieve costs data
+response = client.usage.costs(parameters: { start_time: one_day_ago })
+response["data"].each do |bucket|
+  bucket["results"].each do |result|
+    puts "#{Time.at(bucket["start_time"]).to_date}: $#{result.dig("amount", "value").round(2)}"
   end
+end
+=> 2025-02-09: $0.0
+=> 2025-02-10: $0.42
+
+# Retrieve completions usage data
+response = client.usage.completions(parameters: { start_time: one_day_ago })
+puts response["data"]
+
+# Retrieve embeddings usage data
+response = client.usage.embeddings(parameters: { start_time: one_day_ago })
+puts response["data"]
+
+# Retrieve moderations usage data
+response = client.usage.moderations(parameters: { start_time: one_day_ago })
+puts response["data"]
+
+# Retrieve image generation usage data
+response = client.usage.images(parameters: { start_time: one_day_ago })
+puts response["data"]
+
+# Retrieve audio speech usage data
+response = client.usage.audio_speeches(parameters: { start_time: one_day_ago })
+puts response["data"]
+
+# Retrieve audio transcription usage data
+response = client.usage.audio_transcriptions(parameters: { start_time: one_day_ago })
+puts response["data"]
+
+# Retrieve vector stores usage data
+response = client.usage.vector_stores(parameters: { start_time: one_day_ago })
+puts response["data"]
+```
+
+### Errors
+
+HTTP errors can be caught like this:
+
+```ruby
+begin
+  OpenAI::Client.new.models.retrieve(id: "gpt-4o")
+rescue Faraday::Error => e
+  raise "Got a Faraday error: #{e}"
+end
 ```

 ## Development
@@ -1356,15 +1542,11 @@ To install this gem onto your local machine, run `bundle exec rake install`.
 To run all tests, execute the command `bundle exec rake`, which will also run the linter (Rubocop). This repository uses [VCR](https://github.com/vcr/vcr) to log API requests.

 > [!WARNING]
-> If you have an `OPENAI_ACCESS_TOKEN` in your `ENV`, running the specs will
+> If you have an `OPENAI_ACCESS_TOKEN` and `OPENAI_ADMIN_TOKEN` in your `ENV`, running the specs will hit the actual API, which will be slow and cost you money - 2 cents or more! Remove them from your environment with `unset` or similar if you just want to run the specs against the stored VCR responses.

 ## Release

-First run the specs without VCR so they actually hit the API. This will cost 2 cents or more. Set OPENAI_ACCESS_TOKEN in your environment
-
-```
-OPENAI_ACCESS_TOKEN=123abc bundle exec rspec
-```
+First run the specs without VCR so they actually hit the API. This will cost 2 cents or more. Set OPENAI_ACCESS_TOKEN and OPENAI_ADMIN_TOKEN in your environment.

 Then update the version number in `version.rb`, update `CHANGELOG.md`, run `bundle install` to update Gemfile.lock, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
