ruby-openai 3.5.0 → 3.7.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: dfe7d331fc5d5a16f74089b9395232cd1abf0d6b05bd78be4db91e1b65d71a4c
-  data.tar.gz: 610d363ebf1d6a53f015aeaf20cd551c1020731fd423ba4074ff3eb044239fa1
+  metadata.gz: 3a26687eb00056e37ab5f2eed5b7bf6e73b4a9479016a101a828ae014ed299aa
+  data.tar.gz: c729b1a8b2df16db50af716664108b1fb2418ef22f4159b7993148474cd93ad2
 SHA512:
-  metadata.gz: '08b25f159fc149ccd09034da891cf5e77c999964a6d06ea6d916c62ad3e4a9ba2b1aa73757b617c0152ca1760f386514da081e978a6ca9963706f3f385814b78'
-  data.tar.gz: 144bd11f3791fa818193eaab7d8eeb6b04c33f4b99342ca573acea76f0ffcef4604033daaea54b2ae43411abf84e572db97304f7195c08116a55fedd1524d2aa
+  metadata.gz: ddd2bf5992ade00580d0c2cd2b668e70eb999c39a487da4020e46ad8debe96a08b5e0e5560baa612b4e89d0f91765532394a1d69f3cd7214c6bb938edb10118f
+  data.tar.gz: aba44b222706820b3233fdcb623da8ba766c431485a723c870092c09608b4e1a7ab3466834d8683690b2873318bd807bcc3c57fe848831ed3c5fb8fc473d1a1a
data/CHANGELOG.md CHANGED
@@ -5,6 +5,18 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [3.7.0] - 2023-03-25
+
+### Added
+
+- Allow the usage of proxy base URIs like https://www.helicone.ai/. Thanks to [@mmmaia](https://github.com/mmmaia) for the PR!
+
+## [3.6.0] - 2023-03-22
+
+### Added
+
+- Add much-needed ability to increase HTTParty timeout, and set default to 120 seconds. Thanks to [@mbackermann](https://github.com/mbackermann) for the PR and to everyone who requested this!
+
 ## [3.5.0] - 2023-03-02
 
 ### Added
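The 3.6.0 entry above describes a configurable request timeout that falls back to a 120-second default. A minimal sketch of that fallback pattern (an illustrative stand-in class, not the gem's actual code; the constant and accessor names mirror the diffs further down):

```ruby
# Illustrative sketch of the timeout behaviour described in the 3.6.0 entry:
# a per-client override falls back to a 120-second default.
class TimeoutConfig
  DEFAULT_REQUEST_TIMEOUT = 120 # seconds, matching the changelog entry

  attr_writer :request_timeout

  # Return the override if one was set, otherwise the default.
  def request_timeout
    @request_timeout || DEFAULT_REQUEST_TIMEOUT
  end
end

config = TimeoutConfig.new
puts config.request_timeout # => 120
config.request_timeout = 240
puts config.request_timeout # => 240
```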
data/Gemfile CHANGED
@@ -7,6 +7,6 @@ gem "byebug", "~> 11.1.3"
 gem "dotenv", "~> 2.8.1"
 gem "rake", "~> 13.0"
 gem "rspec", "~> 3.12"
-gem "rubocop", "~> 1.46.0"
+gem "rubocop", "~> 1.48.1"
 gem "vcr", "~> 6.1.0"
 gem "webmock", "~> 3.18.1"
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (3.5.0)
+    ruby-openai (3.7.0)
       httparty (>= 0.18.1)
 
 GEM
@@ -23,7 +23,7 @@ GEM
     mini_mime (1.1.2)
     multi_xml (0.6.0)
     parallel (1.22.1)
-    parser (3.2.1.0)
+    parser (3.2.1.1)
       ast (~> 2.4.1)
     public_suffix (5.0.1)
     rainbow (3.1.1)
@@ -43,7 +43,7 @@ GEM
       diff-lcs (>= 1.2.0, < 2.0)
       rspec-support (~> 3.12.0)
     rspec-support (3.12.0)
-    rubocop (1.46.0)
+    rubocop (1.48.1)
       json (~> 2.3)
       parallel (~> 1.10)
      parser (>= 3.2.0.0)
@@ -53,9 +53,9 @@ GEM
      rubocop-ast (>= 1.26.0, < 2.0)
      ruby-progressbar (~> 1.7)
      unicode-display_width (>= 2.4.0, < 3.0)
-    rubocop-ast (1.26.0)
+    rubocop-ast (1.27.0)
      parser (>= 3.2.1.0)
-    ruby-progressbar (1.11.0)
+    ruby-progressbar (1.13.0)
     unicode-display_width (2.4.2)
     vcr (6.1.0)
     webmock (3.18.1)
@@ -71,7 +71,7 @@ DEPENDENCIES
   dotenv (~> 2.8.1)
   rake (~> 13.0)
   rspec (~> 3.12)
-  rubocop (~> 1.46.0)
+  rubocop (~> 1.48.1)
   ruby-openai!
   vcr (~> 6.1.0)
   webmock (~> 3.18.1)
data/README.md CHANGED
@@ -7,16 +7,16 @@
 
 Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖❤️
 
-Generate text with ChatGPT, transcribe or translate audio with Whisper, create images with DALL·E, or write code with Codex...
+Generate text with ChatGPT, transcribe and translate audio with Whisper, or create images with DALL·E...
 
-## Installation
+Check out [Ruby AI Builders](https://discord.gg/k4Uc224xVD) on Discord!
 
 ### Bundler
 
 Add this line to your application's Gemfile:
 
 ```ruby
-    gem "ruby-openai"
+gem "ruby-openai"
 ```
 
 And then execute:
@@ -32,7 +32,7 @@ $ gem install ruby-openai
 and require with:
 
 ```ruby
-    require "openai"
+require "openai"
 ```
 
 ## Upgrading
@@ -49,7 +49,7 @@ The `::Ruby::OpenAI` module has been removed and all classes have been moved und
 For a quick test you can pass your token directly to a new client:
 
 ```ruby
-    client = OpenAI::Client.new(access_token: "access_token_goes_here")
+client = OpenAI::Client.new(access_token: "access_token_goes_here")
 ```
 
 ### With Config
@@ -57,16 +57,39 @@ For a quick test you can pass your token directly to a new client:
 For a more robust setup, you can configure the gem with your API keys, for example in an `openai.rb` initializer file. Never hardcode secrets into your codebase - instead use something like [dotenv](https://github.com/motdotla/dotenv) to pass the keys safely into your environments.
 
 ```ruby
-    OpenAI.configure do |config|
-        config.access_token = ENV.fetch('OPENAI_ACCESS_TOKEN')
-        config.organization_id = ENV.fetch('OPENAI_ORGANIZATION_ID') # Optional.
-    end
+OpenAI.configure do |config|
+    config.access_token = ENV.fetch('OPENAI_ACCESS_TOKEN')
+    config.organization_id = ENV.fetch('OPENAI_ORGANIZATION_ID') # Optional.
+end
 ```
 
 Then you can create a client like this:
 
 ```ruby
-    client = OpenAI::Client.new
+client = OpenAI::Client.new
+```
+
+#### Custom timeout or base URI
+
+The default timeout for any OpenAI request is 120 seconds. You can change that by passing `request_timeout` when initializing the client. You can also change the base URI used for all requests, e.g. to use observability tools like [Helicone](https://docs.helicone.ai/quickstart/integrate-in-one-line-of-code):
+
+```ruby
+client = OpenAI::Client.new(
+    access_token: "access_token_goes_here",
+    uri_base: "https://oai.hconeai.com",
+    request_timeout: 240
+)
+```
+
+or when configuring the gem:
+
+```ruby
+OpenAI.configure do |config|
+    config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
+    config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional
+    config.uri_base = "https://oai.hconeai.com" # Optional
+    config.request_timeout = 240 # Optional
+end
 ```
 
 ### Models
@@ -74,34 +97,38 @@ Then you can create a client like this:
 There are different models that can be used to generate text. For a full list and to retrieve information about a single model:
 
 ```ruby
-    client.models.list
-    client.models.retrieve(id: "text-ada-001")
+client.models.list
+client.models.retrieve(id: "text-ada-001")
 ```
 
 #### Examples
 
-- [GPT-3](https://beta.openai.com/docs/models/gpt-3)
+- [GPT-4 (limited beta)](https://platform.openai.com/docs/models/gpt-4)
+  - gpt-4
+  - gpt-4-0314
+  - gpt-4-32k
+- [GPT-3.5](https://platform.openai.com/docs/models/gpt-3-5)
+  - gpt-3.5-turbo
+  - gpt-3.5-turbo-0301
+  - text-davinci-003
+- [GPT-3](https://platform.openai.com/docs/models/gpt-3)
   - text-ada-001
   - text-babbage-001
   - text-curie-001
-  - text-davinci-001
-- [Codex (private beta)](https://beta.openai.com/docs/models/codex-series-private-beta)
-  - code-davinci-002
-  - code-cushman-001
 
 ### ChatGPT
 
-ChatGPT is a model that can be used to generate text in a conversational style. You can use it to generate a response to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
+ChatGPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
 
 ```ruby
-    response = client.chat(
-        parameters: {
-            model: "gpt-3.5-turbo",
-            messages: [{ role: "user", content: "Hello!"}],
-        })
-    puts response.dig("choices", 0, "message", "content")
-    => "Hello! How may I assist you today?"
-
+response = client.chat(
+    parameters: {
+        model: "gpt-3.5-turbo", # Required.
+        messages: [{ role: "user", content: "Hello!"}], # Required.
+        temperature: 0.7,
+    })
+puts response.dig("choices", 0, "message", "content")
+# => "Hello! How may I assist you today?"
 ```
 
 ### Completions
@@ -109,14 +136,14 @@ ChatGPT is a model that can be used to generate text in a conversational style.
 Hit the OpenAI API for a completion using other GPT-3 models:
 
 ```ruby
-    response = client.completions(
-        parameters: {
-            model: "text-davinci-001",
-            prompt: "Once upon a time",
-            max_tokens: 5
-        })
-    puts response["choices"].map { |c| c["text"] }
-    => [", there lived a great"]
+response = client.completions(
+    parameters: {
+        model: "text-davinci-001",
+        prompt: "Once upon a time",
+        max_tokens: 5
+    })
+puts response["choices"].map { |c| c["text"] }
+# => [", there lived a great"]
 ```
 
 ### Edits
@@ -124,15 +151,15 @@ Hit the OpenAI API for a completion using other GPT-3 models:
 Send a string and some instructions for what to do to the string:
 
 ```ruby
-    response = client.edits(
-        parameters: {
-            model: "text-davinci-edit-001",
-            input: "What day of the wek is it?",
-            instruction: "Fix the spelling mistakes"
-        }
-    )
-    puts response.dig("choices", 0, "text")
-    => What day of the week is it?
+response = client.edits(
+    parameters: {
+        model: "text-davinci-edit-001",
+        input: "What day of the wek is it?",
+        instruction: "Fix the spelling mistakes"
+    }
+)
+puts response.dig("choices", 0, "text")
+# => What day of the week is it?
 ```
 
 ### Embeddings
@@ -140,12 +167,12 @@ Send a string and some instructions for what to do to the string:
 You can use the embeddings endpoint to get a vector of numbers representing an input. You can then compare these vectors for different inputs to efficiently check how similar the inputs are.
 
 ```ruby
-    client.embeddings(
-        parameters: {
-            model: "babbage-similarity",
-            input: "The food was delicious and the waiter..."
-        }
-    )
+client.embeddings(
+    parameters: {
+        model: "babbage-similarity",
+        input: "The food was delicious and the waiter..."
+    }
+)
 ```
 
 ### Files
@@ -153,18 +180,18 @@ You can use the embeddings endpoint to get a vector of numbers representing an i
 Put your data in a `.jsonl` file like this:
 
 ```json
-    {"prompt":"Overjoyed with my new phone! ->", "completion":" positive"}
-    {"prompt":"@lakers disappoint for a third straight night ->", "completion":" negative"}
+{"prompt":"Overjoyed with my new phone! ->", "completion":" positive"}
+{"prompt":"@lakers disappoint for a third straight night ->", "completion":" negative"}
 ```
 
 and pass the path to `client.files.upload` to upload it to OpenAI, and then interact with it:
 
 ```ruby
-    client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
-    client.files.list
-    client.files.retrieve(id: 123)
-    client.files.content(id: 123)
-    client.files.delete(id: 123)
+client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
+client.files.list
+client.files.retrieve(id: 123)
+client.files.content(id: 123)
+client.files.delete(id: 123)
 ```
 
 ### Fine-tunes
@@ -172,51 +199,51 @@ and pass the path to `client.files.upload` to upload it to OpenAI, and then inte
 Upload your fine-tuning data in a `.jsonl` file as above and get its ID:
 
 ```ruby
-    response = client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
-    file_id = JSON.parse(response.body)["id"]
+response = client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
+file_id = JSON.parse(response.body)["id"]
 ```
 
 You can then use this file ID to create a fine-tune model:
 
 ```ruby
-    response = client.finetunes.create(
-        parameters: {
-            training_file: file_id,
-            model: "text-ada-001"
-        })
-    fine_tune_id = JSON.parse(response.body)["id"]
+response = client.finetunes.create(
+    parameters: {
+        training_file: file_id,
+        model: "text-ada-001"
+    })
+fine_tune_id = JSON.parse(response.body)["id"]
 ```
 
 That will give you the fine-tune ID. If you made a mistake you can cancel the fine-tune model before it is processed:
 
 ```ruby
-    client.finetunes.cancel(id: fine_tune_id)
+client.finetunes.cancel(id: fine_tune_id)
 ```
 
 You may need to wait a short time for processing to complete. Once processed, you can use list or retrieve to get the name of the fine-tuned model:
 
 ```ruby
-    client.finetunes.list
-    response = client.finetunes.retrieve(id: fine_tune_id)
-    fine_tuned_model = JSON.parse(response.body)["fine_tuned_model"]
+client.finetunes.list
+response = client.finetunes.retrieve(id: fine_tune_id)
+fine_tuned_model = JSON.parse(response.body)["fine_tuned_model"]
 ```
 
 This fine-tuned model name can then be used in completions:
 
 ```ruby
-    response = client.completions(
-        parameters: {
-            model: fine_tuned_model,
-            prompt: "I love Mondays!"
-        }
-    )
-    JSON.parse(response.body)["choices"].map { |c| c["text"] }
+response = client.completions(
+    parameters: {
+        model: fine_tuned_model,
+        prompt: "I love Mondays!"
+    }
+)
+JSON.parse(response.body)["choices"].map { |c| c["text"] }
 ```
 
 You can delete the fine-tuned model when you are done with it:
 
 ```ruby
-    client.finetunes.delete(fine_tuned_model: fine_tuned_model)
+client.finetunes.delete(fine_tuned_model: fine_tuned_model)
 ```
 
 ### Image Generation
@@ -225,9 +252,9 @@ Generate an image using DALL·E! The size of any generated images must be one of
 if not specified the image will default to `1024x1024`.
 
 ```ruby
-    response = client.images.generate(parameters: { prompt: "A baby sea otter cooking pasta wearing a hat of some sort", size: "256x256" })
-    puts response.dig("data", 0, "url")
-    => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
+response = client.images.generate(parameters: { prompt: "A baby sea otter cooking pasta wearing a hat of some sort", size: "256x256" })
+puts response.dig("data", 0, "url")
+# => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
 ```
 
 ![Ruby](https://i.ibb.co/6y4HJFx/img-d-Tx-Rf-RHj-SO5-Gho-Cbd8o-LJvw3.png)
@@ -237,9 +264,9 @@ if not specified the image will default to `1024x1024`.
 Fill in the transparent part of an image, or upload a mask with transparent sections to indicate the parts of an image that can be changed according to your prompt...
 
 ```ruby
-    response = client.images.edit(parameters: { prompt: "A solid red Ruby on a blue background", image: "image.png", mask: "mask.png" })
-    puts response.dig("data", 0, "url")
-    => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
+response = client.images.edit(parameters: { prompt: "A solid red Ruby on a blue background", image: "image.png", mask: "mask.png" })
+puts response.dig("data", 0, "url")
+# => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
 ```
 
 ![Ruby](https://i.ibb.co/sWVh3BX/dalle-ruby.png)
@@ -249,9 +276,9 @@ Fill in the transparent part of an image, or upload a mask with transparent sect
 Create n variations of an image.
 
 ```ruby
-    response = client.images.variations(parameters: { image: "image.png", n: 2 })
-    puts response.dig("data", 0, "url")
-    => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
+response = client.images.variations(parameters: { image: "image.png", n: 2 })
+puts response.dig("data", 0, "url")
+# => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
 ```
 
 ![Ruby](https://i.ibb.co/TWJLP2y/img-miu-Wk-Nl0-QNy-Xtj-Lerc3c0l-NW.png)
@@ -262,27 +289,27 @@ Create n variations of an image.
 Pass a string to check if it violates OpenAI's Content Policy:
 
 ```ruby
-    response = client.moderations(parameters: { input: "I'm worried about that." })
-    puts response.dig("results", 0, "category_scores", "hate")
-    => 5.505014632944949e-05
+response = client.moderations(parameters: { input: "I'm worried about that." })
+puts response.dig("results", 0, "category_scores", "hate")
+# => 5.505014632944949e-05
 ```
 
 ### Whisper
 
-Whisper is a speech to text model that can be used to generate text based on an audio files:
+Whisper is a speech to text model that can be used to generate text based on audio files:
 
 #### Translate
 
 The translations API takes as input the audio file in any of the supported languages and transcribes the audio into English.
 
 ```ruby
-    response = client.translate(
-        parameters: {
-            model: "whisper-1",
-            file: File.open('path_to_file'),
-        })
-    puts response.parsed_body['text']
-    => "Translation of the text"
+response = client.translate(
+    parameters: {
+        model: "whisper-1",
+        file: File.open('path_to_file', 'rb'),
+    })
+puts response.parsed_response['text']
+# => "Translation of the text"
 ```
 
 #### Transcribe
@@ -290,13 +317,13 @@ The translations API takes as input the audio file in any of the supported langu
 The transcriptions API takes as input the audio file you want to transcribe and returns the text in the desired output file format.
 
 ```ruby
-    response = client.transcribe(
-        parameters: {
-            model: "whisper-1",
-            file: File.open('path_to_file'),
-        })
-    puts response.parsed_body['text']
-    => "Transcription of the text"
+response = client.transcribe(
+    parameters: {
+        model: "whisper-1",
+        file: File.open('path_to_file', 'rb'),
+    })
+puts response.parsed_response['text']
+# => "Transcription of the text"
 ```
 
 ## Development
data/lib/openai/client.rb CHANGED
@@ -1,10 +1,10 @@
 module OpenAI
   class Client
-    URI_BASE = "https://api.openai.com/".freeze
-
-    def initialize(access_token: nil, organization_id: nil)
+    def initialize(access_token: nil, organization_id: nil, uri_base: nil, request_timeout: nil)
       OpenAI.configuration.access_token = access_token if access_token
       OpenAI.configuration.organization_id = organization_id if organization_id
+      OpenAI.configuration.uri_base = uri_base if uri_base
+      OpenAI.configuration.request_timeout = request_timeout if request_timeout
     end
 
     def chat(parameters: {})
@@ -54,7 +54,8 @@ module OpenAI
     def self.get(path:)
       HTTParty.get(
         uri(path: path),
-        headers: headers
+        headers: headers,
+        timeout: request_timeout
       )
     end
 
@@ -62,7 +63,8 @@ module OpenAI
       HTTParty.post(
         uri(path: path),
         headers: headers,
-        body: parameters&.to_json
+        body: parameters&.to_json,
+        timeout: request_timeout
       )
     end
 
@@ -70,19 +72,21 @@ module OpenAI
       HTTParty.post(
         uri(path: path),
         headers: headers.merge({ "Content-Type" => "multipart/form-data" }),
-        body: parameters
+        body: parameters,
+        timeout: request_timeout
       )
     end
 
     def self.delete(path:)
       HTTParty.delete(
         uri(path: path),
-        headers: headers
+        headers: headers,
+        timeout: request_timeout
       )
     end
 
     private_class_method def self.uri(path:)
-      URI_BASE + OpenAI.configuration.api_version + path
+      OpenAI.configuration.uri_base + OpenAI.configuration.api_version + path
     end
 
     private_class_method def self.headers
@@ -92,5 +96,9 @@ module OpenAI
         "OpenAI-Organization" => OpenAI.configuration.organization_id
       }
     end
+
+    private_class_method def self.request_timeout
+      OpenAI.configuration.request_timeout
+    end
   end
 end
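The change to `self.uri` above composes the request URI from the configurable `uri_base` instead of the old `URI_BASE` constant. A minimal self-contained sketch of that composition (`SketchOpenAI` is an illustrative stand-in, not the gem; it assumes a trailing slash on the base URI, as in the default value):

```ruby
# Sketch of how the configurable uri_base composes request URIs, mirroring
# the uri(path:) concatenation in the diff above.
module SketchOpenAI
  class Configuration
    attr_accessor :uri_base, :api_version

    def initialize
      @uri_base = "https://api.openai.com/" # default; note the trailing slash
      @api_version = "v1"
    end
  end

  def self.configuration
    @configuration ||= Configuration.new
  end

  # Same concatenation as the diff: uri_base + api_version + path
  def self.uri(path:)
    configuration.uri_base + configuration.api_version + path
  end
end

puts SketchOpenAI.uri(path: "/models")
# => https://api.openai.com/v1/models

# Point all requests at a proxy instead:
SketchOpenAI.configuration.uri_base = "https://oai.hconeai.com/"
puts SketchOpenAI.uri(path: "/models")
# => https://oai.hconeai.com/v1/models
```

Because the base is read from configuration on every request, a proxy set once (per client or via `OpenAI.configure`) applies to all subsequent calls.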
data/lib/openai/version.rb CHANGED
@@ -1,3 +1,3 @@
 module OpenAI
-  VERSION = "3.5.0".freeze
+  VERSION = "3.7.0".freeze
 end
data/lib/openai.rb CHANGED
@@ -13,14 +13,18 @@ module OpenAI
 
   class Configuration
     attr_writer :access_token
-    attr_accessor :api_version, :organization_id
+    attr_accessor :api_version, :organization_id, :uri_base, :request_timeout
 
     DEFAULT_API_VERSION = "v1".freeze
+    DEFAULT_URI_BASE = "https://api.openai.com/".freeze
+    DEFAULT_REQUEST_TIMEOUT = 120
 
     def initialize
       @access_token = nil
       @api_version = DEFAULT_API_VERSION
       @organization_id = nil
+      @uri_base = DEFAULT_URI_BASE
+      @request_timeout = DEFAULT_REQUEST_TIMEOUT
     end
 
     def access_token
data/ruby-openai.gemspec CHANGED
@@ -6,7 +6,7 @@ Gem::Specification.new do |spec|
   spec.authors = ["Alex"]
   spec.email = ["alexrudall@users.noreply.github.com"]
 
-  spec.summary = "A Ruby gem for the OpenAI GPT-3 API"
+  spec.summary = "OpenAI API + Ruby! 🤖❤️"
   spec.homepage = "https://github.com/alexrudall/ruby-openai"
   spec.license = "MIT"
   spec.required_ruby_version = Gem::Requirement.new(">= 2.6.0")
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ruby-openai
 version: !ruby/object:Gem::Version
-  version: 3.5.0
+  version: 3.7.0
 platform: ruby
 authors:
 - Alex
-autorequire:
+autorequire: 
 bindir: exe
 cert_chain: []
-date: 2023-03-02 00:00:00.000000000 Z
+date: 2023-03-25 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: httparty
@@ -24,7 +24,7 @@ dependencies:
   - - ">="
   - !ruby/object:Gem::Version
     version: 0.18.1
-description:
+description: 
 email:
 - alexrudall@users.noreply.github.com
 executables: []
@@ -85,8 +85,8 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.4.6
-signing_key:
+rubygems_version: 3.1.4
+signing_key: 
 specification_version: 4
-summary: A Ruby gem for the OpenAI GPT-3 API
+summary: "OpenAI API + Ruby! \U0001F916❤️"
 test_files: []