ruby-openai 3.4.0 → 3.6.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: f3d2b8f98be44b7f33fce40f271bccc3dc1c72237df7ad64baed43f4ea79fb51
-  data.tar.gz: f19cc5e7155b47e6f7ccb7c5642679e5d8f7a6600e6d4f9e592ebc16313b74f0
+  metadata.gz: 0cee1b8ea9256b9e74487bd152c640acf176ee65f7860521b3ee9f0901dddb5f
+  data.tar.gz: 12bb3313fbcfd6d9098a13975b2ea9784126e603a321a63e4540891efbf720ab
 SHA512:
-  metadata.gz: 8f2c05fae48718593ea7990c480f6108844a698a943c8db2e3e01f3e31e95ede1ca2a5bd0dec5364a9b1730698c3802e24eecec585d20e59b03e1a421c8a3c4e
-  data.tar.gz: 303d6df6cea50a4f9ac50f8cab7329e585e201aeb2532709b161b830896cc1e733e0c48dfa6ebff22d13d930c3a0ff76425b8e0294c7e175a1bc4e07b84710a2
+  metadata.gz: c684505ae1469c9066c2b6731dc6e9ee32b4efed151a005bf83f6c070991d60d714122bb10dc7a6239dcbf99fbee9486397d22de0df74287754ac5088460732c
+  data.tar.gz: 70381208b506f04536ced0db66b1519e34d3dfc368796c9f17e7994fa8f95d46f902746f4513ae570a7e41d36d788a88c6ca69f6b999b88ad6be247f6547b913
data/CHANGELOG.md CHANGED
@@ -5,6 +5,18 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [3.6.0] - 2023-03-22
+
+### Added
+
+- Add much-needed ability to increase the HTTParty timeout, and set the default to 120 seconds. Thanks to [@mbackermann](https://github.com/mbackermann) for the PR and to everyone who requested this!
+
+## [3.5.0] - 2023-03-02
+
+### Added
+
+- Add Client#transcribe and Client#translate endpoints - Whisper over the wire! Thanks to [@Clemalfroy](https://github.com/Clemalfroy)
+
 ## [3.4.0] - 2023-03-01
 
 ### Added
data/Gemfile CHANGED
@@ -7,6 +7,6 @@ gem "byebug", "~> 11.1.3"
 gem "dotenv", "~> 2.8.1"
 gem "rake", "~> 13.0"
 gem "rspec", "~> 3.12"
-gem "rubocop", "~> 1.46.0"
+gem "rubocop", "~> 1.48.1"
 gem "vcr", "~> 6.1.0"
 gem "webmock", "~> 3.18.1"
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (3.4.0)
+    ruby-openai (3.6.0)
       httparty (>= 0.18.1)
 
 GEM
@@ -23,7 +23,7 @@ GEM
     mini_mime (1.1.2)
     multi_xml (0.6.0)
     parallel (1.22.1)
-    parser (3.2.1.0)
+    parser (3.2.1.1)
       ast (~> 2.4.1)
     public_suffix (5.0.1)
     rainbow (3.1.1)
@@ -43,7 +43,7 @@ GEM
       diff-lcs (>= 1.2.0, < 2.0)
       rspec-support (~> 3.12.0)
     rspec-support (3.12.0)
-    rubocop (1.46.0)
+    rubocop (1.48.1)
       json (~> 2.3)
       parallel (~> 1.10)
       parser (>= 3.2.0.0)
@@ -53,9 +53,9 @@ GEM
       rubocop-ast (>= 1.26.0, < 2.0)
       ruby-progressbar (~> 1.7)
       unicode-display_width (>= 2.4.0, < 3.0)
-    rubocop-ast (1.26.0)
+    rubocop-ast (1.27.0)
       parser (>= 3.2.1.0)
-    ruby-progressbar (1.11.0)
+    ruby-progressbar (1.13.0)
     unicode-display_width (2.4.2)
     vcr (6.1.0)
     webmock (3.18.1)
@@ -71,7 +71,7 @@ DEPENDENCIES
   dotenv (~> 2.8.1)
   rake (~> 13.0)
   rspec (~> 3.12)
-  rubocop (~> 1.46.0)
+  rubocop (~> 1.48.1)
  ruby-openai!
   vcr (~> 6.1.0)
   webmock (~> 3.18.1)
data/README.md CHANGED
@@ -7,16 +7,16 @@
 
 Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖❤️
 
-Generate text with ChatGPT, create images with DALL·E, or write code with Codex...
+Generate text with ChatGPT, transcribe or translate audio with Whisper, create images with DALL·E, or write code with Codex...
 
-## Installation
+Check out [Ruby AI Builders](https://discord.gg/k4Uc224xVD) on Discord!
 
 ### Bundler
 
 Add this line to your application's Gemfile:
 
 ```ruby
-    gem "ruby-openai"
+gem "ruby-openai"
 ```
 
 And then execute:
@@ -32,7 +32,7 @@ $ gem install ruby-openai
 and require with:
 
 ```ruby
-    require "openai"
+require "openai"
 ```
 
 ## Upgrading
@@ -49,7 +49,7 @@ The `::Ruby::OpenAI` module has been removed and all classes have been moved und
 For a quick test you can pass your token directly to a new client:
 
 ```ruby
-    client = OpenAI::Client.new(access_token: "access_token_goes_here")
+client = OpenAI::Client.new(access_token: "access_token_goes_here")
 ```
 
 ### With Config
@@ -57,16 +57,34 @@ For a quick test you can pass your token directly to a new client:
 For a more robust setup, you can configure the gem with your API keys, for example in an `openai.rb` initializer file. Never hardcode secrets into your codebase - instead use something like [dotenv](https://github.com/motdotla/dotenv) to pass the keys safely into your environments.
 
 ```ruby
-    OpenAI.configure do |config|
-        config.access_token = ENV.fetch('OPENAI_ACCESS_TOKEN')
-        config.organization_id = ENV.fetch('OPENAI_ORGANIZATION_ID') # Optional.
-    end
+OpenAI.configure do |config|
+    config.access_token = ENV.fetch('OPENAI_ACCESS_TOKEN')
+    config.organization_id = ENV.fetch('OPENAI_ORGANIZATION_ID') # Optional.
+end
 ```
 
 Then you can create a client like this:
 
 ```ruby
-    client = OpenAI::Client.new
+client = OpenAI::Client.new
+```
+
+#### Setting request timeout
+
+The default timeout for any OpenAI request is 120 seconds. You can change that by passing `request_timeout` when initializing the client:
+
+```ruby
+client = OpenAI::Client.new(access_token: "access_token_goes_here", request_timeout: 25)
+```
+
+or when configuring the gem:
+
+```ruby
+OpenAI.configure do |config|
+    config.access_token = ENV.fetch('OPENAI_ACCESS_TOKEN')
+    config.organization_id = ENV.fetch('OPENAI_ORGANIZATION_ID') # Optional.
+    config.request_timeout = 25 # Optional
+end
 ```
 
 ### Models
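One practical consequence of the new `request_timeout` option (an illustrative sketch, not part of the gem): when the timeout elapses, HTTParty lets the underlying `Net::ReadTimeout`/`Net::OpenTimeout` bubble up to the caller, so API calls may want a rescue around them. Here `slow_chat_call` is a hypothetical stand-in for a real `client.chat` request that runs past the deadline:

```ruby
require "net/http" # defines Net::ReadTimeout and Net::OpenTimeout

# Hypothetical stand-in for an API call that exceeds request_timeout.
def slow_chat_call
  raise Net::ReadTimeout
end

# When the timeout fires, the Net::* error propagates out of HTTParty,
# so the caller decides how to degrade.
def chat_with_fallback
  slow_chat_call
rescue Net::ReadTimeout, Net::OpenTimeout
  { "error" => "timeout" }
end

chat_with_fallback # => { "error" => "timeout" }
```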
@@ -74,8 +92,8 @@ Then you can create a client like this:
 There are different models that can be used to generate text. For a full list and to retrieve information about a single model:
 
 ```ruby
-    client.models.list
-    client.models.retrieve(id: "text-ada-001")
+client.models.list
+client.models.retrieve(id: "text-ada-001")
 ```
 
 #### Examples
@@ -91,17 +109,17 @@ There are different models that can be used to generate text. For a full list an
 
 ### ChatGPT
 
-ChatGPT is a model that can be used to generate text in a conversational style. You can use it to generate a response to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
+ChatGPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
 
 ```ruby
-    response = client.chat(
-        parameters: {
-            model: "gpt-3.5-turbo",
-            messages: [{ role: "user", content: "Hello!"}],
-        })
-    puts response.dig("choices", 0, "message", "content")
-    => "Hello! How may I assist you today?"
-
+response = client.chat(
+    parameters: {
+        model: "gpt-3.5-turbo", # Required.
+        messages: [{ role: "user", content: "Hello!"}], # Required.
+        temperature: 0.7,
+    })
+puts response.dig("choices", 0, "message", "content")
+# => "Hello! How may I assist you today?"
 ```
 
 ### Completions
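The `response.dig` call above works because HTTParty parses the JSON body into a plain nested Hash. A small standalone sketch (with a hand-written response hash, values hypothetical) showing why `dig` is convenient here — it returns `nil` instead of raising when any key in the chain is missing:

```ruby
# A response shaped like the chat completion JSON above (hypothetical values).
response = {
  "choices" => [
    { "message" => { "role" => "assistant", "content" => "Hello! How may I assist you today?" } }
  ]
}

# dig walks the chain and returns nil on the first missing key.
content = response.dig("choices", 0, "message", "content")
missing = response.dig("choices", 1, "message", "content")

puts content         # => Hello! How may I assist you today?
puts missing.inspect # => nil
```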
@@ -109,14 +127,14 @@ ChatGPT is a model that can be used to generate text in a conversational style.
 Hit the OpenAI API for a completion using other GPT-3 models:
 
 ```ruby
-    response = client.completions(
-        parameters: {
-            model: "text-davinci-001",
-            prompt: "Once upon a time",
-            max_tokens: 5
-        })
-    puts response["choices"].map { |c| c["text"] }
-    => [", there lived a great"]
+response = client.completions(
+    parameters: {
+        model: "text-davinci-001",
+        prompt: "Once upon a time",
+        max_tokens: 5
+    })
+puts response["choices"].map { |c| c["text"] }
+# => [", there lived a great"]
 ```
 
 ### Edits
@@ -124,15 +142,15 @@ Hit the OpenAI API for a completion using other GPT-3 models:
 Send a string and some instructions for what to do to the string:
 
 ```ruby
-    response = client.edits(
-        parameters: {
-            model: "text-davinci-edit-001",
-            input: "What day of the wek is it?",
-            instruction: "Fix the spelling mistakes"
-        }
-    )
-    puts response.dig("choices", 0, "text")
-    => What day of the week is it?
+response = client.edits(
+    parameters: {
+        model: "text-davinci-edit-001",
+        input: "What day of the wek is it?",
+        instruction: "Fix the spelling mistakes"
+    }
+)
+puts response.dig("choices", 0, "text")
+# => What day of the week is it?
 ```
 
 ### Embeddings
@@ -140,12 +158,12 @@ Send a string and some instructions for what to do to the string:
 You can use the embeddings endpoint to get a vector of numbers representing an input. You can then compare these vectors for different inputs to efficiently check how similar the inputs are.
 
 ```ruby
-    client.embeddings(
-        parameters: {
-            model: "babbage-similarity",
-            input: "The food was delicious and the waiter..."
-        }
-    )
+client.embeddings(
+    parameters: {
+        model: "babbage-similarity",
+        input: "The food was delicious and the waiter..."
+    }
+)
 ```
 
 ### Files
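The README leaves the comparison step itself to the reader; a common choice is cosine similarity over the returned vectors. A minimal sketch with made-up 3-element vectors (real embedding vectors are much longer):

```ruby
# Cosine similarity: dot product of the vectors divided by the product of
# their magnitudes. Identical directions score 1.0, orthogonal 0.0.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  norm_a = Math.sqrt(a.sum { |x| x * x })
  norm_b = Math.sqrt(b.sum { |x| x * x })
  dot / (norm_a * norm_b)
end

v1 = [0.1, 0.2, 0.3] # made-up embedding
v2 = [0.1, 0.2, 0.3] # identical direction, so similarity is 1.0
puts cosine_similarity(v1, v2).round(3) # => 1.0
```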
@@ -153,18 +171,18 @@ You can use the embeddings endpoint to get a vector of numbers representing an i
 Put your data in a `.jsonl` file like this:
 
 ```json
-    {"prompt":"Overjoyed with my new phone! ->", "completion":" positive"}
-    {"prompt":"@lakers disappoint for a third straight night ->", "completion":" negative"}
+{"prompt":"Overjoyed with my new phone! ->", "completion":" positive"}
+{"prompt":"@lakers disappoint for a third straight night ->", "completion":" negative"}
 ```
 
 and pass the path to `client.files.upload` to upload it to OpenAI, and then interact with it:
 
 ```ruby
-    client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
-    client.files.list
-    client.files.retrieve(id: 123)
-    client.files.content(id: 123)
-    client.files.delete(id: 123)
+client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
+client.files.list
+client.files.retrieve(id: 123)
+client.files.content(id: 123)
+client.files.delete(id: 123)
```
 
 ### Fine-tunes
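A sketch of generating such a `.jsonl` file from plain Ruby hashes using only the stdlib (`json` and `tempfile`): one JSON object per line, with records mirroring the example above. This is illustrative, not part of the gem.

```ruby
require "json"
require "tempfile"

# Records mirroring the README's sentiment example.
examples = [
  { prompt: "Overjoyed with my new phone! ->", completion: " positive" },
  { prompt: "@lakers disappoint for a third straight night ->", completion: " negative" }
]

# JSONL means exactly one JSON object per line.
file = Tempfile.new(["sentiment", ".jsonl"])
examples.each { |ex| file.puts(ex.to_json) }
file.flush

# Each line parses back into the original record (keys come back as strings).
lines = File.readlines(file.path, chomp: true)
puts lines.length                            # => 2
puts JSON.parse(lines.first)["completion"]   # =>  positive
```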
@@ -172,51 +190,51 @@ and pass the path to `client.files.upload` to upload it to OpenAI, and then inte
 Upload your fine-tuning data in a `.jsonl` file as above and get its ID:
 
 ```ruby
-    response = client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
-    file_id = JSON.parse(response.body)["id"]
+response = client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
+file_id = JSON.parse(response.body)["id"]
 ```
 
 You can then use this file ID to create a fine-tune model:
 
 ```ruby
-    response = client.finetunes.create(
-        parameters: {
-            training_file: file_id,
-            model: "text-ada-001"
-        })
-    fine_tune_id = JSON.parse(response.body)["id"]
+response = client.finetunes.create(
+    parameters: {
+        training_file: file_id,
+        model: "text-ada-001"
+    })
+fine_tune_id = JSON.parse(response.body)["id"]
 ```
 
 That will give you the fine-tune ID. If you made a mistake you can cancel the fine-tune model before it is processed:
 
 ```ruby
-    client.finetunes.cancel(id: fine_tune_id)
+client.finetunes.cancel(id: fine_tune_id)
 ```
 
 You may need to wait a short time for processing to complete. Once processed, you can use list or retrieve to get the name of the fine-tuned model:
 
 ```ruby
-    client.finetunes.list
-    response = client.finetunes.retrieve(id: fine_tune_id)
-    fine_tuned_model = JSON.parse(response.body)["fine_tuned_model"]
+client.finetunes.list
+response = client.finetunes.retrieve(id: fine_tune_id)
+fine_tuned_model = JSON.parse(response.body)["fine_tuned_model"]
 ```
 
 This fine-tuned model name can then be used in completions:
 
 ```ruby
-    response = client.completions(
-        parameters: {
-            model: fine_tuned_model,
-            prompt: "I love Mondays!"
-        }
-    )
-    JSON.parse(response.body)["choices"].map { |c| c["text"] }
+response = client.completions(
+    parameters: {
+        model: fine_tuned_model,
+        prompt: "I love Mondays!"
+    }
+)
+JSON.parse(response.body)["choices"].map { |c| c["text"] }
 ```
 
 You can delete the fine-tuned model when you are done with it:
 
 ```ruby
-    client.finetunes.delete(fine_tuned_model: fine_tuned_model)
+client.finetunes.delete(fine_tuned_model: fine_tuned_model)
 ```
 
 ### Image Generation
@@ -225,9 +243,9 @@ Generate an image using DALL·E! The size of any generated images must be one of
 if not specified the image will default to `1024x1024`.
 
 ```ruby
-    response = client.images.generate(parameters: { prompt: "A baby sea otter cooking pasta wearing a hat of some sort", size: "256x256" })
-    puts response.dig("data", 0, "url")
-    => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
+response = client.images.generate(parameters: { prompt: "A baby sea otter cooking pasta wearing a hat of some sort", size: "256x256" })
+puts response.dig("data", 0, "url")
+# => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
 ```
 
 ![Ruby](https://i.ibb.co/6y4HJFx/img-d-Tx-Rf-RHj-SO5-Gho-Cbd8o-LJvw3.png)
@@ -237,9 +255,9 @@ if not specified the image will default to `1024x1024`.
 Fill in the transparent part of an image, or upload a mask with transparent sections to indicate the parts of an image that can be changed according to your prompt...
 
 ```ruby
-    response = client.images.edit(parameters: { prompt: "A solid red Ruby on a blue background", image: "image.png", mask: "mask.png" })
-    puts response.dig("data", 0, "url")
-    => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
+response = client.images.edit(parameters: { prompt: "A solid red Ruby on a blue background", image: "image.png", mask: "mask.png" })
+puts response.dig("data", 0, "url")
+# => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
 ```
 
 ![Ruby](https://i.ibb.co/sWVh3BX/dalle-ruby.png)
@@ -249,9 +267,9 @@ Fill in the transparent part of an image, or upload a mask with transparent sect
 Create n variations of an image.
 
 ```ruby
-    response = client.images.variations(parameters: { image: "image.png", n: 2 })
-    puts response.dig("data", 0, "url")
-    => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
+response = client.images.variations(parameters: { image: "image.png", n: 2 })
+puts response.dig("data", 0, "url")
+# => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
 ```
 
 ![Ruby](https://i.ibb.co/TWJLP2y/img-miu-Wk-Nl0-QNy-Xtj-Lerc3c0l-NW.png)
@@ -262,9 +280,41 @@ Create n variations of an image.
 Pass a string to check if it violates OpenAI's Content Policy:
 
 ```ruby
-    response = client.moderations(parameters: { input: "I'm worried about that." })
-    puts response.dig("results", 0, "category_scores", "hate")
-    => 5.505014632944949e-05
+response = client.moderations(parameters: { input: "I'm worried about that." })
+puts response.dig("results", 0, "category_scores", "hate")
+# => 5.505014632944949e-05
+```
+
+### Whisper
+
+Whisper is a speech-to-text model that can be used to generate text from audio files:
+
+#### Translate
+
+The translations API takes as input an audio file in any of the supported languages and transcribes the audio into English.
+
+```ruby
+response = client.translate(
+    parameters: {
+        model: "whisper-1",
+        file: File.open('path_to_file'),
+    })
+puts response.parsed_response['text']
+# => "Translation of the text"
+```
+
+#### Transcribe
+
+The transcriptions API takes as input the audio file you want to transcribe and returns the text in the desired output file format.
+
+```ruby
+response = client.transcribe(
+    parameters: {
+        model: "whisper-1",
+        file: File.open('path_to_file'),
+    })
+puts response.parsed_response['text']
+# => "Transcription of the text"
 ```
 
 ## Development
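The moderation category scores are plain floats, so flagging content is just a threshold comparison. A standalone sketch with a hand-written response hash and an assumed 0.5 threshold (both hypothetical, for illustration only):

```ruby
# A response shaped like the moderations JSON above (hypothetical score).
response = {
  "results" => [
    { "category_scores" => { "hate" => 5.505014632944949e-05 } }
  ]
}

FLAG_THRESHOLD = 0.5 # assumed cut-off for this sketch

score = response.dig("results", 0, "category_scores", "hate")
flagged = score > FLAG_THRESHOLD
puts flagged # => false
```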
data/lib/openai/client.rb CHANGED
@@ -2,9 +2,10 @@ module OpenAI
   class Client
     URI_BASE = "https://api.openai.com/".freeze
 
-    def initialize(access_token: nil, organization_id: nil)
+    def initialize(access_token: nil, organization_id: nil, request_timeout: nil)
       OpenAI.configuration.access_token = access_token if access_token
       OpenAI.configuration.organization_id = organization_id if organization_id
+      OpenAI.configuration.request_timeout = request_timeout if request_timeout
     end
 
     def chat(parameters: {})
@@ -43,10 +44,19 @@ module OpenAI
       OpenAI::Client.json_post(path: "/moderations", parameters: parameters)
     end
 
+    def transcribe(parameters: {})
+      OpenAI::Client.multipart_post(path: "/audio/transcriptions", parameters: parameters)
+    end
+
+    def translate(parameters: {})
+      OpenAI::Client.multipart_post(path: "/audio/translations", parameters: parameters)
+    end
+
     def self.get(path:)
       HTTParty.get(
         uri(path: path),
-        headers: headers
+        headers: headers,
+        timeout: request_timeout
       )
     end
 
@@ -54,7 +64,8 @@ module OpenAI
       HTTParty.post(
         uri(path: path),
         headers: headers,
-        body: parameters&.to_json
+        body: parameters&.to_json,
+        timeout: request_timeout
       )
     end
 
@@ -62,14 +73,16 @@ module OpenAI
       HTTParty.post(
         uri(path: path),
         headers: headers.merge({ "Content-Type" => "multipart/form-data" }),
-        body: parameters
+        body: parameters,
+        timeout: request_timeout
       )
     end
 
     def self.delete(path:)
       HTTParty.delete(
         uri(path: path),
-        headers: headers
+        headers: headers,
+        timeout: request_timeout
       )
     end
 
@@ -84,5 +97,9 @@ module OpenAI
         "OpenAI-Organization" => OpenAI.configuration.organization_id
       }
     end
+
+    private_class_method def self.request_timeout
+      OpenAI.configuration.request_timeout
+    end
   end
 end
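The `private_class_method def self.request_timeout` line in the diff above leans on the fact that `def` returns the method name as a Symbol, which can be passed straight to `private_class_method`. A standalone sketch of the same idiom (class and method names hypothetical):

```ruby
class Example
  # Public entry point that uses the private class method internally.
  def self.call
    secret * 2
  end

  # `def self.secret` returns :secret, which private_class_method consumes.
  private_class_method def self.secret
    21
  end
end

puts Example.call # => 42
begin
  Example.secret # raises: private method called from outside the class
rescue NoMethodError => e
  puts "private: #{e.class}" # => private: NoMethodError
end
```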
data/lib/openai/version.rb CHANGED
@@ -1,3 +1,3 @@
 module OpenAI
-  VERSION = "3.4.0".freeze
+  VERSION = "3.6.0".freeze
 end
data/lib/openai.rb CHANGED
@@ -13,14 +13,16 @@ module OpenAI
 
   class Configuration
     attr_writer :access_token
-    attr_accessor :api_version, :organization_id
+    attr_accessor :api_version, :organization_id, :request_timeout
 
     DEFAULT_API_VERSION = "v1".freeze
+    DEFAULT_REQUEST_TIMEOUT = 120
 
     def initialize
       @access_token = nil
       @api_version = DEFAULT_API_VERSION
       @organization_id = nil
+      @request_timeout = DEFAULT_REQUEST_TIMEOUT
     end
 
     def access_token
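The `Configuration` change above boils down to a new accessor plus a defaulted instance variable set in the constructor. A trimmed-down, standalone sketch of that shape (only the timeout-related parts, everything else omitted):

```ruby
# Minimal stand-in for the gem's Configuration class, reduced to the
# request_timeout behaviour added in this diff.
class Configuration
  attr_accessor :request_timeout

  DEFAULT_REQUEST_TIMEOUT = 120

  def initialize
    @request_timeout = DEFAULT_REQUEST_TIMEOUT
  end
end

config = Configuration.new
puts config.request_timeout # => 120

# The accessor lets a client override the default, as Client#initialize does.
config.request_timeout = 25
puts config.request_timeout # => 25
```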
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ruby-openai
 version: !ruby/object:Gem::Version
-  version: 3.4.0
+  version: 3.6.0
 platform: ruby
 authors:
 - Alex
-autorequire:
+autorequire:
 bindir: exe
 cert_chain: []
-date: 2023-03-01 00:00:00.000000000 Z
+date: 2023-03-22 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: httparty
@@ -24,7 +24,7 @@ dependencies:
   - - ">="
   - !ruby/object:Gem::Version
     version: 0.18.1
-description:
+description:
 email:
 - alexrudall@users.noreply.github.com
 executables: []
@@ -85,8 +85,8 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.4.6
-signing_key:
+rubygems_version: 3.1.4
+signing_key:
 specification_version: 4
 summary: A Ruby gem for the OpenAI GPT-3 API
 test_files: []