ruby-openai 5.2.0 → 6.0.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 996d39cd32c3c05c73efea0177c12d0751b5dda208b2855aaac440af7b2702d8
-  data.tar.gz: 65471a670e34f537fe4878322c87978f1c2beaf93336a7f2104baaa86b018c60
+  metadata.gz: 88af78318c57c49636755099d6c8837e62762dee65e27465160b1d3778721edd
+  data.tar.gz: d6abf778678cf1813ce61efcf5874ecb0e4157a9cb08f6e958948b6d4f5d74ab
 SHA512:
-  metadata.gz: deab41c7c7f4ee21b4ed1a17f289b147b2e4960b33fd12ce863d5bdb8c835a955215d01438890c1ab8d9a1c7026faba0e5b8359c1fe3d9139082f8de58dce616
-  data.tar.gz: 3309d1c3a68736816c4f3bd1d465021ee3f162b5f5c3dbb7915ed5ce6f3a8d7014f9f1c4b07cf630f3f90201bdbe0ec308f1dc00fb6b075f45546fe519afb553
+  metadata.gz: 046c91484f51ea3973698c2bd3a7ff627df3a8413097757e4cd7debaf540ee40eb162cfdc6df41b2973cba3a5ef17014c8c18f544c777a5a45d93cb8530fe232
+  data.tar.gz: 196cd9c830e5e2639c16fc02431b2dfad8286cc098c85b47c6c42c2c64a2f570a8412f021846ffdff0c057a184a298c43c82e69b0f2ad1b25f048b0c76969422
data/CHANGELOG.md CHANGED
@@ -5,6 +5,18 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [6.0.0] - 2023-11-06
+
+### Added
+
+- [BREAKING] HTTP errors will now be raised by ruby-openai as Faraday::Errors, including when streaming! Implemented by [@atesgoral](https://github.com/atesgoral)
+- [BREAKING] Switch from the legacy Finetunes endpoints to the new Fine-tuning Jobs endpoints. Implemented by [@lancecarlson](https://github.com/lancecarlson)
+- [BREAKING] Remove the deprecated Completions endpoints - use Chat instead.
+
+### Fix
+
+- [BREAKING] Fix an issue where the :stream parameter was replaced by a boolean in the client application. Thanks to [@martinjaimem](https://github.com/martinjaimem), [@vickymadrid03](https://github.com/vickymadrid03) and [@nicastelo](https://github.com/nicastelo) for spotting and fixing this issue.
+
 ## [5.2.0] - 2023-10-30
 
 ### Fix
data/Gemfile CHANGED
@@ -5,8 +5,8 @@ gemspec
 
 gem "byebug", "~> 11.1.3"
 gem "dotenv", "~> 2.8.1"
-gem "rake", "~> 13.0"
+gem "rake", "~> 13.1"
 gem "rspec", "~> 3.12"
 gem "rubocop", "~> 1.50.2"
 gem "vcr", "~> 6.1.0"
-gem "webmock", "~> 3.18.1"
+gem "webmock", "~> 3.19.1"
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (5.2.0)
+    ruby-openai (6.0.0)
       event_stream_parser (>= 0.3.0, < 1.0.0)
       faraday (>= 1)
       faraday-multipart (>= 1)
@@ -9,16 +9,18 @@ PATH
 GEM
   remote: https://rubygems.org/
   specs:
-    addressable (2.8.1)
+    addressable (2.8.5)
       public_suffix (>= 2.0.2, < 6.0)
     ast (2.4.2)
+    base64 (0.1.1)
     byebug (11.1.3)
     crack (0.4.5)
       rexml
     diff-lcs (1.5.0)
     dotenv (2.8.1)
     event_stream_parser (0.3.0)
-    faraday (2.7.10)
+    faraday (2.7.11)
+      base64
       faraday-net_http (>= 2.0, < 3.1)
       ruby2_keywords (>= 0.0.4)
     faraday-multipart (1.0.4)
@@ -30,11 +32,11 @@ GEM
     parallel (1.22.1)
     parser (3.2.2.0)
       ast (~> 2.4.1)
-    public_suffix (5.0.1)
+    public_suffix (5.0.3)
     rainbow (3.1.1)
-    rake (13.0.6)
+    rake (13.1.0)
     regexp_parser (2.8.0)
-    rexml (3.2.5)
+    rexml (3.2.6)
     rspec (3.12.0)
       rspec-core (~> 3.12.0)
       rspec-expectations (~> 3.12.0)
@@ -64,7 +66,7 @@ GEM
     ruby2_keywords (0.0.5)
     unicode-display_width (2.4.2)
     vcr (6.1.0)
-    webmock (3.18.1)
+    webmock (3.19.1)
       addressable (>= 2.8.0)
       crack (>= 0.3.2)
       hashdiff (>= 0.4.0, < 2.0.0)
@@ -75,12 +77,12 @@ PLATFORMS
 DEPENDENCIES
   byebug (~> 11.1.3)
   dotenv (~> 2.8.1)
-  rake (~> 13.0)
+  rake (~> 13.1)
   rspec (~> 3.12)
   rubocop (~> 1.50.2)
   ruby-openai!
   vcr (~> 6.1.0)
-  webmock (~> 3.18.1)
+  webmock (~> 3.19.1)
 
 BUNDLED WITH
    2.4.5
data/README.md CHANGED
@@ -149,7 +149,7 @@ client.models.retrieve(id: "text-ada-001")
 #### Examples
 
 - [GPT-4 (limited beta)](https://platform.openai.com/docs/models/gpt-4)
-  - gpt-4
+  - gpt-4 (uses current version)
   - gpt-4-0314
   - gpt-4-32k
 - [GPT-3.5](https://platform.openai.com/docs/models/gpt-3-5)
@@ -161,9 +161,9 @@ client.models.retrieve(id: "text-ada-001")
   - text-babbage-001
   - text-curie-001
 
-### ChatGPT
+### Chat
 
-ChatGPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
+GPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
 
 ```ruby
 response = client.chat(
@@ -176,11 +176,11 @@ puts response.dig("choices", 0, "message", "content")
 # => "Hello! How may I assist you today?"
 ```
 
-### Streaming ChatGPT
+### Streaming Chat
 
-[Quick guide to streaming ChatGPT with Rails 7 and Hotwire](https://gist.github.com/alexrudall/cb5ee1e109353ef358adb4e66631799d)
+[Quick guide to streaming Chat with Rails 7 and Hotwire](https://gist.github.com/alexrudall/cb5ee1e109353ef358adb4e66631799d)
 
-You can stream from the API in realtime, which can be much faster and used to create a more engaging user experience. Pass a [Proc](https://ruby-doc.org/core-2.6/Proc.html) (or any object with a `#call` method) to the `stream` parameter to receive the stream of text chunks as they are generated. Each time one or more chunks is received, the proc will be called once with each chunk, parsed as a Hash. If OpenAI returns an error, `ruby-openai` will pass that to your proc as a Hash.
+You can stream from the API in realtime, which can be much faster and used to create a more engaging user experience. Pass a [Proc](https://ruby-doc.org/core-2.6/Proc.html) (or any object with a `#call` method) to the `stream` parameter to receive the stream of completion chunks as they are generated. Each time one or more chunks is received, the proc will be called once with each chunk, parsed as a Hash. If OpenAI returns an error, `ruby-openai` will raise a Faraday error.
 
 ```ruby
 client.chat(
@@ -195,7 +195,7 @@ client.chat(
 # => "Anna is a young woman in her mid-twenties, with wavy chestnut hair that falls to her shoulders..."
 ```
 
-Note: the API docs state that token usage is included in the streamed chat chunk objects, but this doesn't currently appear to be the case. To count tokens while streaming, try `OpenAI.rough_token_count` or [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby).
+Note: OpenAI currently does not report token usage for streaming responses. To count tokens while streaming, try `OpenAI.rough_token_count` or [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby). We think that each call to the stream proc corresponds to a single token, so you can also try counting the number of calls to the proc to get the completion token count.
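That counting approach can be sketched in plain Ruby. The chunk hashes below are fabricated stand-ins for real stream chunks, which have the same `choices`/`delta`/`content` shape:

```ruby
# Hypothetical sketch: wrap your stream handler in a counter, assuming roughly
# one stream-proc call per completion token (per the note above).
token_count = 0
content = +""

handler = proc do |chunk|
  token_count += 1
  content << chunk.dig("choices", 0, "delta", "content").to_s
end

# Fabricated chunks standing in for a real streamed response:
[
  { "choices" => [{ "delta" => { "content" => "Hello" } }] },
  { "choices" => [{ "delta" => { "content" => " there" } }] }
].each { |c| handler.call(c) }

content      # => "Hello there"
token_count  # => 2
```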
 
 ### Functions
 
@@ -257,21 +257,6 @@ end
 # => "The weather is nice 🌞"
 ```
 
-### Completions
-
-Hit the OpenAI API for a completion using other GPT-3 models:
-
-```ruby
-response = client.completions(
-    parameters: {
-        model: "text-davinci-001",
-        prompt: "Once upon a time",
-        max_tokens: 5
-    })
-puts response["choices"].map { |c| c["text"] }
-# => [", there lived a great"]
-```
-
 ### Edits
 
 Send a string and some instructions for what to do to the string:
@@ -323,22 +308,22 @@ client.files.content(id: "file-123")
 client.files.delete(id: "file-123")
 ```
 
-### Fine-tunes
+### Finetunes
 
 Upload your fine-tuning data in a `.jsonl` file as above and get its ID:
 
 ```ruby
-response = client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
+response = client.files.upload(parameters: { file: "path/to/sarcasm.jsonl", purpose: "fine-tune" })
 file_id = JSON.parse(response.body)["id"]
 ```
 
-You can then use this file ID to create a fine-tune model:
+You can then use this file ID to create a fine-tuning job:
 
 ```ruby
 response = client.finetunes.create(
     parameters: {
     training_file: file_id,
-    model: "ada"
+    model: "gpt-3.5-turbo-0613"
 })
 fine_tune_id = response["id"]
 ```
@@ -369,10 +354,10 @@ response = client.completions(
 response.dig("choices", 0, "text")
 ```
 
-You can delete the fine-tuned model when you are done with it:
+You can also capture the events for a job:
 
-```ruby
-client.finetunes.delete(fine_tuned_model: fine_tuned_model)
+```ruby
+client.finetunes.list_events(id: fine_tune_id)
 ```
 
 ### Image Generation
@@ -455,6 +440,18 @@ puts response["text"]
 # => "Transcription of the text"
 ```
 
+#### Errors
+
+HTTP errors can be caught like this:
+
+```ruby
+begin
+  OpenAI::Client.new.models.retrieve(id: "text-ada-001")
+rescue Faraday::Error => e
+  raise "Got a Faraday error: #{e}"
+end
+```
+
 ## Development
 
 After checking out the repo, run `bin/setup` to install dependencies. You can run `bin/console` for an interactive prompt that will allow you to experiment.
data/lib/openai/client.rb CHANGED
@@ -25,10 +25,6 @@ module OpenAI
       json_post(path: "/chat/completions", parameters: parameters)
     end
 
-    def completions(parameters: {})
-      json_post(path: "/completions", parameters: parameters)
-    end
-
     def edits(parameters: {})
       json_post(path: "/edits", parameters: parameters)
     end
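Since the `completions` method is removed, a legacy prompt maps onto Chat as a single user message. A hypothetical migration sketch (no request is made; the model names and values are illustrative):

```ruby
# Old-style completions parameters (no longer supported by ruby-openai 6.x):
completions_params = { model: "text-davinci-001", prompt: "Once upon a time", max_tokens: 5 }

# Equivalent chat parameters: the prompt becomes one user message.
chat_params = {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: completions_params[:prompt] }],
  max_tokens: completions_params[:max_tokens]
}
# You would then call client.chat(parameters: chat_params).
```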
data/lib/openai/finetunes.rb CHANGED
@@ -5,31 +5,23 @@ module OpenAI
     end
 
     def list
-      @client.get(path: "/fine-tunes")
+      @client.get(path: "/fine_tuning/jobs")
     end
 
     def create(parameters: {})
-      @client.json_post(path: "/fine-tunes", parameters: parameters)
+      @client.json_post(path: "/fine_tuning/jobs", parameters: parameters)
     end
 
     def retrieve(id:)
-      @client.get(path: "/fine-tunes/#{id}")
+      @client.get(path: "/fine_tuning/jobs/#{id}")
     end
 
     def cancel(id:)
-      @client.multipart_post(path: "/fine-tunes/#{id}/cancel")
+      @client.json_post(path: "/fine_tuning/jobs/#{id}/cancel", parameters: {})
     end
 
-    def events(id:)
-      @client.get(path: "/fine-tunes/#{id}/events")
-    end
-
-    def delete(fine_tuned_model:)
-      if fine_tuned_model.start_with?("ft-")
-        raise ArgumentError, "Please give a fine_tuned_model name, not a fine-tune ID"
-      end
-
-      @client.delete(path: "/models/#{fine_tuned_model}")
+    def list_events(id:)
+      @client.get(path: "/fine_tuning/jobs/#{id}/events")
     end
   end
 end
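The old-to-new path mapping above can be sketched with a recording client. `FakeClient` and the job ID are illustrations, not part of the gem, and no network call is made:

```ruby
# FakeClient records the paths it is asked to hit, letting us show the new
# /fine_tuning/jobs routes that the methods above now build.
class FakeClient
  attr_reader :paths

  def initialize
    @paths = []
  end

  def get(path:)
    @paths << path
  end

  def json_post(path:, parameters:)
    @paths << path
  end
end

client = FakeClient.new
id = "ftjob-123" # hypothetical job ID

client.get(path: "/fine_tuning/jobs")                                    # list
client.json_post(path: "/fine_tuning/jobs/#{id}/cancel", parameters: {}) # cancel
client.get(path: "/fine_tuning/jobs/#{id}/events")                       # list_events

client.paths
# => ["/fine_tuning/jobs", "/fine_tuning/jobs/ftjob-123/cancel", "/fine_tuning/jobs/ftjob-123/events"]
```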
data/lib/openai/http.rb CHANGED
@@ -3,55 +3,46 @@ require "event_stream_parser"
 module OpenAI
   module HTTP
     def get(path:)
-      to_json(conn.get(uri(path: path)) do |req|
+      parse_jsonl(conn.get(uri(path: path)) do |req|
        req.headers = headers
      end&.body)
    end
 
    def json_post(path:, parameters:)
-     to_json(conn.post(uri(path: path)) do |req|
-       if parameters[:stream].respond_to?(:call)
-         req.options.on_data = to_json_stream(user_proc: parameters[:stream])
-         parameters[:stream] = true # Necessary to tell OpenAI to stream.
-       elsif parameters[:stream]
-         raise ArgumentError, "The stream parameter must be a Proc or have a #call method"
-       end
-
-       req.headers = headers
-       req.body = parameters.to_json
-     end&.body)
+     conn.post(uri(path: path)) do |req|
+       configure_json_post_request(req, parameters)
+     end&.body
    end
 
    def multipart_post(path:, parameters: nil)
-     to_json(conn(multipart: true).post(uri(path: path)) do |req|
+     conn(multipart: true).post(uri(path: path)) do |req|
        req.headers = headers.merge({ "Content-Type" => "multipart/form-data" })
        req.body = multipart_parameters(parameters)
-     end&.body)
+     end&.body
    end
 
    def delete(path:)
-     to_json(conn.delete(uri(path: path)) do |req|
+     conn.delete(uri(path: path)) do |req|
        req.headers = headers
-     end&.body)
+     end&.body
    end
 
    private
 
-    def to_json(string)
-      return unless string
+    def parse_jsonl(response)
+      return unless response
+      return response unless response.is_a?(String)
 
-      JSON.parse(string)
-    rescue JSON::ParserError
      # Convert a multiline string of JSON objects to a JSON array.
-      JSON.parse(string.gsub("}\n{", "},{").prepend("[").concat("]"))
+      response = response.gsub("}\n{", "},{").prepend("[").concat("]")
+
+      JSON.parse(response)
    end
 
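The JSONL conversion in `parse_jsonl` above can be demonstrated standalone; the sample strings are fabricated, but the `gsub`/`prepend`/`concat` steps are exactly those in the diff:

```ruby
require "json"

# A multiline string of JSON objects (JSONL) is rewritten into one JSON array:
# "}\n{" boundaries become "},{", and the whole string is wrapped in brackets.
jsonl = "{\"a\": 1}\n{\"b\": 2}"
wrapped = jsonl.gsub("}\n{", "},{").prepend("[").concat("]")
parsed = JSON.parse(wrapped)
# parsed => [{"a"=>1}, {"b"=>2}]
```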
    # Given a proc, returns an outer proc that can be used to iterate over a JSON stream of chunks.
    # For each chunk, the inner user_proc is called giving it the JSON object. The JSON object could
    # be a data object or an error object as described in the OpenAI API documentation.
    #
-    # If the JSON object for a given data or error message is invalid, it is ignored.
-    #
    # @param user_proc [Proc] The inner proc to call for each JSON object in the chunk.
    # @return [Proc] An outer proc that iterates over a raw stream, converting it to JSON.
    def to_json_stream(user_proc:)
@@ -59,25 +50,22 @@ module OpenAI
 
      proc do |chunk, _bytes, env|
        if env && env.status != 200
-         emit_json(json: chunk, user_proc: user_proc)
-       else
-         parser.feed(chunk) do |_type, data|
-           emit_json(json: data, user_proc: user_proc) unless data == "[DONE]"
-         end
+         raise_error = Faraday::Response::RaiseError.new
+         raise_error.on_complete(env.merge(body: JSON.parse(chunk)))
        end
-      end
-    end
 
-    def emit_json(json:, user_proc:)
-      user_proc.call(JSON.parse(json))
-    rescue JSON::ParserError
-      # Ignore invalid JSON.
+        parser.feed(chunk) do |_type, data|
+          user_proc.call(JSON.parse(data)) unless data == "[DONE]"
+        end
+      end
    end
 
    def conn(multipart: false)
      Faraday.new do |f|
        f.options[:timeout] = @request_timeout
        f.request(:multipart) if multipart
+       f.response :raise_error
+       f.response :json
      end
    end
 
@@ -123,5 +111,19 @@ module OpenAI
        Faraday::UploadIO.new(value, "", value.path)
      end
    end
+
+    def configure_json_post_request(req, parameters)
+      req_parameters = parameters.dup
+
+      if parameters[:stream].respond_to?(:call)
+        req.options.on_data = to_json_stream(user_proc: parameters[:stream])
+        req_parameters[:stream] = true # Necessary to tell OpenAI to stream.
+      elsif parameters[:stream]
+        raise ArgumentError, "The stream parameter must be a Proc or have a #call method"
+      end
+
+      req.headers = headers
+      req.body = req_parameters.to_json
+    end
  end
 end
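The point of `configure_json_post_request` working on a `dup` is the changelog's :stream fix: the caller's hash is no longer mutated. A minimal standalone sketch of that hash behavior (the proc here is a trivial placeholder):

```ruby
# Simulating the fixed flow: mutate a shallow copy of the parameters, not the
# caller's hash, so the caller's :stream proc survives the request.
parameters = { stream: proc { |chunk| chunk } }

req_parameters = parameters.dup
req_parameters[:stream] = true # what gets serialized and sent to the API

parameters[:stream].respond_to?(:call) # => true: the caller's proc is intact
```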
data/lib/openai/version.rb CHANGED
@@ -1,3 +1,3 @@
 module OpenAI
-  VERSION = "5.2.0".freeze
+  VERSION = "6.0.0".freeze
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ruby-openai
 version: !ruby/object:Gem::Version
-  version: 5.2.0
+  version: 6.0.0
 platform: ruby
 authors:
 - Alex
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2023-10-30 00:00:00.000000000 Z
+date: 2023-11-06 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: event_stream_parser