ruby-openai 8.1.0 → 8.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 25124958f17722e0cc4168772bf02daf283b0d43dd2a2c143575ad23fc5539ac
-  data.tar.gz: cfaa4cb81ee668a4296774d7bce0591a2c23a3f0944794acde1db3e89586d199
+  metadata.gz: 35ebfe68ebf25eaf11f33bac4f5c537813e6f5d6f326aff65cc14f273fa93696
+  data.tar.gz: c7f452502d64d2be1d819f35412a7a102b616a9f217b35b6d111ecb7da2aa832
 SHA512:
-  metadata.gz: d2ded6da66a157a190c24b0f7748a833d13ae9a4da67b52879eae8510e74cd90d80ffa6faaf573375c3a26d74b091165af6b30ca51c4cf860b1ba6f5c633cbdf
-  data.tar.gz: d31a28fafe6cac7caa99537f97a0fa1b0b918ebea09371d730e6d425b9bf228c726d9207c7a2bd08070103d49c9aa71e34ecf4f7ea888c2321f5fa128abf4fdc
+  metadata.gz: 47bd06aacf6a5d66686526520fadcd2b0e9f0a62bd9f9840735ca4aee210714e8c48e2430af3cef295a0c7e6b356c0e63701d4d908bc2332c44d3b43c792868c
+  data.tar.gz: 24cb42e2a8ef4735b3fc07456cd308bdfe82a99cba3079d5ef642ee1118c1e2de43ca6013355de0d726210a49c19e234da1368292f9528518b9914ae089d83fb
data/.gitignore CHANGED
@@ -70,3 +70,5 @@ build-iPhoneSimulator/
 
 # Mac
 .DS_Store
+
+INCIDENT_RESPONSE_PLAN.md
data/CHANGELOG.md CHANGED
@@ -5,6 +5,19 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [8.2.0] - 2025-08-10
+
+### Added
+
+- Add Security.md and activate private vulnerability reporting
+- Add RealTime endpoint to create WebRTC token - thank you to [@ngelx](https://github.com/ngelx) for the PR and others for input!
+- Add multi-image upload - thank you to [@ryankon](https://github.com/ryankon) and others for requesting.
+- Refactor streaming so that Chat, Responses, Assistant Runs and any others where events are streamed now send the event to the Proc, replacing unused _bytesize. Search the README for `_event` to see how to use this. Important change implemented by [@ingemar](https://github.com/ingemar)!
+- Handle OpenAI::Files request parameters - thank you to [@okorepanov](https://github.com/okorepanov) for the PR.
+- Add Gemini docs - thanks to [@francis](https://github.com/francis).
+- Add web proxy debugging docs - thanks to [@cpb](https://github.com/cpb).
+- Add Rails / ActiveStorage transcription docs - thanks to [@AndreyAzimov](https://github.com/AndreyAzimov).
+
 ## [8.1.0] - 2025-03-30
 
 ### Added
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (8.1.0)
+    ruby-openai (8.2.0)
       event_stream_parser (>= 0.3.0, < 2.0.0)
       faraday (>= 1)
       faraday-multipart (>= 1)
data/README.md CHANGED
@@ -6,13 +6,30 @@
 
 Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖❤️
 
-Stream chats with the Responses API, transcribe and translate audio with Whisper, create images with DALL·E, and much more...
+Stream GPT-5 chats with the Responses API, initiate Realtime WebRTC conversations, and much more...
 
-💥 Click [subscribe now](https://mailchi.mp/8c7b574726a9/ruby-openai) to hear first about new releases in the Rails AI newsletter!
+**Sponsors**
 
-[![RailsAI Newsletter](https://github.com/user-attachments/assets/737cbb99-6029-42b8-9f22-a106725a4b1f)](https://mailchi.mp/8c7b574726a9/ruby-openai)
+<table>
+<tr>
+<td width="300" align="center" valign="top">
 
-[🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 X](https://x.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney)
+[<img src="https://github.com/user-attachments/assets/b97e036d-3f22-4116-be97-8f8d1c432a4f" alt="InferToGo logo: man in suit falling, black and white" width="300" height="300">](https://infertogo.com/?utm_source=ruby-openai)
+
+<sub>_[InferToGo](https://infertogo.com/?utm_source=ruby-openai) - The inference addon for your PaaS application._</sub>
+
+</td>
+<td width="300" align="center" valign="top">
+
+[<img src="https://github.com/user-attachments/assets/3feb834c-2721-404c-a64d-02104ed4aba7" alt="SerpApi logo: Purple rounded square with 4 connected white holes" width="300" height="300">](https://serpapi.com/?utm_source=ruby-openai)
+
+<sub>_[SerpApi - Search API](https://serpapi.com/?utm_source=ruby-openai) - Enhance your LLM's knowledge with data from search engines like Google and Bing using our simple API._</sub>
+
+</td>
+</tr>
+</table>
+
+[🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 X](https://x.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney) | [♥️ Thanks to all sponsors!](https://github.com/sponsors/alexrudall)
 
 ## Contents
 
@@ -33,6 +50,7 @@ Stream chats with the Responses API, transcribe and translate audio with Whisper
 - [Deepseek](#deepseek)
 - [Ollama](#ollama)
 - [Groq](#groq)
+- [Gemini](#gemini)
 - [Counting Tokens](#counting-tokens)
 - [Models](#models)
 - [Chat](#chat)
@@ -75,6 +93,7 @@ Stream chats with the Responses API, transcribe and translate audio with Whisper
 - [Translate](#translate)
 - [Transcribe](#transcribe)
 - [Speech](#speech)
+- [Realtime](#realtime)
 - [Usage](#usage)
 - [Errors](#errors-1)
 - [Development](#development)
@@ -216,7 +235,9 @@ client = OpenAI::Client.new(log_errors: true)
 
 ##### Faraday middleware
 
-You can pass [Faraday middleware](https://lostisland.github.io/faraday/#/middleware/index) to the client in a block, eg. to enable verbose logging with Ruby's [Logger](https://ruby-doc.org/3.2.2/stdlibs/logger/Logger.html):
+You can pass [Faraday middleware](https://lostisland.github.io/faraday/#/middleware/index) to the client in a block, eg:
+
+- To enable verbose logging with Ruby's [Logger](https://ruby-doc.org/3.2.2/stdlibs/logger/Logger.html):
 
 ```ruby
 client = OpenAI::Client.new do |f|
@@ -224,6 +245,13 @@ client = OpenAI::Client.new do |f|
 end
 ```
 
+- To add a web debugging proxy like [Charles](https://www.charlesproxy.com/documentation/welcome/):
+
+```ruby
+client = OpenAI::Client.new do |f|
+  f.proxy = { uri: "http://localhost:8888" }
+end
+```
 #### Azure
 
 To use the [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/) API, you can configure the gem like this:
@@ -254,7 +282,7 @@
     model: "deepseek-chat", # Required.
     messages: [{ role: "user", content: "Hello!"}], # Required.
     temperature: 0.7,
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       print chunk.dig("choices", 0, "delta", "content")
     end
   }
@@ -285,7 +313,7 @@
     model: "llama3", # Required.
     messages: [{ role: "user", content: "Hello!"}], # Required.
     temperature: 0.7,
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       print chunk.dig("choices", 0, "delta", "content")
     end
   }
@@ -309,11 +337,35 @@
     model: "llama3-8b-8192", # Required.
     messages: [{ role: "user", content: "Hello!"}], # Required.
     temperature: 0.7,
+    stream: proc do |chunk, _event|
+      print chunk.dig("choices", 0, "delta", "content")
+    end
+  }
+)
+```
+
+#### Gemini
+
+[Gemini API Chat](https://ai.google.dev/gemini-api/docs/openai) is also broadly compatible with the OpenAI API, and [currently in beta](https://ai.google.dev/gemini-api/docs/openai#current-limitations). Get an access token from [here](https://aistudio.google.com/app/apikey), then:
+
+```ruby
+client = OpenAI::Client.new(
+  access_token: "gemini_access_token_goes_here",
+  uri_base: "https://generativelanguage.googleapis.com/v1beta/openai/"
+)
+
+client.chat(
+  parameters: {
+    model: "gemini-1.5-flash", # Required.
+    messages: [{ role: "user", content: "Hello!"}], # Required.
+    temperature: 0.7,
     stream: proc do |chunk, _bytesize|
       print chunk.dig("choices", 0, "delta", "content")
     end
   }
 )
+
+# => Hello there! How can I help you today?
 ```
 
 ### Counting Tokens
@@ -371,7 +423,7 @@
     model: "gpt-4o", # Required.
     messages: [{ role: "user", content: "Describe a character called Anna!"}], # Required.
     temperature: 0.7,
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
      print chunk.dig("choices", 0, "delta", "content")
    end
  }
@@ -457,7 +509,7 @@ You can stream it as well!
     model: "gpt-4o",
     messages: [{ role: "user", content: "Can I have some JSON please?"}],
     response_format: { type: "json_object" },
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       print chunk.dig("choices", 0, "delta", "content")
     end
   }
@@ -488,11 +540,14 @@ You can stream it as well!
 
 ```ruby
 response = client.responses.create(parameters: {
-  model: "gpt-4o",
-  input: "Hello! I'm Szymon!"
+  model: "gpt-5",
+  input: "Hello! I'm Szymon!",
+  reasoning: {
+    "effort": "minimal"
+  }
 })
 puts response.dig("output", 0, "content", 0, "text")
-# => Hello Szymon! How can I assist you today?
+# => Hi Szymon! Great to meet you. How can I help today?
 ```
 
 #### Follow-up Messages
@@ -542,7 +597,7 @@ client.responses.create(
   parameters: {
     model: "gpt-4o", # Required.
     input: "Hello!", # Required.
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       if chunk["type"] == "response.output_text.delta"
         print chunk["delta"]
         $stdout.flush # Ensure output is displayed immediately
@@ -630,6 +685,9 @@ response =
 message = response.dig("choices", 0, "message")
 
 if message["role"] == "assistant" && message["tool_calls"]
+  # For a subsequent message with the role "tool", OpenAI requires the preceding message to have a single tool_calls argument.
+  messages << message
+
   message["tool_calls"].each do |tool_call|
     tool_call_id = tool_call.dig("id")
     function_name = tool_call.dig("function", "name")
@@ -645,9 +703,6 @@ if message["role"] == "assistant" && message["tool_calls"]
       # decide how to handle
    end
 
-    # For a subsequent message with the role "tool", OpenAI requires the preceding message to have a tool_calls argument.
-    messages << message
-
    messages << {
      tool_call_id: tool_call_id,
      role: "tool",
@@ -1163,7 +1218,7 @@ client.runs.create(
     assistant_id: assistant_id,
     max_prompt_tokens: 256,
     max_completion_tokens: 16,
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       if chunk["object"] == "thread.message.delta"
         print chunk.dig("delta", "content", 0, "text", "value")
       end
@@ -1547,6 +1602,21 @@ puts response.dig("data", 0, "url")
 
 ![Ruby](https://i.ibb.co/sWVh3BX/dalle-ruby.png)
 
+You can also upload arrays of images, eg.
+
+```ruby
+client = OpenAI::Client.new
+response = client.images.edit(
+  parameters: {
+    model: "gpt-image-1",
+    image: [File.open(base_image_path, "rb"), "image.png"],
+    prompt: "Take the first image as base and apply the second image as a watermark on the bottom right corner",
+    size: "1024x1024"
+    # Removed response_format parameter as it's not supported with gpt-image-1
+  }
+)
+```
+
 ### Image Variations
 
 Create n variations of an image.
@@ -1607,6 +1677,20 @@ puts response["text"]
 # => "Transcription of the text"
 ```
 
+If you are using Ruby on Rails with Active Storage, you would need to send an audio or video file like this (User has_one_attached):
+```ruby
+user.media.blob.open do |file|
+  response = client.audio.transcribe(
+    parameters: {
+      model: "whisper-1",
+      file: File.open(file, "rb"),
+      language: "en" # Optional
+    })
+  puts response["text"]
+  # => "Transcription of the text"
+end
+```
+
 #### Speech
 
 The speech API takes as input the text and a voice and returns the content of an audio file you can listen to.
@@ -1625,6 +1709,33 @@ File.binwrite('demo.mp3', response)
 # => mp3 file that plays: "This is a speech test!"
 ```
 
+### Realtime
+
+The [Realtime API](https://platform.openai.com/docs/guides/realtime) allows you to create a live speech-to-speech session with an OpenAI model. It responds with a session object, plus a client_secret key which contains a usable ephemeral API token that can be used to [authenticate browser clients for a WebRTC connection](https://platform.openai.com/docs/guides/realtime#connect-with-webrtc).
+
+```ruby
+response = client.realtime.create(parameters: { model: "gpt-4o-realtime-preview-2024-12-17" })
+puts "ephemeral key: #{response.dig('client_secret', 'value')}"
+# => "ephemeral key: ek_abc123"
+```
+
+Then in the client-side Javascript application, make a POST request to the Real-Time API with the ephemeral key and the SDP offer.
+
+```js
+const OPENAI_REALTIME_URL = 'https://api.openai.com/v1/realtime/sessions'
+const MODEL = 'gpt-4o-realtime-preview-2024-12-17'
+
+const response = await fetch(`${OPENAI_REALTIME_URL}?model=${MODEL}`, {
+  method: 'POST',
+  headers: {
+    'Content-Type': 'application/sdp',
+    'Authorization': `Bearer ${ephemeralKey}`,
+    'OpenAI-Beta': 'realtime=v1'
+  },
+  body: offer.sdp
+})
+```
+
 ### Usage
 
 The Usage API provides information about the cost of various OpenAI services within your organization.
data/SECURITY.md ADDED
@@ -0,0 +1,9 @@
+# Security Policy
+
+Thank you for helping us keep ruby-openai and any systems it interacts with secure.
+
+## Reporting Security Issues
+
+The security of our systems and user data is our top priority. We appreciate the work of security researchers acting in good faith in identifying and reporting potential vulnerabilities.
+
+Any validated vulnerability in this functionality can be reported through Github - click on the [Security Tab](https://github.com/alexrudall/ruby-openai/security) and click "Report a vulnerability".
data/lib/openai/client.rb CHANGED
@@ -1,3 +1,4 @@
+# rubocop:disable Metrics/ClassLength
 module OpenAI
   class Client
     include OpenAI::HTTP
@@ -92,6 +93,10 @@ module OpenAI
       @batches ||= OpenAI::Batches.new(client: self)
     end
 
+    def realtime
+      @realtime ||= OpenAI::Realtime.new(client: self)
+    end
+
     def moderations(parameters: {})
       json_post(path: "/moderations", parameters: parameters)
     end
@@ -132,3 +137,4 @@ module OpenAI
     end
   end
 end
+# rubocop:enable Metrics/ClassLength
data/lib/openai/files.rb CHANGED
@@ -29,12 +29,12 @@ module OpenAI
       file.close if file.is_a?(File)
     end
 
-    def retrieve(id:)
-      @client.get(path: "/files/#{id}")
+    def retrieve(id:, parameters: {})
+      @client.get(path: "/files/#{id}", parameters: parameters)
     end
 
-    def content(id:)
-      @client.get(path: "/files/#{id}/content")
+    def content(id:, parameters: {})
+      @client.get(path: "/files/#{id}/content", parameters: parameters)
     end
 
     def delete(id:)
data/lib/openai/http.rb CHANGED
@@ -55,27 +55,6 @@ module OpenAI
       original_response
     end
 
-    # Given a proc, returns an outer proc that can be used to iterate over a JSON stream of chunks.
-    # For each chunk, the inner user_proc is called giving it the JSON object. The JSON object could
-    # be a data object or an error object as described in the OpenAI API documentation.
-    #
-    # @param user_proc [Proc] The inner proc to call for each JSON object in the chunk.
-    # @return [Proc] An outer proc that iterates over a raw stream, converting it to JSON.
-    def to_json_stream(user_proc:)
-      parser = EventStreamParser::Parser.new
-
-      proc do |chunk, _bytes, env|
-        if env && env.status != 200
-          raise_error = Faraday::Response::RaiseError.new
-          raise_error.on_complete(env.merge(body: try_parse_json(chunk)))
-        end
-
-        parser.feed(chunk) do |_type, data|
-          user_proc.call(JSON.parse(data)) unless data == "[DONE]"
-        end
-      end
-    end
-
     def conn(multipart: false)
       connection = Faraday.new do |f|
         f.options[:timeout] = @request_timeout
@@ -120,7 +99,7 @@ module OpenAI
       req_parameters = parameters.dup
 
       if parameters[:stream].respond_to?(:call)
-        req.options.on_data = to_json_stream(user_proc: parameters[:stream])
+        req.options.on_data = Stream.new(user_proc: parameters[:stream]).to_proc
         req_parameters[:stream] = true # Necessary to tell OpenAI to stream.
       elsif parameters[:stream]
         raise ArgumentError, "The stream parameter must be a Proc or have a #call method"
@@ -129,11 +108,5 @@ module OpenAI
       req.headers = headers
       req.body = req_parameters.to_json
     end
-
-    def try_parse_json(maybe_json)
-      JSON.parse(maybe_json)
-    rescue JSON::ParserError
-      maybe_json
-    end
   end
 end
data/lib/openai/images.rb CHANGED
@@ -19,9 +19,23 @@ module OpenAI
     private
 
     def open_files(parameters)
-      parameters = parameters.merge(image: File.open(parameters[:image]))
-      parameters = parameters.merge(mask: File.open(parameters[:mask])) if parameters[:mask]
-      parameters
+      params = parameters.dup
+
+      if params[:image].is_a?(Array)
+        process_image_array(params)
+      else
+        params[:image] = File.open(params[:image])
+      end
+
+      params[:mask] = File.open(params[:mask]) if params[:mask]
+      params
+    end
+
+    def process_image_array(params)
+      params[:image].each_with_index do |img_path, index|
+        params[:"image[#{index}]"] = File.open(img_path)
+      end
+      params.delete(:image)
     end
   end
 end
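The array branch above expands `image: [...]` into indexed multipart keys (`image[0]`, `image[1]`, …) so Faraday can send each file as its own part. A minimal standalone sketch of that transform — `expand_image_array` is a hypothetical stand-in for the gem's private `process_image_array`, and plain strings stand in for the `File.open` handles so it runs without fixture files:

```ruby
# Hypothetical stand-in for Images#process_image_array: expands an
# :image array into indexed multipart keys. Strings replace the
# File.open handles the gem actually uses, so no files are needed.
def expand_image_array(params)
  params = params.dup
  params[:image].each_with_index do |img, index|
    params[:"image[#{index}]"] = img # the gem calls File.open(img) here
  end
  params.delete(:image)
  params
end

expanded = expand_image_array({ image: ["base.png", "watermark.png"], prompt: "combine" })
expanded.key?(:image)  # => false
expanded[:"image[0]"]  # => "base.png"
expanded[:"image[1]"]  # => "watermark.png"
```

Because the `:image` key is deleted after expansion, single-image calls keep their old shape while array calls produce one multipart field per file.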
data/lib/openai/realtime.rb ADDED
@@ -0,0 +1,19 @@
+module OpenAI
+  class Realtime
+    def initialize(client:)
+      @client = client.beta(realtime: "v1")
+    end
+
+    # Create a new real-time session with OpenAI.
+    #
+    # This method sets up a new session for real-time voice interaction with an OpenAI model.
+    # It returns session details that can be used to establish a WebRTC connection.
+    #
+    # @param parameters [Hash] parameters for the session (see: https://platform.openai.com/docs/api-reference/realtime-sessions/create)
+    # @return [Hash] Session details including session ID, ICE servers, and other
+    #   connection information
+    def create(parameters: {})
+      @client.json_post(path: "/realtime/sessions", parameters: parameters)
+    end
+  end
+end
data/lib/openai/stream.rb ADDED
@@ -0,0 +1,50 @@
+module OpenAI
+  class Stream
+    DONE = "[DONE]".freeze
+    private_constant :DONE
+
+    def initialize(user_proc:, parser: EventStreamParser::Parser.new)
+      @user_proc = user_proc
+      @parser = parser
+
+      # To be backwards compatible, we need to check how many arguments the user_proc takes.
+      @user_proc_arity =
+        case user_proc
+        when Proc
+          user_proc.arity.abs
+        else
+          user_proc.method(:call).arity.abs
+        end
+    end
+
+    def call(chunk, _bytes, env)
+      handle_http_error(chunk: chunk, env: env) if env && env.status != 200
+
+      parser.feed(chunk) do |event, data|
+        next if data == DONE
+
+        args = [JSON.parse(data), event].first(user_proc_arity)
+        user_proc.call(*args)
+      end
+    end
+
+    def to_proc
+      method(:call).to_proc
+    end
+
+    private
+
+    attr_reader :user_proc, :parser, :user_proc_arity
+
+    def handle_http_error(chunk:, env:)
+      raise_error = Faraday::Response::RaiseError.new
+      raise_error.on_complete(env.merge(body: try_parse_json(chunk)))
+    end
+
+    def try_parse_json(maybe_json)
+      JSON.parse(maybe_json)
+    rescue JSON::ParserError
+      maybe_json
+    end
+  end
+end
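The arity check in the new `Stream` class is what keeps older stream callbacks working: the proc's argument count decides whether it receives only the parsed JSON chunk or the chunk plus the SSE event name. A minimal sketch of that dispatch, where `dispatch` is a hypothetical helper mirroring the argument slicing in `Stream#call`:

```ruby
require "json"

# Sketch of the backwards-compatible dispatch used by OpenAI::Stream:
# the callback's arity decides how many arguments it receives.
def dispatch(user_proc, data, event)
  arity = user_proc.is_a?(Proc) ? user_proc.arity.abs : user_proc.method(:call).arity.abs
  args = [JSON.parse(data), event].first(arity) # slice args to the proc's arity
  user_proc.call(*args)
end

received = []
one_arg = proc { |chunk| received << [chunk] }                # gets chunk only
two_arg = proc { |chunk, event| received << [chunk, event] }  # also gets the event name

dispatch(one_arg, '{"a":1}', "message")
dispatch(two_arg, '{"a":1}', "message")
received # => [[{"a"=>1}], [{"a"=>1}, "message"]]
```

This is why existing `proc do |chunk, _bytesize|` callbacks keep working unchanged after the refactor: they still take two arguments, so the second slot is now filled with the event name instead of the byte size.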
data/lib/openai/version.rb CHANGED
@@ -1,3 +1,3 @@
 module OpenAI
-  VERSION = "8.1.0".freeze
+  VERSION = "8.2.0".freeze
 end
data/lib/openai.rb CHANGED
@@ -10,8 +10,10 @@ require_relative "openai/responses"
 require_relative "openai/assistants"
 require_relative "openai/threads"
 require_relative "openai/messages"
+require_relative "openai/realtime"
 require_relative "openai/runs"
 require_relative "openai/run_steps"
+require_relative "openai/stream"
 require_relative "openai/vector_stores"
 require_relative "openai/vector_store_files"
 require_relative "openai/vector_store_file_batches"
metadata CHANGED
@@ -1,14 +1,13 @@
 --- !ruby/object:Gem::Specification
 name: ruby-openai
 version: !ruby/object:Gem::Version
-  version: 8.1.0
+  version: 8.2.0
 platform: ruby
 authors:
 - Alex
-autorequire:
 bindir: exe
 cert_chain: []
-date: 2025-03-30 00:00:00.000000000 Z
+date: 1980-01-02 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: event_stream_parser
@@ -58,7 +57,6 @@ dependencies:
     - - ">="
       - !ruby/object:Gem::Version
         version: '1'
-description:
 email:
 - alexrudall@users.noreply.github.com
 executables: []
@@ -84,6 +82,7 @@ files:
 - LICENSE.txt
 - README.md
 - Rakefile
+- SECURITY.md
 - bin/console
 - bin/setup
 - lib/openai.rb
@@ -98,9 +97,11 @@ files:
 - lib/openai/images.rb
 - lib/openai/messages.rb
 - lib/openai/models.rb
+- lib/openai/realtime.rb
 - lib/openai/responses.rb
 - lib/openai/run_steps.rb
 - lib/openai/runs.rb
+- lib/openai/stream.rb
 - lib/openai/threads.rb
 - lib/openai/usage.rb
 - lib/openai/vector_store_file_batches.rb
@@ -119,7 +120,6 @@ metadata:
   changelog_uri: https://github.com/alexrudall/ruby-openai/blob/main/CHANGELOG.md
   rubygems_mfa_required: 'true'
   funding_uri: https://github.com/sponsors/alexrudall
-post_install_message:
 rdoc_options: []
 require_paths:
 - lib
@@ -134,8 +134,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.5.11
-signing_key:
+rubygems_version: 3.6.7
 specification_version: 4
 summary: "OpenAI API + Ruby! \U0001F916❤️"
 test_files: []