ruby-openai 8.0.0 → 8.2.0

This diff shows the changes between publicly released versions of the package as they appear in their respective public registries, and is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: db56b0b2e16d752cb827fcc80f47d6880187d4f3140c29d3519f9527539dcc29
-  data.tar.gz: dff6c57719e2cf0ae912b9815d58c9cb08c46ff3b1e7e71bf1c4952de605bbad
+  metadata.gz: 35ebfe68ebf25eaf11f33bac4f5c537813e6f5d6f326aff65cc14f273fa93696
+  data.tar.gz: c7f452502d64d2be1d819f35412a7a102b616a9f217b35b6d111ecb7da2aa832
 SHA512:
-  metadata.gz: 6a2035500d97c58f4637ccfdc40cd09edb9545a77242ee4b02a6f53fb83b4c2dd616e2e8069265439067ab8f52d54c5316e42cf067b9096820ff67672914ce17
-  data.tar.gz: 1ba769f55b5ca75b48aadf906e4b75618e090681d4410f0e60da6d470d432901197106538912e17b24762587f956c8b5770c7e338083878d672596736bc5b891
+  metadata.gz: 47bd06aacf6a5d66686526520fadcd2b0e9f0a62bd9f9840735ca4aee210714e8c48e2430af3cef295a0c7e6b356c0e63701d4d908bc2332c44d3b43c792868c
+  data.tar.gz: 24cb42e2a8ef4735b3fc07456cd308bdfe82a99cba3079d5ef642ee1118c1e2de43ca6013355de0d726210a49c19e234da1368292f9528518b9914ae089d83fb
data/.gitignore CHANGED
@@ -70,3 +70,5 @@ build-iPhoneSimulator/
 
 # Mac
 .DS_Store
+
+INCIDENT_RESPONSE_PLAN.md
data/CHANGELOG.md CHANGED
@@ -5,6 +5,25 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [8.2.0] - 2025-08-10
+
+### Added
+
+- Add Security.md and activate private vulnerability reporting
+- Add RealTime endpoint to create WebRTC token - thank you to [@ngelx](https://github.com/ngelx) for the PR and others for input!
+- Add multi-image upload - thank you to [@ryankon](https://github.com/ryankon) and others for requesting.
+- Refactor streaming so that Chat, Responses, Assistant Runs and any others where events are streamed now send the event to the Proc, replacing unused _bytesize. Search the README for `_event` to see how to use this. Important change implemented by [@ingemar](https://github.com/ingemar)!
+- Handle OpenAI::Files request parameters - thank you to [@okorepanov](https://github.com/okorepanov) for the PR.
+- Add Gemini docs - thanks to [@francis](https://github.com/francis).
+- Add web proxy debugging docs - thanks to [@cpb](https://github.com/cpb).
+- Add Rails / ActiveStorage transcription docs - thanks to [@AndreyAzimov](https://github.com/AndreyAzimov).
+
+## [8.1.0] - 2025-03-30
+
+### Added
+
+- Add Vector#search endpoint - thank you [@jaebrownn](https://github.com/jaebrownn) for this PR!
+
 ## [8.0.0] - 2025-03-14
 
 ### Added
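
The streaming refactor listed above changes the second argument yielded to a streaming Proc from the unused `_bytesize` to the SSE event name. A minimal sketch of the new signature, assuming a configured client and valid API credentials:

```ruby
require "openai"

client = OpenAI::Client.new(access_token: ENV["OPENAI_ACCESS_TOKEN"])

client.chat(
  parameters: {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello!" }],
    # The Proc now receives the parsed chunk and the SSE event name.
    # Existing two-argument blocks keep running; their second argument is now
    # the event name rather than a bytesize, and one-argument Procs also work.
    stream: proc do |chunk, _event|
      print chunk.dig("choices", 0, "delta", "content")
    end
  }
)
```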
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (8.0.0)
+    ruby-openai (8.2.0)
       event_stream_parser (>= 0.3.0, < 2.0.0)
       faraday (>= 1)
       faraday-multipart (>= 1)
data/README.md CHANGED
@@ -1,22 +1,40 @@
 # Ruby OpenAI
+
 [![Gem Version](https://img.shields.io/gem/v/ruby-openai.svg)](https://rubygems.org/gems/ruby-openai)
 [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/alexrudall/ruby-openai/blob/main/LICENSE.txt)
 [![CircleCI Build Status](https://circleci.com/gh/alexrudall/ruby-openai.svg?style=shield)](https://circleci.com/gh/alexrudall/ruby-openai)
 
 Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖❤️
 
-Stream text with GPT-4, transcribe and translate audio with Whisper, or create images with DALL·E...
+Stream GPT-5 chats with the Responses API, initiate Realtime WebRTC conversations, and much more...
+
+**Sponsors**
+
+<table>
+<tr>
+<td width="300" align="center" valign="top">
+
+[<img src="https://github.com/user-attachments/assets/b97e036d-3f22-4116-be97-8f8d1c432a4f" alt="InferToGo logo: man in suit falling, black and white" width="300" height="300">](https://infertogo.com/?utm_source=ruby-openai)
+
+<sub>_[InferToGo](https://infertogo.com/?utm_source=ruby-openai) - The inference addon for your PaaS application._</sub>
 
-💥 Click [subscribe now](https://mailchi.mp/8c7b574726a9/ruby-openai) to hear first about new releases in the Rails AI newsletter!
+</td>
+<td width="300" align="center" valign="top">
 
-[![RailsAI Newsletter](https://github.com/user-attachments/assets/737cbb99-6029-42b8-9f22-a106725a4b1f)](https://mailchi.mp/8c7b574726a9/ruby-openai)
+[<img src="https://github.com/user-attachments/assets/3feb834c-2721-404c-a64d-02104ed4aba7" alt="SerpApi logo: Purple rounded square with 4 connected white holes" width="300" height="300">](https://serpapi.com/?utm_source=ruby-openai)
 
-[🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 X](https://x.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney)
+<sub>_[SerpApi - Search API](https://serpapi.com/?utm_source=ruby-openai) - Enhance your LLM's knowledge with data from search engines like Google and Bing using our simple API._</sub>
+
+</td>
+</tr>
+</table>
+
+[🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 X](https://x.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney) | [♥️ Thanks to all sponsors!](https://github.com/sponsors/alexrudall)
 
 ## Contents
 
 - [Ruby OpenAI](#ruby-openai)
-  - [Table of Contents](#table-of-contents)
+  - [Contents](#contents)
   - [Installation](#installation)
     - [Bundler](#bundler)
     - [Gem install](#gem-install)
@@ -32,6 +50,7 @@ Stream text with GPT-4, transcribe and translate audio with Whisper, or create i
   - [Deepseek](#deepseek)
   - [Ollama](#ollama)
   - [Groq](#groq)
+  - [Gemini](#gemini)
   - [Counting Tokens](#counting-tokens)
   - [Models](#models)
   - [Chat](#chat)
@@ -39,6 +58,13 @@ Stream text with GPT-4, transcribe and translate audio with Whisper, or create i
   - [Vision](#vision)
   - [JSON Mode](#json-mode)
   - [Responses API](#responses-api)
+    - [Create a Response](#create-a-response)
+    - [Follow-up Messages](#follow-up-messages)
+    - [Tool Calls](#tool-calls)
+    - [Streaming](#streaming)
+    - [Retrieve a Response](#retrieve-a-response)
+    - [Delete a Response](#delete-a-response)
+    - [List Input Items](#list-input-items)
   - [Functions](#functions)
   - [Completions](#completions)
   - [Embeddings](#embeddings)
@@ -67,9 +93,11 @@ Stream text with GPT-4, transcribe and translate audio with Whisper, or create i
   - [Translate](#translate)
   - [Transcribe](#transcribe)
   - [Speech](#speech)
+  - [Real-Time](#real-time)
   - [Usage](#usage)
   - [Errors](#errors-1)
 - [Development](#development)
+  - [To check for deprecations](#to-check-for-deprecations)
 - [Release](#release)
 - [Contributing](#contributing)
 - [License](#license)
@@ -88,7 +116,7 @@ gem "ruby-openai"
 And then execute:
 
 ```bash
-$ bundle install
+bundle install
 ```
 
 ### Gem install
@@ -96,7 +124,7 @@ $ bundle install
 Or install with:
 
 ```bash
-$ gem install ruby-openai
+gem install ruby-openai
 ```
 
 and require with:
@@ -207,7 +235,9 @@ client = OpenAI::Client.new(log_errors: true)
 
 ##### Faraday middleware
 
-You can pass [Faraday middleware](https://lostisland.github.io/faraday/#/middleware/index) to the client in a block, eg. to enable verbose logging with Ruby's [Logger](https://ruby-doc.org/3.2.2/stdlibs/logger/Logger.html):
+You can pass [Faraday middleware](https://lostisland.github.io/faraday/#/middleware/index) to the client in a block, eg:
+
+- To enable verbose logging with Ruby's [Logger](https://ruby-doc.org/3.2.2/stdlibs/logger/Logger.html):
 
 ```ruby
 client = OpenAI::Client.new do |f|
@@ -215,6 +245,13 @@ client = OpenAI::Client.new do |f|
 end
 ```
 
+- To add a web debugging proxy like [Charles](https://www.charlesproxy.com/documentation/welcome/):
+
+```ruby
+client = OpenAI::Client.new do |f|
+  f.proxy = { uri: "http://localhost:8888" }
+end
+```
 #### Azure
 
 To use the [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/) API, you can configure the gem like this:
@@ -245,7 +282,7 @@ client.chat(
     model: "deepseek-chat", # Required.
     messages: [{ role: "user", content: "Hello!"}], # Required.
     temperature: 0.7,
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       print chunk.dig("choices", 0, "delta", "content")
     end
   }
@@ -276,7 +313,7 @@ client.chat(
     model: "llama3", # Required.
     messages: [{ role: "user", content: "Hello!"}], # Required.
     temperature: 0.7,
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       print chunk.dig("choices", 0, "delta", "content")
     end
   }
@@ -300,11 +337,35 @@ client.chat(
     model: "llama3-8b-8192", # Required.
     messages: [{ role: "user", content: "Hello!"}], # Required.
     temperature: 0.7,
+    stream: proc do |chunk, _event|
+      print chunk.dig("choices", 0, "delta", "content")
+    end
+  }
+)
+```
+
+#### Gemini
+
+[Gemini API Chat](https://ai.google.dev/gemini-api/docs/openai) is also broadly compatible with the OpenAI API, and [currently in beta](https://ai.google.dev/gemini-api/docs/openai#current-limitations). Get an access token from [here](https://aistudio.google.com/app/apikey), then:
+
+```ruby
+client = OpenAI::Client.new(
+  access_token: "gemini_access_token_goes_here",
+  uri_base: "https://generativelanguage.googleapis.com/v1beta/openai/"
+)
+
+client.chat(
+  parameters: {
+    model: "gemini-1.5-flash", # Required.
+    messages: [{ role: "user", content: "Hello!"}], # Required.
+    temperature: 0.7,
     stream: proc do |chunk, _bytesize|
       print chunk.dig("choices", 0, "delta", "content")
     end
   }
 )
+
+# => Hello there! How can I help you today?
 ```
 
 ### Counting Tokens
@@ -362,7 +423,7 @@ client.chat(
     model: "gpt-4o", # Required.
     messages: [{ role: "user", content: "Describe a character called Anna!"}], # Required.
     temperature: 0.7,
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       print chunk.dig("choices", 0, "delta", "content")
     end
   }
@@ -448,7 +509,7 @@ You can stream it as well!
     model: "gpt-4o",
     messages: [{ role: "user", content: "Can I have some JSON please?"}],
     response_format: { type: "json_object" },
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       print chunk.dig("choices", 0, "delta", "content")
     end
   }
@@ -472,19 +533,25 @@ You can stream it as well!
 ```
 
 ### Responses API
+
 [OpenAI's most advanced interface for generating model responses](https://platform.openai.com/docs/api-reference/responses). Supports text and image inputs, and text outputs. Create stateful interactions with the model, using the output of previous responses as input. Extend the model's capabilities with built-in tools for file search, web search, computer use, and more. Allow the model access to external systems and data using function calling.
 
 #### Create a Response
+
 ```ruby
 response = client.responses.create(parameters: {
-  model: "gpt-4o",
-  input: "Hello! I'm Szymon!"
+  model: "gpt-5",
+  input: "Hello! I'm Szymon!",
+  reasoning: {
+    "effort": "minimal"
+  }
 })
 puts response.dig("output", 0, "content", 0, "text")
-# => Hello Szymon! How can I assist you today?
+# => Hi Szymon! Great to meet you. How can I help today?
 ```
 
 #### Follow-up Messages
+
 ```ruby
 followup = client.responses.create(parameters: {
   model: "gpt-4o",
@@ -496,6 +563,7 @@ puts followup.dig("output", 0, "content", 0, "text")
 ```
 
 #### Tool Calls
+
 ```ruby
 response = client.responses.create(parameters: {
   model: "gpt-4o",
@@ -523,12 +591,13 @@ puts response.dig("output", 0, "name")
 ```
 
 #### Streaming
+
 ```ruby
 client.responses.create(
   parameters: {
     model: "gpt-4o", # Required.
     input: "Hello!", # Required.
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       if chunk["type"] == "response.output_text.delta"
         print chunk["delta"]
         $stdout.flush # Ensure output is displayed immediately
@@ -540,6 +609,7 @@ client.responses.create(
 ```
 
 #### Retrieve a Response
+
 ```ruby
 retrieved_response = client.responses.retrieve(response_id: response["id"])
 puts retrieved_response["object"]
@@ -547,6 +617,7 @@ puts retrieved_response["object"]
 ```
 
 #### Delete a Response
+
 ```ruby
 deletion = client.responses.delete(response_id: response["id"])
 puts deletion["deleted"]
@@ -554,6 +625,7 @@ puts deletion["deleted"]
 ```
 
 #### List Input Items
+
 ```ruby
 input_items = client.responses.input_items(response_id: response["id"])
 puts input_items["object"] # => "list"
@@ -613,6 +685,9 @@ response =
 message = response.dig("choices", 0, "message")
 
 if message["role"] == "assistant" && message["tool_calls"]
+  # For a subsequent message with the role "tool", OpenAI requires the preceding message to have a single tool_calls argument.
+  messages << message
+
   message["tool_calls"].each do |tool_call|
     tool_call_id = tool_call.dig("id")
     function_name = tool_call.dig("function", "name")
@@ -628,9 +703,6 @@ if message["role"] == "assistant" && message["tool_calls"]
       # decide how to handle
     end
 
-    # For a subsequent message with the role "tool", OpenAI requires the preceding message to have a tool_calls argument.
-    messages << message
-
     messages << {
       tool_call_id: tool_call_id,
       role: "tool",
@@ -910,6 +982,27 @@ response = client.vector_stores.modify(
 )
 ```
 
+You can search a vector store for relevant chunks based on a query:
+
+```ruby
+response = client.vector_stores.search(
+  id: vector_store_id,
+  parameters: {
+    query: "What is the return policy?",
+    max_num_results: 20,
+    ranking_options: {
+      # Add any ranking options here in line with the API documentation
+    },
+    rewrite_query: true,
+    filters: {
+      type: "eq",
+      property: "region",
+      value: "us"
+    }
+  }
+)
+```
+
 You can delete vector stores:
 
 ```ruby
@@ -1125,7 +1218,7 @@ client.runs.create(
     assistant_id: assistant_id,
     max_prompt_tokens: 256,
     max_completion_tokens: 16,
-    stream: proc do |chunk, _bytesize|
+    stream: proc do |chunk, _event|
       if chunk["object"] == "thread.message.delta"
         print chunk.dig("delta", "content", 0, "text", "value")
       end
@@ -1509,6 +1602,21 @@ puts response.dig("data", 0, "url")
 
 ![Ruby](https://i.ibb.co/sWVh3BX/dalle-ruby.png)
 
+You can also upload arrays of images, eg.
+
+```ruby
+client = OpenAI::Client.new
+response = client.images.edit(
+  parameters: {
+    model: "gpt-image-1",
+    image: [File.open(base_image_path, "rb"), "image.png"],
+    prompt: "Take the first image as base and apply the second image as a watermark on the bottom right corner",
+    size: "1024x1024"
+    # Removed response_format parameter as it's not supported with gpt-image-1
+  }
+)
+```
+
 
 ### Image Variations
 
 Create n variations of an image.
@@ -1569,6 +1677,20 @@ puts response["text"]
 # => "Transcription of the text"
 ```
 
+If you are using Ruby on Rails with Active Storage, you would need to send an audio or video file like this (User has_one_attached):
+```ruby
+user.media.blob.open do |file|
+  response = client.audio.transcribe(
+    parameters: {
+      model: "whisper-1",
+      file: File.open(file, "rb"),
+      language: "en" # Optional
+    })
+  puts response["text"]
+  # => "Transcription of the text"
+end
+```
+
 
 #### Speech
 The speech API takes as input the text and a voice and returns the content of an audio file you can listen to.
@@ -1587,7 +1709,35 @@ File.binwrite('demo.mp3', response)
 # => mp3 file that plays: "This is a speech test!"
 ```
 
+### Realtime
+
+The [Realtime API](https://platform.openai.com/docs/guides/realtime) allows you to create a live speech-to-speech session with an OpenAI model. It responds with a session object, plus a client_secret key which contains a usable ephemeral API token that can be used to [authenticate browser clients for a WebRTC connection](https://platform.openai.com/docs/guides/realtime#connect-with-webrtc).
+
+```ruby
+response = client.realtime.create(parameters: { model: "gpt-4o-realtime-preview-2024-12-17" })
+puts "ephemeral key: #{response.dig('client_secret', 'value')}"
+# => "ephemeral key: ek_abc123"
+```
+
+Then in the client-side Javascript application, make a POST request to the Real-Time API with the ephemeral key and the SDP offer.
+
+```js
+const OPENAI_REALTIME_URL = 'https://api.openai.com/v1/realtime/sessions'
+const MODEL = 'gpt-4o-realtime-preview-2024-12-17'
+
+const response = await fetch(`${OPENAI_REALTIME_URL}?model=${MODEL}`, {
+  method: 'POST',
+  headers: {
+    'Content-Type': 'application/sdp',
+    'Authorization': `Bearer ${ephemeralKey}`,
+    'OpenAI-Beta': 'realtime=v1'
+  },
+  body: offer.sdp
+})
+```
+
 ### Usage
+
 The Usage API provides information about the cost of various OpenAI services within your organization.
 To use Admin APIs like Usage, you need to set an OPENAI_ADMIN_TOKEN, which can be generated [here](https://platform.openai.com/settings/organization/admin-keys).
 
data/SECURITY.md ADDED
@@ -0,0 +1,9 @@
+# Security Policy
+
+Thank you for helping us keep ruby-openai and any systems it interacts with secure.
+
+## Reporting Security Issues
+
+The security of our systems and user data is our top priority. We appreciate the work of security researchers acting in good faith in identifying and reporting potential vulnerabilities.
+
+Any validated vulnerability in this functionality can be reported through Github - click on the [Security Tab](https://github.com/alexrudall/ruby-openai/security) and click "Report a vulnerability".
data/lib/openai/client.rb CHANGED
@@ -1,3 +1,4 @@
+# rubocop:disable Metrics/ClassLength
 module OpenAI
   class Client
     include OpenAI::HTTP
@@ -92,6 +93,10 @@ module OpenAI
       @batches ||= OpenAI::Batches.new(client: self)
     end
 
+    def realtime
+      @realtime ||= OpenAI::Realtime.new(client: self)
+    end
+
     def moderations(parameters: {})
       json_post(path: "/moderations", parameters: parameters)
     end
@@ -132,3 +137,4 @@ module OpenAI
     end
   end
 end
+# rubocop:enable Metrics/ClassLength
data/lib/openai/files.rb CHANGED
@@ -29,12 +29,12 @@ module OpenAI
       file.close if file.is_a?(File)
     end
 
-    def retrieve(id:)
-      @client.get(path: "/files/#{id}")
+    def retrieve(id:, parameters: {})
+      @client.get(path: "/files/#{id}", parameters: parameters)
     end
 
-    def content(id:)
-      @client.get(path: "/files/#{id}/content")
+    def content(id:, parameters: {})
+      @client.get(path: "/files/#{id}/content", parameters: parameters)
     end
 
     def delete(id:)
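
A brief usage sketch for the widened signatures, assuming a configured `client`. Both methods still work with no `parameters`, and the query parameter shown is a hypothetical placeholder rather than a documented value:

```ruby
# Unchanged call, exactly as before:
file = client.files.retrieve(id: "file-abc123")

# New optional keyword: pass through whatever query parameters the Files API accepts.
# `example_param` is illustrative only.
content = client.files.content(id: "file-abc123", parameters: { example_param: "value" })
```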
data/lib/openai/http.rb CHANGED
@@ -55,27 +55,6 @@ module OpenAI
       original_response
     end
 
-    # Given a proc, returns an outer proc that can be used to iterate over a JSON stream of chunks.
-    # For each chunk, the inner user_proc is called giving it the JSON object. The JSON object could
-    # be a data object or an error object as described in the OpenAI API documentation.
-    #
-    # @param user_proc [Proc] The inner proc to call for each JSON object in the chunk.
-    # @return [Proc] An outer proc that iterates over a raw stream, converting it to JSON.
-    def to_json_stream(user_proc:)
-      parser = EventStreamParser::Parser.new
-
-      proc do |chunk, _bytes, env|
-        if env && env.status != 200
-          raise_error = Faraday::Response::RaiseError.new
-          raise_error.on_complete(env.merge(body: try_parse_json(chunk)))
-        end
-
-        parser.feed(chunk) do |_type, data|
-          user_proc.call(JSON.parse(data)) unless data == "[DONE]"
-        end
-      end
-    end
-
     def conn(multipart: false)
       connection = Faraday.new do |f|
         f.options[:timeout] = @request_timeout
@@ -120,7 +99,7 @@ module OpenAI
       req_parameters = parameters.dup
 
       if parameters[:stream].respond_to?(:call)
-        req.options.on_data = to_json_stream(user_proc: parameters[:stream])
+        req.options.on_data = Stream.new(user_proc: parameters[:stream]).to_proc
         req_parameters[:stream] = true # Necessary to tell OpenAI to stream.
       elsif parameters[:stream]
         raise ArgumentError, "The stream parameter must be a Proc or have a #call method"
@@ -129,11 +108,5 @@ module OpenAI
       req.headers = headers
       req.body = req_parameters.to_json
     end
-
-    def try_parse_json(maybe_json)
-      JSON.parse(maybe_json)
-    rescue JSON::ParserError
-      maybe_json
-    end
   end
 end
data/lib/openai/images.rb CHANGED
@@ -19,9 +19,23 @@ module OpenAI
     private
 
     def open_files(parameters)
-      parameters = parameters.merge(image: File.open(parameters[:image]))
-      parameters = parameters.merge(mask: File.open(parameters[:mask])) if parameters[:mask]
-      parameters
+      params = parameters.dup
+
+      if params[:image].is_a?(Array)
+        process_image_array(params)
+      else
+        params[:image] = File.open(params[:image])
+      end
+
+      params[:mask] = File.open(params[:mask]) if params[:mask]
+      params
+    end
+
+    def process_image_array(params)
+      params[:image].each_with_index do |img_path, index|
+        params[:"image[#{index}]"] = File.open(img_path)
+      end
+      params.delete(:image)
     end
   end
 end
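
With the change above, passing an Array of images makes `open_files` expand them into indexed keys via `process_image_array`. An illustrative sketch of the transformation (file names are hypothetical, and `File.open` is left out so the snippet runs without real files):

```ruby
image_paths = ["base.png", "watermark.png"] # hypothetical local files

params = { model: "gpt-image-1", prompt: "Apply a watermark", image: image_paths }

# Mirrors process_image_array: each path becomes an indexed key, which the
# multipart request then sends as separate image[0], image[1], ... fields.
image_paths.each_with_index do |path, index|
  params[:"image[#{index}]"] = path # the gem opens each path with File.open instead
end
params.delete(:image)

p params.keys # => [:model, :prompt, :"image[0]", :"image[1]"]
```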
data/lib/openai/realtime.rb ADDED
@@ -0,0 +1,19 @@
+module OpenAI
+  class Realtime
+    def initialize(client:)
+      @client = client.beta(realtime: "v1")
+    end
+
+    # Create a new real-time session with OpenAI.
+    #
+    # This method sets up a new session for real-time voice interaction with an OpenAI model.
+    # It returns session details that can be used to establish a WebRTC connection.
+    #
+    # @param parameters [Hash] parameters for the session (see: https://platform.openai.com/docs/api-reference/realtime-sessions/create)
+    # @return [Hash] Session details including session ID, ICE servers, and other
+    #   connection information
+    def create(parameters: {})
+      @client.json_post(path: "/realtime/sessions", parameters: parameters)
+    end
+  end
+end
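
A short usage sketch for the new endpoint, matching the README section added in this release (the model name is the one documented there):

```ruby
client = OpenAI::Client.new(access_token: ENV["OPENAI_ACCESS_TOKEN"])

# Returns a session hash; client_secret.value is an ephemeral token a browser
# can use to authenticate the WebRTC connection.
response = client.realtime.create(parameters: { model: "gpt-4o-realtime-preview-2024-12-17" })
puts response.dig("client_secret", "value")
```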
data/lib/openai/stream.rb ADDED
@@ -0,0 +1,50 @@
+module OpenAI
+  class Stream
+    DONE = "[DONE]".freeze
+    private_constant :DONE
+
+    def initialize(user_proc:, parser: EventStreamParser::Parser.new)
+      @user_proc = user_proc
+      @parser = parser
+
+      # To be backwards compatible, we need to check how many arguments the user_proc takes.
+      @user_proc_arity =
+        case user_proc
+        when Proc
+          user_proc.arity.abs
+        else
+          user_proc.method(:call).arity.abs
+        end
+    end
+
+    def call(chunk, _bytes, env)
+      handle_http_error(chunk: chunk, env: env) if env && env.status != 200
+
+      parser.feed(chunk) do |event, data|
+        next if data == DONE
+
+        args = [JSON.parse(data), event].first(user_proc_arity)
+        user_proc.call(*args)
+      end
+    end
+
+    def to_proc
+      method(:call).to_proc
+    end
+
+    private
+
+    attr_reader :user_proc, :parser, :user_proc_arity
+
+    def handle_http_error(chunk:, env:)
+      raise_error = Faraday::Response::RaiseError.new
+      raise_error.on_complete(env.merge(body: try_parse_json(chunk)))
+    end
+
+    def try_parse_json(maybe_json)
+      JSON.parse(maybe_json)
+    rescue JSON::ParserError
+      maybe_json
+    end
+  end
+end
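
Because `Stream` inspects the arity of the user-supplied callable, both one-argument and two-argument Procs are supported. An illustrative sketch that feeds the parser a hand-written SSE payload (the event name and JSON body are made up):

```ruby
require "openai"

# Two-argument proc: receives the parsed JSON chunk and the SSE event name.
stream = OpenAI::Stream.new(
  user_proc: proc { |chunk, event| puts "#{event}: #{chunk["text"]}" }
)
stream.call("event: message.delta\ndata: {\"text\":\"Hi\"}\n\n", 0, nil)
# => message.delta: Hi

# One-argument proc: arity detection passes only the parsed chunk.
legacy = OpenAI::Stream.new(user_proc: proc { |chunk| puts chunk["text"] })
legacy.call("data: {\"text\":\"Hi\"}\n\n", 0, nil)
# => Hi
```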
data/lib/openai/vector_stores.rb CHANGED
@@ -23,5 +23,9 @@
     def delete(id:)
       @client.delete(path: "/vector_stores/#{id}")
     end
+
+    def search(id:, parameters: {})
+      @client.json_post(path: "/vector_stores/#{id}/search", parameters: parameters)
+    end
   end
 end
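
Usage mirrors the README example added above; a trimmed sketch with a placeholder vector store ID:

```ruby
response = client.vector_stores.search(
  id: "vs_abc123", # placeholder ID
  parameters: { query: "What is the return policy?", max_num_results: 20 }
)
puts response["data"]&.length
```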
data/lib/openai/version.rb CHANGED
@@ -1,3 +1,3 @@
 module OpenAI
-  VERSION = "8.0.0".freeze
+  VERSION = "8.2.0".freeze
 end
data/lib/openai.rb CHANGED
@@ -10,8 +10,10 @@ require_relative "openai/responses"
 require_relative "openai/assistants"
 require_relative "openai/threads"
 require_relative "openai/messages"
+require_relative "openai/realtime"
 require_relative "openai/runs"
 require_relative "openai/run_steps"
+require_relative "openai/stream"
 require_relative "openai/vector_stores"
 require_relative "openai/vector_store_files"
 require_relative "openai/vector_store_file_batches"
metadata CHANGED
@@ -1,14 +1,13 @@
 --- !ruby/object:Gem::Specification
 name: ruby-openai
 version: !ruby/object:Gem::Version
-  version: 8.0.0
+  version: 8.2.0
 platform: ruby
 authors:
 - Alex
-autorequire:
 bindir: exe
 cert_chain: []
-date: 2025-03-14 00:00:00.000000000 Z
+date: 1980-01-02 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: event_stream_parser
@@ -58,7 +57,6 @@ dependencies:
     - - ">="
       - !ruby/object:Gem::Version
         version: '1'
-description:
 email:
 - alexrudall@users.noreply.github.com
 executables: []
@@ -84,6 +82,7 @@ files:
 - LICENSE.txt
 - README.md
 - Rakefile
+- SECURITY.md
 - bin/console
 - bin/setup
 - lib/openai.rb
@@ -98,9 +97,11 @@ files:
 - lib/openai/images.rb
 - lib/openai/messages.rb
 - lib/openai/models.rb
+- lib/openai/realtime.rb
 - lib/openai/responses.rb
 - lib/openai/run_steps.rb
 - lib/openai/runs.rb
+- lib/openai/stream.rb
 - lib/openai/threads.rb
 - lib/openai/usage.rb
 - lib/openai/vector_store_file_batches.rb
@@ -119,7 +120,6 @@ metadata:
   changelog_uri: https://github.com/alexrudall/ruby-openai/blob/main/CHANGELOG.md
   rubygems_mfa_required: 'true'
   funding_uri: https://github.com/sponsors/alexrudall
-post_install_message:
 rdoc_options: []
 require_paths:
 - lib
@@ -134,8 +134,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
     - !ruby/object:Gem::Version
       version: '0'
 requirements: []
-rubygems_version: 3.5.11
-signing_key:
+rubygems_version: 3.6.7
 specification_version: 4
 summary: "OpenAI API + Ruby! \U0001F916❤️"
 test_files: []