ruby-openai 6.3.1 → 6.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 958a72f1590182a35f447bb66e286adde77217e58b5d81e9621cd00e7f62cf58
- data.tar.gz: 46729b5fc28187ba376af8c24d02e6b96f8dd4a18dca5cd19ad6fcea09e9d6a1
+ metadata.gz: f3db6f0c15b1015a875950a23ad157cb9f6f4eaed2005c35f75fd97197ee3dd6
+ data.tar.gz: e11f7020b2db2d627646c584feb7dfb2163f3bd8233860e5115ffe0f8d65c681
  SHA512:
- metadata.gz: f50b6b5044c70a5e549600c5563ec57daf1aefd4301cf04ca5bb992942d5ff3a361ad26e1defefa9cdc490b4eeae7ced18c4b304a6b44245516501286005c159
- data.tar.gz: 26ce015243210f29239e4f993baabdee6e5a0265708ee407d5f7d3bef17db50b398e35c7ab225724aaec8519abc35f80e2d928d44e821bdfe27f6ff9db2bcc79
+ metadata.gz: 76100b527ed276b83190df15c1241be39d58288285e93caaeb20ca94c937082e1d19dd99b9ea69e1758352105e749e38940f1f0192b97c50fb18a8e81d321911
+ data.tar.gz: 425db82e5ae928dba3136351b697ac53335ffbce5afb81c98920efda2a0b4c600cf83a8273c8c627ad4b25a352ba76d2e77d88f00a48d4993dcbbc318077be53
data/CHANGELOG.md CHANGED
@@ -5,6 +5,28 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+ ## [6.5.0] - 2024-03-31
+
+ ### Added
+
+ - Add back the deprecated Completions endpoint that I removed a bit prematurely. Thanks, [@mishranant](https://github.com/mishranant) and everyone who requested this.
+
+ ## [6.4.0] - 2024-03-27
+
+ ### Added
+
+ - Add DALL·E 3 to specs and README - thanks to [@Gary-H9](https://github.com/Gary-H9)
+ - Add Whisper transcription language selection parameter to README - thanks to [@nfedyashev](https://github.com/nfedyashev)
+ - Add bundle exec rake lint and bundle exec rake test to make development easier - thanks to [@ignacio-chiazzo](https://github.com/ignacio-chiazzo)
+ - Add link to [https://github.com/sponsors/alexrudall](https://github.com/sponsors/alexrudall) when users run `bundle fund`
+
+ ### Fixed
+
+ - Update README and spec to use tool calls instead of functions - thanks to [@mpallenjr](https://github.com/mpallenjr)
+ - Remove nonexistent Thread#list method - thanks again! to [@ignacio-chiazzo](https://github.com/ignacio-chiazzo)
+ - Update finetunes docs in README to use chat instead of completions endpoint - thanks to [@blefev](https://github.com/blefev)
+
  ## [6.3.1] - 2023-12-04

  ### Fixed
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- ruby-openai (6.3.1)
+ ruby-openai (6.5.0)
  event_stream_parser (>= 0.3.0, < 2.0.0)
  faraday (>= 1)
  faraday-multipart (>= 1)
@@ -12,14 +12,14 @@ GEM
  addressable (2.8.5)
  public_suffix (>= 2.0.2, < 6.0)
  ast (2.4.2)
- base64 (0.1.1)
+ base64 (0.2.0)
  byebug (11.1.3)
  crack (0.4.5)
  rexml
  diff-lcs (1.5.0)
  dotenv (2.8.1)
  event_stream_parser (1.0.0)
- faraday (2.7.11)
+ faraday (2.7.12)
  base64
  faraday-net_http (>= 2.0, < 3.1)
  ruby2_keywords (>= 0.0.4)
data/README.md CHANGED
@@ -10,6 +10,55 @@ Stream text with GPT-4, transcribe and translate audio with Whisper, or create i

  [🚢 Hire me](https://peaceterms.com?utm_source=ruby-openai&utm_medium=readme&utm_id=26072023) | [🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 Twitter](https://twitter.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney)

+ # Table of Contents
+
+ - [Ruby OpenAI](#ruby-openai)
+ - [Table of Contents](#table-of-contents)
+ - [Installation](#installation)
+ - [Bundler](#bundler)
+ - [Gem install](#gem-install)
+ - [Usage](#usage)
+ - [Quickstart](#quickstart)
+ - [With Config](#with-config)
+ - [Custom timeout or base URI](#custom-timeout-or-base-uri)
+ - [Extra Headers per Client](#extra-headers-per-client)
+ - [Verbose Logging](#verbose-logging)
+ - [Azure](#azure)
+ - [Counting Tokens](#counting-tokens)
+ - [Models](#models)
+ - [Examples](#examples)
+ - [Chat](#chat)
+ - [Streaming Chat](#streaming-chat)
+ - [Vision](#vision)
+ - [JSON Mode](#json-mode)
+ - [Functions](#functions)
+ - [Edits](#edits)
+ - [Embeddings](#embeddings)
+ - [Files](#files)
+ - [Finetunes](#finetunes)
+ - [Assistants](#assistants)
+ - [Threads and Messages](#threads-and-messages)
+ - [Runs](#runs)
+ - [Runs involving function tools](#runs-involving-function-tools)
+ - [Image Generation](#image-generation)
+ - [DALL·E 2](#dalle-2)
+ - [DALL·E 3](#dalle-3)
+ - [Image Edit](#image-edit)
+ - [Image Variations](#image-variations)
+ - [Moderations](#moderations)
+ - [Whisper](#whisper)
+ - [Translate](#translate)
+ - [Transcribe](#transcribe)
+ - [Speech](#speech)
+ - [Errors](#errors)
+ - [Development](#development)
+ - [Release](#release)
+ - [Contributing](#contributing)
+ - [License](#license)
+ - [Code of Conduct](#code-of-conduct)
+
+ ## Installation
+
  ### Bundler

  Add this line to your application's Gemfile:
@@ -108,6 +157,15 @@ OpenAI.configure do |config|
  end
  ```

+ #### Extra Headers per Client
+
+ You can dynamically pass headers per client object, which will be merged with any headers set globally with OpenAI.configure:
+
+ ```ruby
+ client = OpenAI::Client.new(access_token: "access_token_goes_here")
+ client.add_headers("X-Proxy-TTL" => "43200")
+ ```
+
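For context on the "merged with any headers set globally" part, which this hunk doesn't show: a rough sketch of the global side, assuming the gem exposes an `extra_headers` configuration attribute for this purpose (that name is an assumption, not confirmed by this diff), might look like:

```ruby
require "openai"

OpenAI.configure do |config|
  config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
  # Assumed global headers; per-client add_headers values are merged on top of these.
  config.extra_headers = { "X-Proxy-Refresh" => "true" }
end

client = OpenAI::Client.new
client.add_headers("X-Proxy-TTL" => "43200") # merged with the configured headers above
```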
  #### Verbose Logging

  You can pass [Faraday middleware](https://lostisland.github.io/faraday/#/middleware/index) to the client in a block, eg. to enable verbose logging with Ruby's [Logger](https://ruby-doc.org/3.2.2/stdlibs/logger/Logger.html):
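The logging example itself falls outside this hunk; a minimal sketch of that block form, assuming the standard Faraday `:logger` response middleware, would be:

```ruby
require "logger"
require "openai"

# Pass a Faraday configuration block to the client to log requests and responses verbosely.
client = OpenAI::Client.new(access_token: ENV.fetch("OPENAI_ACCESS_TOKEN")) do |faraday|
  faraday.response :logger, Logger.new($stdout), bodies: true
end
```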
@@ -297,36 +355,39 @@ response =
  "content": "What is the weather like in San Francisco?",
  },
  ],
- functions: [
+ tools: [
  {
- name: "get_current_weather",
- description: "Get the current weather in a given location",
- parameters: {
- type: :object,
- properties: {
- location: {
- type: :string,
- description: "The city and state, e.g. San Francisco, CA",
- },
- unit: {
- type: "string",
- enum: %w[celsius fahrenheit],
+ type: "function",
+ function: {
+ name: "get_current_weather",
+ description: "Get the current weather in a given location",
+ parameters: {
+ type: :object,
+ properties: {
+ location: {
+ type: :string,
+ description: "The city and state, e.g. San Francisco, CA",
+ },
+ unit: {
+ type: "string",
+ enum: %w[celsius fahrenheit],
+ },
  },
+ required: ["location"],
  },
- required: ["location"],
  },
- },
+ }
  ],
  },
  )

  message = response.dig("choices", 0, "message")

- if message["role"] == "assistant" && message["function_call"]
- function_name = message.dig("function_call", "name")
+ if message["role"] == "assistant" && message["tool_calls"]
+ function_name = message.dig("tool_calls", "function", "name")
  args =
  JSON.parse(
- message.dig("function_call", "arguments"),
+ message.dig("tool_calls", "function", "arguments"),
  { symbolize_names: true },
  )

@@ -338,6 +399,21 @@ end
  # => "The weather is nice 🌞"
  ```
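One caveat on the updated example: the chat API returns `tool_calls` as an array, one element per requested call, so consuming code generally iterates over (or indexes into) that array before digging into `function`. A hedged sketch, reusing `message` and `get_current_weather` from the example above:

```ruby
# tool_calls is an array of calls; handle each entry rather than digging into the array directly.
(message["tool_calls"] || []).each do |tool_call|
  function_name = tool_call.dig("function", "name")
  args = JSON.parse(tool_call.dig("function", "arguments"), symbolize_names: true)

  case function_name
  when "get_current_weather"
    puts get_current_weather(**args)
  end
end
```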

+ ### Completions
+
+ Hit the OpenAI API for a completion using other GPT-3 models:
+
+ ```ruby
+ response = client.completions(
+ parameters: {
+ model: "text-davinci-001",
+ prompt: "Once upon a time",
+ max_tokens: 5
+ })
+ puts response["choices"].map { |c| c["text"] }
+ # => [", there lived a great"]
+ ```
+
  ### Edits

  Send a string and some instructions for what to do to the string:
@@ -423,16 +499,16 @@ response = client.finetunes.retrieve(id: fine_tune_id)
  fine_tuned_model = response["fine_tuned_model"]
  ```

- This fine-tuned model name can then be used in completions:
+ This fine-tuned model name can then be used in chat completions:

  ```ruby
- response = client.completions(
+ response = client.chat(
  parameters: {
  model: fine_tuned_model,
- prompt: "I love Mondays!"
+ messages: [{ role: "user", content: "I love Mondays!"}]
  }
  )
- response.dig("choices", 0, "text")
+ response.dig("choices", 0, "message", "content")
  ```

  You can also capture the events for a job:
@@ -441,10 +517,221 @@ You can also capture the events for a job:
  client.finetunes.list_events(id: fine_tune_id)
  ```

+ ### Assistants
+
+ Assistants can call models to interact with threads and use tools to perform tasks (see [Assistant Overview](https://platform.openai.com/docs/assistants/overview)).
+
+ To create a new assistant (see [API documentation](https://platform.openai.com/docs/api-reference/assistants/createAssistant)):
+
+ ```ruby
+ response = client.assistants.create(
+ parameters: {
+ model: "gpt-3.5-turbo-1106", # Retrieve via client.models.list. Assistants need 'gpt-3.5-turbo-1106' or later.
+ name: "OpenAI-Ruby test assistant",
+ description: nil,
+ instructions: "You are a helpful assistant for coding a OpenAI API client using the OpenAI-Ruby gem.",
+ tools: [
+ { type: 'retrieval' }, # Allow access to files attached using file_ids
+ { type: 'code_interpreter' }, # Allow access to Python code interpreter
+ ],
+ "file_ids": ["file-123"], # See Files section above for how to upload files
+ "metadata": { my_internal_version_id: '1.0.0' }
+ })
+ assistant_id = response["id"]
+ ```
+
+ Given an `assistant_id` you can `retrieve` the current field values:
+
+ ```ruby
+ client.assistants.retrieve(id: assistant_id)
+ ```
+
+ You can get a `list` of all assistants currently available under the organization:
+
+ ```ruby
+ client.assistants.list
+ ```
+
+ You can modify an existing assistant using the assistant's id (see [API documentation](https://platform.openai.com/docs/api-reference/assistants/modifyAssistant)):
+
+ ```ruby
+ response = client.assistants.modify(
+ id: assistant_id,
+ parameters: {
+ name: "Modified Test Assistant for OpenAI-Ruby",
+ metadata: { my_internal_version_id: '1.0.1' }
+ })
+ ```
+
+ You can delete assistants:
+
+ ```
+ client.assistants.delete(id: assistant_id)
+ ```
+
+ ### Threads and Messages
+
+ Once you have created an assistant as described above, you need to prepare a `Thread` of `Messages` for the assistant to work on (see [introduction on Assistants](https://platform.openai.com/docs/assistants/how-it-works)). For example, as an initial setup you could do:
+
+ ```ruby
+ # Create thread
+ response = client.threads.create # Note: Once you create a thread, there is no way to list it
+ # or recover it currently (as of 2023-12-10). So hold onto the `id`
+ thread_id = response["id"]
+
+ # Add initial message from user (see https://platform.openai.com/docs/api-reference/messages/createMessage)
+ message_id = client.messages.create(
+ thread_id: thread_id,
+ parameters: {
+ role: "user", # Required for manually created messages
+ content: "Can you help me write an API library to interact with the OpenAI API please?"
+ })["id"]
+
+ # Retrieve individual message
+ message = client.messages.retrieve(thread_id: thread_id, id: message_id)
+
+ # Review all messages on the thread
+ messages = client.messages.list(thread_id: thread_id)
+ ```
+
+ To clean up after a thread is no longer needed:
+
+ ```ruby
+ # To delete the thread (and all associated messages):
+ client.threads.delete(id: thread_id)
+
+ client.messages.retrieve(thread_id: thread_id, id: message_id) # -> Fails after thread is deleted
+ ```
+
+ ### Runs
+
+ To submit a thread to be evaluated with the model of an assistant, create a `Run` as follows (Note: This is one place where OpenAI will take your money):
+
+ ```ruby
+ # Create run (will use instruction/model/tools from Assistant's definition)
+ response = client.runs.create(thread_id: thread_id,
+ parameters: {
+ assistant_id: assistant_id
+ })
+ run_id = response['id']
+
+ # Retrieve/poll Run to observe status
+ response = client.runs.retrieve(id: run_id, thread_id: thread_id)
+ status = response['status']
+ ```
+
+ The `status` response can include the following strings `queued`, `in_progress`, `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`, or `expired` which you can handle as follows:
+
+ ```ruby
+ while true do
+
+ response = client.runs.retrieve(id: run_id, thread_id: thread_id)
+ status = response['status']
+
+ case status
+ when 'queued', 'in_progress', 'cancelling'
+ puts 'Sleeping'
+ sleep 1 # Wait one second and poll again
+ when 'completed'
+ break # Exit loop and report result to user
+ when 'requires_action'
+ # Handle tool calls (see below)
+ when 'cancelled', 'failed', 'expired'
+ puts response['last_error'].inspect
+ break # or `exit`
+ else
+ puts "Unknown status response: #{status}"
+ end
+ end
+ ```
+
+ If the `status` response indicates that the `run` is `completed`, the associated `thread` will have one or more new `messages` attached:
+
+ ```ruby
+ # Either retrieve all messages in bulk again, or...
+ messages = client.messages.list(thread_id: thread_id) # Note: as of 2023-12-11 adding limit or order options isn't working, yet
+
+ # Alternatively retrieve the `run steps` for the run which link to the messages:
+ run_steps = client.run_steps.list(thread_id: thread_id, run_id: run_id)
+ new_message_ids = run_steps['data'].filter_map { |step|
+ if step['type'] == 'message_creation'
+ step.dig('step_details', "message_creation", "message_id")
+ end # Ignore tool calls, because they don't create new messages.
+ }
+
+ # Retrieve the individual messages
+ new_messages = new_message_ids.map { |msg_id|
+ client.messages.retrieve(id: msg_id, thread_id: thread_id)
+ }
+
+ # Find the actual response text in the content array of the messages
+ new_messages.each { |msg|
+ msg['content'].each { |content_item|
+ case content_item['type']
+ when 'text'
+ puts content_item.dig('text', 'value')
+ # Also handle annotations
+ when 'image_file'
+ # Use File endpoint to retrieve file contents via id
+ id = content_item.dig('image_file', 'file_id')
+ end
+ }
+ }
+ ```
+
+ At any time you can list all runs which have been performed on a particular thread or are currently running (in descending/newest first order):
+
+ ```ruby
+ client.runs.list(thread_id: thread_id)
+ ```
+
+ #### Runs involving function tools
+
+ In case you are allowing the assistant to access `function` tools (they are defined in the same way as functions during chat completion), you might get a status code of `requires_action` when the assistant wants you to evaluate one or more function tools:
+
+ ```ruby
+ def get_current_weather(location:, unit: "celsius")
+ # Your function code goes here
+ if location =~ /San Francisco/i
+ return unit == "celsius" ? "The weather is nice 🌞 at 27°C" : "The weather is nice 🌞 at 80°F"
+ else
+ return unit == "celsius" ? "The weather is icy 🥶 at -5°C" : "The weather is icy 🥶 at 23°F"
+ end
+ end
+
+ if status == 'requires_action'
+
+ tools_to_call = response.dig('required_action', 'submit_tool_outputs', 'tool_calls')
+
+ my_tool_outputs = tools_to_call.map { |tool|
+ # Call the functions based on the tool's name
+ function_name = tool.dig('function', 'name')
+ arguments = JSON.parse(
+ tool.dig("function", "arguments"),
+ { symbolize_names: true },
+ )
+
+ tool_output = case function_name
+ when "get_current_weather"
+ get_current_weather(**arguments)
+ end
+
+ { tool_call_id: tool['id'], output: tool_output }
+ }
+
+ client.runs.submit_tool_outputs(thread_id: thread_id, run_id: run_id, parameters: { tool_outputs: my_tool_outputs })
+ end
+ ```
+
+ Note that you have 10 minutes to submit your tool output before the run expires.
+
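Taken together, the Threads, Messages and Runs sections added above describe one workflow. A condensed sketch that stitches those calls together, assuming the `client` and `assistant_id` from the earlier examples and a run that completes without tool calls:

```ruby
# Create a thread and add the user's question.
thread_id = client.threads.create["id"]

client.messages.create(
  thread_id: thread_id,
  parameters: { role: "user", content: "Summarise the OpenAI Assistants workflow for me." }
)

# Run the assistant on the thread.
run_id = client.runs.create(
  thread_id: thread_id,
  parameters: { assistant_id: assistant_id }
)["id"]

# Poll until the run reaches a terminal state.
loop do
  status = client.runs.retrieve(id: run_id, thread_id: thread_id)["status"]
  break if %w[completed failed cancelled expired].include?(status)
  sleep 1
end

# Print the text content of every message now on the thread.
client.messages.list(thread_id: thread_id)["data"].each do |msg|
  msg["content"].each do |item|
    puts item.dig("text", "value") if item["type"] == "text"
  end
end
```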
  ### Image Generation

- Generate an image using DALL·E! The size of any generated images must be one of `256x256`, `512x512` or `1024x1024` -
- if not specified the image will default to `1024x1024`.
+ Generate images using DALL·E 2 or DALL·E 3!
+
+ #### DALL·E 2
+
+ For DALL·E 2 the size of any generated images must be one of `256x256`, `512x512` or `1024x1024` - if not specified the image will default to `1024x1024`.

  ```ruby
  response = client.images.generate(parameters: { prompt: "A baby sea otter cooking pasta wearing a hat of some sort", size: "256x256" })
@@ -454,6 +741,18 @@ puts response.dig("data", 0, "url")

  ![Ruby](https://i.ibb.co/6y4HJFx/img-d-Tx-Rf-RHj-SO5-Gho-Cbd8o-LJvw3.png)

+ #### DALL·E 3
+
+ For DALL·E 3 the size of any generated images must be one of `1024x1024`, `1024x1792` or `1792x1024`. Additionally the quality of the image can be specified to either `standard` or `hd`.
+
+ ```ruby
+ response = client.images.generate(parameters: { prompt: "A springer spaniel cooking pasta wearing a hat of some sort", size: "1024x1792", quality: "standard" })
+ puts response.dig("data", 0, "url")
+ # => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
+ ```
+
+ ![Ruby](https://i.ibb.co/z2tCKv9/img-Goio0l-S0i81-NUNa-BIx-Eh-CT6-L.png)
+
  ### Image Edit

  Fill in the transparent part of an image, or upload a mask with transparent sections to indicate the parts of an image that can be changed according to your prompt...
@@ -511,11 +810,14 @@ puts response["text"]

  The transcriptions API takes as input the audio file you want to transcribe and returns the text in the desired output file format.

+ You can pass the language of the audio file to improve transcription quality. Supported languages are listed [here](https://github.com/openai/whisper#available-models-and-languages). You need to provide the language as an ISO-639-1 code, eg. "en" for English or "ne" for Nepali. You can look up the codes [here](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes).
+
  ```ruby
  response = client.audio.transcribe(
  parameters: {
  model: "whisper-1",
  file: File.open("path_to_file", "rb"),
+ language: "en" # Optional.
  })
  puts response["text"]
  # => "Transcription of the text"
@@ -555,9 +857,10 @@ After checking out the repo, run `bin/setup` to install dependencies. You can ru

  To install this gem onto your local machine, run `bundle exec rake install`.

- ### Warning
+ To run all tests, execute the command `bundle exec rake`, which will also run the linter (Rubocop). This repository uses [VCR](https://github.com/vcr/vcr) to log API requests.

- If you have an `OPENAI_ACCESS_TOKEN` in your `ENV`, running the specs will use this to run the specs against the actual API, which will be slow and cost you money - 2 cents or more! Remove it from your environment with `unset` or similar if you just want to run the specs against the stored VCR responses.
+ > [!WARNING]
+ > If you have an `OPENAI_ACCESS_TOKEN` in your `ENV`, running the specs will use this to run the specs against the actual API, which will be slow and cost you money - 2 cents or more! Remove it from your environment with `unset` or similar if you just want to run the specs against the stored VCR responses.

  ## Release

data/Rakefile CHANGED
@@ -1,6 +1,19 @@
  require "bundler/gem_tasks"
  require "rspec/core/rake_task"
+ require "rubocop/rake_task"

  RSpec::Core::RakeTask.new(:spec)

- task default: :spec
+ task :default do
+ Rake::Task["test"].invoke
+ Rake::Task["lint"].invoke
+ end
+
+ task :test do
+ Rake::Task["spec"].invoke
+ end
+
+ task :lint do
+ RuboCop::RakeTask.new(:rubocop)
+ Rake::Task["rubocop"].invoke
+ end
data/lib/openai/client.rb CHANGED
@@ -34,6 +34,10 @@ module OpenAI
  json_post(path: "/embeddings", parameters: parameters)
  end

+ def completions(parameters: {})
+ json_post(path: "/completions", parameters: parameters)
+ end
+
  def audio
  @audio ||= OpenAI::Audio.new(client: self)
  end
data/lib/openai/threads.rb CHANGED
@@ -4,10 +4,6 @@ module OpenAI
  @client = client.beta(assistants: "v1")
  end

- def list
- @client.get(path: "/threads")
- end
-
  def retrieve(id:)
  @client.get(path: "/threads/#{id}")
  end
data/lib/openai/version.rb CHANGED
@@ -1,3 +1,3 @@
  module OpenAI
- VERSION = "6.3.1".freeze
+ VERSION = "6.5.0".freeze
  end
data/ruby-openai.gemspec CHANGED
@@ -15,6 +15,7 @@ Gem::Specification.new do |spec|
  spec.metadata["source_code_uri"] = "https://github.com/alexrudall/ruby-openai"
  spec.metadata["changelog_uri"] = "https://github.com/alexrudall/ruby-openai/blob/main/CHANGELOG.md"
  spec.metadata["rubygems_mfa_required"] = "true"
+ spec.metadata["funding_uri"] = "https://github.com/sponsors/alexrudall"

  # Specify which files should be added to the gem when it is released.
  # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: ruby-openai
  version: !ruby/object:Gem::Version
- version: 6.3.1
+ version: 6.5.0
  platform: ruby
  authors:
  - Alex
- autorequire:
+ autorequire:
  bindir: exe
  cert_chain: []
- date: 2023-12-04 00:00:00.000000000 Z
+ date: 2024-03-31 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: event_stream_parser
@@ -58,7 +58,7 @@ dependencies:
  - - ">="
  - !ruby/object:Gem::Version
  version: '1'
- description:
+ description:
  email:
  - alexrudall@users.noreply.github.com
  executables: []
@@ -113,7 +113,8 @@ metadata:
  source_code_uri: https://github.com/alexrudall/ruby-openai
  changelog_uri: https://github.com/alexrudall/ruby-openai/blob/main/CHANGELOG.md
  rubygems_mfa_required: 'true'
- post_install_message:
+ funding_uri: https://github.com/sponsors/alexrudall
+ post_install_message:
  rdoc_options: []
  require_paths:
  - lib
@@ -128,8 +129,8 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.4.10
- signing_key:
+ rubygems_version: 3.4.22
+ signing_key:
  specification_version: 4
  summary: "OpenAI API + Ruby! \U0001F916\U0001FA75"
  test_files: []