ruby-openai 7.0.0 → 7.0.1

This diff shows the changes between publicly released versions of this package, as published to a supported public registry. It is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 8334c2f0658ff33f39e96e8316b5c7122ca8810f5bfaca945c8d5f1add38b85a
-   data.tar.gz: b761ed842f7c27ba7f3e6e3137f6d60f2d51cb1a32f585989fd2b1992d256b46
+   metadata.gz: 4067ee94a830c3637d339dff7445d0176610bcb1d381754915a9279bedac59fa
+   data.tar.gz: b2f6e4e7331e1f60f32aa3c1fb5628124c3b09539b5d4e2a99ebf9e539420cd2
  SHA512:
-   metadata.gz: 7c53448def0a2a9849744a09c10f91fb4c16eb23fbf0d52be748574a9a96b05cb5641821c7728b9d95764cf50399f23c48bfdf3444d86ab80e65602e2264f6ba
-   data.tar.gz: 8806bbe03d2cde1b1fdfb691bcc2c4d803837faed6f001027135bdf9ed54a4bdf4eb01128c36d9d84b35622876ec087c494980e67b01a72a8ecb01040180cc72
+   metadata.gz: 6f81268cc24c111b011aa87e1a0a877a72caf7a8b9d52cea367c35edf222909f5b8648d6925e30755a71692bd00974fc6d630c055a1639d5bbb8e9a33da11ef9
+   data.tar.gz: 2393f1d34582d37c2ee23f07d81d792fb60b0d5f3e9e5e2b900aa056d4ec49b23244d1d7766a3256a521e7621d17fa349f53134f9bdd64499a1fb760b2818038
data/CHANGELOG.md CHANGED
@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+ ## [7.0.1] - 2024-04-30
+
+ ### Fixed
+
+ - Update to v2 of Assistants in Messages, Runs, RunSteps and Threads - thanks to [@willywg](https://github.com/willywg) and others for pointing this out.
+
  ## [7.0.0] - 2024-04-27

  ### Added
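The CHANGELOG entry above is, at bottom, about a request header. As a rough, hypothetical sketch (not the gem's actual implementation), `client.beta(assistants: "v2")` amounts to attaching OpenAI's `OpenAI-Beta` header to every Assistants-related request:

```ruby
# Illustrative only: build the value of the "OpenAI-Beta" header from the
# beta features a client opts into, e.g. { assistants: "v2" } => "assistants=v2".
def beta_headers(**betas)
  { "OpenAI-Beta" => betas.map { |feature, version| "#{feature}=#{version}" }.join(";") }
end

beta_headers(assistants: "v2")
# => { "OpenAI-Beta" => "assistants=v2" }
```

The 7.0.1 fix is precisely this `v1` → `v2` bump for the Messages, Runs, RunSteps and Threads classes, as the file diffs below show.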
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
    remote: .
    specs:
-     ruby-openai (7.0.0)
+     ruby-openai (7.0.1)
        event_stream_parser (>= 0.3.0, < 2.0.0)
        faraday (>= 1)
        faraday-multipart (>= 1)
data/README.md CHANGED
@@ -4,13 +4,13 @@
  [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/alexrudall/ruby-openai/blob/main/LICENSE.txt)
  [![CircleCI Build Status](https://circleci.com/gh/alexrudall/ruby-openai.svg?style=shield)](https://circleci.com/gh/alexrudall/ruby-openai)

- Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖🩵
+ Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖❤️

  Stream text with GPT-4, transcribe and translate audio with Whisper, or create images with DALL·E...

  [🚢 Hire me](https://peaceterms.com?utm_source=ruby-openai&utm_medium=readme&utm_id=26072023) | [🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 Twitter](https://twitter.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney)

- # Table of Contents
+ ## Contents

  - [Ruby OpenAI](#ruby-openai)
  - [Table of Contents](#table-of-contents)
@@ -27,6 +27,7 @@ Stream text with GPT-4, transcribe and translate audio with Whisper, or create i
  - [Faraday middleware](#faraday-middleware)
  - [Azure](#azure)
  - [Ollama](#ollama)
+ - [Groq](#groq)
  - [Counting Tokens](#counting-tokens)
  - [Models](#models)
  - [Chat](#chat)
@@ -100,7 +101,10 @@ require "openai"
  For a quick test you can pass your token directly to a new client:

  ```ruby
- client = OpenAI::Client.new(access_token: "access_token_goes_here")
+ client = OpenAI::Client.new(
+   access_token: "access_token_goes_here",
+   log_errors: true # Highly recommended in development, so you can see what errors OpenAI is returning. Not recommended in production.
+ )
  ```

  ### With Config
@@ -109,8 +113,9 @@ For a more robust setup, you can configure the gem with your API keys, for examp
  ```ruby
  OpenAI.configure do |config|
-   config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
-   config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional.
+   config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
+   config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional.
+   config.log_errors = true # Highly recommended in development, so you can see what errors OpenAI is returning. Not recommended in production.
  end
  ```

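The config example above uses `ENV.fetch` rather than `ENV[]`, and the distinction matters for catching missing credentials early. A small stdlib-only demonstration:

```ruby
# ENV.fetch raises KeyError when the variable is unset, unlike ENV[], which
# silently returns nil -- so a missing token fails fast instead of surfacing
# later as a confusing 401. (Variable names here are for illustration only.)
ENV["DEMO_OPENAI_TOKEN"] = "sk-demo"

token = ENV.fetch("DEMO_OPENAI_TOKEN")      # present: returns the value
org = ENV.fetch("DEMO_MISSING_ORG", nil)    # optional var: explicit default avoids the raise
```

For truly optional settings like `organization_id`, passing a default (or rescuing `KeyError`) keeps the fail-fast behavior only where it is wanted.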
@@ -239,6 +244,27 @@ client.chat(
  # => Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
  ```

+ #### Groq
+
+ [Groq API Chat](https://console.groq.com/docs/quickstart) is broadly compatible with the OpenAI API, with a [few minor differences](https://console.groq.com/docs/openai). Get an access token from [here](https://console.groq.com/keys), then:
+
+ ```ruby
+ client = OpenAI::Client.new(
+   access_token: "groq_access_token_goes_here",
+   uri_base: "https://api.groq.com/"
+ )
+
+ client.chat(
+   parameters: {
+     model: "llama3-8b-8192", # Required.
+     messages: [{ role: "user", content: "Hello!"}], # Required.
+     temperature: 0.7,
+     stream: proc do |chunk, _bytesize|
+       print chunk.dig("choices", 0, "delta", "content")
+     end
+   })
+ ```
+
  ### Counting Tokens

  OpenAI parses prompt text into [tokens](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them), which are words or portions of words. (These tokens are unrelated to your API access_token.) Counting tokens can help you estimate your [costs](https://openai.com/pricing). It can also help you ensure your prompt text size is within the max-token limits of your model's context window, and choose an appropriate [`max_tokens`](https://platform.openai.com/docs/api-reference/chat/create#chat/create-max_tokens) completion parameter so your response will fit as well.
@@ -247,7 +273,7 @@ To estimate the token-count of your text:

  ```ruby
  OpenAI.rough_token_count("Your text")
- ```
+ ````

  If you need a more accurate count, try [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby).

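As the README text above says, `OpenAI.rough_token_count` is only an estimate. A common rule of thumb for English text — illustrative only, not the gem's actual algorithm — is roughly four characters (about three-quarters of a word) per token:

```ruby
# Hypothetical helper, NOT OpenAI.rough_token_count itself: estimate tokens
# as the larger of chars/4 and the word count, a crude ceiling on both
# short-word-heavy and long-word-heavy text.
def rough_tokens(text)
  [(text.length / 4.0).ceil, text.split.length].max
end

rough_tokens("Your text")
# => 3  (9 chars / 4 rounds up to 3; word count is 2)
```

For billing or hard context-window limits, prefer a real tokenizer such as tiktoken_ruby; heuristics like this are only good enough for rough budgeting.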
@@ -320,12 +346,12 @@ puts response.dig("choices", 0, "message", "content")

  #### JSON Mode

- You can set the response_format to ask for responses in JSON (at least for `gpt-3.5-turbo-1106`):
+ You can set the response_format to ask for responses in JSON:

  ```ruby
  response = client.chat(
    parameters: {
-     model: "gpt-3.5-turbo-1106",
+     model: "gpt-3.5-turbo",
      response_format: { type: "json_object" },
      messages: [{ role: "user", content: "Hello! Give me some JSON please."}],
      temperature: 0.7,
@@ -345,7 +371,7 @@ You can stream it as well!
  ```ruby
  response = client.chat(
    parameters: {
-     model: "gpt-3.5-turbo-1106",
+     model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Can I have some JSON please?"}],
      response_format: { type: "json_object" },
      stream: proc do |chunk, _bytesize|
@@ -542,7 +568,7 @@ These files are in JSONL format, with each line representing the output or error
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1677858242,
-   "model": "gpt-3.5-turbo-0301",
+   "model": "gpt-3.5-turbo",
    "choices": [
      {
        "index": 0,
@@ -638,16 +664,19 @@ To create a new assistant:
  ```ruby
  response = client.assistants.create(
    parameters: {
-     model: "gpt-3.5-turbo-1106", # Retrieve via client.models.list. Assistants need 'gpt-3.5-turbo-1106' or later.
+     model: "gpt-3.5-turbo",
      name: "OpenAI-Ruby test assistant",
      description: nil,
-     instructions: "You are a helpful assistant for coding a OpenAI API client using the OpenAI-Ruby gem.",
+     instructions: "You are a Ruby dev bot. When asked a question, write and run Ruby code to answer the question",
      tools: [
-       { type: 'retrieval' }, # Allow access to files attached using file_ids
-       { type: 'code_interpreter' }, # Allow access to Python code interpreter
+       { type: "code_interpreter" },
      ],
-     "file_ids": ["file-123"], # See Files section above for how to upload files
-     "metadata": { my_internal_version_id: '1.0.0' }
+     tool_resources: {
+       "code_interpreter": {
+         "file_ids": [] # See Files section above for how to upload files
+       }
+     },
+     "metadata": { my_internal_version_id: "1.0.0" }
    })
  assistant_id = response["id"]
  ```
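The README change above tracks the Assistants v2 API shape: a top-level `file_ids` array becomes per-tool `tool_resources`. A hypothetical migration helper (the function name is illustrative, not part of the gem) shows the transformation:

```ruby
# Illustrative sketch: rewrite v1-style assistant parameters ({ file_ids: [...] })
# into the v2 shape, where file IDs live under tool_resources.code_interpreter.
def migrate_assistant_params(params)
  params = params.dup
  file_ids = params.delete(:file_ids) || []
  params[:tool_resources] = { code_interpreter: { file_ids: file_ids } }
  params
end

migrate_assistant_params(model: "gpt-3.5-turbo", file_ids: ["file-123"])
# => { model: "gpt-3.5-turbo", tool_resources: { code_interpreter: { file_ids: ["file-123"] } } }
```

Note that v2 also replaces the `retrieval` tool with `file_search` (which has its own `tool_resources` key), so a real migration would branch on the tool type rather than assuming `code_interpreter`.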
@@ -829,11 +858,7 @@ client.runs.list(thread_id: thread_id, parameters: { order: "asc", limit: 3 })
  You can also create a thread and run in one call like this:

  ```ruby
- response = client.threads.create_and_run(
-   parameters: {
-     model: 'gpt-3.5-turbo',
-     messages: [{ role: 'user', content: "What's deep learning?"}]
-   })
+ response = client.runs.create_thread_and_run(parameters: { assistant_id: assistant_id })
  run_id = response['id']
  thread_id = response['thread_id']
  ```
data/lib/openai/assistants.rb CHANGED
@@ -1,7 +1,9 @@
  module OpenAI
    class Assistants
+     BETA_VERSION = "v2".freeze
+
      def initialize(client:)
-       @client = client.beta(assistants: "v2")
+       @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
      end

      def list
data/lib/openai/batches.rb CHANGED
@@ -1,7 +1,7 @@
  module OpenAI
    class Batches
      def initialize(client:)
-       @client = client.beta(assistants: "v1")
+       @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
      end

      def list
data/lib/openai/messages.rb CHANGED
@@ -1,7 +1,7 @@
  module OpenAI
    class Messages
      def initialize(client:)
-       @client = client.beta(assistants: "v1")
+       @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
      end

      def list(thread_id:, parameters: {})
data/lib/openai/run_steps.rb CHANGED
@@ -1,7 +1,7 @@
  module OpenAI
    class RunSteps
      def initialize(client:)
-       @client = client.beta(assistants: "v1")
+       @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
      end

      def list(thread_id:, run_id:, parameters: {})
data/lib/openai/runs.rb CHANGED
@@ -1,7 +1,7 @@
  module OpenAI
    class Runs
      def initialize(client:)
-       @client = client.beta(assistants: "v1")
+       @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
      end

      def list(thread_id:, parameters: {})
data/lib/openai/threads.rb CHANGED
@@ -1,7 +1,7 @@
  module OpenAI
    class Threads
      def initialize(client:)
-       @client = client.beta(assistants: "v1")
+       @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
      end

      def retrieve(id:)
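The repeated change across these files follows one pattern: every resource class that hard-coded `"v1"` now reads a single shared constant, so the next beta-version bump touches one line. A minimal, self-contained sketch of that pattern (simplified from the gem's actual classes, which also take a `client:`):

```ruby
# Sketch of the 7.0.1 refactor: sibling classes reference one frozen constant
# instead of each carrying its own version-string literal.
module OpenAI
  class Assistants
    BETA_VERSION = "v2".freeze
  end

  class Threads
    def beta_version
      # Shared source of truth: bumping BETA_VERSION updates every class.
      OpenAI::Assistants::BETA_VERSION
    end
  end
end
```

The inconsistency this fixes is visible in the diffs above: in 7.0.0, `Assistants` was already on `"v2"` while `Messages`, `Runs`, `RunSteps` and `Threads` still sent `"v1"`.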
data/lib/openai/version.rb CHANGED
@@ -1,3 +1,3 @@
  module OpenAI
-   VERSION = "7.0.0".freeze
+   VERSION = "7.0.1".freeze
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: ruby-openai
  version: !ruby/object:Gem::Version
-   version: 7.0.0
+   version: 7.0.1
  platform: ruby
  authors:
  - Alex
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-04-27 00:00:00.000000000 Z
+ date: 2024-04-29 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: event_stream_parser