ruby-openai 7.1.0 → 7.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: a86dc627f27eeea7cf3eb1bf2eec2b0209d0bb8c11fef0eb6fd6518f7f10cfe9
-  data.tar.gz: 712ab627670853d680c8858a9d27aef5a82be09de8d53e5f7156ee608ba8d939
+  metadata.gz: 7108bf76aa6f30cd7c38b41967b0162d6a4014698c1c8364b8116e5c665c044f
+  data.tar.gz: 736587458668b4608e49fa8ec50e460da14eeda0dfb8a739c68806e3bd5d0a2c
 SHA512:
-  metadata.gz: 72e14dc39495046b71ca147953582a24f8c9261955f2ca2a8d898ca7f8e136b459c31583620c16db3fa80c39da61c1f3a4cc932c5b3f2e71741fed42719eaeaf
-  data.tar.gz: 82db19d40f9b44fedb73d8f310771af71096d3d7e8e56f96d000a70f4c61abb1f21cbe98187d70fec1c3d639921eabe8cd8e456cac92dbcfa6e2efcb655f865b
+  metadata.gz: 76cac4818b5941d5732becebc91675c1ebeaba4f27c1710888afbb1893f3f8ff4d18e24a3c90f4d4332eb89730b0f6195507cf99519e764876a614ca3927a0cb
+  data.tar.gz: 4757d8e11b494a75d0ea839ae073610adfa77317ba0e13ddabdd21bf10fd34127a5747138644dba4d9073aa509341bfeaed69a3d00150d1ae90a42ddeadd53e4
data/CHANGELOG.md CHANGED
@@ -5,6 +5,14 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [7.2.0] - 2024-10-10
+
+### Added
+
+- Add ability to pass parameters to Files#list endpoint - thanks to [@parterburn](https://github.com/parterburn)!
+- Add Velvet observability platform to README - thanks to [@philipithomas](https://github.com/philipithomas)
+- Add Assistants::Messages#delete endpoint - thanks to [@mochetts](https://github.com/mochetts)!
+
 ## [7.1.0] - 2024-06-10
 
 ### Added
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (7.1.0)
+    ruby-openai (7.2.0)
       event_stream_parser (>= 0.3.0, < 2.0.0)
       faraday (>= 1)
       faraday-multipart (>= 1)
@@ -38,7 +38,7 @@ GEM
     rainbow (3.1.1)
    rake (13.2.1)
    regexp_parser (2.8.0)
-    rexml (3.2.9)
+    rexml (3.3.6)
      strscan
    rspec (3.13.0)
      rspec-core (~> 3.13.0)
data/README.md CHANGED
@@ -8,7 +8,7 @@ Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖❤️
 
 Stream text with GPT-4o, transcribe and translate audio with Whisper, or create images with DALL·E...
 
-[🚢 Hire me](https://peaceterms.com?utm_source=ruby-openai&utm_medium=readme&utm_id=26072023) | [🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 Twitter](https://twitter.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney)
+[📚 Rails AI (FREE Book)](https://railsai.com) | [🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 X](https://x.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney)
 
 ## Contents
 
@@ -139,7 +139,9 @@ client = OpenAI::Client.new(access_token: "access_token_goes_here")
 
 #### Custom timeout or base URI
 
-The default timeout for any request using this library is 120 seconds. You can change that by passing a number of seconds to the `request_timeout` when initializing the client. You can also change the base URI used for all requests, eg. to use observability tools like [Helicone](https://docs.helicone.ai/quickstart/integrate-in-one-line-of-code), and add arbitrary other headers e.g. for [openai-caching-proxy-worker](https://github.com/6/openai-caching-proxy-worker):
+- The default timeout for any request using this library is 120 seconds. You can change that by passing a number of seconds to `request_timeout` when initializing the client.
+- You can also change the base URI used for all requests, e.g. to use observability tools like [Helicone](https://docs.helicone.ai/quickstart/integrate-in-one-line-of-code) or [Velvet](https://docs.usevelvet.com/docs/getting-started).
+- You can also add arbitrary other headers, e.g. for [openai-caching-proxy-worker](https://github.com/6/openai-caching-proxy-worker):
 
 ```ruby
 client = OpenAI::Client.new(
@@ -326,7 +328,28 @@ client.chat(
 # => "Anna is a young woman in her mid-twenties, with wavy chestnut hair that falls to her shoulders..."
 ```
 
-Note: OpenAPI currently does not report token usage for streaming responses. To count tokens while streaming, try `OpenAI.rough_token_count` or [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby). We think that each call to the stream proc corresponds to a single token, so you can also try counting the number of calls to the proc to get the completion token count.
+Note: To get usage information, you can provide the [`stream_options` parameter](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options) and OpenAI will provide a final chunk with the usage. Here is an example:
+
+```ruby
+stream_proc = proc { |chunk, _bytesize| puts "--------------"; puts chunk.inspect; }
+client.chat(
+  parameters: {
+    model: "gpt-4o",
+    stream: stream_proc,
+    stream_options: { include_usage: true },
+    messages: [{ role: "user", content: "Hello!" }],
+  })
+# => --------------
+# => {"id"=>"chatcmpl-7bbq05PiZqlHxjV1j7OHnKKDURKaf", "object"=>"chat.completion.chunk", "created"=>1718750612, "model"=>"gpt-4o-2024-05-13", "system_fingerprint"=>"fp_9cb5d38cf7", "choices"=>[{"index"=>0, "delta"=>{"role"=>"assistant", "content"=>""}, "logprobs"=>nil, "finish_reason"=>nil}], "usage"=>nil}
+# => --------------
+# => {"id"=>"chatcmpl-7bbq05PiZqlHxjV1j7OHnKKDURKaf", "object"=>"chat.completion.chunk", "created"=>1718750612, "model"=>"gpt-4o-2024-05-13", "system_fingerprint"=>"fp_9cb5d38cf7", "choices"=>[{"index"=>0, "delta"=>{"content"=>"Hello"}, "logprobs"=>nil, "finish_reason"=>nil}], "usage"=>nil}
+# => --------------
+# => ... more content chunks
+# => --------------
+# => {"id"=>"chatcmpl-7bbq05PiZqlHxjV1j7OHnKKDURKaf", "object"=>"chat.completion.chunk", "created"=>1718750612, "model"=>"gpt-4o-2024-05-13", "system_fingerprint"=>"fp_9cb5d38cf7", "choices"=>[{"index"=>0, "delta"=>{}, "logprobs"=>nil, "finish_reason"=>"stop"}], "usage"=>nil}
+# => --------------
+# => {"id"=>"chatcmpl-7bbq05PiZqlHxjV1j7OHnKKDURKaf", "object"=>"chat.completion.chunk", "created"=>1718750612, "model"=>"gpt-4o-2024-05-13", "system_fingerprint"=>"fp_9cb5d38cf7", "choices"=>[], "usage"=>{"prompt_tokens"=>9, "completion_tokens"=>9, "total_tokens"=>18}}
+```
 
 #### Vision
 
@@ -526,9 +549,11 @@ puts response.dig("data", 0, "embedding")
 ```
 
 ### Batches
+
 The Batches endpoint allows you to create and manage large batches of API requests to run asynchronously. Currently, the supported endpoints for batches are `/v1/chat/completions` (Chat Completions API) and `/v1/embeddings` (Embeddings API).
 
 To use the Batches endpoint, you need to first upload a JSONL file containing the batch requests using the Files endpoint. The file must be uploaded with the purpose set to `batch`. Each line in the JSONL file represents a single request and should have the following format:
+
 ```json
 {
   "custom_id": "request-1",
@@ -612,7 +637,9 @@ These files are in JSONL format, with each line representing the output or error
 If a request fails with a non-HTTP error, the error object will contain more information about the cause of the failure.
 
 ### Files
+
 #### For fine-tuning purposes
+
 Put your data in a `.jsonl` file like this:
 
 ```json
@@ -645,7 +672,6 @@ my_file = File.open("path/to/file.pdf", "rb")
 client.files.upload(parameters: { file: my_file, purpose: "assistants" })
 ```
 
-
 See supported file types on [API documentation](https://platform.openai.com/docs/assistants/tools/file-search/supported-files).
 
 ### Finetunes
@@ -701,6 +727,7 @@ client.finetunes.list_events(id: fine_tune_id)
 ```
 
 ### Vector Stores
+
 Vector Store objects give the File Search tool the ability to search your files.
 
 You can create a new vector store:
@@ -746,6 +773,7 @@ client.vector_stores.delete(id: vector_store_id)
 ```
 
 ### Vector Store Files
+
 Vector store files represent files inside a vector store.
 
 You can create a new vector store file by attaching a File to a vector store.
@@ -784,9 +812,11 @@ client.vector_store_files.delete(
   id: vector_store_file_id
 )
 ```
+
 Note: This will remove the file from the vector store but the file itself will not be deleted. To delete the file, use the delete file endpoint.
 
 ### Vector Store File Batches
+
 Vector store file batches represent operations to add multiple files to a vector store.
 
 You can create a new vector store file batch by attaching multiple Files to a vector store.
data/lib/openai/client.rb CHANGED
@@ -2,6 +2,7 @@ module OpenAI
   class Client
     include OpenAI::HTTP
 
+    SENSITIVE_ATTRIBUTES = %i[@access_token @organization_id @extra_headers].freeze
     CONFIG_KEYS = %i[
       api_type
       api_version
@@ -107,5 +108,15 @@ module OpenAI
         client.add_headers("OpenAI-Beta": apis.map { |k, v| "#{k}=#{v}" }.join(";"))
       end
     end
+
+    def inspect
+      vars = instance_variables.map do |var|
+        value = instance_variable_get(var)
+
+        SENSITIVE_ATTRIBUTES.include?(var) ? "#{var}=[REDACTED]" : "#{var}=#{value.inspect}"
+      end
+
+      "#<#{self.class}:#{object_id} #{vars.join(', ')}>"
+    end
   end
 end
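The redacting `#inspect` above keeps the access token out of logs and consoles. The pattern can be exercised standalone; `FakeClient` below is a hypothetical stand-in (not the gem's class) that copies the 7.2.0 logic, with made-up attributes for illustration:

```ruby
# Stand-in class demonstrating the redaction pattern added in 7.2.0.
# FakeClient and its attributes are illustrative only; the real constant
# lives on OpenAI::Client.
class FakeClient
  SENSITIVE_ATTRIBUTES = %i[@access_token @organization_id @extra_headers].freeze

  def initialize(access_token:, uri_base:)
    @access_token = access_token
    @uri_base = uri_base
  end

  def inspect
    vars = instance_variables.map do |var|
      value = instance_variable_get(var)

      # Sensitive ivars are masked; everything else is shown via #inspect.
      SENSITIVE_ATTRIBUTES.include?(var) ? "#{var}=[REDACTED]" : "#{var}=#{value.inspect}"
    end

    "#<#{self.class}:#{object_id} #{vars.join(', ')}>"
  end
end

puts FakeClient.new(access_token: "sk-secret", uri_base: "https://api.openai.com/").inspect
# Prints something like:
# #<FakeClient:10136 @access_token=[REDACTED], @uri_base="https://api.openai.com/">
```

Note the token never appears in the output, so accidentally dumping the client in IRB or a log line no longer leaks credentials.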
data/lib/openai/files.rb CHANGED
@@ -11,8 +11,8 @@ module OpenAI
       @client = client
     end
 
-    def list
-      @client.get(path: "/files")
+    def list(parameters: {})
+      @client.get(path: "/files", parameters: parameters)
     end
 
     def upload(parameters: {})
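With the new signature, `Files#list` forwards its `parameters:` hash to the client, which sends them as URL query parameters. A minimal sketch, assuming a stubbed client that only builds the request path (the real gem performs an HTTP GET and `StubClient` is hypothetical):

```ruby
require "uri"

# Hypothetical stand-in for the gem's HTTP layer: it returns the path it
# would request instead of performing a GET.
class StubClient
  def get(path:, parameters: {})
    query = URI.encode_www_form(parameters)
    query.empty? ? path : "#{path}?#{query}"
  end
end

class Files
  def initialize(client:)
    @client = client
  end

  # Mirrors the 7.2.0 signature: parameters become query params.
  def list(parameters: {})
    @client.get(path: "/files", parameters: parameters)
  end
end

puts Files.new(client: StubClient.new).list(parameters: { purpose: "assistants" })
# => /files?purpose=assistants
```

Calling `list` with no arguments still requests plain `/files`, so the change is backwards-compatible.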
@@ -16,8 +16,12 @@ module OpenAI
       @client.json_post(path: "/threads/#{thread_id}/messages", parameters: parameters)
     end
 
-    def modify(id:, thread_id:, parameters: {})
+    def modify(thread_id:, id:, parameters: {})
       @client.json_post(path: "/threads/#{thread_id}/messages/#{id}", parameters: parameters)
     end
+
+    def delete(thread_id:, id:)
+      @client.delete(path: "/threads/#{thread_id}/messages/#{id}")
+    end
   end
 end
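The new `delete` method interpolates both IDs into the request path. A sketch of that path construction, assuming a hypothetical `StubClient` that echoes the request instead of issuing an HTTP DELETE (the real client returns the parsed JSON response):

```ruby
# Hypothetical HTTP layer stand-in: records the DELETE it would send.
class StubClient
  def delete(path:)
    "DELETE #{path}"
  end
end

class Messages
  def initialize(client)
    @client = client
  end

  # Mirrors the 7.2.0 method: both keyword arguments end up in the path.
  def delete(thread_id:, id:)
    @client.delete(path: "/threads/#{thread_id}/messages/#{id}")
  end
end

puts Messages.new(StubClient.new).delete(thread_id: "thread_abc", id: "msg_123")
# => DELETE /threads/thread_abc/messages/msg_123
```

The reordering of `modify`'s keywords to `thread_id:, id:` is cosmetic (keyword arguments are order-independent at the call site) and matches the new `delete` signature.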
@@ -1,3 +1,3 @@
 module OpenAI
-  VERSION = "7.1.0".freeze
+  VERSION = "7.2.0".freeze
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ruby-openai
 version: !ruby/object:Gem::Version
-  version: 7.1.0
+  version: 7.2.0
 platform: ruby
 authors:
 - Alex
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2024-06-10 00:00:00.000000000 Z
+date: 2024-10-10 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: event_stream_parser