ruby-openai 6.5.0 → 7.0.1

This diff shows the changes between publicly released versions of the package, as they appear in their public registry. It is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: f3db6f0c15b1015a875950a23ad157cb9f6f4eaed2005c35f75fd97197ee3dd6
-  data.tar.gz: e11f7020b2db2d627646c584feb7dfb2163f3bd8233860e5115ffe0f8d65c681
+  metadata.gz: 4067ee94a830c3637d339dff7445d0176610bcb1d381754915a9279bedac59fa
+  data.tar.gz: b2f6e4e7331e1f60f32aa3c1fb5628124c3b09539b5d4e2a99ebf9e539420cd2
 SHA512:
-  metadata.gz: 76100b527ed276b83190df15c1241be39d58288285e93caaeb20ca94c937082e1d19dd99b9ea69e1758352105e749e38940f1f0192b97c50fb18a8e81d321911
-  data.tar.gz: 425db82e5ae928dba3136351b697ac53335ffbce5afb81c98920efda2a0b4c600cf83a8273c8c627ad4b25a352ba76d2e77d88f00a48d4993dcbbc318077be53
+  metadata.gz: 6f81268cc24c111b011aa87e1a0a877a72caf7a8b9d52cea367c35edf222909f5b8648d6925e30755a71692bd00974fc6d630c055a1639d5bbb8e9a33da11ef9
+  data.tar.gz: 2393f1d34582d37c2ee23f07d81d792fb60b0d5f3e9e5e2b900aa056d4ec49b23244d1d7766a3256a521e7621d17fa349f53134f9bdd64499a1fb760b2818038
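The checksum block above records SHA256 and SHA512 digests for the released gem artifacts. As a self-contained sketch, the same kind of SHA256 hexdigest can be computed for any file with Ruby's standard `Digest` library (the helper name here is ours, for illustration):

```ruby
require "digest"
require "tempfile"

# Compute a file's SHA256 hexdigest, the digest format RubyGems records
# in checksums.yaml.
def sha256_of(path)
  Digest::SHA256.file(path).hexdigest
end

Tempfile.create("demo") do |f|
  f.write("hello")
  f.flush
  puts sha256_of(f.path) # 64 lowercase hex characters
end
```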
data/.circleci/config.yml CHANGED
@@ -8,7 +8,7 @@ jobs:
   rubocop:
     parallelism: 1
     docker:
-      - image: cimg/ruby:3.1-node
+      - image: cimg/ruby:3.2-node
    steps:
      - checkout
      - ruby/install-deps
@@ -43,3 +43,4 @@ workflows:
             - cimg/ruby:3.0-node
             - cimg/ruby:3.1-node
             - cimg/ruby:3.2-node
+            - cimg/ruby:3.3-node
data/CHANGELOG.md CHANGED
@@ -5,6 +5,41 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [7.0.1] - 2024-04-30
+
+### Fixed
+
+- Update to v2 of Assistants in Messages, Runs, RunSteps and Threads - thanks to [@willywg](https://github.com/willywg) and others for pointing this out.
+
+## [7.0.0] - 2024-04-27
+
+### Added
+
+- Add support for Batches, thanks to [@simonx1](https://github.com/simonx1) for the PR!
+- Allow use of local LLMs like Ollama! Thanks to [@ThomasSevestre](https://github.com/ThomasSevestre)
+- Update to v2 of the Assistants beta & add documentation on streaming from an Assistant.
+- Add Assistants endpoint to create and run a thread in one go, thank you [@quocphien90](https://github.com/quocphien90)
+- Add missing parameters (order, limit, etc) to Runs, RunSteps and Messages - thanks to [@shalecraig](https://github.com/shalecraig) and [@coezbek](https://github.com/coezbek)
+- Add missing Messages#list spec - thanks [@adammeghji](https://github.com/adammeghji)
+- Add Messages#modify to README - thanks to [@nas887](https://github.com/nas887)
+- Don't add the api_version (`/v1/`) to base_uris that already include it - thanks to [@kaiwren](https://github.com/kaiwren) for raising this issue
+- Allow passing a `StringIO` to Files#upload - thanks again to [@simonx1](https://github.com/simonx1)
+- Add Ruby 3.3 to CI
+
+### Security
+
+- [BREAKING] ruby-openai will no longer log out API errors by default - you can reenable by passing `log_errors: true` to your client. This will help to prevent leaking secrets to logs. Thanks to [@lalunamel](https://github.com/lalunamel) for this PR.
+
+### Removed
+
+- [BREAKING] Remove deprecated edits endpoint.
+
+### Fixed
+
+- Fix README DALL·E 3 error - thanks to [@clayton](https://github.com/clayton)
+- Fix README tool_calls error and add missing tool_choice info - thanks to [@Jbrito6492](https://github.com/Jbrito6492)
+
 ## [6.5.0] - 2024-03-31
 
 ### Added
@@ -67,13 +102,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - [BREAKING] Switch from legacy Finetunes to the new Fine-tune-jobs endpoints. Implemented by [@lancecarlson](https://github.com/lancecarlson)
 - [BREAKING] Remove deprecated Completions endpoints - use Chat instead.
 
-### Fix
+### Fixed
 
 - [BREAKING] Fix issue where :stream parameters were replaced by a boolean in the client application. Thanks to [@martinjaimem](https://github.com/martinjaimem), [@vickymadrid03](https://github.com/vickymadrid03) and [@nicastelo](https://github.com/nicastelo) for spotting and fixing this issue.
 
 ## [5.2.0] - 2023-10-30
 
-### Fix
+### Fixed
 
 - Added more spec-compliant SSE parsing: see here https://html.spec.whatwg.org/multipage/server-sent-events.html#event-stream-interpretation
 - Fixes issue where OpenAI or an intermediary returns only partial JSON per chunk of streamed data
data/Gemfile CHANGED
@@ -6,7 +6,7 @@ gemspec
 gem "byebug", "~> 11.1.3"
 gem "dotenv", "~> 2.8.1"
 gem "rake", "~> 13.1"
-gem "rspec", "~> 3.12"
+gem "rspec", "~> 3.13"
 gem "rubocop", "~> 1.50.2"
 gem "vcr", "~> 6.1.0"
 gem "webmock", "~> 3.19.1"
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby-openai (6.5.0)
+    ruby-openai (7.0.1)
       event_stream_parser (>= 0.3.0, < 2.0.0)
       faraday (>= 1)
       faraday-multipart (>= 1)
@@ -16,10 +16,10 @@ GEM
     byebug (11.1.3)
     crack (0.4.5)
       rexml
-    diff-lcs (1.5.0)
+    diff-lcs (1.5.1)
     dotenv (2.8.1)
     event_stream_parser (1.0.0)
-    faraday (2.7.12)
+    faraday (2.8.1)
       base64
       faraday-net_http (>= 2.0, < 3.1)
       ruby2_keywords (>= 0.0.4)
@@ -37,19 +37,19 @@ GEM
     rake (13.1.0)
     regexp_parser (2.8.0)
     rexml (3.2.6)
-    rspec (3.12.0)
-      rspec-core (~> 3.12.0)
-      rspec-expectations (~> 3.12.0)
-      rspec-mocks (~> 3.12.0)
-    rspec-core (3.12.0)
-      rspec-support (~> 3.12.0)
-    rspec-expectations (3.12.2)
+    rspec (3.13.0)
+      rspec-core (~> 3.13.0)
+      rspec-expectations (~> 3.13.0)
+      rspec-mocks (~> 3.13.0)
+    rspec-core (3.13.0)
+      rspec-support (~> 3.13.0)
+    rspec-expectations (3.13.0)
       diff-lcs (>= 1.2.0, < 2.0)
-      rspec-support (~> 3.12.0)
-    rspec-mocks (3.12.3)
+      rspec-support (~> 3.13.0)
+    rspec-mocks (3.13.0)
       diff-lcs (>= 1.2.0, < 2.0)
-      rspec-support (~> 3.12.0)
-    rspec-support (3.12.0)
+      rspec-support (~> 3.13.0)
+    rspec-support (3.13.1)
     rubocop (1.50.2)
       json (~> 2.3)
       parallel (~> 1.10)
@@ -78,7 +78,7 @@ DEPENDENCIES
   byebug (~> 11.1.3)
   dotenv (~> 2.8.1)
   rake (~> 13.1)
-  rspec (~> 3.12)
+  rspec (~> 3.13)
   rubocop (~> 1.50.2)
   ruby-openai!
   vcr (~> 6.1.0)
data/README.md CHANGED
@@ -4,13 +4,13 @@
 [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/alexrudall/ruby-openai/blob/main/LICENSE.txt)
 [![CircleCI Build Status](https://circleci.com/gh/alexrudall/ruby-openai.svg?style=shield)](https://circleci.com/gh/alexrudall/ruby-openai)
 
-Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖🩵
+Use the [OpenAI API](https://openai.com/blog/openai-api/) with Ruby! 🤖❤️
 
 Stream text with GPT-4, transcribe and translate audio with Whisper, or create images with DALL·E...
 
 [🚢 Hire me](https://peaceterms.com?utm_source=ruby-openai&utm_medium=readme&utm_id=26072023) | [🎮 Ruby AI Builders Discord](https://discord.gg/k4Uc224xVD) | [🐦 Twitter](https://twitter.com/alexrudall) | [🧠 Anthropic Gem](https://github.com/alexrudall/anthropic) | [🚂 Midjourney Gem](https://github.com/alexrudall/midjourney)
 
-# Table of Contents
+## Contents
 
 - [Ruby OpenAI](#ruby-openai)
   - [Table of Contents](#table-of-contents)
@@ -22,11 +22,14 @@ Stream text with GPT-4, transcribe and translate audio with Whisper, or create i
     - [With Config](#with-config)
     - [Custom timeout or base URI](#custom-timeout-or-base-uri)
    - [Extra Headers per Client](#extra-headers-per-client)
-    - [Verbose Logging](#verbose-logging)
+    - [Logging](#logging)
+      - [Errors](#errors)
+      - [Faraday middleware](#faraday-middleware)
    - [Azure](#azure)
+    - [Ollama](#ollama)
+    - [Groq](#groq)
    - [Counting Tokens](#counting-tokens)
    - [Models](#models)
-      - [Examples](#examples)
    - [Chat](#chat)
      - [Streaming Chat](#streaming-chat)
      - [Vision](#vision)
@@ -34,6 +37,7 @@ Stream text with GPT-4, transcribe and translate audio with Whisper, or create i
      - [Functions](#functions)
    - [Edits](#edits)
    - [Embeddings](#embeddings)
+    - [Batches](#batches)
    - [Files](#files)
    - [Finetunes](#finetunes)
    - [Assistants](#assistants)
@@ -97,7 +101,10 @@ require "openai"
 For a quick test you can pass your token directly to a new client:
 
 ```ruby
-client = OpenAI::Client.new(access_token: "access_token_goes_here")
+client = OpenAI::Client.new(
+  access_token: "access_token_goes_here",
+  log_errors: true # Highly recommended in development, so you can see what errors OpenAI is returning. Not recommended in production.
+)
 ```
 
 ### With Config
@@ -106,8 +113,9 @@ For a more robust setup, you can configure the gem with your API keys, for examp
 
 ```ruby
 OpenAI.configure do |config|
-  config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
-  config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional.
+  config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
+  config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional.
+  config.log_errors = true # Highly recommended in development, so you can see what errors OpenAI is returning. Not recommended in production.
 end
 ```
@@ -146,6 +154,7 @@ or when configuring the gem:
 ```ruby
 OpenAI.configure do |config|
   config.access_token = ENV.fetch("OPENAI_ACCESS_TOKEN")
+  config.log_errors = true # Optional
   config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID") # Optional
   config.uri_base = "https://oai.hconeai.com/" # Optional
   config.request_timeout = 240 # Optional
@@ -166,7 +175,19 @@ client = OpenAI::Client.new(access_token: "access_token_goes_here")
 client.add_headers("X-Proxy-TTL" => "43200")
 ```
 
-#### Verbose Logging
+#### Logging
+
+##### Errors
+
+By default, `ruby-openai` does not log any `Faraday::Error`s encountered while executing a network request to avoid leaking data (e.g. 400s, 500s, SSL errors and more - see [here](https://www.rubydoc.info/github/lostisland/faraday/Faraday/Error) for a complete list of subclasses of `Faraday::Error` and what can cause them).
+
+If you would like to enable this functionality, you can set `log_errors` to `true` when configuring the client:
+
+```ruby
+client = OpenAI::Client.new(log_errors: true)
+```
+
+##### Faraday middleware
 
 You can pass [Faraday middleware](https://lostisland.github.io/faraday/#/middleware/index) to the client in a block, eg. to enable verbose logging with Ruby's [Logger](https://ruby-doc.org/3.2.2/stdlibs/logger/Logger.html):
@@ -191,6 +212,59 @@ To use the [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/cognit
 
 where `AZURE_OPENAI_URI` is e.g. `https://custom-domain.openai.azure.com/openai/deployments/gpt-35-turbo`
 
+#### Ollama
+
+Ollama allows you to run open-source LLMs, such as Llama 3, locally. It [offers chat compatibility](https://github.com/ollama/ollama/blob/main/docs/openai.md) with the OpenAI API.
+
+You can download Ollama [here](https://ollama.com/). On macOS you can install and run Ollama like this:
+
+```bash
+brew install ollama
+ollama serve
+ollama pull llama3:latest # In new terminal tab.
+```
+
+Create a client using your Ollama server and the pulled model, and stream a conversation for free:
+
+```ruby
+client = OpenAI::Client.new(
+  uri_base: "http://localhost:11434"
+)
+
+client.chat(
+  parameters: {
+    model: "llama3", # Required.
+    messages: [{ role: "user", content: "Hello!"}], # Required.
+    temperature: 0.7,
+    stream: proc do |chunk, _bytesize|
+      print chunk.dig("choices", 0, "delta", "content")
+    end
+  })
+
+# => Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
+```
+
+#### Groq
+
+[Groq API Chat](https://console.groq.com/docs/quickstart) is broadly compatible with the OpenAI API, with a [few minor differences](https://console.groq.com/docs/openai). Get an access token from [here](https://console.groq.com/keys), then:
+
+```ruby
+client = OpenAI::Client.new(
+  access_token: "groq_access_token_goes_here",
+  uri_base: "https://api.groq.com/"
+)
+
+client.chat(
+  parameters: {
+    model: "llama3-8b-8192", # Required.
+    messages: [{ role: "user", content: "Hello!"}], # Required.
+    temperature: 0.7,
+    stream: proc do |chunk, _bytesize|
+      print chunk.dig("choices", 0, "delta", "content")
+    end
+  })
+```
+
 ### Counting Tokens
 
 OpenAI parses prompt text into [tokens](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them), which are words or portions of words. (These tokens are unrelated to your API access_token.) Counting tokens can help you estimate your [costs](https://openai.com/pricing). It can also help you ensure your prompt text size is within the max-token limits of your model's context window, and choose an appropriate [`max_tokens`](https://platform.openai.com/docs/api-reference/chat/create#chat/create-max_tokens) completion parameter so your response will fit as well.
@@ -199,7 +273,7 @@ To estimate the token-count of your text:
 
 ```ruby
 OpenAI.rough_token_count("Your text")
-```
+````
 
 If you need a more accurate count, try [tiktoken_ruby](https://github.com/IAPark/tiktoken_ruby).
 
@@ -209,24 +283,9 @@ There are different models that can be used to generate text. For a full list an
 
 ```ruby
 client.models.list
-client.models.retrieve(id: "text-ada-001")
+client.models.retrieve(id: "gpt-3.5-turbo")
 ```
 
-#### Examples
-
-- [GPT-4 (limited beta)](https://platform.openai.com/docs/models/gpt-4)
-  - gpt-4 (uses current version)
-  - gpt-4-0314
-  - gpt-4-32k
-- [GPT-3.5](https://platform.openai.com/docs/models/gpt-3-5)
-  - gpt-3.5-turbo
-  - gpt-3.5-turbo-0301
-  - text-davinci-003
-- [GPT-3](https://platform.openai.com/docs/models/gpt-3)
-  - text-ada-001
-  - text-babbage-001
-  - text-curie-001
-
 ### Chat
 
 GPT is a model that can be used to generate text in a conversational style. You can use it to [generate a response](https://platform.openai.com/docs/api-reference/chat/create) to a sequence of [messages](https://platform.openai.com/docs/guides/chat/introduction):
@@ -287,12 +346,12 @@ puts response.dig("choices", 0, "message", "content")
 
 #### JSON Mode
 
-You can set the response_format to ask for responses in JSON (at least for `gpt-3.5-turbo-1106`):
+You can set the response_format to ask for responses in JSON:
 
 ```ruby
 response = client.chat(
   parameters: {
-    model: "gpt-3.5-turbo-1106",
+    model: "gpt-3.5-turbo",
     response_format: { type: "json_object" },
     messages: [{ role: "user", content: "Hello! Give me some JSON please."}],
     temperature: 0.7,
312
371
  ```ruby
313
372
  response = client.chat(
314
373
  parameters: {
315
- model: "gpt-3.5-turbo-1106",
374
+ model: "gpt-3.5-turbo",
316
375
  messages: [{ role: "user", content: "Can I have some JSON please?"}],
317
376
  response_format: { type: "json_object" },
318
377
  stream: proc do |chunk, _bytesize|
@@ -338,9 +397,10 @@ You can stream it as well!
 
 ### Functions
 
-You can describe and pass in functions and the model will intelligently choose to output a JSON object containing arguments to call those them. For example, if you want the model to use your method `get_current_weather` to get the current weather in a given location:
+You can describe and pass in functions and the model will intelligently choose to output a JSON object containing arguments to call them - eg., to use your method `get_current_weather` to get the weather in a given location. Note that tool_choice is optional, but if you exclude it, the model will choose whether to use the function or not ([see here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tool_choice)).
 
 ```ruby
+
 def get_current_weather(location:, unit: "fahrenheit")
   # use a weather api to fetch weather
 end
@@ -348,7 +408,7 @@ end
 response =
   client.chat(
     parameters: {
-      model: "gpt-3.5-turbo-0613",
+      model: "gpt-3.5-turbo",
      messages: [
        {
          "role": "user",
@@ -378,16 +438,22 @@ response =
        },
      }
    ],
+      tool_choice: {
+        type: "function",
+        function: {
+          name: "get_current_weather"
+        }
+      }
    },
  )
 
 message = response.dig("choices", 0, "message")
 
 if message["role"] == "assistant" && message["tool_calls"]
-  function_name = message.dig("tool_calls", "function", "name")
+  function_name = message.dig("tool_calls", 0, "function", "name")
   args =
     JSON.parse(
-      message.dig("tool_calls", "function", "arguments"),
+      message.dig("tool_calls", 0, "function", "arguments"),
      { symbolize_names: true },
    )
@@ -406,7 +472,7 @@ Hit the OpenAI API for a completion using other GPT-3 models:
 ```ruby
 response = client.completions(
   parameters: {
-    model: "text-davinci-001",
+    model: "gpt-3.5-turbo",
    prompt: "Once upon a time",
    max_tokens: 5
  })
417
- ### Edits
418
-
419
- Send a string and some instructions for what to do to the string:
420
-
421
- ```ruby
422
- response = client.edits(
423
- parameters: {
424
- model: "text-davinci-edit-001",
425
- input: "What day of the wek is it?",
426
- instruction: "Fix the spelling mistakes"
427
- }
428
- )
429
- puts response.dig("choices", 0, "text")
430
- # => What day of the week is it?
431
- ```
432
-
433
483
  ### Embeddings
434
484
 
435
485
  You can use the embeddings endpoint to get a vector of numbers representing an input. You can then compare these vectors for different inputs to efficiently check how similar the inputs are.
@@ -446,6 +496,94 @@ puts response.dig("data", 0, "embedding")
 # => Vector representation of your embedding
 ```
 
+### Batches
+
+The Batches endpoint allows you to create and manage large batches of API requests to run asynchronously. Currently, only the `/v1/chat/completions` endpoint is supported for batches.
+
+To use the Batches endpoint, you need to first upload a JSONL file containing the batch requests using the Files endpoint. The file must be uploaded with the purpose set to `batch`. Each line in the JSONL file represents a single request and should have the following format:
+
+```json
+{
+  "custom_id": "request-1",
+  "method": "POST",
+  "url": "/v1/chat/completions",
+  "body": {
+    "model": "gpt-3.5-turbo",
+    "messages": [
+      { "role": "system", "content": "You are a helpful assistant." },
+      { "role": "user", "content": "What is 2+2?" }
+    ]
+  }
+}
+```
+
+Once you have uploaded the JSONL file, you can create a new batch by providing the file ID, endpoint, and completion window:
+
+```ruby
+response = client.batches.create(
+  parameters: {
+    input_file_id: "file-abc123",
+    endpoint: "/v1/chat/completions",
+    completion_window: "24h"
+  }
+)
+batch_id = response["id"]
+```
+
+You can retrieve information about a specific batch using its ID:
+
+```ruby
+batch = client.batches.retrieve(id: batch_id)
+```
+
+To cancel a batch that is in progress:
+
+```ruby
+client.batches.cancel(id: batch_id)
+```
+
+You can also list all the batches:
+
+```ruby
+client.batches.list
+```
+
+Once the batch["completed_at"] is present, you can fetch the output or error files:
+
+```ruby
+batch = client.batches.retrieve(id: batch_id)
+output_file_id = batch["output_file_id"]
+output_response = client.files.content(id: output_file_id)
+error_file_id = batch["error_file_id"]
+error_response = client.files.content(id: error_file_id)
+```
+
+These files are in JSONL format, with each line representing the output or error for a single request. The lines can be in any order:
+
+```json
+{
+  "id": "response-1",
+  "custom_id": "request-1",
+  "response": {
+    "id": "chatcmpl-abc123",
+    "object": "chat.completion",
+    "created": 1677858242,
+    "model": "gpt-3.5-turbo",
+    "choices": [
+      {
+        "index": 0,
+        "message": {
+          "role": "assistant",
+          "content": "2+2 equals 4."
+        }
+      }
+    ]
+  }
+}
```
+
+If a request fails with a non-HTTP error, the error object will contain more information about the cause of the failure.
+
 ### Files
 
 Put your data in a `.jsonl` file like this:
@@ -455,7 +593,7 @@ Put your data in a `.jsonl` file like this:
 {"prompt":"@lakers disappoint for a third straight night ->", "completion":" negative"}
 ```
 
-and pass the path to `client.files.upload` to upload it to OpenAI, and then interact with it:
+and pass the path (or a StringIO object) to `client.files.upload` to upload it to OpenAI, and then interact with it:
 
 ```ruby
 client.files.upload(parameters: { file: "path/to/sentiment.jsonl", purpose: "fine-tune" })
@@ -480,7 +618,7 @@ You can then use this file ID to create a fine tuning job:
 response = client.finetunes.create(
   parameters: {
     training_file: file_id,
-    model: "gpt-3.5-turbo-0613"
+    model: "gpt-3.5-turbo"
  })
 fine_tune_id = response["id"]
 ```
@@ -519,23 +657,26 @@ client.finetunes.list_events(id: fine_tune_id)
 
 ### Assistants
 
-Assistants can call models to interact with threads and use tools to perform tasks (see [Assistant Overview](https://platform.openai.com/docs/assistants/overview)).
+Assistants are stateful actors that can have many conversations and use tools to perform tasks (see [Assistant Overview](https://platform.openai.com/docs/assistants/overview)).
 
-To create a new assistant (see [API documentation](https://platform.openai.com/docs/api-reference/assistants/createAssistant)):
+To create a new assistant:
 
 ```ruby
 response = client.assistants.create(
   parameters: {
-    model: "gpt-3.5-turbo-1106", # Retrieve via client.models.list. Assistants need 'gpt-3.5-turbo-1106' or later.
+    model: "gpt-3.5-turbo",
    name: "OpenAI-Ruby test assistant",
    description: nil,
-    instructions: "You are a helpful assistant for coding a OpenAI API client using the OpenAI-Ruby gem.",
+    instructions: "You are a Ruby dev bot. When asked a question, write and run Ruby code to answer the question",
    tools: [
-      { type: 'retrieval' }, # Allow access to files attached using file_ids
-      { type: 'code_interpreter' }, # Allow access to Python code interpreter
+      { type: "code_interpreter" },
    ],
-    "file_ids": ["file-123"], # See Files section above for how to upload files
-    "metadata": { my_internal_version_id: '1.0.0' }
+    tool_resources: {
+      "code_interpreter": {
+        "file_ids": [] # See Files section above for how to upload files
+      }
+    },
+    "metadata": { my_internal_version_id: "1.0.0" }
  })
 assistant_id = response["id"]
 ```
@@ -605,17 +746,36 @@ client.messages.retrieve(thread_id: thread_id, id: message_id) # -> Fails after
 
 ### Runs
 
-To submit a thread to be evaluated with the model of an assistant, create a `Run` as follows (Note: This is one place where OpenAI will take your money):
+To submit a thread to be evaluated with the model of an assistant, create a `Run` as follows:
 
 ```ruby
 # Create run (will use instruction/model/tools from Assistant's definition)
 response = client.runs.create(thread_id: thread_id,
   parameters: {
-    assistant_id: assistant_id
+    assistant_id: assistant_id,
+    max_prompt_tokens: 256,
+    max_completion_tokens: 16
  })
 run_id = response['id']
+```
+
+You can stream the message chunks as they come through:
 
-# Retrieve/poll Run to observe status
+```ruby
+client.runs.create(thread_id: thread_id,
+  parameters: {
+    assistant_id: assistant_id,
+    max_prompt_tokens: 256,
+    max_completion_tokens: 16,
+    stream: proc do |chunk, _bytesize|
+      print chunk.dig("delta", "content", 0, "text", "value") if chunk["object"] == "thread.message.delta"
+    end
+  })
+```
+
+To get the status of a Run:
+
+```
 response = client.runs.retrieve(id: run_id, thread_id: thread_id)
 status = response['status']
 ```
@@ -624,23 +784,22 @@ The `status` response can include the following strings `queued`, `in_progress`,
 
 ```ruby
 while true do
-
   response = client.runs.retrieve(id: run_id, thread_id: thread_id)
   status = response['status']
 
   case status
   when 'queued', 'in_progress', 'cancelling'
-      puts 'Sleeping'
-      sleep 1 # Wait one second and poll again
+    puts 'Sleeping'
+    sleep 1 # Wait one second and poll again
   when 'completed'
-      break # Exit loop and report result to user
+    break # Exit loop and report result to user
   when 'requires_action'
-      # Handle tool calls (see below)
+    # Handle tool calls (see below)
   when 'cancelled', 'failed', 'expired'
-      puts response['last_error'].inspect
-      break # or `exit`
+    puts response['last_error'].inspect
+    break # or `exit`
   else
-      puts "Unknown status response: #{status}"
+    puts "Unknown status response: #{status}"
   end
 end
 ```
@@ -649,10 +808,10 @@ If the `status` response indicates that the `run` is `completed`, the associated
 
 ```ruby
 # Either retrieve all messages in bulk again, or...
-messages = client.messages.list(thread_id: thread_id) # Note: as of 2023-12-11 adding limit or order options isn't working, yet
+messages = client.messages.list(thread_id: thread_id, parameters: { order: 'asc' })
 
 # Alternatively retrieve the `run steps` for the run which link to the messages:
-run_steps = client.run_steps.list(thread_id: thread_id, run_id: run_id)
+run_steps = client.run_steps.list(thread_id: thread_id, run_id: run_id, parameters: { order: 'asc' })
 new_message_ids = run_steps['data'].filter_map { |step|
   if step['type'] == 'message_creation'
     step.dig('step_details', "message_creation", "message_id")
@@ -679,10 +838,29 @@ new_messages.each { |msg|
 }
 ```
 
-At any time you can list all runs which have been performed on a particular thread or are currently running (in descending/newest first order):
+You can also update the metadata on messages, including messages that come from the assistant.
 
 ```ruby
-client.runs.list(thread_id: thread_id)
+metadata = {
+  user_id: "abc123"
+}
+message = client.messages.modify(id: message_id, thread_id: thread_id, parameters: { metadata: metadata })
+```
+
+At any time you can list all runs which have been performed on a particular thread or are currently running:
+
+```ruby
+client.runs.list(thread_id: thread_id, parameters: { order: "asc", limit: 3 })
+```
+
+#### Create and Run
+
+You can also create a thread and run in one call like this:
+
+```ruby
+response = client.runs.create_thread_and_run(parameters: { assistant_id: assistant_id })
+run_id = response['id']
+thread_id = response['thread_id']
 ```
 
 #### Runs involving function tools
@@ -746,7 +924,7 @@ puts response.dig("data", 0, "url")
 For DALL·E 3 the size of any generated images must be one of `1024x1024`, `1024x1792` or `1792x1024`. Additionally the quality of the image can be specified to either `standard` or `hd`.
 
 ```ruby
-response = client.images.generate(parameters: { prompt: "A springer spaniel cooking pasta wearing a hat of some sort", size: "1024x1792", quality: "standard" })
+response = client.images.generate(parameters: { prompt: "A springer spaniel cooking pasta wearing a hat of some sort", model: "dall-e-3", size: "1024x1792", quality: "standard" })
 puts response.dig("data", 0, "url")
 # => "https://oaidalleapiprodscus.blob.core.windows.net/private/org-Rf437IxKhh..."
 ```
@@ -845,7 +1023,7 @@ HTTP errors can be caught like this:
 
 ```
 begin
-  OpenAI::Client.new.models.retrieve(id: "text-ada-001")
+  OpenAI::Client.new.models.retrieve(id: "gpt-3.5-turbo")
 rescue Faraday::Error => e
   raise "Got a Faraday error: #{e}"
 end
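The README's Batches section above expects a JSONL input file with one chat request per line. As a self-contained sketch, such a file can be assembled in plain Ruby (the helper name, `custom_id` scheme, and model are illustrative, not part of the gem):

```ruby
require "json"

# Build a Batches input in memory: one JSON-encoded request per line,
# matching the per-line format shown in the README's Batches section.
def to_batch_jsonl(conversations)
  conversations.each_with_index.map do |messages, i|
    JSON.generate(
      custom_id: "request-#{i + 1}",
      method: "POST",
      url: "/v1/chat/completions",
      body: { model: "gpt-3.5-turbo", messages: messages }
    )
  end.join("\n")
end

puts to_batch_jsonl([[{ role: "user", content: "What is 2+2?" }]])
```

The resulting string can then be wrapped in a `StringIO` and passed to `Files#upload` with `purpose: "batch"`, per the changes in this release.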
data/lib/openai/assistants.rb CHANGED
@@ -1,7 +1,9 @@
 module OpenAI
   class Assistants
+    BETA_VERSION = "v2".freeze
+
     def initialize(client:)
-      @client = client.beta(assistants: "v1")
+      @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
     end
 
     def list
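The version bump above scopes Assistants requests to v2 of the beta. As a rough, self-contained sketch (not the gem's actual implementation), a beta-scoped client amounts to sending an `OpenAI-Beta` header naming the API version:

```ruby
# Illustrative only: what scoping a client to an Assistants beta version
# implies at the HTTP level - an "OpenAI-Beta" header such as "assistants=v2".
BETA_VERSION = "v2".freeze

def beta_headers(assistants:)
  { "OpenAI-Beta" => "assistants=#{assistants}" }
end

puts beta_headers(assistants: BETA_VERSION)
```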
data/lib/openai/batches.rb ADDED
@@ -0,0 +1,23 @@
+module OpenAI
+  class Batches
+    def initialize(client:)
+      @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
+    end
+
+    def list
+      @client.get(path: "/batches")
+    end
+
+    def retrieve(id:)
+      @client.get(path: "/batches/#{id}")
+    end
+
+    def create(parameters: {})
+      @client.json_post(path: "/batches", parameters: parameters)
+    end
+
+    def cancel(id:)
+      @client.post(path: "/batches/#{id}/cancel")
+    end
+  end
+end
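A batch created through the endpoints above completes asynchronously, so callers typically poll `retrieve` until `completed_at` appears. A hypothetical polling helper, with a stubbed client so the sketch is self-contained (the helper and stub names are ours):

```ruby
# Poll a batch until its "completed_at" field is present, then return it.
def wait_for_batch(client, id:, interval: 1)
  loop do
    batch = client.batches.retrieve(id: id)
    return batch if batch["completed_at"]
    sleep(interval)
  end
end

# Stubs standing in for OpenAI::Client, returning "completed" on the second poll.
FakeBatches = Struct.new(:responses) do
  def retrieve(id:)
    responses.shift
  end
end
FakeClient = Struct.new(:batches)

client = FakeClient.new(
  FakeBatches.new([
    { "status" => "in_progress" },
    { "completed_at" => 1_714_000_000, "output_file_id" => "file-out" }
  ])
)
puts wait_for_batch(client, id: "batch_abc", interval: 0)["output_file_id"]
# => file-out
```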
data/lib/openai/client.rb CHANGED
@@ -6,6 +6,7 @@ module OpenAI
     api_type
     api_version
     access_token
+    log_errors
     organization_id
     uri_base
     request_timeout
@@ -17,7 +18,10 @@ module OpenAI
    CONFIG_KEYS.each do |key|
      # Set instance variables like api_type & access_token. Fall back to global config
      # if not present.
-     instance_variable_set("@#{key}", config[key] || OpenAI.configuration.send(key))
+     instance_variable_set(
+       "@#{key}",
+       config[key].nil? ? OpenAI.configuration.send(key) : config[key]
+     )
    end
    @faraday_middleware = faraday_middleware
  end
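The switch from `||` to an explicit `nil?` check matters for boolean options such as the new `log_errors`: with `||`, a caller passing an explicit `false` would have it silently replaced by the global default. A small sketch of the two fallback strategies (method names are illustrative):

```ruby
# Stands in for OpenAI.configuration.send(key).
GLOBAL_DEFAULT = true

# Old behaviour: `||` treats false the same as nil, so a caller
# cannot explicitly disable a boolean option.
def fallback_with_or(local)
  local || GLOBAL_DEFAULT
end

# New behaviour: only nil ("not provided") falls back to the global.
def fallback_with_nil_check(local)
  local.nil? ? GLOBAL_DEFAULT : local
end

puts fallback_with_or(false)        # false is lost: prints true
puts fallback_with_nil_check(false) # false is respected: prints false
puts fallback_with_nil_check(nil)   # not provided: prints true
```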
@@ -26,10 +30,6 @@ module OpenAI
    json_post(path: "/chat/completions", parameters: parameters)
  end
 
- def edits(parameters: {})
-   json_post(path: "/edits", parameters: parameters)
- end
-
  def embeddings(parameters: {})
    json_post(path: "/embeddings", parameters: parameters)
  end
@@ -78,6 +78,10 @@ module OpenAI
    @run_steps ||= OpenAI::RunSteps.new(client: self)
  end
 
+ def batches
+   @batches ||= OpenAI::Batches.new(client: self)
+ end
+
  def moderations(parameters: {})
    json_post(path: "/moderations", parameters: parameters)
  end
data/lib/openai/files.rb CHANGED
@@ -1,5 +1,11 @@
  module OpenAI
    class Files
+     PURPOSES = %w[
+       assistants
+       batch
+       fine-tune
+     ].freeze
+
      def initialize(client:)
        @client = client
      end
@@ -9,12 +15,17 @@ module OpenAI
    end
 
    def upload(parameters: {})
-     validate(file: parameters[:file]) if parameters[:file].include?(".jsonl")
+     file_input = parameters[:file]
+     file = prepare_file_input(file_input: file_input)
+
+     validate(file: file, purpose: parameters[:purpose], file_input: file_input)
 
      @client.multipart_post(
        path: "/files",
-       parameters: parameters.merge(file: File.open(parameters[:file]))
+       parameters: parameters.merge(file: file)
      )
+   ensure
+     file.close if file.is_a?(File)
    end
 
    def retrieve(id:)
@@ -31,12 +42,33 @@ module OpenAI
 
    private
 
-   def validate(file:)
-     File.open(file).each_line.with_index do |line, index|
+   def prepare_file_input(file_input:)
+     if file_input.is_a?(String)
+       File.open(file_input)
+     elsif file_input.respond_to?(:read) && file_input.respond_to?(:rewind)
+       file_input
+     else
+       raise ArgumentError, "Invalid file - must be a StringIO object or a path to a file."
+     end
+   end
+
+   def validate(file:, purpose:, file_input:)
+     raise ArgumentError, "`file` is required" if file.nil?
+     unless PURPOSES.include?(purpose)
+       raise ArgumentError, "`purpose` must be one of `#{PURPOSES.join(',')}`"
+     end
+
+     validate_jsonl(file: file) if file_input.is_a?(String) && file_input.end_with?(".jsonl")
+   end
+
+   def validate_jsonl(file:)
+     file.each_line.with_index do |line, index|
        JSON.parse(line)
      rescue JSON::ParserError => e
        raise JSON::ParserError, "#{e.message} - found on line #{index + 1} of #{file}"
      end
+   ensure
+     file.rewind
    end
  end
end
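Uploads now accept either a file path or an IO-like object, validate the `purpose`, and check JSONL content line by line before sending anything. A self-contained sketch of that validation logic (top-level method names are illustrative, not the gem's public API):

```ruby
require "json"
require "stringio"

# Accept a path (String) or anything IO-like that can be read and rewound,
# mirroring prepare_file_input in files.rb.
def prepare_file_input(file_input)
  if file_input.is_a?(String)
    File.open(file_input)
  elsif file_input.respond_to?(:read) && file_input.respond_to?(:rewind)
    file_input
  else
    raise ArgumentError, "Invalid file - must be a StringIO object or a path to a file."
  end
end

# Parse each line as JSON, reporting the 1-based line number on failure,
# then rewind so the IO can still be uploaded afterwards.
def validate_jsonl(file)
  file.each_line.with_index do |line, index|
    JSON.parse(line)
  rescue JSON::ParserError => e
    raise JSON::ParserError, "#{e.message} - found on line #{index + 1}"
  end
ensure
  file.rewind
end

good = StringIO.new(%({"a":1}\n{"b":2}\n))
validate_jsonl(good)
puts good.read # the ensure rewinds, so the full content is still readable
```

The `ensure`d rewind is what lets the same IO be validated and then handed to the multipart upload.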
data/lib/openai/http.rb CHANGED
@@ -6,8 +6,8 @@ module OpenAI
  module HTTP
    include HTTPHeaders
 
-   def get(path:)
-     parse_jsonl(conn.get(uri(path: path)) do |req|
+   def get(path:, parameters: nil)
+     parse_jsonl(conn.get(uri(path: path), parameters) do |req|
        req.headers = headers
      end&.body)
    end
@@ -74,7 +74,7 @@ module OpenAI
    connection = Faraday.new do |f|
      f.options[:timeout] = @request_timeout
      f.request(:multipart) if multipart
-     f.use MiddlewareErrors
+     f.use MiddlewareErrors if @log_errors
      f.response :raise_error
      f.response :json
    end
@@ -88,6 +88,8 @@ module OpenAI
    if azure?
      base = File.join(@uri_base, path)
      "#{base}?api-version=#{@api_version}"
+   elsif @uri_base.include?(@api_version)
+     File.join(@uri_base, path)
    else
      File.join(@uri_base, @api_version, path)
    end
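The new `elsif` branch lets a custom `uri_base` that already embeds the API version (e.g. a proxy URL) pass through without the version being appended a second time. The three branches, sketched as a pure function (the helper name, `azure:` flag, and the proxy hostname are illustrative):

```ruby
# Simplified version of the uri helper in http.rb: Azure-style URLs get the
# version as a query parameter, bases that already contain the version are
# left alone, and everything else gets the version joined into the path.
def build_uri(path:, uri_base:, api_version:, azure: false)
  if azure
    "#{File.join(uri_base, path)}?api-version=#{api_version}"
  elsif uri_base.include?(api_version)
    File.join(uri_base, path)
  else
    File.join(uri_base, api_version, path)
  end
end

puts build_uri(path: "/models", uri_base: "https://api.openai.com/", api_version: "v1")
# => https://api.openai.com/v1/models
puts build_uri(path: "/models", uri_base: "https://my-proxy.example.com/v1", api_version: "v1")
# => https://my-proxy.example.com/v1/models
```

`File.join` collapses duplicate slashes at each join point, which is why the leading `/` on `path` is harmless.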
@@ -97,10 +99,14 @@ module OpenAI
    parameters&.transform_values do |value|
      next value unless value.respond_to?(:close) # File or IO object.
 
+     # Faraday::UploadIO does not require a path, so we will pass it
+     # only if it is available. This allows StringIO objects to be
+     # passed in as well.
+     path = value.respond_to?(:path) ? value.path : nil
      # Doesn't seem like OpenAI needs mime_type yet, so not worth
      # the library to figure this out. Hence the empty string
      # as the second argument.
-     Faraday::UploadIO.new(value, "", value.path)
+     Faraday::UploadIO.new(value, "", path)
    end
  end
data/lib/openai/messages.rb CHANGED
@@ -1,11 +1,11 @@
  module OpenAI
    class Messages
      def initialize(client:)
-       @client = client.beta(assistants: "v1")
+       @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
      end
 
-     def list(thread_id:)
-       @client.get(path: "/threads/#{thread_id}/messages")
+     def list(thread_id:, parameters: {})
+       @client.get(path: "/threads/#{thread_id}/messages", parameters: parameters)
      end
 
      def retrieve(thread_id:, id:)
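The `list` methods on Messages, Runs, and RunSteps now forward a `parameters` hash, which Faraday serializes into the query string of the GET request. The effect can be sketched with the standard library (the pagination values and thread id are illustrative):

```ruby
require "uri"

# Hypothetical pagination options for client.messages.list(thread_id:, parameters:).
params = { limit: 20, order: "asc", after: "msg_abc123" }

# Faraday encodes the hash much like URI.encode_www_form does.
query = URI.encode_www_form(params)
puts "/threads/thread_1/messages?#{query}"
# => /threads/thread_1/messages?limit=20&order=asc&after=msg_abc123
```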
data/lib/openai/run_steps.rb CHANGED
@@ -1,11 +1,11 @@
  module OpenAI
    class RunSteps
      def initialize(client:)
-       @client = client.beta(assistants: "v1")
+       @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
      end
 
-     def list(thread_id:, run_id:)
-       @client.get(path: "/threads/#{thread_id}/runs/#{run_id}/steps")
+     def list(thread_id:, run_id:, parameters: {})
+       @client.get(path: "/threads/#{thread_id}/runs/#{run_id}/steps", parameters: parameters)
      end
 
      def retrieve(thread_id:, run_id:, id:)
data/lib/openai/runs.rb CHANGED
@@ -1,11 +1,11 @@
  module OpenAI
    class Runs
      def initialize(client:)
-       @client = client.beta(assistants: "v1")
+       @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
      end
 
-     def list(thread_id:)
-       @client.get(path: "/threads/#{thread_id}/runs")
+     def list(thread_id:, parameters: {})
+       @client.get(path: "/threads/#{thread_id}/runs", parameters: parameters)
      end
 
      def retrieve(thread_id:, id:)
@@ -24,6 +24,10 @@ module OpenAI
      @client.post(path: "/threads/#{thread_id}/runs/#{id}/cancel")
    end
 
+   def create_thread_and_run(parameters: {})
+     @client.json_post(path: "/threads/runs", parameters: parameters)
+   end
+
    def submit_tool_outputs(thread_id:, run_id:, parameters: {})
      @client.json_post(path: "/threads/#{thread_id}/runs/#{run_id}/submit_tool_outputs",
                        parameters: parameters)
data/lib/openai/threads.rb CHANGED
@@ -1,7 +1,7 @@
  module OpenAI
    class Threads
      def initialize(client:)
-       @client = client.beta(assistants: "v1")
+       @client = client.beta(assistants: OpenAI::Assistants::BETA_VERSION)
      end
 
      def retrieve(id:)
data/lib/openai/version.rb CHANGED
@@ -1,3 +1,3 @@
  module OpenAI
-   VERSION = "6.5.0".freeze
+   VERSION = "7.0.1".freeze
  end
data/lib/openai.rb CHANGED
@@ -14,6 +14,7 @@ require_relative "openai/runs"
  require_relative "openai/run_steps"
  require_relative "openai/audio"
  require_relative "openai/version"
+ require_relative "openai/batches"
 
  module OpenAI
    class Error < StandardError; end
@@ -36,30 +37,30 @@ module OpenAI
    end
 
    class Configuration
-     attr_writer :access_token
-     attr_accessor :api_type, :api_version, :organization_id, :uri_base, :request_timeout,
+     attr_accessor :access_token,
+                   :api_type,
+                   :api_version,
+                   :log_errors,
+                   :organization_id,
+                   :uri_base,
+                   :request_timeout,
                    :extra_headers
 
      DEFAULT_API_VERSION = "v1".freeze
      DEFAULT_URI_BASE = "https://api.openai.com/".freeze
      DEFAULT_REQUEST_TIMEOUT = 120
+     DEFAULT_LOG_ERRORS = false
 
      def initialize
        @access_token = nil
        @api_type = nil
        @api_version = DEFAULT_API_VERSION
+       @log_errors = DEFAULT_LOG_ERRORS
        @organization_id = nil
        @uri_base = DEFAULT_URI_BASE
        @request_timeout = DEFAULT_REQUEST_TIMEOUT
        @extra_headers = {}
      end
-
-     def access_token
-       return @access_token if @access_token
-
-       error_text = "OpenAI access token missing! See https://github.com/alexrudall/ruby-openai#usage"
-       raise ConfigurationError, error_text
-     end
    end
 
    class << self
data/ruby-openai.gemspec CHANGED
@@ -6,7 +6,7 @@ Gem::Specification.new do |spec|
    spec.authors = ["Alex"]
    spec.email = ["alexrudall@users.noreply.github.com"]
 
-   spec.summary = "OpenAI API + Ruby! 🤖🩵"
+   spec.summary = "OpenAI API + Ruby! 🤖❤️"
    spec.homepage = "https://github.com/alexrudall/ruby-openai"
    spec.license = "MIT"
    spec.required_ruby_version = Gem::Requirement.new(">= 2.6.0")
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: ruby-openai
  version: !ruby/object:Gem::Version
-   version: 6.5.0
+   version: 7.0.1
  platform: ruby
  authors:
  - Alex
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-03-31 00:00:00.000000000 Z
+ date: 2024-04-29 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: event_stream_parser
@@ -89,6 +89,7 @@ files:
  - lib/openai.rb
  - lib/openai/assistants.rb
  - lib/openai/audio.rb
+ - lib/openai/batches.rb
  - lib/openai/client.rb
  - lib/openai/compatibility.rb
  - lib/openai/files.rb
@@ -129,8 +130,8 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
  requirements: []
- rubygems_version: 3.4.22
+ rubygems_version: 3.5.9
  signing_key:
  specification_version: 4
- summary: "OpenAI API + Ruby! \U0001F916\U0001FA75"
+ summary: "OpenAI API + Ruby! \U0001F916❤️"
  test_files: []