gemini-ai 3.2.0 → 4.1.0

This diff shows the publicly released contents of the two package versions, as published to their respective registries; it is provided for informational purposes only.
data/template.md CHANGED
@@ -9,7 +9,7 @@ A Ruby Gem for interacting with [Gemini](https://deepmind.google/technologies/ge
  ## TL;DR and Quick Start

  ```ruby
- gem 'gemini-ai', '~> 3.2.0'
+ gem 'gemini-ai', '~> 4.1.0'
  ```

  ```ruby
@@ -34,6 +34,17 @@ client = Gemini.new(
    options: { model: 'gemini-pro', server_sent_events: true }
  )

+ # With the Service Account Credentials File contents
+ client = Gemini.new(
+   credentials: {
+     service: 'vertex-ai-api',
+     file_contents: File.read('google-credentials.json'),
+     # file_contents: ENV['GOOGLE_CREDENTIALS_FILE_CONTENTS'],
+     region: 'us-east4'
+   },
+   options: { model: 'gemini-pro', server_sent_events: true }
+ )
+
  # With Application Default Credentials
  client = Gemini.new(
    credentials: {
@@ -77,11 +88,11 @@ Result:
  ### Installing

  ```sh
- gem install gemini-ai -v 3.2.0
+ gem install gemini-ai -v 4.1.0
  ```

  ```sh
- gem 'gemini-ai', '~> 3.2.0'
+ gem 'gemini-ai', '~> 4.1.0'
  ```

  ### Credentials
@@ -163,7 +174,7 @@ Similar to [Option 2](#option-2-service-account-credentials-file-vertex-ai-api),
  For local development, you can generate your default credentials using the [gcloud CLI](https://cloud.google.com/sdk/gcloud) as follows:

  ```sh
- gcloud auth application-default login
+ gcloud auth application-default login
  ```

  For more details about alternative methods and different environments, check the official documentation:
@@ -201,6 +212,23 @@ Remember that hardcoding your API key in code is unsafe; it's preferable to use
  }
  ```

+ Alternatively, you can pass the file contents instead of the path:
+ ```ruby
+ {
+   service: 'vertex-ai-api',
+   file_contents: File.read('google-credentials.json'),
+   region: 'us-east4'
+ }
+ ```
+
+ ```ruby
+ {
+   service: 'vertex-ai-api',
+   file_contents: ENV['GOOGLE_CREDENTIALS_FILE_CONTENTS'],
+   region: 'us-east4'
+ }
+ ```
+
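A minimal sketch (using only the keys documented above) that prefers the environment variable, falls back to the file, and fails fast if the contents are not valid JSON:

```ruby
require 'json'

raw_credentials = ENV['GOOGLE_CREDENTIALS_FILE_CONTENTS'] || File.read('google-credentials.json')

JSON.parse(raw_credentials) # raises JSON::ParserError if the contents are not valid JSON

{
  service: 'vertex-ai-api',
  file_contents: raw_credentials,
  region: 'us-east4'
}
```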
  **Option 3**: For _Application Default Credentials_, omit both the `api_key` and the `file_path`:

  ```ruby
@@ -259,6 +287,17 @@ client = Gemini.new(
    options: { model: 'gemini-pro', server_sent_events: true }
  )

+ # With the Service Account Credentials File contents
+ client = Gemini.new(
+   credentials: {
+     service: 'vertex-ai-api',
+     file_contents: File.read('google-credentials.json'),
+     # file_contents: ENV['GOOGLE_CREDENTIALS_FILE_CONTENTS'],
+     region: 'us-east4'
+   },
+   options: { model: 'gemini-pro', server_sent_events: true }
+ )
+
  # With Application Default Credentials
  client = Gemini.new(
    credentials: {
@@ -270,6 +309,48 @@ client = Gemini.new(
  )
  ```

+ ## Available Models
+
+ These models are accessible to the repository **author** as of June 2025 in the `us-east4` region. Access to models may vary by region, user, and account. All models listed here are expected to work if you can access them; this is just a reference for what a "typical" user may expect to have access to right away:
+
+ | Model | Vertex AI | Generative Language |
+ |------------------------------------------|:---------:|:-------------------:|
+ | gemini-pro-vision | ✅ | 🔒 |
+ | gemini-pro | ✅ | ✅ |
+ | gemini-1.5-pro-preview-0514 | ✅ | 🔒 |
+ | gemini-1.5-pro-preview-0409 | ✅ | 🔒 |
+ | gemini-1.5-pro | ✅ | ✅ |
+ | gemini-1.5-flash-preview-0514 | ✅ | 🔒 |
+ | gemini-1.5-flash | ✅ | ✅ |
+ | gemini-1.0-pro-vision-latest | 🔒 | 🔒 |
+ | gemini-1.0-pro-vision-001 | ✅ | 🔒 |
+ | gemini-1.0-pro-vision | ✅ | 🔒 |
+ | gemini-1.0-pro-latest | 🔒 | ✅ |
+ | gemini-1.0-pro-002 | ✅ | 🔒 |
+ | gemini-1.0-pro-001 | ✅ | ✅ |
+ | gemini-1.0-pro | ✅ | ✅ |
+ | gemini-ultra | 🔒 | 🔒 |
+ | gemini-1.0-ultra | 🔒 | 🔒 |
+ | gemini-1.0-ultra-001 | 🔒 | 🔒 |
+ | text-embedding-preview-0514 | 🔒 | 🔒 |
+ | text-embedding-preview-0409 | 🔒 | 🔒 |
+ | text-embedding-004 | ✅ | ✅ |
+ | embedding-001 | 🔒 | ✅ |
+ | text-multilingual-embedding-002 | ✅ | 🔒 |
+ | textembedding-gecko-multilingual@001 | ✅ | 🔒 |
+ | textembedding-gecko-multilingual@latest | ✅ | 🔒 |
+ | textembedding-gecko@001 | ✅ | 🔒 |
+ | textembedding-gecko@002 | ✅ | 🔒 |
+ | textembedding-gecko@003 | ✅ | 🔒 |
+ | textembedding-gecko@latest | ✅ | 🔒 |
+
+ You can follow new models at:
+
+ - [Google models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models)
+ - [Model versions and lifecycle](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versioning)
+
+ This is [the code](https://gist.github.com/gbaptista/d7390901293bce81ee12ff4ec5fed62c) used to generate this table; you can use it to explore your own access.
+
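For reference, a rough and simplified sketch of the same idea (not the linked gist itself): it probes a few chat models against your own credentials, reusing the Vertex AI client setup shown above and treating any request failure as lack of access.

```ruby
require 'gemini-ai'

# Illustrative subset of chat models from the table above; embedding models
# would need predict/embed_content instead of stream_generate_content.
CANDIDATE_MODELS = %w[gemini-pro gemini-1.5-pro gemini-1.5-flash].freeze

CANDIDATE_MODELS.each do |model|
  client = Gemini.new(
    credentials: {
      service: 'vertex-ai-api',
      file_path: 'google-credentials.json',
      region: 'us-east4'
    },
    options: { model: model, server_sent_events: true }
  )

  begin
    client.stream_generate_content(
      { contents: { role: 'user', parts: { text: 'hi!' } } }
    )
    puts "#{model}: accessible"
  rescue StandardError => e
    puts "#{model}: not accessible (#{e.class})"
  end
end
```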
  ## Usage

  ### Client
@@ -298,6 +379,17 @@ client = Gemini.new(
    options: { model: 'gemini-pro', server_sent_events: true }
  )

+ # With the Service Account Credentials File contents
+ client = Gemini.new(
+   credentials: {
+     service: 'vertex-ai-api',
+     file_contents: File.read('google-credentials.json'),
+     # file_contents: ENV['GOOGLE_CREDENTIALS_FILE_CONTENTS'],
+     region: 'us-east4'
+   },
+   options: { model: 'gemini-pro', server_sent_events: true }
+ )
+
  # With Application Default Credentials
  client = Gemini.new(
    credentials: {
@@ -310,9 +402,11 @@ client = Gemini.new(

  ### Methods

- #### stream_generate_content
+ #### Chat

- ##### Receiving Stream Events
+ ##### stream_generate_content
+
+ ###### Receiving Stream Events

  Ensure that you have enabled [Server-Sent Events](#streaming-vs-server-sent-events-sse) before using blocks for streaming:

@@ -344,7 +438,7 @@ Event:
  } }
  ```

- ##### Without Events
+ ###### Without Events

  You can use `stream_generate_content` without events:

@@ -384,7 +478,7 @@ result = client.stream_generate_content(
  end
  ```

- #### generate_content
+ ##### generate_content

  ```ruby
  result = client.generate_content(
@@ -413,6 +507,58 @@ Result:

  As of the writing of this README, only the `generative-language-api` service supports the `generate_content` method; `vertex-ai-api` does not.
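If you need a single, non-streamed result on `vertex-ai-api`, one workaround (a sketch that only uses the `stream_generate_content` behavior described in "Without Events" above) is to collect the streamed events and join their text parts:

```ruby
# Collect the array of streamed events (no block given, SSE enabled) and
# concatenate the text parts into a single string.
events = client.stream_generate_content(
  { contents: { role: 'user', parts: { text: 'hi!' } } }
)

text = events
       .map { |event| event.dig('candidates', 0, 'content', 'parts') }
       .compact
       .map { |parts| parts.map { |part| part['text'] }.join }
       .join

puts text
```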

+ #### Embeddings
+
+ ##### predict
+
+ The Vertex AI API generates embeddings through the `predict` method ([documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings)); you need a client set up to use an embedding model (e.g. `text-embedding-004`):
+
+ ```ruby
+ result = client.predict(
+   { instances: [{ content: 'What is life?' }],
+     parameters: { autoTruncate: true } }
+ )
+ ```
+
+ Result:
+ ```ruby
+ { 'predictions' =>
+   [{ 'embeddings' =>
+      { 'statistics' => { 'truncated' => false, 'token_count' => 4 },
+        'values' =>
+        [-0.006861076690256596,
+         0.00020840796059928834,
+         -0.028549950569868088,
+         # ...
+         0.0020092015620321035,
+         0.03279878571629524,
+         -0.014905261807143688] } }],
+   'metadata' => { 'billableCharacterCount' => 11 } }
+ ```
+
+ ##### embed_content
+
+ The Generative Language API generates embeddings through the `embed_content` method ([documentation](https://ai.google.dev/api/rest/v1/models/embedContent)); you need a client set up to use an embedding model (e.g. `text-embedding-004`):
+
+ ```ruby
+ result = client.embed_content(
+   { content: { parts: [{ text: 'What is life?' }] } }
+ )
+ ```
+
+ Result:
+ ```ruby
+ { 'embedding' =>
+   { 'values' =>
+     [-0.0065307906,
+      -0.0001632607,
+      -0.028370803,
+      # ...
+      0.0019950708,
+      0.032798845,
+      -0.014878989] } }
+ ```
+
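As a usage note, the returned vectors can be compared with plain Ruby. A minimal sketch of cosine similarity between two `embed_content` results (the `dig` path follows the result shape shown above; the second sentence is just an illustrative input):

```ruby
# Cosine similarity between two embedding vectors.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

first  = client.embed_content({ content: { parts: [{ text: 'What is life?' }] } })
second = client.embed_content({ content: { parts: [{ text: 'What is the meaning of life?' }] } })

puts cosine_similarity(
  first.dig('embedding', 'values'),
  second.dig('embedding', 'values')
)
```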
  ### Modes

  #### Text
@@ -718,6 +864,208 @@ Result:
  } }]
  ```

+ ### Safety Settings
+
+ You can [configure safety attributes](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/configure-safety-attributes) for your requests.
+
+ Harm Categories:
+ > `HARM_CATEGORY_UNSPECIFIED`, `HARM_CATEGORY_HARASSMENT`, `HARM_CATEGORY_HATE_SPEECH`, `HARM_CATEGORY_SEXUALLY_EXPLICIT`, `HARM_CATEGORY_DANGEROUS_CONTENT`.
+
+ Thresholds:
+ > `BLOCK_NONE`, `BLOCK_ONLY_HIGH`, `BLOCK_MEDIUM_AND_ABOVE`, `BLOCK_LOW_AND_ABOVE`, `HARM_BLOCK_THRESHOLD_UNSPECIFIED`.
+
+ Example:
+ ```ruby
+ client.stream_generate_content(
+   {
+     contents: { role: 'user', parts: { text: 'hi!' } },
+     safetySettings: [
+       {
+         category: 'HARM_CATEGORY_UNSPECIFIED',
+         threshold: 'BLOCK_ONLY_HIGH'
+       },
+       {
+         category: 'HARM_CATEGORY_HARASSMENT',
+         threshold: 'BLOCK_ONLY_HIGH'
+       },
+       {
+         category: 'HARM_CATEGORY_HATE_SPEECH',
+         threshold: 'BLOCK_ONLY_HIGH'
+       },
+       {
+         category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
+         threshold: 'BLOCK_ONLY_HIGH'
+       },
+       {
+         category: 'HARM_CATEGORY_DANGEROUS_CONTENT',
+         threshold: 'BLOCK_ONLY_HIGH'
+       }
+     ]
+   }
+ )
+ ```
+
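Since the example above applies the same threshold to every category, a small helper can build that list. This is only a sketch; the category and threshold strings are the ones listed above:

```ruby
# Build a safetySettings array that applies one threshold to every harm
# category listed above.
HARM_CATEGORIES = %w[
  HARM_CATEGORY_UNSPECIFIED
  HARM_CATEGORY_HARASSMENT
  HARM_CATEGORY_HATE_SPEECH
  HARM_CATEGORY_SEXUALLY_EXPLICIT
  HARM_CATEGORY_DANGEROUS_CONTENT
].freeze

def safety_settings(threshold)
  HARM_CATEGORIES.map { |category| { category: category, threshold: threshold } }
end

client.stream_generate_content(
  {
    contents: { role: 'user', parts: { text: 'hi!' } },
    safetySettings: safety_settings('BLOCK_ONLY_HIGH')
  }
)
```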
+ Google has started to block the use of `BLOCK_NONE`; requests fail with the following message unless you qualify for one of the listed exceptions:
+
+ > _User has requested a restricted HarmBlockThreshold setting BLOCK_NONE. You can get access either (a) through an allowlist via your Google account team, or (b) by switching your account type to monthly invoiced billing via this instruction: https://cloud.google.com/billing/docs/how-to/invoiced-billing_
+
+ ### System Instructions
+
+ Some models support [system instructions](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/system-instructions):
+
+ ```ruby
+ client.stream_generate_content(
+   { contents: { role: 'user', parts: { text: 'Hi! Who are you?' } },
+     system_instruction: { role: 'user', parts: { text: 'Your name is Neko.' } } }
+ )
+ ```
+
+ Output:
+ ```text
+ Hi! I'm Neko, a factual language model from Google AI.
+ ```
+
+ ```ruby
+ client.stream_generate_content(
+   { contents: { role: 'user', parts: { text: 'Hi! Who are you?' } },
+     system_instruction: {
+       role: 'user', parts: [
+         { text: 'You are a cat.' },
+         { text: 'Your name is Neko.' }
+       ]
+     } }
+ )
+ ```
+
+ Output:
+ ```text
+ Meow! I'm Neko, a fluffy and playful cat. :3
+ ```
+
+ ### JSON Format Responses
+
+ > _As of the writing of this README, only the `vertex-ai-api` service and `gemini` models version `1.5` support this feature._
+
+ The Gemini API provides a configuration parameter to [request a response in JSON](https://ai.google.dev/gemini-api/docs/api-overview#json) format:
+
+ ```ruby
+ require 'json'
+
+ result = client.stream_generate_content(
+   {
+     contents: {
+       role: 'user',
+       parts: {
+         text: 'List 3 random colors.'
+       }
+     },
+     generation_config: {
+       response_mime_type: 'application/json'
+     }
+   }
+ )
+
+ json_string = result
+               .map { |response| response.dig('candidates', 0, 'content', 'parts') }
+               .map { |parts| parts.map { |part| part['text'] }.join }
+               .join
+
+ puts JSON.parse(json_string).inspect
+ ```
+
+ Output:
+ ```ruby
+ { 'colors' => ['Dark Salmon', 'Indigo', 'Lavender'] }
+ ```
+
+ #### JSON Schema
+
+ > _While Gemini 1.5 Flash models only accept a text description of the JSON schema you want returned, the Gemini 1.5 Pro models let you pass a schema object (or a Python type equivalent), and the model output will strictly follow that schema. This is also known as controlled generation or constrained decoding._
+
+ You can also provide a [JSON Schema](https://json-schema.org) for the expected JSON output:
+
+ ```ruby
+ require 'json'
+
+ result = client.stream_generate_content(
+   {
+     contents: {
+       role: 'user',
+       parts: {
+         text: 'List 3 random colors.'
+       }
+     },
+     generation_config: {
+       response_mime_type: 'application/json',
+       response_schema: {
+         type: 'object',
+         properties: {
+           colors: {
+             type: 'array',
+             items: {
+               type: 'object',
+               properties: {
+                 name: {
+                   type: 'string'
+                 }
+               }
+             }
+           }
+         }
+       }
+     }
+   }
+ )
+
+ json_string = result
+               .map { |response| response.dig('candidates', 0, 'content', 'parts') }
+               .map { |parts| parts.map { |part| part['text'] }.join }
+               .join
+
+ puts JSON.parse(json_string).inspect
+ ```
+
+ Output:
+
+ ```ruby
+ { 'colors' => [
+   { 'name' => 'Lavender Blush' },
+   { 'name' => 'Medium Turquoise' },
+   { 'name' => 'Dark Slate Gray' }
+ ] }
+ ```
+
+ #### Models That Support JSON
+
+ These models are accessible to the repository **author** as of June 2025 in the `us-east4` region. Access to models may vary by region, user, and account.
+
+ - ❌ Does not support JSON mode.
+ - 🟡 Supports JSON mode but not Schema.
+ - ✅ Supports JSON mode and Schema.
+ - 🔒 I don't have access to the model.
+
+ | Model | Vertex AI | Generative Language |
+ |------------------------------------------|:---------:|:-------------------:|
+ | gemini-pro-vision | ❌ | 🔒 |
+ | gemini-pro | 🟡 | ❌ |
+ | gemini-1.5-pro-preview-0514 | ✅ | 🔒 |
+ | gemini-1.5-pro-preview-0409 | ✅ | 🔒 |
+ | gemini-1.5-pro | ✅ | ❌ |
+ | gemini-1.5-flash-preview-0514 | 🟡 | 🔒 |
+ | gemini-1.5-flash | 🟡 | ❌ |
+ | gemini-1.0-pro-vision-latest | 🔒 | 🔒 |
+ | gemini-1.0-pro-vision-001 | ❌ | 🔒 |
+ | gemini-1.0-pro-vision | ❌ | 🔒 |
+ | gemini-1.0-pro-latest | 🔒 | ❌ |
+ | gemini-1.0-pro-002 | 🟡 | 🔒 |
+ | gemini-1.0-pro-001 | ❌ | ❌ |
+ | gemini-1.0-pro | 🟡 | ❌ |
+ | gemini-ultra | 🔒 | 🔒 |
+ | gemini-1.0-ultra | 🔒 | 🔒 |
+ | gemini-1.0-ultra-001 | 🔒 | 🔒 |
+
+
  ### Tools (Functions) Calling

  > As of the writing of this README, only the `vertex-ai-api` service and the `gemini-pro` model [support](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling#supported_models) tools (functions) calls.
@@ -865,12 +1213,25 @@ Which will result in:

  ### New Functionalities and APIs

- Google may launch a new endpoint that we haven't covered in the Gem yet. If that's the case, you may still be able to use it through the `request` method. For example, `stream_generate_content` is just a wrapper for `google/models/gemini-pro:streamGenerateContent`, which you can call directly like this:
+ Google may launch a new endpoint that we haven't covered in the Gem yet. If that's the case, you may still be able to use it through the `request` method. For example, `stream_generate_content` is just a wrapper for `models/gemini-pro:streamGenerateContent` (Generative Language API) or `publishers/google/models/gemini-pro:streamGenerateContent` (Vertex AI API), which you can call directly like this:

  ```ruby
+ # Generative Language API
  result = client.request(
-   'streamGenerateContent',
-   { contents: { role: 'user', parts: { text: 'hi!' } } }
+   'models/gemini-pro:streamGenerateContent',
+   { contents: { role: 'user', parts: { text: 'hi!' } } },
+   request_method: 'POST',
+   server_sent_events: true
+ )
+ ```
+
+ ```ruby
+ # Vertex AI API
+ result = client.request(
+   'publishers/google/models/gemini-pro:streamGenerateContent',
+   { contents: { role: 'user', parts: { text: 'hi!' } } },
+   request_method: 'POST',
+   server_sent_events: true
  )
  ```

@@ -975,6 +1336,7 @@ GeminiError

  MissingProjectIdError
  UnsupportedServiceError
+ ConflictingCredentialsError
  BlockWithoutServerSentEventsError

  RequestError
@@ -986,7 +1348,14 @@ RequestError
  bundle
  rubocop -A

- bundle exec ruby spec/tasks/run-client.rb
+ rspec
+
+ bundle exec ruby spec/tasks/run-available-models.rb
+ bundle exec ruby spec/tasks/run-embed.rb
+ bundle exec ruby spec/tasks/run-generate.rb
+ bundle exec ruby spec/tasks/run-json.rb
+ bundle exec ruby spec/tasks/run-safety.rb
+ bundle exec ruby spec/tasks/run-system.rb
  ```

  ### Purpose
@@ -1000,7 +1369,7 @@ gem build gemini-ai.gemspec

  gem signin

- gem push gemini-ai-3.2.0.gem
+ gem push gemini-ai-4.1.0.gem
  ```

  ### Updating the README
@@ -1044,6 +1413,11 @@ These resources and references may be useful throughout your learning process.
  - [Gemini API Documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini)
  - [Vertex AI API Documentation](https://cloud.google.com/vertex-ai/docs/reference)
  - [REST Documentation](https://cloud.google.com/vertex-ai/docs/reference/rest)
+ - [Get text embeddings](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings)
+ - [Use system instructions](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/system-instructions)
+ - [Configure safety attributes](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/configure-safety-attributes)
+ - [Google models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models)
+ - [Model versions and lifecycle](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versioning)
  - [Google DeepMind Gemini](https://deepmind.google/technologies/gemini/)
  - [Stream responses from Generative AI models](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/streaming)
  - [Function calling](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling)
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: gemini-ai
  version: !ruby/object:Gem::Version
-   version: 3.2.0
+   version: 4.1.0
  platform: ruby
  authors:
  - gbaptista
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2024-01-24 00:00:00.000000000 Z
+ date: 2024-06-23 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: event_stream_parser
@@ -31,6 +31,9 @@ dependencies:
      - - "~>"
        - !ruby/object:Gem::Version
          version: '2.9'
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 2.9.2
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
@@ -38,6 +41,9 @@ dependencies:
      - - "~>"
        - !ruby/object:Gem::Version
          version: '2.9'
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 2.9.2
  - !ruby/object:Gem::Dependency
    name: faraday-typhoeus
    requirement: !ruby/object:Gem::Requirement
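For reference, the tightened constraint added in these two hunks (`~> 2.9` combined with `>= 2.9.2`) resolves to "at least 2.9.2, below 3.0". A quick sanity check with RubyGems' own classes, independent of this gem:

```ruby
require 'rubygems'

requirement = Gem::Requirement.new('~> 2.9', '>= 2.9.2')

requirement.satisfied_by?(Gem::Version.new('2.9.1')) # => false
requirement.satisfied_by?(Gem::Version.new('2.9.2')) # => true
requirement.satisfied_by?(Gem::Version.new('3.0.0')) # => false
```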
@@ -94,6 +100,7 @@ extensions: []
  extra_rdoc_files: []
  files:
  - ".gitignore"
+ - ".rspec"
  - ".rubocop.yml"
  - ".ruby-version"
  - Gemfile