gemini-ai 3.2.0 → 4.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.gitignore +4 -0
- data/.rspec +1 -0
- data/.rubocop.yml +3 -0
- data/Gemfile +4 -2
- data/Gemfile.lock +47 -25
- data/README.md +430 -46
- data/components/errors.rb +2 -1
- data/controllers/client.rb +85 -16
- data/gemini-ai.gemspec +5 -1
- data/static/gem.rb +1 -1
- data/tasks/generate-readme.clj +1 -1
- data/template.md +387 -13
- metadata +9 -2
data/README.md
CHANGED
@@ -9,7 +9,7 @@ A Ruby Gem for interacting with [Gemini](https://deepmind.google/technologies/ge
 ## TL;DR and Quick Start
 
 ```ruby
-gem 'gemini-ai', '~>
+gem 'gemini-ai', '~> 4.1.0'
 ```
 
 ```ruby
@@ -34,6 +34,17 @@ client = Gemini.new(
   options: { model: 'gemini-pro', server_sent_events: true }
 )
 
+# With the Service Account Credentials File contents
+client = Gemini.new(
+  credentials: {
+    service: 'vertex-ai-api',
+    file_contents: File.read('google-credentials.json'),
+    # file_contents: ENV['GOOGLE_CREDENTIALS_FILE_CONTENTS'],
+    region: 'us-east4'
+  },
+  options: { model: 'gemini-pro', server_sent_events: true }
+)
+
 # With Application Default Credentials
 client = Gemini.new(
   credentials: {
@@ -73,41 +84,51 @@ Result:
 - [TL;DR and Quick Start](#tldr-and-quick-start)
 - [Index](#index)
 - [Setup](#setup)
-
-
-
-
-
-
-
+- [Installing](#installing)
+- [Credentials](#credentials)
+- [Option 1: API Key (Generative Language API)](#option-1-api-key-generative-language-api)
+- [Option 2: Service Account Credentials File (Vertex AI API)](#option-2-service-account-credentials-file-vertex-ai-api)
+- [Option 3: Application Default Credentials (Vertex AI API)](#option-3-application-default-credentials-vertex-ai-api)
+- [Required Data](#required-data)
+- [Custom Version](#custom-version)
+- [Available Models](#available-models)
 - [Usage](#usage)
-
-
-
-
-
-- [
-
-
-
-
-
-
-
-- [
-
-- [
-- [
-
-
-
-
-
-
+- [Client](#client)
+- [Methods](#methods)
+- [Chat](#chat)
+- [stream_generate_content](#stream_generate_content)
+- [Receiving Stream Events](#receiving-stream-events)
+- [Without Events](#without-events)
+- [generate_content](#generate_content)
+- [Embeddings](#embeddings)
+- [predict](#predict)
+- [embed_content](#embed_content)
+- [Modes](#modes)
+- [Text](#text)
+- [Image](#image)
+- [Video](#video)
+- [Streaming vs. Server-Sent Events (SSE)](#streaming-vs-server-sent-events-sse)
+- [Server-Sent Events (SSE) Hang](#server-sent-events-sse-hang)
+- [Non-Streaming](#non-streaming)
+- [Back-and-Forth Conversations](#back-and-forth-conversations)
+- [Safety Settings](#safety-settings)
+- [System Instructions](#system-instructions)
+- [JSON Format Responses](#json-format-responses)
+- [JSON Schema](#json-schema)
+- [Models That Support JSON](#models-that-support-json)
+- [Tools (Functions) Calling](#tools-functions-calling)
+- [New Functionalities and APIs](#new-functionalities-and-apis)
+- [Request Options](#request-options)
+- [Adapter](#adapter)
+- [Timeout](#timeout)
+- [Error Handling](#error-handling)
+- [Rescuing](#rescuing)
+- [For Short](#for-short)
+- [Errors](#errors)
 - [Development](#development)
-
-
-
+- [Purpose](#purpose)
+- [Publish to RubyGems](#publish-to-rubygems)
+- [Updating the README](#updating-the-readme)
 - [Resources and References](#resources-and-references)
 - [Disclaimer](#disclaimer)
 
@@ -116,11 +137,11 @@ Result:
 ### Installing
 
 ```sh
-gem install gemini-ai -v
+gem install gemini-ai -v 4.1.0
 ```
 
 ```sh
-gem 'gemini-ai', '~>
+gem 'gemini-ai', '~> 4.1.0'
 ```
 
 ### Credentials
@@ -202,7 +223,7 @@ Similar to [Option 2](#option-2-service-account-credentials-file-vertex-ai-api),
 For local development, you can generate your default credentials using the [gcloud CLI](https://cloud.google.com/sdk/gcloud) as follows:
 
 ```sh
-gcloud auth application-default login
+gcloud auth application-default login
 ```
 
 For more details about alternative methods and different environments, check the official documentation:
@@ -240,6 +261,23 @@ Remember that hardcoding your API key in code is unsafe; it's preferable to use
 }
 ```
 
+Alternatively, you can pass the file contents instead of the path:
+```ruby
+{
+  service: 'vertex-ai-api',
+  file_contents: File.read('google-credentials.json'),
+  region: 'us-east4'
+}
+```
+
+```ruby
+{
+  service: 'vertex-ai-api',
+  file_contents: ENV['GOOGLE_CREDENTIALS_FILE_CONTENTS'],
+  region: 'us-east4'
+}
+```
+
 **Option 3**: For _Application Default Credentials_, omit both the `api_key` and the `file_path`:
 
 ```ruby
@@ -298,6 +336,17 @@ client = Gemini.new(
   options: { model: 'gemini-pro', server_sent_events: true }
 )
 
+# With the Service Account Credentials File contents
+client = Gemini.new(
+  credentials: {
+    service: 'vertex-ai-api',
+    file_contents: File.read('google-credentials.json'),
+    # file_contents: ENV['GOOGLE_CREDENTIALS_FILE_CONTENTS'],
+    region: 'us-east4'
+  },
+  options: { model: 'gemini-pro', server_sent_events: true }
+)
+
 # With Application Default Credentials
 client = Gemini.new(
   credentials: {
@@ -309,6 +358,48 @@ client = Gemini.new(
 )
 ```
 
+## Available Models
+
+These models are accessible to the repository **author** as of June 2025 in the `us-east4` region. Access to models may vary by region, user, and account. All models here are expected to work, if you can access them. This is just a reference of what a "typical" user may expect to have access to right away:
+
+| Model | Vertex AI | Generative Language |
+|------------------------------------------|:---------:|:-------------------:|
+| gemini-pro-vision | ✅ | 🔒 |
+| gemini-pro | ✅ | ✅ |
+| gemini-1.5-pro-preview-0514 | ✅ | 🔒 |
+| gemini-1.5-pro-preview-0409 | ✅ | 🔒 |
+| gemini-1.5-pro | ✅ | ✅ |
+| gemini-1.5-flash-preview-0514 | ✅ | 🔒 |
+| gemini-1.5-flash | ✅ | ✅ |
+| gemini-1.0-pro-vision-latest | 🔒 | 🔒 |
+| gemini-1.0-pro-vision-001 | ✅ | 🔒 |
+| gemini-1.0-pro-vision | ✅ | 🔒 |
+| gemini-1.0-pro-latest | 🔒 | ✅ |
+| gemini-1.0-pro-002 | ✅ | 🔒 |
+| gemini-1.0-pro-001 | ✅ | ✅ |
+| gemini-1.0-pro | ✅ | ✅ |
+| gemini-ultra | 🔒 | 🔒 |
+| gemini-1.0-ultra | 🔒 | 🔒 |
+| gemini-1.0-ultra-001 | 🔒 | 🔒 |
+| text-embedding-preview-0514 | 🔒 | 🔒 |
+| text-embedding-preview-0409 | 🔒 | 🔒 |
+| text-embedding-004 | ✅ | ✅ |
+| embedding-001 | 🔒 | ✅ |
+| text-multilingual-embedding-002 | ✅ | 🔒 |
+| textembedding-gecko-multilingual@001 | ✅ | 🔒 |
+| textembedding-gecko-multilingual@latest | ✅ | 🔒 |
+| textembedding-gecko@001 | ✅ | 🔒 |
+| textembedding-gecko@002 | ✅ | 🔒 |
+| textembedding-gecko@003 | ✅ | 🔒 |
+| textembedding-gecko@latest | ✅ | 🔒 |
+
+You can follow new models at:
+
+- [Google models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models)
+- [Model versions and lifecycle](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versioning)
+
+This is [the code](https://gist.github.com/gbaptista/d7390901293bce81ee12ff4ec5fed62c) used for generating this table that you can use to explore your own access.
+
 ## Usage
 
 ### Client
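The gist linked in the added text is the author's script for generating that table. For a quick check of your own access, a rough sketch along the same lines could look like the following; the model list, credentials path, and region are placeholders, not taken from this release:

```ruby
# Hypothetical access probe: sends a trivial prompt to each candidate model
# and reports which ones respond. Model names and credentials are placeholders.
require 'gemini-ai'

%w[gemini-pro gemini-1.5-pro gemini-1.5-flash].each do |model|
  client = Gemini.new(
    credentials: {
      service: 'vertex-ai-api',
      file_path: 'google-credentials.json',
      region: 'us-east4'
    },
    options: { model: model, server_sent_events: false }
  )

  begin
    client.stream_generate_content(
      { contents: { role: 'user', parts: { text: 'hi!' } } }
    )
    puts "#{model}: accessible"
  rescue Gemini::Errors::RequestError => error
    puts "#{model}: #{error.message}"
  end
end
```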
@@ -337,6 +428,17 @@ client = Gemini.new(
   options: { model: 'gemini-pro', server_sent_events: true }
 )
 
+# With the Service Account Credentials File contents
+client = Gemini.new(
+  credentials: {
+    service: 'vertex-ai-api',
+    file_contents: File.read('google-credentials.json'),
+    # file_contents: ENV['GOOGLE_CREDENTIALS_FILE_CONTENTS'],
+    region: 'us-east4'
+  },
+  options: { model: 'gemini-pro', server_sent_events: true }
+)
+
 # With Application Default Credentials
 client = Gemini.new(
   credentials: {
@@ -349,9 +451,11 @@ client = Gemini.new(
 
 ### Methods
 
-####
+#### Chat
 
-#####
+##### stream_generate_content
+
+###### Receiving Stream Events
 
 Ensure that you have enabled [Server-Sent Events](#streaming-vs-server-sent-events-sse) before using blocks for streaming:
 
@@ -383,7 +487,7 @@ Event:
 } }
 ```
 
-
+###### Without Events
 
 You can use `stream_generate_content` without events:
 
@@ -423,7 +527,7 @@ result = client.stream_generate_content(
 end
 ```
 
-
+##### generate_content
 
 ```ruby
 result = client.generate_content(
@@ -452,6 +556,58 @@ Result:
 
 As of the writing of this README, only the `generative-language-api` service supports the `generate_content` method; `vertex-ai-api` does not.
 
+#### Embeddings
+
+##### predict
+
+Vertex AI API generates embeddings through the `predict` method ([documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings)), and you need a client set up to use an embedding model (e.g. `text-embedding-004`):
+
+```ruby
+result = client.predict(
+  { instances: [{ content: 'What is life?' }],
+    parameters: { autoTruncate: true } }
+)
+```
+
+Result:
+```ruby
+{ 'predictions' =>
+  [{ 'embeddings' =>
+     { 'statistics' => { 'truncated' => false, 'token_count' => 4 },
+       'values' =>
+       [-0.006861076690256596,
+        0.00020840796059928834,
+        -0.028549950569868088,
+        # ...
+        0.0020092015620321035,
+        0.03279878571629524,
+        -0.014905261807143688] } }],
+  'metadata' => { 'billableCharacterCount' => 11 } }
+```
+
+##### embed_content
+
+Generative Language API generates embeddings through the `embed_content` method ([documentation](https://ai.google.dev/api/rest/v1/models/embedContent)), and you need a client set up to use an embedding model (e.g. `text-embedding-004`):
+
+```ruby
+result = client.embed_content(
+  { content: { parts: [{ text: 'What is life?' }] } }
+)
+```
+
+Result:
+```ruby
+{ 'embedding' =>
+  { 'values' =>
+    [-0.0065307906,
+     -0.0001632607,
+     -0.028370803,
+
+     0.0019950708,
+     0.032798845,
+     -0.014878989] } }
+```
+
 ### Modes
 
 #### Text
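As a usage note beyond the diff itself: both methods return plain arrays of floats under `values`, so two texts can be compared directly. A minimal sketch, assuming a Generative Language API key in `GOOGLE_API_KEY`; the similarity math is not part of the gem:

```ruby
# Hypothetical follow-up: compare two texts using the 'values' arrays
# returned by embed_content. Credentials and model choice are placeholders.
require 'gemini-ai'

client = Gemini.new(
  credentials: { service: 'generative-language-api', api_key: ENV['GOOGLE_API_KEY'] },
  options: { model: 'text-embedding-004', server_sent_events: false }
)

vectors = ['What is life?', 'What is the meaning of life?'].map do |text|
  client.embed_content(
    { content: { parts: [{ text: text }] } }
  )['embedding']['values']
end

# Cosine similarity between the two embedding vectors.
dot = vectors[0].zip(vectors[1]).sum { |a, b| a * b }
norms = vectors.map { |vector| Math.sqrt(vector.sum { |value| value * value }) }

puts dot / (norms[0] * norms[1])
```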
@@ -757,6 +913,208 @@ Result:
 } }]
 ```
 
+### Safety Settings
+
+You can [configure safety attributes](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/configure-safety-attributes) for your requests.
+
+Harm Categories:
+> `HARM_CATEGORY_UNSPECIFIED`, `HARM_CATEGORY_HARASSMENT`, `HARM_CATEGORY_HATE_SPEECH`, `HARM_CATEGORY_SEXUALLY_EXPLICIT`, `HARM_CATEGORY_DANGEROUS_CONTENT`.
+
+Thresholds:
+> `BLOCK_NONE`, `BLOCK_ONLY_HIGH`, `BLOCK_MEDIUM_AND_ABOVE`, `BLOCK_LOW_AND_ABOVE`, `HARM_BLOCK_THRESHOLD_UNSPECIFIED`.
+
+Example:
+```ruby
+client.stream_generate_content(
+  {
+    contents: { role: 'user', parts: { text: 'hi!' } },
+    safetySettings: [
+      {
+        category: 'HARM_CATEGORY_UNSPECIFIED',
+        threshold: 'BLOCK_ONLY_HIGH'
+      },
+      {
+        category: 'HARM_CATEGORY_HARASSMENT',
+        threshold: 'BLOCK_ONLY_HIGH'
+      },
+      {
+        category: 'HARM_CATEGORY_HATE_SPEECH',
+        threshold: 'BLOCK_ONLY_HIGH'
+      },
+      {
+        category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
+        threshold: 'BLOCK_ONLY_HIGH'
+      },
+      {
+        category: 'HARM_CATEGORY_DANGEROUS_CONTENT',
+        threshold: 'BLOCK_ONLY_HIGH'
+      }
+    ]
+  }
+)
+```
+
+Google started to block the usage of `BLOCK_NONE` unless:
+
+> _User has requested a restricted HarmBlockThreshold setting BLOCK_NONE. You can get access either (a) through an allowlist via your Google account team, or (b) by switching your account type to monthly invoiced billing via this instruction: https://cloud.google.com/billing/docs/how-to/invoiced-billing_
+
+### System Instructions
+
+Some models support [system instructions](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/system-instructions):
+
+```ruby
+client.stream_generate_content(
+  { contents: { role: 'user', parts: { text: 'Hi! Who are you?' } },
+    system_instruction: { role: 'user', parts: { text: 'Your name is Neko.' } } }
+)
+```
+
+Output:
+```text
+Hi! I'm Neko, a factual language model from Google AI.
+```
+
+```ruby
+client.stream_generate_content(
+  { contents: { role: 'user', parts: { text: 'Hi! Who are you?' } },
+    system_instruction: {
+      role: 'user', parts: [
+        { text: 'You are a cat.' },
+        { text: 'Your name is Neko.' }
+      ]
+    } }
+)
+```
+
+Output:
+```text
+Meow! I'm Neko, a fluffy and playful cat. :3
+```
+
+### JSON Format Responses
+
+> _As of the writing of this README, only the `vertex-ai-api` service and `gemini` models version `1.5` support this feature._
+
+The Gemini API provides a configuration parameter to [request a response in JSON](https://ai.google.dev/gemini-api/docs/api-overview#json) format:
+
+```ruby
+require 'json'
+
+result = client.stream_generate_content(
+  {
+    contents: {
+      role: 'user',
+      parts: {
+        text: 'List 3 random colors.'
+      }
+    },
+    generation_config: {
+      response_mime_type: 'application/json'
+    }
+
+  }
+)
+
+json_string = result
+              .map { |response| response.dig('candidates', 0, 'content', 'parts') }
+              .map { |parts| parts.map { |part| part['text'] }.join }
+              .join
+
+puts JSON.parse(json_string).inspect
+```
+
+Output:
+```ruby
+{ 'colors' => ['Dark Salmon', 'Indigo', 'Lavender'] }
+```
+
+#### JSON Schema
+
+> _While Gemini 1.5 Flash models only accept a text description of the JSON schema you want returned, the Gemini 1.5 Pro models let you pass a schema object (or a Python type equivalent), and the model output will strictly follow that schema. This is also known as controlled generation or constrained decoding._
+
+You can also provide a [JSON Schema](https://json-schema.org) for the expected JSON output:
+
+```ruby
+require 'json'
+
+result = client.stream_generate_content(
+  {
+    contents: {
+      role: 'user',
+      parts: {
+        text: 'List 3 random colors.'
+      }
+    },
+    generation_config: {
+      response_mime_type: 'application/json',
+      response_schema: {
+        type: 'object',
+        properties: {
+          colors: {
+            type: 'array',
+            items: {
+              type: 'object',
+              properties: {
+                name: {
+                  type: 'string'
+                }
+              }
+            }
+          }
+        }
+      }
+    }
+  }
+)
+
+json_string = result
+              .map { |response| response.dig('candidates', 0, 'content', 'parts') }
+              .map { |parts| parts.map { |part| part['text'] }.join }
+              .join
+
+puts JSON.parse(json_string).inspect
+```
+
+Output:
+
+```ruby
+{ 'colors' => [
+  { 'name' => 'Lavender Blush' },
+  { 'name' => 'Medium Turquoise' },
+  { 'name' => 'Dark Slate Gray' }
+] }
+```
+
+#### Models That Support JSON
+
+These models are accessible to the repository **author** as of June 2025 in the `us-east4` region. Access to models may vary by region, user, and account.
+
+- ❌ Does not support JSON mode.
+- 🟡 Supports JSON mode but not Schema.
+- ✅ Supports JSON mode and Schema.
+- 🔒 I don't have access to the model.
+
+| Model | Vertex AI | Generative Language |
+|------------------------------------------|:---------:|:-------------------:|
+| gemini-pro-vision | ❌ | 🔒 |
+| gemini-pro | 🟡 | ❌ |
+| gemini-1.5-pro-preview-0514 | ✅ | 🔒 |
+| gemini-1.5-pro-preview-0409 | ✅ | 🔒 |
+| gemini-1.5-pro | ✅ | ❌ |
+| gemini-1.5-flash-preview-0514 | 🟡 | 🔒 |
+| gemini-1.5-flash | 🟡 | ❌ |
+| gemini-1.0-pro-vision-latest | 🔒 | 🔒 |
+| gemini-1.0-pro-vision-001 | ❌ | 🔒 |
+| gemini-1.0-pro-vision | ❌ | 🔒 |
+| gemini-1.0-pro-latest | 🔒 | ❌ |
+| gemini-1.0-pro-002 | 🟡 | 🔒 |
+| gemini-1.0-pro-001 | ❌ | ❌ |
+| gemini-1.0-pro | 🟡 | ❌ |
+| gemini-ultra | 🔒 | 🔒 |
+| gemini-1.0-ultra | 🔒 | 🔒 |
+| gemini-1.0-ultra-001 | 🔒 | 🔒 |
+
+
 ### Tools (Functions) Calling
 
 > As of the writing of this README, only the `vertex-ai-api` service and the `gemini-pro` model [supports](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling#supported_models) tools (functions) calls.
@@ -904,12 +1262,25 @@ Which will result in:
 
 ### New Functionalities and APIs
 
-Google may launch a new endpoint that we haven't covered in the Gem yet. If that's the case, you may still be able to use it through the `request` method. For example, `stream_generate_content` is just a wrapper for `google/models/gemini-pro:streamGenerateContent
+Google may launch a new endpoint that we haven't covered in the Gem yet. If that's the case, you may still be able to use it through the `request` method. For example, `stream_generate_content` is just a wrapper for `models/gemini-pro:streamGenerateContent` (Generative Language API) or `publishers/google/models/gemini-pro:streamGenerateContent` (Vertex AI API), which you can call directly like this:
 
 ```ruby
+# Generative Language API
 result = client.request(
-  'streamGenerateContent',
-  { contents: { role: 'user', parts: { text: 'hi!' } } }
+  'models/gemini-pro:streamGenerateContent',
+  { contents: { role: 'user', parts: { text: 'hi!' } } },
+  request_method: 'POST',
+  server_sent_events: true
+)
+```
+
+```ruby
+# Vertex AI API
+result = client.request(
+  'publishers/google/models/gemini-pro:streamGenerateContent',
+  { contents: { role: 'user', parts: { text: 'hi!' } } },
+  request_method: 'POST',
+  server_sent_events: true
 )
 ```
 
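Since `stream_generate_content` is described as a thin wrapper over this raw call, the `result` of the raw `request` examples can presumably be digested the same way the README digests streamed results in its JSON examples. A sketch under that assumption, reusing a `client` configured as in the examples above:

```ruby
# Hypothetical follow-up: collapse the streamed events from the raw request
# call into a single string, mirroring the README's own digest pattern.
# Assumes the raw call returns the same array-of-events shape as
# stream_generate_content.
result = client.request(
  'models/gemini-pro:streamGenerateContent',
  { contents: { role: 'user', parts: { text: 'hi!' } } },
  request_method: 'POST',
  server_sent_events: true
)

text = result
       .map { |event| event.dig('candidates', 0, 'content', 'parts') }
       .map { |parts| parts.map { |part| part['text'] }.join }
       .join

puts text
```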
@@ -1014,6 +1385,7 @@ GeminiError
 
 MissingProjectIdError
 UnsupportedServiceError
+ConflictingCredentialsError
 BlockWithoutServerSentEventsError
 
 RequestError
@@ -1025,7 +1397,14 @@ RequestError
 bundle
 rubocop -A
 
-
+rspec
+
+bundle exec ruby spec/tasks/run-available-models.rb
+bundle exec ruby spec/tasks/run-embed.rb
+bundle exec ruby spec/tasks/run-generate.rb
+bundle exec ruby spec/tasks/run-json.rb
+bundle exec ruby spec/tasks/run-safety.rb
+bundle exec ruby spec/tasks/run-system.rb
 ```
 
 ### Purpose
@@ -1039,7 +1418,7 @@ gem build gemini-ai.gemspec
 
 gem signin
 
-gem push gemini-ai-
+gem push gemini-ai-4.1.0.gem
 ```
 
 ### Updating the README
@@ -1083,6 +1462,11 @@ These resources and references may be useful throughout your learning process.
 - [Gemini API Documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini)
 - [Vertex AI API Documentation](https://cloud.google.com/vertex-ai/docs/reference)
 - [REST Documentation](https://cloud.google.com/vertex-ai/docs/reference/rest)
+- [Get text embeddings](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings)
+- [Use system instructions](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/system-instructions)
+- [Configure safety attributes](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/configure-safety-attributes)
+- [Google models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models)
+- [Model versions and lifecycle](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versioning)
 - [Google DeepMind Gemini](https://deepmind.google/technologies/gemini/)
 - [Stream responses from Generative AI models](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/streaming)
 - [Function calling](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling)
data/components/errors.rb
CHANGED
@@ -4,12 +4,13 @@ module Gemini
   module Errors
     class GeminiError < StandardError
       def initialize(message = nil)
-        super
+        super
       end
     end
 
     class MissingProjectIdError < GeminiError; end
     class UnsupportedServiceError < GeminiError; end
+    class ConflictingCredentialsError < GeminiError; end
     class BlockWithoutServerSentEventsError < GeminiError; end
 
     class RequestError < GeminiError
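The new `ConflictingCredentialsError` inherits from `GeminiError`, so it can be rescued like the other error classes. A minimal sketch follows; the assumption that supplying both an `api_key` and a `file_path` is what raises it is not confirmed by this diff:

```ruby
# Hypothetical handling of the new error class. The exact condition that
# raises it is an assumption; only the class itself is confirmed by this diff.
require 'gemini-ai'

begin
  Gemini.new(
    credentials: {
      service: 'vertex-ai-api',
      api_key: ENV['GOOGLE_API_KEY'],        # assumed conflict:
      file_path: 'google-credentials.json',  # two credential styles at once
      region: 'us-east4'
    },
    options: { model: 'gemini-pro', server_sent_events: true }
  )
rescue Gemini::Errors::ConflictingCredentialsError => error
  warn "Conflicting credentials provided: #{error.message}"
end
```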