gemini-ai 1.0.0
- checksums.yaml +7 -0
- data/.gitignore +1 -0
- data/.rubocop.yml +6 -0
- data/.ruby-version +1 -0
- data/Gemfile +10 -0
- data/Gemfile.lock +85 -0
- data/LICENSE +9 -0
- data/README.md +547 -0
- data/controllers/client.rb +84 -0
- data/gemini-ai.gemspec +37 -0
- data/ports/dsl/gemini-ai.rb +14 -0
- data/static/gem.rb +15 -0
- data/tasks/generate-readme.clj +39 -0
- data/template.md +528 -0
- metadata +114 -0
data/template.md
ADDED
@@ -0,0 +1,528 @@
# Gemini AI

A Ruby Gem for interacting with [Gemini](https://deepmind.google/technologies/gemini/) through [Vertex AI](https://cloud.google.com/vertex-ai), Google's generative AI service.

> _This Gem is designed to provide low-level access to Gemini, enabling people to build abstractions on top of it. If you are interested in more high-level abstractions or more user-friendly tools, you may want to consider [Nano Bots](https://github.com/icebaker/ruby-nano-bots) 💎 🤖._

## TL;DR and Quick Start

```ruby
gem 'gemini-ai', '~> 1.0'
```

```ruby
require 'gemini-ai'

client = Gemini.new(
  credentials: { file_path: 'google-credentials.json', project_id: 'PROJECT_ID', region: 'us-east4' },
  settings: { model: 'gemini-pro', stream: false }
)

result = client.stream_generate_content({
  contents: { role: 'user', parts: { text: 'hi!' } }
})
```

Result:
```ruby
[{ 'candidates' =>
   [{ 'content' => {
        'role' => 'model',
        'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
      },
      'finishReason' => 'STOP',
      'safetyRatings' =>
      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
   'usageMetadata' => {
     'promptTokenCount' => 2,
     'candidatesTokenCount' => 8,
     'totalTokenCount' => 10
   } }]
```

## Index

{index}

## Setup

```sh
gem install gemini-ai -v 1.0.0
```

Or add it to your Gemfile:

```ruby
gem 'gemini-ai', '~> 1.0'
```

### Credentials

> ⚠️ DISCLAIMER: Be careful with what you are doing, and never trust others' code related to this. These commands and instructions alter the level of access to your Google Cloud Account, and running them naively can lead to security risks as well as financial risks. People with access to your account can use it to steal data or incur charges. Run these commands at your own responsibility and due diligence; expect no warranties from the contributors of this project.

You need a [Google Cloud](https://console.cloud.google.com) [_Project_](https://cloud.google.com/resource-manager/docs/creating-managing-projects) and a [_Service Account_](https://cloud.google.com/iam/docs/service-account-overview) to use the [Vertex AI](https://cloud.google.com/vertex-ai) API.

After creating them, you need to enable the Vertex AI API for your project by clicking `Enable` here: [Vertex AI API](https://console.cloud.google.com/apis/library/aiplatform.googleapis.com).

You can create credentials for your _Service Account_ [here](https://console.cloud.google.com/apis/credentials), where you will be able to download a JSON file named `google-credentials.json` that should have content similar to this:

```json
{
  "type": "service_account",
  "project_id": "YOUR_PROJECT_ID",
  "private_key_id": "a00...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "PROJECT_ID@PROJECT_ID.iam.gserviceaccount.com",
  "client_id": "000...",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/..."
}
```
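
Before instantiating a client, it can help to sanity-check the downloaded file. A minimal sketch (the `valid_credentials?` helper and the list of required keys are illustrative, not part of the Gem):

```ruby
require 'json'

# Keys a Service Account credential file is expected to carry.
REQUIRED_KEYS = %w[project_id private_key client_email].freeze

# Returns true when the file parses as JSON and looks like a
# Service Account credential; false otherwise.
def valid_credentials?(file_path)
  credentials = JSON.parse(File.read(file_path))
  credentials['type'] == 'service_account' &&
    REQUIRED_KEYS.all? { |key| !credentials[key].to_s.empty? }
rescue JSON::ParserError, Errno::ENOENT
  false
end
```

This only checks the file's shape; it does not verify that the key is actually accepted by Google Cloud.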

You need to have the necessary [policies](https://cloud.google.com/iam/docs/policies) (`roles/aiplatform.user` and possibly `roles/ml.admin`) in place to use the Vertex AI API.

You can add them by navigating to the [IAM Console](https://console.cloud.google.com/iam-admin/iam) and clicking on the _"Edit principal"_ (✏️ pencil icon) next to your _Service Account_.

Alternatively, you can add them through the [gcloud CLI](https://cloud.google.com/sdk/gcloud) as follows:

```sh
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='serviceAccount:PROJECT_ID@PROJECT_ID.iam.gserviceaccount.com' \
  --role='roles/aiplatform.user'
```

Some people have reported trouble accessing the API, and adding the `roles/ml.admin` role fixed it:

```sh
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='serviceAccount:PROJECT_ID@PROJECT_ID.iam.gserviceaccount.com' \
  --role='roles/ml.admin'
```

If you are not using a _Service Account_:
```sh
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='user:YOUR@MAIL.COM' \
  --role='roles/aiplatform.user'

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='user:YOUR@MAIL.COM' \
  --role='roles/ml.admin'
```

> ⚠️ DISCLAIMER: Be careful with what you are doing, and never trust others' code related to this. These commands and instructions alter the level of access to your Google Cloud Account, and running them naively can lead to security risks as well as financial risks. People with access to your account can use it to steal data or incur charges. Run these commands at your own responsibility and due diligence; expect no warranties from the contributors of this project.

#### Required Data

After this, you should have all the necessary data and access to use Gemini: a `google-credentials.json` file, a `PROJECT_ID`, and a `REGION`:

```ruby
{
  file_path: 'google-credentials.json',
  project_id: 'PROJECT_ID',
  region: 'us-east4'
}
```

As of the writing of this README, the following regions support Gemini:
```text
Iowa (us-central1)
Las Vegas, Nevada (us-west4)
Montréal, Canada (northamerica-northeast1)
Northern Virginia (us-east4)
Oregon (us-west1)
Seoul, Korea (asia-northeast3)
Singapore (asia-southeast1)
Tokyo, Japan (asia-northeast1)
```

You can check whether new regions have become available in the [Gemini API](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini) documentation.

## Usage

### Client

Ensure that you have all the [required data](#required-data) for authentication.

Create a new client:
```ruby
require 'gemini-ai'

client = Gemini.new(
  credentials: { file_path: 'google-credentials.json', project_id: 'PROJECT_ID', region: 'us-east4' },
  settings: { model: 'gemini-pro', stream: false }
)
```

### Generate Content

#### Synchronous

```ruby
result = client.stream_generate_content({
  contents: { role: 'user', parts: { text: 'hi!' } }
})
```

Result:
```ruby
[{ 'candidates' =>
   [{ 'content' => {
        'role' => 'model',
        'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
      },
      'finishReason' => 'STOP',
      'safetyRatings' =>
      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
   'usageMetadata' => {
     'promptTokenCount' => 2,
     'candidatesTokenCount' => 8,
     'totalTokenCount' => 10
   } }]
```
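
Since the result is an array of events, the full answer may be split across several parts. A small helper to join them, assuming the response shape shown above (the `response_text` helper is illustrative, not part of the Gem):

```ruby
# Joins the text parts of every event's first candidate into one string.
# Parts without text (e.g. function calls) are skipped.
def response_text(result)
  result.map do |event|
    parts = event.dig('candidates', 0, 'content', 'parts') || []
    parts.filter_map { |part| part['text'] }.join
  end.join
end
```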

#### Streaming

You can set up the client to use streaming for all supported endpoints:
```ruby
client = Gemini.new(
  credentials: { file_path: 'google-credentials.json', project_id: 'PROJECT_ID', region: 'us-east4' },
  settings: { model: 'gemini-pro', stream: true }
)
```

Or, you can decide on a per-request basis:
```ruby
client.stream_generate_content(
  { contents: { role: 'user', parts: { text: 'hi!' } } },
  stream: true
)
```

With streaming enabled, you can use a block to receive the results:

```ruby
client.stream_generate_content(
  { contents: { role: 'user', parts: { text: 'hi!' } } }
) do |event, parsed, raw|
  puts event
end
```

Event:
```ruby
{ 'candidates' =>
  [{ 'content' => {
       'role' => 'model',
       'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
     },
     'finishReason' => 'STOP',
     'safetyRatings' =>
     [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
      { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
      { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
      { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
  'usageMetadata' => {
    'promptTokenCount' => 2,
    'candidatesTokenCount' => 8,
    'totalTokenCount' => 10
  } }
```
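
Each event usually carries only a fragment of the answer, so a common pattern is to accumulate the fragments as they arrive. A sketch, assuming events shaped like the one above (the `collect_fragment` lambda is illustrative, not part of the Gem):

```ruby
# Accumulates the text fragments of streamed events into a buffer.
buffer = +''

collect_fragment = lambda do |event|
  parts = event.dig('candidates', 0, 'content', 'parts') || []
  parts.each { |part| buffer << part['text'] if part['text'] }
end
```

Calling `collect_fragment.call(event)` inside the streaming block leaves the full answer in `buffer` once the stream finishes.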

#### Streaming Hang

Method calls will _hang_ until the stream finishes, so even without providing a block, you can get the final results of the stream events:

```ruby
result = client.stream_generate_content(
  { contents: { role: 'user', parts: { text: 'hi!' } } },
  stream: true
)
```

Result:
```ruby
[{ 'candidates' =>
   [{ 'content' => {
        'role' => 'model',
        'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
      },
      'finishReason' => 'STOP',
      'safetyRatings' =>
      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
   'usageMetadata' => {
     'promptTokenCount' => 2,
     'candidatesTokenCount' => 8,
     'totalTokenCount' => 10
   } }]
```

### Back-and-Forth Conversations

To maintain a back-and-forth conversation, you need to append the received responses and build a history for your requests:

```ruby
result = client.stream_generate_content(
  { contents: [
    { role: 'user', parts: { text: 'Hi! My name is Purple.' } },
    { role: 'model', parts: { text: "Hello Purple! It's nice to meet you." } },
    { role: 'user', parts: { text: "What's my name?" } }
  ] }
)
```

Result:
```ruby
[{ 'candidates' =>
   [{ 'content' =>
      { 'role' => 'model',
        'parts' => [
          { 'text' => "Purple.\n\nYou told me your name was Purple in your first message to me.\n\nIs there anything" }
        ] },
      'safetyRatings' =>
      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }] },
 { 'candidates' =>
   [{ 'content' => { 'role' => 'model', 'parts' => [{ 'text' => ' else I can help you with today, Purple?' }] },
      'finishReason' => 'STOP',
      'safetyRatings' =>
      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
   'usageMetadata' => {
     'promptTokenCount' => 24,
     'candidatesTokenCount' => 31,
     'totalTokenCount' => 55
   } }]
```
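
To keep the conversation going, you append the model's reply as a new `role: 'model'` entry before adding your next user message. A helper sketch, assuming the result shape shown above (the `append_model_turn` helper is illustrative, not part of the Gem):

```ruby
# Merges the streamed events into a single model turn and appends it
# to the conversation history, ready to be sent with the next request.
def append_model_turn(contents, result)
  text = result.map do |event|
    parts = event.dig('candidates', 0, 'content', 'parts') || []
    parts.filter_map { |part| part['text'] }.join
  end.join

  contents << { role: 'model', parts: { text: text } }
end
```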

### Tools (Functions) Calling

> As of the writing of this README, only the `gemini-pro` model [supports](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling#supported_models) tools (functions) calls.

You can provide specifications for [tools (functions)](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling) using [JSON Schema](https://json-schema.org) to generate potential calls to them:

```ruby
input = {
  tools: {
    function_declarations: [
      {
        name: 'date_and_time',
        description: 'Returns the current date and time in the ISO 8601 format for a given timezone.',
        parameters: {
          type: 'object',
          properties: {
            timezone: {
              type: 'string',
              description: 'A string represents the timezone to be used for providing a datetime, following the IANA (Internet Assigned Numbers Authority) Time Zone Database. Examples include "Asia/Tokyo" and "Europe/Paris". If not provided, the default timezone is the user\'s current timezone.'
            }
          }
        }
      }
    ]
  },
  contents: [
    { role: 'user', parts: { text: 'What time is it?' } }
  ]
}

result = client.stream_generate_content(input)
```

Which may return a request to perform a call:
```ruby
[{ 'candidates' =>
   [{ 'content' => {
        'role' => 'model',
        'parts' => [{ 'functionCall' => {
          'name' => 'date_and_time',
          'args' => { 'timezone' => 'local' }
        } }]
      },
      'finishReason' => 'STOP',
      'safetyRatings' =>
      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
   'usageMetadata' => { 'promptTokenCount' => 5, 'totalTokenCount' => 5 } }]
```

Based on these results, you can perform the requested calls and provide their outputs:
```ruby
gem 'tzinfo', '~> 2.0', '>= 2.0.6'
```

```ruby
require 'tzinfo'
require 'time'

function_calls = result.dig(0, 'candidates', 0, 'content', 'parts').filter do |part|
  part.key?('functionCall')
end

function_parts = []

function_calls.each do |function_call|
  next unless function_call['functionCall']['name'] == 'date_and_time'

  timezone = function_call.dig('functionCall', 'args', 'timezone')

  time = if !timezone.nil? && timezone != '' && timezone.downcase != 'local'
           TZInfo::Timezone.get(timezone).now
         else
           Time.now
         end

  function_output = time.iso8601

  function_parts << {
    functionResponse: {
      name: function_call['functionCall']['name'],
      response: {
        name: function_call['functionCall']['name'],
        content: function_output
      }
    }
  }
end

input[:contents] << result.dig(0, 'candidates', 0, 'content')
input[:contents] << { role: 'function', parts: function_parts }
```

This will be equivalent to the following final input:
```ruby
{ tools: { function_declarations: [
    { name: 'date_and_time',
      description: 'Returns the current date and time in the ISO 8601 format for a given timezone.',
      parameters: {
        type: 'object',
        properties: {
          timezone: {
            type: 'string',
            description: "A string represents the timezone to be used for providing a datetime, following the IANA (Internet Assigned Numbers Authority) Time Zone Database. Examples include \"Asia/Tokyo\" and \"Europe/Paris\". If not provided, the default timezone is the user's current timezone."
          }
        }
      } }
  ] },
  contents: [
    { role: 'user', parts: { text: 'What time is it?' } },
    { role: 'model',
      parts: [
        { functionCall: { name: 'date_and_time', args: { timezone: 'local' } } }
      ] },
    { role: 'function',
      parts: [{ functionResponse: {
        name: 'date_and_time',
        response: {
          name: 'date_and_time',
          content: '2023-12-13T21:15:11-03:00'
        }
      } }] }
  ] }
```

With the input properly arranged, you can make another request:
```ruby
result = client.stream_generate_content(input)
```

Which will result in:
```ruby
[{ 'candidates' =>
   [{ 'content' => { 'role' => 'model', 'parts' => [{ 'text' => 'It is 21:15.' }] },
      'finishReason' => 'STOP',
      'safetyRatings' =>
      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
       { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
   'usageMetadata' => { 'promptTokenCount' => 5, 'candidatesTokenCount' => 9, 'totalTokenCount' => 14 } }]
```
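
With tools enabled, each returned part is either a `functionCall` request or plain `text`, and your code needs to branch accordingly. A dispatch sketch, assuming the shapes shown above (the `classify_parts` helper is illustrative, not part of the Gem):

```ruby
# Splits the parts of a result into function-call requests and plain text,
# so each can be handled by the appropriate branch.
def classify_parts(result)
  parts = result.flat_map do |event|
    event.dig('candidates', 0, 'content', 'parts') || []
  end

  { function_calls: parts.select { |part| part.key?('functionCall') },
    texts: parts.filter_map { |part| part['text'] } }
end
```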

### New Functionalities and APIs

Google may launch a new endpoint that we haven't covered in the Gem yet. If that's the case, you may still be able to use it through the `request` method. For example, `stream_generate_content` is just a wrapper for `google/models/gemini-pro:streamGenerateContent`, which you can call directly like this:

```ruby
result = client.request(
  'streamGenerateContent',
  { contents: { role: 'user', parts: { text: 'hi!' } } }
)
```

## Development

```bash
bundle
rubocop -A
```

### Purpose

This Gem is designed to provide low-level access to Gemini, enabling people to build abstractions on top of it. If you are interested in more high-level abstractions or more user-friendly tools, you may want to consider [Nano Bots](https://github.com/icebaker/ruby-nano-bots) 💎 🤖.

### Publish to RubyGems

```bash
gem build gemini-ai.gemspec

gem signin

gem push gemini-ai-1.0.0.gem
```

### Updating the README

Update the `template.md` file and then:

```sh
bb tasks/generate-readme.clj
```

Trick for automatically updating the `README.md` when `template.md` changes:

```sh
sudo pacman -S inotify-tools # Arch / Manjaro
sudo apt-get install inotify-tools # Debian / Ubuntu / Raspberry Pi OS
sudo dnf install inotify-tools # Fedora / CentOS / RHEL

while inotifywait -e modify template.md; do bb tasks/generate-readme.clj; done
```

Trick for Markdown Live Preview:
```sh
pip install -U markdown_live_preview

mlp README.md -p 8076
```

## Resources and References

These resources and references may be useful throughout your learning process.

- [Getting Started with the Vertex AI Gemini API with cURL](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/getting-started/intro_gemini_curl.ipynb)
- [Gemini API Documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini)
- [Vertex AI API Documentation](https://cloud.google.com/vertex-ai/docs/reference)
- [REST Documentation](https://cloud.google.com/vertex-ai/docs/reference/rest)
- [Google DeepMind Gemini](https://deepmind.google/technologies/gemini/)
- [Stream responses from Generative AI models](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/streaming)
- [Function calling](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling)

## Disclaimer

This is not an official Google project, nor is it affiliated with Google in any way.

The software is distributed under the MIT License, which can be found at [https://github.com/gbaptista/gemini-ai/blob/main/LICENSE](https://github.com/gbaptista/gemini-ai/blob/main/LICENSE). This license includes a disclaimer of warranty. Moreover, the authors assume no responsibility for any damage or costs that may result from using this project. Use the Gemini AI Ruby Gem at your own risk.
metadata
ADDED
@@ -0,0 +1,114 @@
--- !ruby/object:Gem::Specification
name: gemini-ai
version: !ruby/object:Gem::Version
  version: 1.0.0
platform: ruby
authors:
- gbaptista
autorequire:
bindir: bin
cert_chain: []
date: 2023-12-14 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: event_stream_parser
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.0'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.0'
- !ruby/object:Gem::Dependency
  name: faraday
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '2.7'
    - - ">="
      - !ruby/object:Gem::Version
        version: 2.7.12
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '2.7'
    - - ">="
      - !ruby/object:Gem::Version
        version: 2.7.12
- !ruby/object:Gem::Dependency
  name: googleauth
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.9'
    - - ">="
      - !ruby/object:Gem::Version
        version: 1.9.1
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.9'
    - - ">="
      - !ruby/object:Gem::Version
        version: 1.9.1
description: A Ruby Gem for interacting with Gemini through Vertex AI, Google's generative
  AI service.
email:
executables: []
extensions: []
extra_rdoc_files: []
files:
- ".gitignore"
- ".rubocop.yml"
- ".ruby-version"
- Gemfile
- Gemfile.lock
- LICENSE
- README.md
- controllers/client.rb
- gemini-ai.gemspec
- ports/dsl/gemini-ai.rb
- static/gem.rb
- tasks/generate-readme.clj
- template.md
homepage: https://github.com/gbaptista/gemini-ai
licenses:
- MIT
metadata:
  allowed_push_host: https://rubygems.org
  homepage_uri: https://github.com/gbaptista/gemini-ai
  source_code_uri: https://github.com/gbaptista/gemini-ai
  rubygems_mfa_required: 'true'
post_install_message:
rdoc_options: []
require_paths:
- ports/dsl
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: 3.1.0
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubygems_version: 3.4.22
signing_key:
specification_version: 4
summary: Interact with Google's Gemini AI.
test_files: []