gemini-ai 2.1.0 → 3.0.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 3c6743cedf074b42aed890203de2d5db7697b6302c349c66bae9ac538bf2c680
- data.tar.gz: ad0c8dd1ba69f58c0e37ed170bbd758231adea02245a63eb10fffa9a57fa2400
+ metadata.gz: 80621f6cd2de526141e994a339d645b53492f6ece960955bc56f2e7430be1d0c
+ data.tar.gz: b627386a0dacc899112e0b08806f63b571dd980f2fb749df7681d7c70656d707
  SHA512:
- metadata.gz: 558e2a13c12931f6bfbde1d625cb38b7779e08d8eb5bc0a845861902f50fbcbc43b64c67348220f35316b4a37504daa220c334b0479fb5493973bc49ab34e77c
- data.tar.gz: 2df9f463de540a3d76f86252bd178ca5638caca7e172d59ea244f5f12d04c86d22dcbe2e219049a33ad318ffecc9aaff07c28ffe30aa69ebc109befa08acbcca
+ metadata.gz: f5425c3239dac6eca23cccdb2483f90bea23500b0830b865e8acc0fe235cf2ad3cb001d110e7a6ec0cbb59ed5c2f65e74f883edef961cb675b76e7d6117d88fc
+ data.tar.gz: 041122dc5a6d1d596411ea697503aa8d7846b4d2bc6c493b8b7d9102033c9ef3226957340d8caeb443751f9f4f465624a80ef1749e0f5893833e4c72d43ba907
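The checksums above can be verified against a local copy of the package (after something like `gem fetch gemini-ai -v 3.0.0` and unpacking the `.gem` tar, which contains `metadata.gz` and `data.tar.gz`). A minimal, hedged Ruby sketch of the hashing step, demonstrated on a throwaway file rather than the real gem:

```ruby
require 'digest'
require 'tempfile'

# Hash a file with SHA256, for comparison against the values in
# checksums.yaml. The gem-fetching and unpacking steps are assumed
# to have happened outside this script.
def sha256_hex(path)
  Digest::SHA256.file(path).hexdigest
end

Tempfile.create('checksum-demo') do |file|
  file.write('abc')
  file.flush
  puts sha256_hex(file.path)
  # => ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
end
```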
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- gemini-ai (2.1.0)
+ gemini-ai (3.0.0)
  event_stream_parser (~> 1.0)
  faraday (~> 2.7, >= 2.7.12)
  googleauth (~> 1.9, >= 1.9.1)
data/README.md CHANGED
@@ -9,7 +9,7 @@ A Ruby Gem for interacting with [Gemini](https://deepmind.google/technologies/ge
  ## TL;DR and Quick Start

  ```ruby
- gem 'gemini-ai', '~> 2.1.0'
+ gem 'gemini-ai', '~> 3.0.0'
  ```

  ```ruby
@@ -21,7 +21,7 @@ client = Gemini.new(
  service: 'generative-language-api',
  api_key: ENV['GOOGLE_API_KEY']
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )

  # With a Service Account Credentials File
@@ -31,7 +31,7 @@ client = Gemini.new(
  file_path: 'google-credentials.json',
  region: 'us-east4'
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )

  # With Application Default Credentials
@@ -40,7 +40,7 @@ client = Gemini.new(
  service: 'vertex-ai-api',
  region: 'us-east4'
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )

  result = client.stream_generate_content({
@@ -81,13 +81,25 @@ Result:
  - [Required Data](#required-data)
  - [Usage](#usage)
  - [Client](#client)
- - [Generate Content](#generate-content)
- - [Synchronous](#synchronous)
- - [Streaming](#streaming)
- - [Streaming Hang](#streaming-hang)
+ - [Methods](#methods)
+ - [stream_generate_content](#stream_generate_content)
+ - [Receiving Stream Events](#receiving-stream-events)
+ - [Without Events](#without-events)
+ - [generate_content](#generate_content)
+ - [Modes](#modes)
+ - [Text](#text)
+ - [Image](#image)
+ - [Video](#video)
+ - [Streaming vs. Server-Sent Events (SSE)](#streaming-vs-server-sent-events-sse)
+ - [Server-Sent Events (SSE) Hang](#server-sent-events-sse-hang)
+ - [Non-Streaming](#non-streaming)
  - [Back-and-Forth Conversations](#back-and-forth-conversations)
  - [Tools (Functions) Calling](#tools-functions-calling)
  - [New Functionalities and APIs](#new-functionalities-and-apis)
+ - [Error Handling](#error-handling)
+ - [Rescuing](#rescuing)
+ - [For Short](#for-short)
+ - [Errors](#errors)
  - [Development](#development)
  - [Purpose](#purpose)
  - [Publish to RubyGems](#publish-to-rubygems)
@@ -100,15 +112,20 @@ Result:
  ### Installing

  ```sh
- gem install gemini-ai -v 2.1.0
+ gem install gemini-ai -v 3.0.0
  ```

  ```sh
- gem 'gemini-ai', '~> 2.1.0'
+ gem 'gemini-ai', '~> 3.0.0'
  ```

  ### Credentials

+ - [Option 1: API Key (Generative Language API)](#option-1-api-key-generative-language-api)
+ - [Option 2: Service Account Credentials File (Vertex AI API)](#option-2-service-account-credentials-file-vertex-ai-api)
+ - [Option 3: Application Default Credentials (Vertex AI API)](#option-3-application-default-credentials-vertex-ai-api)
+ - [Required Data](#required-data)
+
  > ⚠️ DISCLAIMER: Be careful with what you are doing, and never trust others' code related to this. These commands and instructions alter the level of access to your Google Cloud Account, and running them naively can lead to security risks as well as financial risks. People with access to your account can use it to steal data or incur charges. Run these commands at your own responsibility and due diligence; expect no warranties from the contributors of this project.

  #### Option 1: API Key (Generative Language API)
@@ -266,7 +283,7 @@ client = Gemini.new(
  service: 'generative-language-api',
  api_key: ENV['GOOGLE_API_KEY']
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )

  # With a Service Account Credentials File
@@ -276,7 +293,7 @@ client = Gemini.new(
  file_path: 'google-credentials.json',
  region: 'us-east4'
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )

  # With Application Default Credentials
@@ -285,13 +302,118 @@ client = Gemini.new(
  service: 'vertex-ai-api',
  region: 'us-east4'
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
+ )
+ ```
+
+ ### Methods
+
+ #### stream_generate_content
+
+ ##### Receiving Stream Events
+
+ Ensure that you have enabled [Server-Sent Events](#streaming-vs-server-sent-events-sse) before using blocks for streaming:
+
+ ```ruby
+ client.stream_generate_content(
+ { contents: { role: 'user', parts: { text: 'hi!' } } }
+ ) do |event, parsed, raw|
+ puts event
+ end
+ ```
+
+ Event:
+ ```ruby
+ { 'candidates' =>
+ [{ 'content' => {
+ 'role' => 'model',
+ 'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
+ },
+ 'finishReason' => 'STOP',
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+ 'usageMetadata' => {
+ 'promptTokenCount' => 2,
+ 'candidatesTokenCount' => 8,
+ 'totalTokenCount' => 10
+ } }
+ ```
+
+ ##### Without Events
+
+ You can use `stream_generate_content` without events:
+
+ ```ruby
+ result = client.stream_generate_content(
+ { contents: { role: 'user', parts: { text: 'hi!' } } }
+ )
+ ```
+
+ In this case, the result will be an array with all the received events:
+
+ ```ruby
+ [{ 'candidates' =>
+ [{ 'content' => {
+ 'role' => 'model',
+ 'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
+ },
+ 'finishReason' => 'STOP',
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+ 'usageMetadata' => {
+ 'promptTokenCount' => 2,
+ 'candidatesTokenCount' => 8,
+ 'totalTokenCount' => 10
+ } }]
+ ```
+
+ You can mix both as well:
+ ```ruby
+ result = client.stream_generate_content(
+ { contents: { role: 'user', parts: { text: 'hi!' } } }
+ ) do |event, parsed, raw|
+ puts event
+ end
+ ```
+
+ #### generate_content
+
+ ```ruby
+ result = client.generate_content(
+ { contents: { role: 'user', parts: { text: 'hi!' } } }
  )
  ```

- ### Generate Content
+ Result:
+ ```ruby
+ { 'candidates' =>
+ [{ 'content' => { 'parts' => [{ 'text' => 'Hello! How can I assist you today?' }], 'role' => 'model' },
+ 'finishReason' => 'STOP',
+ 'index' => 0,
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+ 'promptFeedback' =>
+ { 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] } }
+ ```

- #### Synchronous
+ As of the writing of this README, only the `generative-language-api` service supports the `generate_content` method; `vertex-ai-api` does not.
+
+ ### Modes
+
+ #### Text

  ```ruby
  result = client.stream_generate_content({
@@ -319,13 +441,132 @@ Result:
  } }]
  ```

- #### Streaming
+ #### Image
+
+ ![A black and white image of an old piano. The piano is an upright model, with the keys on the right side of the image. The piano is sitting on a tiled floor. There is a small round object on the top of the piano.](https://raw.githubusercontent.com/gbaptista/assets/main/gemini-ai/piano.jpg)
+
+ > _Courtesy of [Unsplash](https://unsplash.com/photos/greyscale-photo-of-grand-piano-czPs0z3-Ggg)_
+
+ Switch to the `gemini-pro-vision` model:
+
+ ```ruby
+ client = Gemini.new(
+ credentials: { service: 'vertex-ai-api', region: 'us-east4' },
+ options: { model: 'gemini-pro-vision', server_sent_events: true }
+ )
+ ```
+
+ Then, encode the image as [Base64](https://en.wikipedia.org/wiki/Base64) and add its [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types):
+
+ ```ruby
+ require 'base64'
+
+ result = client.stream_generate_content(
+ { contents: [
+ { role: 'user', parts: [
+ { text: 'Please describe this image.' },
+ { inline_data: {
+ mime_type: 'image/jpeg',
+ data: Base64.strict_encode64(File.read('piano.jpg'))
+ } }
+ ] }
+ ] }
+ )
+ ```
+
+ The result:
+ ```ruby
+ [{ 'candidates' =>
+ [{ 'content' =>
+ { 'role' => 'model',
+ 'parts' =>
+ [{ 'text' =>
+ ' A black and white image of an old piano. The piano is an upright model, with the keys on the right side of the image. The piano is' }] },
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }] },
+ { 'candidates' =>
+ [{ 'content' => { 'role' => 'model', 'parts' => [{ 'text' => ' sitting on a tiled floor. There is a small round object on the top of the piano.' }] },
+ 'finishReason' => 'STOP',
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+ 'usageMetadata' => { 'promptTokenCount' => 263, 'candidatesTokenCount' => 50, 'totalTokenCount' => 313 } }]
+ ```
+
+ #### Video
+
+ https://gist.github.com/assets/29520/f82bccbf-02d2-4899-9c48-eb8a0a5ef741
+
+ > ALT: A white and gold cup is being filled with coffee. The coffee is dark and rich. The cup is sitting on a black surface. The background is blurred.
+
+ > _Courtesy of [Pexels](https://www.pexels.com/video/pouring-of-coffee-855391/)_
+
+ Switch to the `gemini-pro-vision` model:

- You can set up the client to use streaming for all supported endpoints:
+ ```ruby
+ client = Gemini.new(
+ credentials: { service: 'vertex-ai-api', region: 'us-east4' },
+ options: { model: 'gemini-pro-vision', server_sent_events: true }
+ )
+ ```
+
+ Then, encode the video as [Base64](https://en.wikipedia.org/wiki/Base64) and add its [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types):
+
+ ```ruby
+ require 'base64'
+
+ result = client.stream_generate_content(
+ { contents: [
+ { role: 'user', parts: [
+ { text: 'Please describe this video.' },
+ { inline_data: {
+ mime_type: 'video/mp4',
+ data: Base64.strict_encode64(File.read('coffee.mp4'))
+ } }
+ ] }
+ ] }
+ )
+ ```
+
+ The result:
+ ```ruby
+ [{"candidates"=>
+ [{"content"=>
+ {"role"=>"model",
+ "parts"=>
+ [{"text"=>
+ " A white and gold cup is being filled with coffee. The coffee is dark and rich. The cup is sitting on a black surface. The background is blurred"}]},
+ "safetyRatings"=>
+ [{"category"=>"HARM_CATEGORY_HARASSMENT", "probability"=>"NEGLIGIBLE"},
+ {"category"=>"HARM_CATEGORY_HATE_SPEECH", "probability"=>"NEGLIGIBLE"},
+ {"category"=>"HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability"=>"NEGLIGIBLE"},
+ {"category"=>"HARM_CATEGORY_DANGEROUS_CONTENT", "probability"=>"NEGLIGIBLE"}]}],
+ "usageMetadata"=>{"promptTokenCount"=>1037, "candidatesTokenCount"=>31, "totalTokenCount"=>1068}},
+ {"candidates"=>
+ [{"content"=>{"role"=>"model", "parts"=>[{"text"=>"."}]},
+ "finishReason"=>"STOP",
+ "safetyRatings"=>
+ [{"category"=>"HARM_CATEGORY_HARASSMENT", "probability"=>"NEGLIGIBLE"},
+ {"category"=>"HARM_CATEGORY_HATE_SPEECH", "probability"=>"NEGLIGIBLE"},
+ {"category"=>"HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability"=>"NEGLIGIBLE"},
+ {"category"=>"HARM_CATEGORY_DANGEROUS_CONTENT", "probability"=>"NEGLIGIBLE"}]}],
+ "usageMetadata"=>{"promptTokenCount"=>1037, "candidatesTokenCount"=>32, "totalTokenCount"=>1069}}]
+ ```
+
+ ### Streaming vs. Server-Sent Events (SSE)
+
+ [Server-Sent Events (SSE)](https://en.wikipedia.org/wiki/Server-sent_events) is a technology that allows certain endpoints to offer streaming capabilities, such as creating the impression that "the model is typing along with you," rather than delivering the entire answer all at once.
+
+ You can set up the client to use Server-Sent Events (SSE) for all supported endpoints:
  ```ruby
  client = Gemini.new(
  credentials: { ... },
- options: { model: 'gemini-pro', stream: true }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )
  ```

@@ -333,11 +574,11 @@ Or, you can decide on a request basis:
  ```ruby
  client.stream_generate_content(
  { contents: { role: 'user', parts: { text: 'hi!' } } },
- stream: true
+ server_sent_events: true
  )
  ```

- With streaming enabled, you can use a block to receive the results:
+ With Server-Sent Events (SSE) enabled, you can use a block to receive partial results via events. This feature is particularly useful for methods that offer streaming capabilities, such as `stream_generate_content`:

  ```ruby
  client.stream_generate_content(
@@ -367,14 +608,16 @@ Event:
  } }
  ```

- #### Streaming Hang
+ Even though streaming methods utilize Server-Sent Events (SSE), using this feature doesn't necessarily mean streaming data. For example, when `generate_content` is called with SSE enabled, you will receive all the data at once in a single event, rather than through multiple partial events. This occurs because `generate_content` isn't designed for streaming, even though it is capable of utilizing Server-Sent Events.
+
+ #### Server-Sent Events (SSE) Hang

- Method calls will _hang_ until the stream finishes, so even without providing a block, you can get the final results of the stream events:
+ Method calls will _hang_ until the server-sent events finish, so even without providing a block, you can obtain the final results of the received events:

  ```ruby
  result = client.stream_generate_content(
  { contents: { role: 'user', parts: { text: 'hi!' } } },
- stream: true
+ server_sent_events: true
  )
  ```

@@ -398,6 +641,39 @@ Result:
  } }]
  ```

+ #### Non-Streaming
+
+ Depending on the service, you can use the [`generate_content`](#generate_content) method, which does not stream the answer.
+
+ You can also use methods designed for streaming without necessarily processing partial events; instead, you can wait for the result of all received events:
+
+ ```ruby
+ result = client.stream_generate_content({
+ contents: { role: 'user', parts: { text: 'hi!' } },
+ server_sent_events: false
+ })
+ ```
+
+ Result:
+ ```ruby
+ [{ 'candidates' =>
+ [{ 'content' => {
+ 'role' => 'model',
+ 'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
+ },
+ 'finishReason' => 'STOP',
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+ 'usageMetadata' => {
+ 'promptTokenCount' => 2,
+ 'candidatesTokenCount' => 8,
+ 'totalTokenCount' => 10
+ } }]
+ ```
+
  ### Back-and-Forth Conversations

  To maintain a back-and-forth conversation, you need to append the received responses and build a history for your requests:
@@ -596,6 +872,58 @@ result = client.request(
  )
  ```

+ ### Error Handling
+
+ #### Rescuing
+
+ ```ruby
+ require 'gemini-ai'
+
+ begin
+ client.stream_generate_content({
+ contents: { role: 'user', parts: { text: 'hi!' } }
+ })
+ rescue Gemini::Errors::GeminiError => error
+ puts error.class # Gemini::Errors::RequestError
+ puts error.message # 'the server responded with status 500'
+
+ puts error.payload
+ # { contents: [{ role: 'user', parts: { text: 'hi!' } }],
+ # generationConfig: { candidateCount: 1 },
+ # ...
+ # }
+
+ puts error.request
+ # #<Faraday::ServerError response={:status=>500, :headers...
+ end
+ ```
+
+ #### For Short
+
+ ```ruby
+ require 'gemini-ai/errors'
+
+ begin
+ client.stream_generate_content({
+ contents: { role: 'user', parts: { text: 'hi!' } }
+ })
+ rescue GeminiError => error
+ puts error.class # Gemini::Errors::RequestError
+ end
+ ```
+
+ #### Errors
+
+ ```ruby
+ GeminiError
+
+ MissingProjectIdError
+ UnsupportedServiceError
+ BlockWithoutServerSentEventsError
+
+ RequestError
+ ```
+
  ## Development

  ```bash
@@ -614,7 +942,7 @@ gem build gemini-ai.gemspec

  gem signin

- gem push gemini-ai-2.1.0.gem
+ gem push gemini-ai-3.0.0.gem
  ```

  ### Updating the README
@@ -0,0 +1,26 @@
+ # frozen_string_literal: true
+
+ module Gemini
+ module Errors
+ class GeminiError < StandardError
+ def initialize(message = nil)
+ super(message)
+ end
+ end
+
+ class MissingProjectIdError < GeminiError; end
+ class UnsupportedServiceError < GeminiError; end
+ class BlockWithoutServerSentEventsError < GeminiError; end
+
+ class RequestError < GeminiError
+ attr_reader :request, :payload
+
+ def initialize(message = nil, request: nil, payload: nil)
+ @request = request
+ @payload = payload
+
+ super(message)
+ end
+ end
+ end
+ end
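The error hierarchy introduced in this new file can be exercised standalone (no network, no gem install). A sketch reproducing the classes from the diff and showing that rescuing the base `GeminiError` catches a `RequestError` while preserving its `payload` and `request` readers:

```ruby
# Reproduction of the 3.0.0 error hierarchy for illustration.
module Gemini
  module Errors
    class GeminiError < StandardError; end

    class MissingProjectIdError < GeminiError; end
    class UnsupportedServiceError < GeminiError; end
    class BlockWithoutServerSentEventsError < GeminiError; end

    class RequestError < GeminiError
      attr_reader :request, :payload

      def initialize(message = nil, request: nil, payload: nil)
        @request = request
        @payload = payload
        super(message)
      end
    end
  end
end

begin
  # Simulate a failed HTTP call being re-raised as a RequestError.
  raise Gemini::Errors::RequestError.new(
    'the server responded with status 500',
    payload: { contents: [{ role: 'user', parts: { text: 'hi!' } }] }
  )
rescue Gemini::Errors::GeminiError => error
  puts error.class   # Gemini::Errors::RequestError
  puts error.payload # the original request payload
end
```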
@@ -5,6 +5,8 @@ require 'faraday'
  require 'json'
  require 'googleauth'

+ require_relative '../ports/dsl/gemini-ai/errors'
+
  module Gemini
  module Controllers
  class Client
@@ -24,48 +26,57 @@ module Gemini
  end

  if @authentication == :service_account || @authentication == :default_credentials
- @project_id = if config[:credentials][:project_id].nil?
- @authorizer.project_id || @authorizer.quota_project_id
- else
- config[:credentials][:project_id]
- end
+ @project_id = config[:credentials][:project_id] || @authorizer.project_id || @authorizer.quota_project_id

- raise StandardError, 'Could not determine project_id, which is required.' if @project_id.nil?
+ raise MissingProjectIdError, 'Could not determine project_id, which is required.' if @project_id.nil?
  end

- @address = case config[:credentials][:service]
+ @service = config[:credentials][:service]
+
+ @address = case @service
  when 'vertex-ai-api'
  "https://#{config[:credentials][:region]}-aiplatform.googleapis.com/v1/projects/#{@project_id}/locations/#{config[:credentials][:region]}/publishers/google/models/#{config[:options][:model]}"
  when 'generative-language-api'
  "https://generativelanguage.googleapis.com/v1/models/#{config[:options][:model]}"
  else
- raise StandardError, "Unsupported service: #{config[:credentials][:service]}"
+ raise UnsupportedServiceError, "Unsupported service: #{@service}"
  end

- @stream = config[:options][:stream]
+ @server_sent_events = config[:options][:server_sent_events]
+ end
+
+ def stream_generate_content(payload, server_sent_events: nil, &callback)
+ request('streamGenerateContent', payload, server_sent_events:, &callback)
  end

- def stream_generate_content(payload, stream: nil, &callback)
- request('streamGenerateContent', payload, stream:, &callback)
+ def generate_content(payload, server_sent_events: nil, &callback)
+ result = request('generateContent', payload, server_sent_events:, &callback)
+
+ return result.first if result.is_a?(Array) && result.size == 1
+
+ result
  end

- def request(path, payload, stream: nil, &callback)
- stream_enabled = stream.nil? ? @stream : stream
+ def request(path, payload, server_sent_events: nil, &callback)
+ server_sent_events_enabled = server_sent_events.nil? ? @server_sent_events : server_sent_events
  url = "#{@address}:#{path}"
  params = []

- params << 'alt=sse' if stream_enabled
+ params << 'alt=sse' if server_sent_events_enabled
  params << "key=#{@api_key}" if @authentication == :api_key

  url += "?#{params.join('&')}" if params.size.positive?

- if !callback.nil? && !stream_enabled
- raise StandardError, 'You are trying to use a block without stream enabled."'
+ if !callback.nil? && !server_sent_events_enabled
+ raise BlockWithoutServerSentEventsError,
+ 'You are trying to use a block without Server Sent Events (SSE) enabled.'
  end

  results = []

- response = Faraday.new.post do |request|
+ response = Faraday.new do |faraday|
+ faraday.response :raise_error
+ end.post do |request|
  request.url url
  request.headers['Content-Type'] = 'application/json'
  if @authentication == :service_account || @authentication == :default_credentials
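The renamed `request` keyword above follows a nil-fallback rule: an explicit per-request `server_sent_events:` value wins, and `nil` (the keyword's default when not passed) defers to the client-wide setting. The rule can be isolated as a sketch (the `SseConfig` class is hypothetical, introduced only to illustrate the pattern):

```ruby
# Illustration of the nil-fallback used by `request`: explicit per-call
# values override the client-level default; nil means "not specified".
class SseConfig
  def initialize(server_sent_events:)
    @server_sent_events = server_sent_events
  end

  def enabled?(server_sent_events: nil)
    server_sent_events.nil? ? @server_sent_events : server_sent_events
  end
end

config = SseConfig.new(server_sent_events: true)
config.enabled?                            # => true (client default)
config.enabled?(server_sent_events: false) # => false (request override)
```

Note that a plain `||` would not work here, because an explicit `false` override must beat a truthy client default; hence the `nil?` check.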
@@ -74,7 +85,7 @@ module Gemini

  request.body = payload.to_json

- if stream_enabled
+ if server_sent_events_enabled
  parser = EventStreamParser::Parser.new

  request.options.on_data = proc do |chunk, bytes, env|
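For context on the `alt=sse` hunk above: with that query parameter, the API responds with Server-Sent Events, blank-line-separated records whose `data:` field carries a JSON event. The gem delegates parsing to the `event_stream_parser` gem inside `on_data`; the following hand-rolled sketch only illustrates the wire format and handles plain single-line `data:` fields:

```ruby
require 'json'

# Simplified SSE parsing for illustration: split the raw body into
# records, extract each `data:` line, and decode it as JSON.
def parse_sse(raw)
  raw.split("\n\n").filter_map do |record|
    data = record[/^data: (.*)$/, 1]
    JSON.parse(data) if data
  end
end

chunk = "data: {\"candidates\":[{\"content\":{\"role\":\"model\"}}]}\n\n"
parse_sse(chunk)
# => [{"candidates"=>[{"content"=>{"role"=>"model"}}]}]
```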
@@ -103,9 +114,11 @@ module Gemini
  end
  end

- return safe_parse_json(response.body) unless stream_enabled
+ return safe_parse_json(response.body) unless server_sent_events_enabled

  results.map { |result| result[:event] }
+ rescue Faraday::ServerError => e
+ raise RequestError.new(e.message, request: e, payload:)
  end

  def safe_parse_json(raw)
@@ -0,0 +1,5 @@
+ # frozen_string_literal: true
+
+ require_relative '../../../components/errors'
+
+ include Gemini::Errors
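This five-line port is what enables the README's "For Short" rescue style: mixing `Gemini::Errors` into the top level lets the short constant names resolve without the full namespace. A self-contained sketch of the mechanism (with stand-in class definitions, since the real ones live in the component above):

```ruby
# Stand-in for components/errors.rb.
module Gemini
  module Errors
    class GeminiError < StandardError; end
    class RequestError < GeminiError; end
  end
end

# What requiring the ports/dsl file effectively does:
include Gemini::Errors

begin
  raise RequestError, 'boom' # short name, no Gemini::Errors:: prefix
rescue GeminiError => error
  puts error.class # Gemini::Errors::RequestError
end
```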
data/static/gem.rb CHANGED
@@ -3,7 +3,7 @@
  module Gemini
  GEM = {
  name: 'gemini-ai',
- version: '2.1.0',
+ version: '3.0.0',
  author: 'gbaptista',
  summary: "Interact with Google's Gemini AI.",
  description: "A Ruby Gem for interacting with Gemini through Vertex AI, Generative Language API, or AI Studio, Google's generative AI services.",
@@ -4,7 +4,7 @@
  (-> text
  (clojure.string/lower-case)
  (clojure.string/replace " " "-")
- (clojure.string/replace #"[^a-z0-9\-]" "")))
+ (clojure.string/replace #"[^a-z0-9\-_]" "")))

  (defn remove-code-blocks [content]
  (let [code-block-regex #"(?s)```.*?```"]
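The Clojure change above adds `_` to the characters the README generator's slug function keeps, so the new table-of-contents anchors like `#stream_generate_content` survive slugification instead of collapsing to `#streamgeneratecontent`. An equivalent rendering of the function in Ruby, for illustration (`slugify` is a hypothetical name, not part of the gem):

```ruby
# Ruby rendering of the slug function: lowercase, spaces to dashes,
# then strip everything outside [a-z0-9-_] (underscore newly allowed).
def slugify(text)
  text.downcase.tr(' ', '-').gsub(/[^a-z0-9\-_]/, '')
end

slugify('Receiving Stream Events') # => "receiving-stream-events"
slugify('stream_generate_content') # => "stream_generate_content"
```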
data/template.md CHANGED
@@ -9,7 +9,7 @@ A Ruby Gem for interacting with [Gemini](https://deepmind.google/technologies/ge
  ## TL;DR and Quick Start

  ```ruby
- gem 'gemini-ai', '~> 2.1.0'
+ gem 'gemini-ai', '~> 3.0.0'
  ```

  ```ruby
@@ -21,7 +21,7 @@ client = Gemini.new(
  service: 'generative-language-api',
  api_key: ENV['GOOGLE_API_KEY']
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )

  # With a Service Account Credentials File
@@ -31,7 +31,7 @@ client = Gemini.new(
  file_path: 'google-credentials.json',
  region: 'us-east4'
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )

  # With Application Default Credentials
@@ -40,7 +40,7 @@ client = Gemini.new(
  service: 'vertex-ai-api',
  region: 'us-east4'
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )

  result = client.stream_generate_content({
@@ -77,15 +77,20 @@ Result:
  ### Installing

  ```sh
- gem install gemini-ai -v 2.1.0
+ gem install gemini-ai -v 3.0.0
  ```

  ```sh
- gem 'gemini-ai', '~> 2.1.0'
+ gem 'gemini-ai', '~> 3.0.0'
  ```

  ### Credentials

+ - [Option 1: API Key (Generative Language API)](#option-1-api-key-generative-language-api)
+ - [Option 2: Service Account Credentials File (Vertex AI API)](#option-2-service-account-credentials-file-vertex-ai-api)
+ - [Option 3: Application Default Credentials (Vertex AI API)](#option-3-application-default-credentials-vertex-ai-api)
+ - [Required Data](#required-data)
+
  > ⚠️ DISCLAIMER: Be careful with what you are doing, and never trust others' code related to this. These commands and instructions alter the level of access to your Google Cloud Account, and running them naively can lead to security risks as well as financial risks. People with access to your account can use it to steal data or incur charges. Run these commands at your own responsibility and due diligence; expect no warranties from the contributors of this project.

  #### Option 1: API Key (Generative Language API)
@@ -243,7 +248,7 @@ client = Gemini.new(
  service: 'generative-language-api',
  api_key: ENV['GOOGLE_API_KEY']
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )

  # With a Service Account Credentials File
@@ -253,7 +258,7 @@ client = Gemini.new(
  file_path: 'google-credentials.json',
  region: 'us-east4'
  },
- options: { model: 'gemini-pro', stream: false }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )

  # With Application Default Credentials
@@ -262,13 +267,118 @@ client = Gemini.new(
262
267
  service: 'vertex-ai-api',
263
268
  region: 'us-east4'
264
269
  },
265
- options: { model: 'gemini-pro', stream: false }
270
+ options: { model: 'gemini-pro', server_sent_events: true }
271
+ )
272
+ ```
273
+
274
+ ### Methods
275
+
276
+ #### stream_generate_content
277
+
278
+ ##### Receiving Stream Events
279
+
280
+ Ensure that you have enabled [Server-Sent Events](#streaming-vs-server-sent-events-sse) before using blocks for streaming:
281
+
282
+ ```ruby
283
+ client.stream_generate_content(
284
+ { contents: { role: 'user', parts: { text: 'hi!' } } }
285
+ ) do |event, parsed, raw|
286
+ puts event
287
+ end
288
+ ```
289
+
290
+ Event:
291
+ ```ruby
292
+ { 'candidates' =>
293
+ [{ 'content' => {
294
+ 'role' => 'model',
295
+ 'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
296
+ },
297
+ 'finishReason' => 'STOP',
298
+ 'safetyRatings' =>
299
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
300
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
301
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
302
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
303
+ 'usageMetadata' => {
304
+ 'promptTokenCount' => 2,
305
+ 'candidatesTokenCount' => 8,
306
+ 'totalTokenCount' => 10
307
+ } }
308
+ ```
309
+
310
+ ##### Without Events
311
+
312
+ You can use `stream_generate_content` without events:
313
+
314
+ ```ruby
315
+ result = client.stream_generate_content(
316
+ { contents: { role: 'user', parts: { text: 'hi!' } } }
317
+ )
318
+ ```
319
+
320
+ In this case, the result will be an array with all the received events:
321
+
322
+ ```ruby
323
+ [{ 'candidates' =>
324
+ [{ 'content' => {
325
+ 'role' => 'model',
326
+ 'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
327
+ },
328
+ 'finishReason' => 'STOP',
329
+ 'safetyRatings' =>
330
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
331
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
332
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
333
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
334
+ 'usageMetadata' => {
335
+ 'promptTokenCount' => 2,
336
+ 'candidatesTokenCount' => 8,
337
+ 'totalTokenCount' => 10
338
+ } }]
339
+ ```
+
+ You can mix both as well:
+ ```ruby
+ result = client.stream_generate_content(
+ { contents: { role: 'user', parts: { text: 'hi!' } } }
+ ) do |event, parsed, raw|
+ puts event
+ end
+ ```
+
+ #### generate_content
+
+ ```ruby
+ result = client.generate_content(
+ { contents: { role: 'user', parts: { text: 'hi!' } } }
  )
  ```
 
- ### Generate Content
+ Result:
+ ```ruby
+ { 'candidates' =>
+ [{ 'content' => { 'parts' => [{ 'text' => 'Hello! How can I assist you today?' }], 'role' => 'model' },
+ 'finishReason' => 'STOP',
+ 'index' => 0,
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+ 'promptFeedback' =>
+ { 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] } }
+ ```
376
 
271
- #### Synchronous
377
+ As of the writing of this README, only the `generative-language-api` service supports the `generate_content` method; `vertex-ai-api` does not.
378
+
379
+ ### Modes
380
+
381
+ #### Text
272
382
 
273
383
  ```ruby
274
384
  result = client.stream_generate_content({
@@ -296,13 +406,132 @@ Result:
  } }]
  ```
 
- #### Streaming
+ #### Image
+
+ ![A black and white image of an old piano. The piano is an upright model, with the keys on the right side of the image. The piano is sitting on a tiled floor. There is a small round object on the top of the piano.](https://raw.githubusercontent.com/gbaptista/assets/main/gemini-ai/piano.jpg)
+
+ > _Courtesy of [Unsplash](https://unsplash.com/photos/greyscale-photo-of-grand-piano-czPs0z3-Ggg)_
+
+ Switch to the `gemini-pro-vision` model:
+
+ ```ruby
+ client = Gemini.new(
+ credentials: { service: 'vertex-ai-api', region: 'us-east4' },
+ options: { model: 'gemini-pro-vision', server_sent_events: true }
+ )
+ ```
+
+ Then, encode the image as [Base64](https://en.wikipedia.org/wiki/Base64) and add its [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types):
+
+ ```ruby
+ require 'base64'
+
+ result = client.stream_generate_content(
+ { contents: [
+ { role: 'user', parts: [
+ { text: 'Please describe this image.' },
+ { inline_data: {
+ mime_type: 'image/jpeg',
+ data: Base64.strict_encode64(File.read('piano.jpg'))
+ } }
+ ] }
+ ] }
+ )
+ ```
+
+ The result:
+ ```ruby
+ [{ 'candidates' =>
+ [{ 'content' =>
+ { 'role' => 'model',
+ 'parts' =>
+ [{ 'text' =>
+ ' A black and white image of an old piano. The piano is an upright model, with the keys on the right side of the image. The piano is' }] },
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }] },
+ { 'candidates' =>
+ [{ 'content' => { 'role' => 'model', 'parts' => [{ 'text' => ' sitting on a tiled floor. There is a small round object on the top of the piano.' }] },
+ 'finishReason' => 'STOP',
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+ 'usageMetadata' => { 'promptTokenCount' => 263, 'candidatesTokenCount' => 50, 'totalTokenCount' => 313 } }]
+ ```
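`Base64.strict_encode64` comes from Ruby's standard library. A self-contained sketch of building the `inline_data` part, with a stand-in byte string in place of a real image file:

```ruby
require 'base64'

# Stand-in JPEG magic bytes; in practice this would be
# File.read('piano.jpg').
image_bytes = "\xFF\xD8\xFF".b

part = {
  inline_data: {
    mime_type: 'image/jpeg',
    data: Base64.strict_encode64(image_bytes)
  }
}

puts part[:inline_data][:data] # => "/9j/"
```

`strict_encode64` (rather than `encode64`) matters here: it emits no newlines, which is what a JSON payload expects.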
+
+ #### Video
+
+ https://gist.github.com/assets/29520/f82bccbf-02d2-4899-9c48-eb8a0a5ef741
+
+ > ALT: A white and gold cup is being filled with coffee. The coffee is dark and rich. The cup is sitting on a black surface. The background is blurred.
+
+ > _Courtesy of [Pexels](https://www.pexels.com/video/pouring-of-coffee-855391/)_
+
+ Switch to the `gemini-pro-vision` model:
 
- You can set up the client to use streaming for all supported endpoints:
+ ```ruby
+ client = Gemini.new(
+ credentials: { service: 'vertex-ai-api', region: 'us-east4' },
+ options: { model: 'gemini-pro-vision', server_sent_events: true }
+ )
+ ```
+
+ Then, encode the video as [Base64](https://en.wikipedia.org/wiki/Base64) and add its [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types):
+
+ ```ruby
+ require 'base64'
+
+ result = client.stream_generate_content(
+ { contents: [
+ { role: 'user', parts: [
+ { text: 'Please describe this video.' },
+ { inline_data: {
+ mime_type: 'video/mp4',
+ data: Base64.strict_encode64(File.read('coffee.mp4'))
+ } }
+ ] }
+ ] }
+ )
+ ```
+
+ The result:
+ ```ruby
+ [{ 'candidates' =>
+ [{ 'content' =>
+ { 'role' => 'model',
+ 'parts' =>
+ [{ 'text' =>
+ ' A white and gold cup is being filled with coffee. The coffee is dark and rich. The cup is sitting on a black surface. The background is blurred' }] },
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+ 'usageMetadata' => { 'promptTokenCount' => 1037, 'candidatesTokenCount' => 31, 'totalTokenCount' => 1068 } },
+ { 'candidates' =>
+ [{ 'content' => { 'role' => 'model', 'parts' => [{ 'text' => '.' }] },
+ 'finishReason' => 'STOP',
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+ 'usageMetadata' => { 'promptTokenCount' => 1037, 'candidatesTokenCount' => 32, 'totalTokenCount' => 1069 } }]
+ ```
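The MIME type has to agree with the encoded file. A tiny hypothetical helper (not part of the gem's API) that derives it from the file extension:

```ruby
# Hypothetical helper: map a file extension to the MIME type
# expected in the inline_data part. Illustrative only.
MIME_TYPES = {
  '.jpg'  => 'image/jpeg',
  '.jpeg' => 'image/jpeg',
  '.png'  => 'image/png',
  '.mp4'  => 'video/mp4'
}.freeze

def mime_type_for(path)
  MIME_TYPES.fetch(File.extname(path).downcase) do
    raise ArgumentError, "unsupported file type: #{path}"
  end
end

puts mime_type_for('coffee.mp4') # => "video/mp4"
```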
+
+ ### Streaming vs. Server-Sent Events (SSE)
+
+ [Server-Sent Events (SSE)](https://en.wikipedia.org/wiki/Server-sent_events) is a technology that allows certain endpoints to offer streaming capabilities, such as creating the impression that "the model is typing along with you," rather than delivering the entire answer all at once.
+
+ You can set up the client to use Server-Sent Events (SSE) for all supported endpoints:
  ```ruby
  client = Gemini.new(
  credentials: { ... },
- options: { model: 'gemini-pro', stream: true }
+ options: { model: 'gemini-pro', server_sent_events: true }
  )
  ```
 
@@ -310,11 +539,11 @@ Or, you can decide on a request basis:
  ```ruby
  client.stream_generate_content(
  { contents: { role: 'user', parts: { text: 'hi!' } } },
- stream: true
+ server_sent_events: true
  )
  ```
 
- With streaming enabled, you can use a block to receive the results:
+ With Server-Sent Events (SSE) enabled, you can use a block to receive partial results via events. This feature is particularly useful for methods that offer streaming capabilities, such as `stream_generate_content`:
 
  ```ruby
  client.stream_generate_content(
@@ -344,14 +573,16 @@ Event:
  } }
  ```
 
- #### Streaming Hang
+ Even though streaming methods utilize Server-Sent Events (SSE), using this feature doesn't necessarily mean streaming data. For example, when `generate_content` is called with SSE enabled, you will receive all the data at once in a single event, rather than through multiple partial events. This occurs because `generate_content` isn't designed for streaming, even though it is capable of utilizing Server-Sent Events.
+
+ #### Server-Sent Events (SSE) Hang
 
- Method calls will _hang_ until the stream finishes, so even without providing a block, you can get the final results of the stream events:
+ Method calls will _hang_ until the server-sent events finish, so even without providing a block, you can obtain the final results of the received events:
 
  ```ruby
  result = client.stream_generate_content(
  { contents: { role: 'user', parts: { text: 'hi!' } } },
- stream: true
+ server_sent_events: true
  )
  ```
 
@@ -375,6 +606,39 @@ Result:
  } }]
  ```
 
+ #### Non-Streaming
+
+ Depending on the service, you can use the [`generate_content`](#generate_content) method, which does not stream the answer.
+
+ You can also use methods designed for streaming without necessarily processing partial events; instead, you can wait for the result of all received events:
+
+ ```ruby
+ result = client.stream_generate_content(
+ { contents: { role: 'user', parts: { text: 'hi!' } } },
+ server_sent_events: false
+ )
+ ```
+
+ Result:
+ ```ruby
+ [{ 'candidates' =>
+ [{ 'content' => {
+ 'role' => 'model',
+ 'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
+ },
+ 'finishReason' => 'STOP',
+ 'safetyRatings' =>
+ [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+ { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+ 'usageMetadata' => {
+ 'promptTokenCount' => 2,
+ 'candidatesTokenCount' => 8,
+ 'totalTokenCount' => 10
+ } }]
+ ```
+
  ### Back-and-Forth Conversations
 
  To maintain a back-and-forth conversation, you need to append the received responses and build a history for your requests:
@@ -573,6 +837,58 @@ result = client.request(
  )
  ```
 
+ ### Error Handling
+
+ #### Rescuing
+
+ ```ruby
+ require 'gemini-ai'
+
+ begin
+ client.stream_generate_content({
+ contents: { role: 'user', parts: { text: 'hi!' } }
+ })
+ rescue Gemini::Errors::GeminiError => error
+ puts error.class # Gemini::Errors::RequestError
+ puts error.message # 'the server responded with status 500'
+
+ puts error.payload
+ # { contents: [{ role: 'user', parts: { text: 'hi!' } }],
+ # generationConfig: { candidateCount: 1 },
+ # ...
+ # }
+
+ puts error.request
+ # #<Faraday::ServerError response={:status=>500, :headers...
+ end
+ ```
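A common follow-up to rescuing is retrying transient failures with backoff. The sketch below is generic Ruby; `TransientError` and the failing block are illustrative stand-ins, not part of the gemini-ai gem:

```ruby
# Generic retry-with-backoff sketch. TransientError is an
# illustrative stand-in for a retryable error class.
class TransientError < StandardError; end

def with_retries(attempts: 3, base_delay: 0)
  tries = 0
  begin
    tries += 1
    yield
  rescue TransientError
    raise if tries >= attempts
    sleep(base_delay * tries) # back off a little more on each retry
    retry
  end
end

calls = 0
answer = with_retries do
  calls += 1
  raise TransientError, 'status 500' if calls < 3

  'Hello!'
end

puts answer # => "Hello!"
puts calls  # => 3
```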
+
+ #### For Short
+
+ ```ruby
+ require 'gemini-ai/errors'
+
+ begin
+ client.stream_generate_content({
+ contents: { role: 'user', parts: { text: 'hi!' } }
+ })
+ rescue GeminiError => error
+ puts error.class # Gemini::Errors::RequestError
+ end
+ ```
+
+ #### Errors
+
+ ```ruby
+ GeminiError
+
+ MissingProjectIdError
+ UnsupportedServiceError
+ BlockWithoutServerSentEventsError
+
+ RequestError
+ ```
+
  ## Development
 
  ```bash
@@ -591,7 +907,7 @@ gem build gemini-ai.gemspec
 
  gem signin
 
- gem push gemini-ai-2.1.0.gem
+ gem push gemini-ai-3.0.0.gem
  ```
 
  ### Updating the README
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: gemini-ai
  version: !ruby/object:Gem::Version
- version: 2.1.0
+ version: 3.0.0
  platform: ruby
  authors:
  - gbaptista
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2023-12-16 00:00:00.000000000 Z
+ date: 2023-12-17 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: event_stream_parser
@@ -78,9 +78,11 @@ files:
  - Gemfile.lock
  - LICENSE
  - README.md
+ - components/errors.rb
  - controllers/client.rb
  - gemini-ai.gemspec
  - ports/dsl/gemini-ai.rb
+ - ports/dsl/gemini-ai/errors.rb
  - static/gem.rb
  - tasks/generate-readme.clj
  - template.md
@@ -107,7 +109,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.3.3
+ rubygems_version: 3.4.22
  signing_key:
  specification_version: 4
  summary: Interact with Google's Gemini AI.