gemini-ai 2.1.0 → 2.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/Gemfile.lock +1 -1
- data/README.md +220 -8
- data/components/errors.rb +26 -0
- data/controllers/client.rb +10 -4
- data/ports/dsl/gemini-ai/errors.rb +5 -0
- data/static/gem.rb +1 -1
- data/template.md +210 -6
- metadata +3 -1
    
        checksums.yaml
    CHANGED
    
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: c7c5bd6e6cd1d2195b7a437fb0664cd553dc0acbf0d293042d93c3d701e6b6e0
+  data.tar.gz: 665458cc152b00efae9f8e2b730fe000ef2caac63f835944c1e76a83b9a0e627
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 0ca1f3f87c61276902259d937f4d57f324a756d1e63c1e5781680ba970313f3c3c29a2da49c4eb1ec0bf351f984bee378e86b22d1656b2423638e1a70bd5dddf
+  data.tar.gz: 50988d0881d37f561e0c75ac1beea4fc9b34c757d0f16fb2dcb2f63d3e63f867194e0f826800c8429ce9f33500dd74c3fa4c5d747a0975378c81cf3e0ad4a61b
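These digests can be reproduced locally to verify a downloaded package. A minimal sketch using Ruby's stdlib `Digest` (the file path passed in is a placeholder, not part of the package):

```ruby
require 'digest'

# Hex digests of a file, in the form checksums.yaml stores for
# metadata.gz and data.tar.gz.
def checksums_for(path)
  data = File.binread(path)
  {
    sha256: Digest::SHA256.hexdigest(data),
    sha512: Digest::SHA512.hexdigest(data)
  }
end
```

Comparing the output against the published values above confirms the archives were not altered in transit.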
    
        data/Gemfile.lock
    CHANGED
    
    
    
        data/README.md
    CHANGED
    
@@ -9,7 +9,7 @@ A Ruby Gem for interacting with [Gemini](https://deepmind.google/technologies/ge
 ## TL;DR and Quick Start
 
 ```ruby
-gem 'gemini-ai', '~> 2.
+gem 'gemini-ai', '~> 2.2.0'
 ```
 
 ```ruby
@@ -82,12 +82,20 @@ Result:
 - [Usage](#usage)
     - [Client](#client)
     - [Generate Content](#generate-content)
+        - [Modes](#modes)
+            - [Text](#text)
+            - [Image](#image)
+            - [Video](#video)
         - [Synchronous](#synchronous)
         - [Streaming](#streaming)
         - [Streaming Hang](#streaming-hang)
-
-
+        - [Back-and-Forth Conversations](#back-and-forth-conversations)
+        - [Tools (Functions) Calling](#tools-functions-calling)
     - [New Functionalities and APIs](#new-functionalities-and-apis)
+    - [Error Handling](#error-handling)
+        - [Rescuing](#rescuing)
+        - [For Short](#for-short)
+        - [Errors](#errors)
 - [Development](#development)
     - [Purpose](#purpose)
     - [Publish to RubyGems](#publish-to-rubygems)
@@ -100,15 +108,20 @@ Result:
 ### Installing
 
 ```sh
-gem install gemini-ai -v 2.
+gem install gemini-ai -v 2.2.0
 ```
 
 ```sh
-gem 'gemini-ai', '~> 2.
+gem 'gemini-ai', '~> 2.2.0'
 ```
 
 ### Credentials
 
+- [Option 1: API Key (Generative Language API)](#option-1-api-key-generative-language-api)
+- [Option 2: Service Account Credentials File (Vertex AI API)](#option-2-service-account-credentials-file-vertex-ai-api)
+- [Option 3: Application Default Credentials (Vertex AI API)](#option-3-application-default-credentials-vertex-ai-api)
+- [Required Data](#required-data)
+
 > ⚠️ DISCLAIMER: Be careful with what you are doing, and never trust others' code related to this. These commands and instructions alter the level of access to your Google Cloud Account, and running them naively can lead to security risks as well as financial risks. People with access to your account can use it to steal data or incur charges. Run these commands at your own responsibility and due diligence; expect no warranties from the contributors of this project.
 
 #### Option 1: API Key (Generative Language API)
@@ -291,6 +304,153 @@ client = Gemini.new(
 
 ### Generate Content
 
+#### Modes
+
+##### Text
+
+```ruby
+result = client.stream_generate_content({
+  contents: { role: 'user', parts: { text: 'hi!' } }
+})
+```
+
+Result:
+```ruby
+[{ 'candidates' =>
+   [{ 'content' => {
+        'role' => 'model',
+        'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
+      },
+      'finishReason' => 'STOP',
+      'safetyRatings' =>
+      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
+       { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
+   'usageMetadata' => {
+     'promptTokenCount' => 2,
+     'candidatesTokenCount' => 8,
+     'totalTokenCount' => 10
+   } }]
+```
+
            +
            ##### Image
         | 
| 338 | 
            +
             | 
| 339 | 
            +
            
         | 
| 340 | 
            +
             | 
| 341 | 
            +
            > _Courtesy of [Unsplash](https://unsplash.com/photos/greyscale-photo-of-grand-piano-czPs0z3-Ggg)_
         | 
| 342 | 
            +
             | 
| 343 | 
            +
            Switch to the `gemini-pro-vision` model:
         | 
| 344 | 
            +
             | 
| 345 | 
            +
            ```ruby
         | 
| 346 | 
            +
            client = Gemini.new(
         | 
| 347 | 
            +
              credentials: { service: 'vertex-ai-api', region: 'us-east4' },
         | 
| 348 | 
            +
              options: { model: 'gemini-pro-vision', stream: true }
         | 
| 349 | 
            +
            )
         | 
| 350 | 
            +
            ```
         | 
| 351 | 
            +
             | 
| 352 | 
            +
            Then, encode the image as [Base64](https://en.wikipedia.org/wiki/Base64) and add its [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types):
         | 
| 353 | 
            +
             | 
| 354 | 
            +
            ```ruby
         | 
| 355 | 
            +
            require 'base64'
         | 
| 356 | 
            +
             | 
| 357 | 
            +
            result = client.stream_generate_content(
         | 
| 358 | 
            +
              { contents: [
         | 
| 359 | 
            +
                { role: 'user', parts: [
         | 
| 360 | 
            +
                  { text: 'Please describe this image.' },
         | 
| 361 | 
            +
                  { inline_data: {
         | 
| 362 | 
            +
                    mime_type: 'image/jpeg',
         | 
| 363 | 
            +
                    data: Base64.strict_encode64(File.read('piano.jpg'))
         | 
| 364 | 
            +
                  } }
         | 
| 365 | 
            +
                ] }
         | 
| 366 | 
            +
              ] }
         | 
| 367 | 
            +
            )
         | 
| 368 | 
            +
            ```
         | 
| 369 | 
            +
             | 
| 370 | 
            +
            The result:
         | 
| 371 | 
            +
            ```ruby
         | 
| 372 | 
            +
            [{ 'candidates' =>
         | 
| 373 | 
            +
               [{ 'content' =>
         | 
| 374 | 
            +
                  { 'role' => 'model',
         | 
| 375 | 
            +
                    'parts' =>
         | 
| 376 | 
            +
                    [{ 'text' =>
         | 
| 377 | 
            +
                       ' A black and white image of an old piano. The piano is an upright model, with the keys on the right side of the image. The piano is' }] },
         | 
| 378 | 
            +
                  'safetyRatings' =>
         | 
| 379 | 
            +
                  [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
         | 
| 380 | 
            +
                   { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
         | 
| 381 | 
            +
                   { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
         | 
| 382 | 
            +
                   { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }] },
         | 
| 383 | 
            +
             { 'candidates' =>
         | 
| 384 | 
            +
               [{ 'content' => { 'role' => 'model', 'parts' => [{ 'text' => ' sitting on a tiled floor. There is a small round object on the top of the piano.' }] },
         | 
| 385 | 
            +
                  'finishReason' => 'STOP',
         | 
| 386 | 
            +
                  'safetyRatings' =>
         | 
| 387 | 
            +
                  [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
         | 
| 388 | 
            +
                   { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
         | 
| 389 | 
            +
                   { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
         | 
| 390 | 
            +
                   { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
         | 
| 391 | 
            +
               'usageMetadata' => { 'promptTokenCount' => 263, 'candidatesTokenCount' => 50, 'totalTokenCount' => 313 } }]
         | 
| 392 | 
            +
            ```
         | 
| 393 | 
            +
             | 
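The encoding step shown in the diff is plain stdlib Base64; a quick sanity check of its behavior, with placeholder bytes standing in for an actual image file:

```ruby
require 'base64'

# strict_encode64 yields a single-line, '='-padded string (RFC 4648),
# which is the form expected in an inline_data payload.
bytes = "\xFF\xD8\xFF".b # placeholder bytes standing in for piano.jpg
encoded = Base64.strict_encode64(bytes)

# The encoding is reversible, and strict mode never inserts newlines.
decoded = Base64.strict_decode64(encoded)
```

`strict_encode64` is used rather than `encode64` precisely because the latter inserts a newline every 60 characters, which would corrupt the JSON payload.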
+##### Video
+
+https://gist.github.com/assets/29520/f82bccbf-02d2-4899-9c48-eb8a0a5ef741
+
+> ALT: A white and gold cup is being filled with coffee. The coffee is dark and rich. The cup is sitting on a black surface. The background is blurred.
+
+> _Courtesy of [Pexels](https://www.pexels.com/video/pouring-of-coffee-855391/)_
+
+Switch to the `gemini-pro-vision` model:
+
+```ruby
+client = Gemini.new(
+  credentials: { service: 'vertex-ai-api', region: 'us-east4' },
+  options: { model: 'gemini-pro-vision', stream: true }
+)
+```
+
+Then, encode the video as [Base64](https://en.wikipedia.org/wiki/Base64) and add its [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types):
+
+```ruby
+require 'base64'
+
+result = client.stream_generate_content(
+  { contents: [
+    { role: 'user', parts: [
+      { text: 'Please describe this video.' },
+      { inline_data: {
+        mime_type: 'video/mp4',
+        data: Base64.strict_encode64(File.read('coffee.mp4'))
+      } }
+    ] }
+  ] }
+)
+```
+
+The result:
+```ruby
+[{"candidates"=>
+   [{"content"=>
+      {"role"=>"model",
+       "parts"=>
+        [{"text"=>
+           " A white and gold cup is being filled with coffee. The coffee is dark and rich. The cup is sitting on a black surface. The background is blurred"}]},
+     "safetyRatings"=>
+      [{"category"=>"HARM_CATEGORY_HARASSMENT", "probability"=>"NEGLIGIBLE"},
+       {"category"=>"HARM_CATEGORY_HATE_SPEECH", "probability"=>"NEGLIGIBLE"},
+       {"category"=>"HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability"=>"NEGLIGIBLE"},
+       {"category"=>"HARM_CATEGORY_DANGEROUS_CONTENT", "probability"=>"NEGLIGIBLE"}]}],
+  "usageMetadata"=>{"promptTokenCount"=>1037, "candidatesTokenCount"=>31, "totalTokenCount"=>1068}},
+ {"candidates"=>
+   [{"content"=>{"role"=>"model", "parts"=>[{"text"=>"."}]},
+     "finishReason"=>"STOP",
+     "safetyRatings"=>
+      [{"category"=>"HARM_CATEGORY_HARASSMENT", "probability"=>"NEGLIGIBLE"},
+       {"category"=>"HARM_CATEGORY_HATE_SPEECH", "probability"=>"NEGLIGIBLE"},
+       {"category"=>"HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability"=>"NEGLIGIBLE"},
+       {"category"=>"HARM_CATEGORY_DANGEROUS_CONTENT", "probability"=>"NEGLIGIBLE"}]}],
+  "usageMetadata"=>{"promptTokenCount"=>1037, "candidatesTokenCount"=>32, "totalTokenCount"=>1069}}]
+```
+
 #### Synchronous
 
 ```ruby
@@ -398,7 +558,7 @@ Result:
    } }]
 ```
 
-
+#### Back-and-Forth Conversations
 
 To maintain a back-and-forth conversation, you need to append the received responses and build a history for your requests:
 
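The history-building the diff describes is just an array of role-tagged turns; a standalone sketch (the reply text here is hypothetical — in practice it is taken from the previous response):

```ruby
# Each turn is appended to `contents`; the model's own replies are
# replayed back so it can see the whole conversation.
history = []

history << { role: 'user', parts: { text: 'Hi! My name is Purple.' } }
history << { role: 'model', parts: { text: 'Hello Purple! How can I help you today?' } }
history << { role: 'user', parts: { text: "What's my name?" } }

request_payload = { contents: history }
```

Passing `request_payload` to a generate-content call would carry the full history on every request, which is what lets the model answer the follow-up question.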
@@ -440,7 +600,7 @@ Result:
    } }]
 ```
 
-
+#### Tools (Functions) Calling
 
 > As of the writing of this README, only the `vertex-ai-api` service and the `gemini-pro` model [supports](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling#supported_models) tools (functions) calls.
 
@@ -596,6 +756,58 @@ result = client.request(
 )
 ```
 
+### Error Handling
+
+#### Rescuing
+
+```ruby
+require 'gemini-ai'
+
+begin
+  client.stream_generate_content({
+    contents: { role: 'user', parts: { text: 'hi!' } }
+  })
+rescue Gemini::Errors::GeminiError => error
+  puts error.class # Gemini::Errors::RequestError
+  puts error.message # 'the server responded with status 500'
+
+  puts error.payload
+  # { contents: [{ role: 'user', parts: { text: 'hi!' } }],
+  #   generationConfig: { candidateCount: 1 },
+  #   ...
+  # }
+
+  puts error.request
+  # #<Faraday::ServerError response={:status=>500, :headers...
+end
+```
+
+#### For Short
+
+```ruby
+require 'gemini-ai/errors'
+
+begin
+  client.stream_generate_content({
+    contents: { role: 'user', parts: { text: 'hi!' } }
+  })
+rescue GeminiError => error
+  puts error.class # Gemini::Errors::RequestError
+end
+```
+
+#### Errors
+
+```ruby
+GeminiError
+
+MissingProjectIdError
+UnsupportedServiceError
+BlockWithoutStreamError
+
+RequestError
+```
+
 ## Development
 
 ```bash
@@ -614,7 +826,7 @@ gem build gemini-ai.gemspec
 
 gem signin
 
-gem push gemini-ai-2.
+gem push gemini-ai-2.2.0.gem
 ```
 
 ### Updating the README
    data/components/errors.rb
ADDED

@@ -0,0 +1,26 @@
+# frozen_string_literal: true
+
+module Gemini
+  module Errors
+    class GeminiError < StandardError
+      def initialize(message = nil)
+        super(message)
+      end
+    end
+
+    class MissingProjectIdError < GeminiError; end
+    class UnsupportedServiceError < GeminiError; end
+    class BlockWithoutStreamError < GeminiError; end
+
+    class RequestError < GeminiError
+      attr_reader :request, :payload
+
+      def initialize(message = nil, request: nil, payload: nil)
+        @request = request
+        @payload = payload
+
+        super(message)
+      end
+    end
+  end
+end
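Because every concrete error inherits from `GeminiError`, one rescue clause catches them all while the specific class and the original payload stay inspectable. A standalone sketch mirroring the hierarchy above:

```ruby
module Gemini
  module Errors
    class GeminiError < StandardError; end

    class RequestError < GeminiError
      attr_reader :request, :payload

      def initialize(message = nil, request: nil, payload: nil)
        @request = request
        @payload = payload
        super(message)
      end
    end
  end
end

# Rescuing the base class still yields the concrete RequestError.
error = begin
  raise Gemini::Errors::RequestError.new(
    'the server responded with status 500',
    payload: { contents: [] }
  )
rescue Gemini::Errors::GeminiError => e
  e
end
```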
    
        data/controllers/client.rb
    CHANGED
    
@@ -5,6 +5,8 @@ require 'faraday'
 require 'json'
 require 'googleauth'
 
+require_relative '../components/errors'
+
 module Gemini
   module Controllers
     class Client
@@ -30,7 +32,7 @@ module Gemini
                         config[:credentials][:project_id]
                       end
 
-        raise
+        raise MissingProjectIdError, 'Could not determine project_id, which is required.' if @project_id.nil?
       end
 
       @address = case config[:credentials][:service]
@@ -39,7 +41,7 @@ module Gemini
                  when 'generative-language-api'
                    "https://generativelanguage.googleapis.com/v1/models/#{config[:options][:model]}"
                  else
-                   raise
+                   raise UnsupportedServiceError, "Unsupported service: #{config[:credentials][:service]}"
                  end
 
       @stream = config[:options][:stream]
@@ -60,12 +62,14 @@ module Gemini
       url += "?#{params.join('&')}" if params.size.positive?
 
       if !callback.nil? && !stream_enabled
-        raise
+        raise BlockWithoutStreamError, 'You are trying to use a block without stream enabled.'
       end
 
       results = []
 
-      response = Faraday.new
+      response = Faraday.new do |faraday|
+        faraday.response :raise_error
+      end.post do |request|
         request.url url
         request.headers['Content-Type'] = 'application/json'
         if @authentication == :service_account || @authentication == :default_credentials
@@ -106,6 +110,8 @@ module Gemini
       return safe_parse_json(response.body) unless stream_enabled
 
       results.map { |result| result[:event] }
+    rescue Faraday::ServerError => e
+      raise RequestError.new(e.message, request: e, payload:)
     end
 
     def safe_parse_json(raw)
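The new rescue clause wraps a transport-level error and re-raises it with the request payload attached (note `payload:` is Ruby 3.1+ hash value omission, equivalent to `payload: payload`). A standalone sketch of the pattern — `UpstreamError` is a stand-in for `Faraday::ServerError`:

```ruby
# Stand-in for a transport-level failure such as Faraday::ServerError.
class UpstreamError < StandardError; end

# Mirrors the shape of Gemini::Errors::RequestError.
class RequestError < StandardError
  attr_reader :request, :payload

  def initialize(message = nil, request: nil, payload: nil)
    @request = request
    @payload = payload
    super(message)
  end
end

def perform(payload)
  raise UpstreamError, 'the server responded with status 500'
rescue UpstreamError => e
  # Written out here as `payload: payload`; the diff above uses the
  # Ruby 3.1+ shorthand `payload:`.
  raise RequestError.new(e.message, request: e, payload: payload)
end
```

The caller can then rescue a single library-defined class and still reach both the original exception and the request that triggered it.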
    
        data/static/gem.rb
    CHANGED
    
@@ -3,7 +3,7 @@
 module Gemini
   GEM = {
     name: 'gemini-ai',
-    version: '2.
+    version: '2.2.0',
     author: 'gbaptista',
     summary: "Interact with Google's Gemini AI.",
     description: "A Ruby Gem for interacting with Gemini through Vertex AI, Generative Language API, or AI Studio, Google's generative AI services.",
    
        data/template.md
    CHANGED
    
@@ -9,7 +9,7 @@ A Ruby Gem for interacting with [Gemini](https://deepmind.google/technologies/ge
 ## TL;DR and Quick Start
 
 ```ruby
-gem 'gemini-ai', '~> 2.
+gem 'gemini-ai', '~> 2.2.0'
 ```
 
 ```ruby
@@ -77,15 +77,20 @@ Result:
 ### Installing
 
 ```sh
-gem install gemini-ai -v 2.
+gem install gemini-ai -v 2.2.0
 ```
 
 ```sh
-gem 'gemini-ai', '~> 2.
+gem 'gemini-ai', '~> 2.2.0'
 ```
 
 ### Credentials
 
+- [Option 1: API Key (Generative Language API)](#option-1-api-key-generative-language-api)
+- [Option 2: Service Account Credentials File (Vertex AI API)](#option-2-service-account-credentials-file-vertex-ai-api)
+- [Option 3: Application Default Credentials (Vertex AI API)](#option-3-application-default-credentials-vertex-ai-api)
+- [Required Data](#required-data)
+
 > ⚠️ DISCLAIMER: Be careful with what you are doing, and never trust others' code related to this. These commands and instructions alter the level of access to your Google Cloud Account, and running them naively can lead to security risks as well as financial risks. People with access to your account can use it to steal data or incur charges. Run these commands at your own responsibility and due diligence; expect no warranties from the contributors of this project.
 
 #### Option 1: API Key (Generative Language API)
@@ -268,6 +273,153 @@ client = Gemini.new(
 
 ### Generate Content
 
+#### Modes
+
+##### Text
+
+```ruby
+result = client.stream_generate_content({
+  contents: { role: 'user', parts: { text: 'hi!' } }
+})
+```
+
+Result:
+```ruby
+[{ 'candidates' =>
+   [{ 'content' => {
+        'role' => 'model',
+        'parts' => [{ 'text' => 'Hello! How may I assist you?' }]
+      },
+      'finishReason' => 'STOP',
+      'safetyRatings' =>
+      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
+       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
+       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
         | 
| 299 | 
            +
               'usageMetadata' => {
         | 
| 300 | 
            +
                 'promptTokenCount' => 2,
         | 
| 301 | 
            +
                 'candidatesTokenCount' => 8,
         | 
| 302 | 
            +
                 'totalTokenCount' => 10
         | 
| 303 | 
            +
               } }]
         | 
| 304 | 
            +
            ```
         | 
| 305 | 
            +
             | 
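When streaming, the generated text arrives split across events, each carrying its own `candidates` entry. A minimal plain-Ruby sketch (no gem calls; it only assumes the event shape shown above) that stitches the parts back into one string:

```ruby
# Two streamed events in the shape shown above, here as literals:
result = [
  { 'candidates' => [
    { 'content' => { 'role' => 'model',
                     'parts' => [{ 'text' => 'Hello! ' }] } }
  ] },
  { 'candidates' => [
    { 'content' => { 'role' => 'model',
                     'parts' => [{ 'text' => 'How may I assist you?' }] } }
  ] }
]

# Pull the first text part out of each event and concatenate:
text = result.map do |event|
  event.dig('candidates', 0, 'content', 'parts', 0, 'text')
end.join

puts text # => Hello! How may I assist you?
```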
| 306 | 
            +
            ##### Image
         | 
| 307 | 
            +
             | 
| 308 | 
            +
            
         | 
| 309 | 
            +
             | 
| 310 | 
            +
            > _Courtesy of [Unsplash](https://unsplash.com/photos/greyscale-photo-of-grand-piano-czPs0z3-Ggg)_
         | 
| 311 | 
            +
             | 
| 312 | 
            +
            Switch to the `gemini-pro-vision` model:
         | 
| 313 | 
            +
             | 
| 314 | 
            +
            ```ruby
         | 
| 315 | 
            +
            client = Gemini.new(
         | 
| 316 | 
            +
              credentials: { service: 'vertex-ai-api', region: 'us-east4' },
         | 
| 317 | 
            +
              options: { model: 'gemini-pro-vision', stream: true }
         | 
| 318 | 
            +
            )
         | 
| 319 | 
            +
            ```
         | 
| 320 | 
            +
             | 
| 321 | 
            +
            Then, encode the image as [Base64](https://en.wikipedia.org/wiki/Base64) and add its [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types):
         | 
| 322 | 
            +
             | 
| 323 | 
            +
            ```ruby
         | 
| 324 | 
            +
            require 'base64'
         | 
| 325 | 
            +
             | 
| 326 | 
            +
            result = client.stream_generate_content(
         | 
| 327 | 
            +
              { contents: [
         | 
| 328 | 
            +
                { role: 'user', parts: [
         | 
| 329 | 
            +
                  { text: 'Please describe this image.' },
         | 
| 330 | 
            +
                  { inline_data: {
         | 
| 331 | 
            +
                    mime_type: 'image/jpeg',
         | 
| 332 | 
            +
                    data: Base64.strict_encode64(File.read('piano.jpg'))
         | 
| 333 | 
            +
                  } }
         | 
| 334 | 
            +
                ] }
         | 
| 335 | 
            +
              ] }
         | 
| 336 | 
            +
            )
         | 
| 337 | 
            +
            ```
         | 
| 338 | 
            +
             | 
| 339 | 
            +
Result:
         | 
| 340 | 
            +
            ```ruby
         | 
| 341 | 
            +
            [{ 'candidates' =>
         | 
| 342 | 
            +
               [{ 'content' =>
         | 
| 343 | 
            +
                  { 'role' => 'model',
         | 
| 344 | 
            +
                    'parts' =>
         | 
| 345 | 
            +
                    [{ 'text' =>
         | 
| 346 | 
            +
                       ' A black and white image of an old piano. The piano is an upright model, with the keys on the right side of the image. The piano is' }] },
         | 
| 347 | 
            +
                  'safetyRatings' =>
         | 
| 348 | 
            +
                  [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
         | 
| 349 | 
            +
                   { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
         | 
| 350 | 
            +
                   { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
         | 
| 351 | 
            +
                   { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }] },
         | 
| 352 | 
            +
             { 'candidates' =>
         | 
| 353 | 
            +
               [{ 'content' => { 'role' => 'model', 'parts' => [{ 'text' => ' sitting on a tiled floor. There is a small round object on the top of the piano.' }] },
         | 
| 354 | 
            +
                  'finishReason' => 'STOP',
         | 
| 355 | 
            +
                  'safetyRatings' =>
         | 
| 356 | 
            +
                  [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
         | 
| 357 | 
            +
                   { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
         | 
| 358 | 
            +
                   { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
         | 
| 359 | 
            +
                   { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
         | 
| 360 | 
            +
               'usageMetadata' => { 'promptTokenCount' => 263, 'candidatesTokenCount' => 50, 'totalTokenCount' => 313 } }]
         | 
| 361 | 
            +
            ```
         | 
| 362 | 
            +
             | 
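The `inline_data` construction above can be factored into a small helper. This is a hypothetical convenience (not part of the gem) that wraps raw bytes and a MIME type in the part shape used in the request:

```ruby
require 'base64'

# Hypothetical helper (not provided by the gem): builds an
# inline_data part from raw bytes and a MIME type.
def inline_data_part(bytes, mime_type)
  { inline_data: {
    mime_type: mime_type,
    data: Base64.strict_encode64(bytes)
  } }
end

# For a real request, pass File.read('piano.jpg') instead of a literal:
part = inline_data_part('raw image bytes', 'image/jpeg')
```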
| 363 | 
            +
            ##### Video
         | 
| 364 | 
            +
             | 
| 365 | 
            +
            https://gist.github.com/assets/29520/f82bccbf-02d2-4899-9c48-eb8a0a5ef741
         | 
| 366 | 
            +
             | 
| 367 | 
            +
            > ALT: A white and gold cup is being filled with coffee. The coffee is dark and rich. The cup is sitting on a black surface. The background is blurred.
         | 
| 368 | 
            +
             | 
| 369 | 
            +
            > _Courtesy of [Pexels](https://www.pexels.com/video/pouring-of-coffee-855391/)_
         | 
| 370 | 
            +
             | 
| 371 | 
            +
            Switch to the `gemini-pro-vision` model:
         | 
| 372 | 
            +
             | 
| 373 | 
            +
            ```ruby
         | 
| 374 | 
            +
            client = Gemini.new(
         | 
| 375 | 
            +
              credentials: { service: 'vertex-ai-api', region: 'us-east4' },
         | 
| 376 | 
            +
              options: { model: 'gemini-pro-vision', stream: true }
         | 
| 377 | 
            +
            )
         | 
| 378 | 
            +
            ```
         | 
| 379 | 
            +
             | 
| 380 | 
            +
            Then, encode the video as [Base64](https://en.wikipedia.org/wiki/Base64) and add its [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types):
         | 
| 381 | 
            +
             | 
| 382 | 
            +
            ```ruby
         | 
| 383 | 
            +
            require 'base64'
         | 
| 384 | 
            +
             | 
| 385 | 
            +
            result = client.stream_generate_content(
         | 
| 386 | 
            +
              { contents: [
         | 
| 387 | 
            +
                { role: 'user', parts: [
         | 
| 388 | 
            +
                  { text: 'Please describe this video.' },
         | 
| 389 | 
            +
                  { inline_data: {
         | 
| 390 | 
            +
                    mime_type: 'video/mp4',
         | 
| 391 | 
            +
                    data: Base64.strict_encode64(File.read('coffee.mp4'))
         | 
| 392 | 
            +
                  } }
         | 
| 393 | 
            +
                ] }
         | 
| 394 | 
            +
              ] }
         | 
| 395 | 
            +
            )
         | 
| 396 | 
            +
            ```
         | 
| 397 | 
            +
             | 
| 398 | 
            +
Result:
         | 
| 399 | 
            +
            ```ruby
         | 
| 400 | 
+
[{ 'candidates' =>
| 
| 401 | 
+
   [{ 'content' =>
| 
| 402 | 
+
      { 'role' => 'model',
| 
| 403 | 
+
        'parts' =>
| 
| 404 | 
+
        [{ 'text' =>
| 
| 405 | 
+
           ' A white and gold cup is being filled with coffee. The coffee is dark and rich. The cup is sitting on a black surface. The background is blurred' }] },
| 
| 406 | 
+
      'safetyRatings' =>
| 
| 407 | 
+
      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
| 
| 408 | 
+
       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
| 
| 409 | 
+
       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
| 
| 410 | 
+
       { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
| 
| 411 | 
+
   'usageMetadata' => { 'promptTokenCount' => 1037, 'candidatesTokenCount' => 31, 'totalTokenCount' => 1068 } },
| 
| 412 | 
+
 { 'candidates' =>
| 
| 413 | 
+
   [{ 'content' => { 'role' => 'model', 'parts' => [{ 'text' => '.' }] },
| 
| 414 | 
+
      'finishReason' => 'STOP',
| 
| 415 | 
+
      'safetyRatings' =>
| 
| 416 | 
+
      [{ 'category' => 'HARM_CATEGORY_HARASSMENT', 'probability' => 'NEGLIGIBLE' },
| 
| 417 | 
+
       { 'category' => 'HARM_CATEGORY_HATE_SPEECH', 'probability' => 'NEGLIGIBLE' },
| 
| 418 | 
+
       { 'category' => 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability' => 'NEGLIGIBLE' },
| 
| 419 | 
+
       { 'category' => 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability' => 'NEGLIGIBLE' }] }],
| 
| 420 | 
+
   'usageMetadata' => { 'promptTokenCount' => 1037, 'candidatesTokenCount' => 32, 'totalTokenCount' => 1069 } }]
| 
| 421 | 
            +
            ```
         | 
| 422 | 
            +
             | 
| 271 423 | 
             
            #### Synchronous
         | 
| 272 424 |  | 
| 273 425 | 
             
            ```ruby
         | 
| @@ -375,7 +527,7 @@ Result: | |
| 375 527 | 
             
               } }]
         | 
| 376 528 | 
             
            ```
         | 
| 377 529 |  | 
| 378 | 
            -
             | 
| 530 | 
            +
            #### Back-and-Forth Conversations
         | 
| 379 531 |  | 
| 380 532 | 
             
            To maintain a back-and-forth conversation, you need to append the received responses and build a history for your requests:
         | 
| 381 533 |  | 
| @@ -417,7 +569,7 @@ Result: | |
| 417 569 | 
             
               } }]
         | 
| 418 570 | 
             
            ```
         | 
| 419 571 |  | 
| 420 | 
            -
             | 
| 572 | 
            +
            #### Tools (Functions) Calling
         | 
| 421 573 |  | 
| 422 574 | 
             
            > As of the writing of this README, only the `vertex-ai-api` service and the `gemini-pro` model [supports](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling#supported_models) tools (functions) calls.
         | 
| 423 575 |  | 
| @@ -573,6 +725,58 @@ result = client.request( | |
| 573 725 | 
             
            )
         | 
| 574 726 | 
             
            ```
         | 
| 575 727 |  | 
| 728 | 
            +
            ### Error Handling
         | 
| 729 | 
            +
             | 
| 730 | 
            +
            #### Rescuing
         | 
| 731 | 
            +
             | 
| 732 | 
            +
            ```ruby
         | 
| 733 | 
            +
            require 'gemini-ai'
         | 
| 734 | 
            +
             | 
| 735 | 
            +
            begin
         | 
| 736 | 
            +
              client.stream_generate_content({
         | 
| 737 | 
            +
                contents: { role: 'user', parts: { text: 'hi!' } }
         | 
| 738 | 
            +
              })
         | 
| 739 | 
            +
            rescue Gemini::Errors::GeminiError => error
         | 
| 740 | 
            +
              puts error.class # Gemini::Errors::RequestError
         | 
| 741 | 
            +
              puts error.message # 'the server responded with status 500'
         | 
| 742 | 
            +
             | 
| 743 | 
            +
              puts error.payload
         | 
| 744 | 
            +
              # { contents: [{ role: 'user', parts: { text: 'hi!' } }],
         | 
| 745 | 
            +
              #   generationConfig: { candidateCount: 1 },
         | 
| 746 | 
            +
              #   ...
         | 
| 747 | 
            +
              # }
         | 
| 748 | 
            +
             | 
| 749 | 
            +
              puts error.request
         | 
| 750 | 
            +
              # #<Faraday::ServerError response={:status=>500, :headers...
         | 
| 751 | 
            +
            end
         | 
| 752 | 
            +
            ```
         | 
| 753 | 
            +
             | 
| 754 | 
            +
            #### For Short
         | 
| 755 | 
            +
             | 
| 756 | 
            +
            ```ruby
         | 
| 757 | 
            +
            require 'gemini-ai/errors'
         | 
| 758 | 
            +
             | 
| 759 | 
            +
            begin
         | 
| 760 | 
            +
              client.stream_generate_content({
         | 
| 761 | 
            +
                contents: { role: 'user', parts: { text: 'hi!' } }
         | 
| 762 | 
            +
              })
         | 
| 763 | 
            +
            rescue GeminiError => error
         | 
| 764 | 
            +
              puts error.class # Gemini::Errors::RequestError
         | 
| 765 | 
            +
            end
         | 
| 766 | 
            +
            ```
         | 
| 767 | 
            +
             | 
| 768 | 
            +
            #### Errors
         | 
| 769 | 
            +
             | 
| 770 | 
            +
            ```ruby
         | 
| 771 | 
            +
            GeminiError
         | 
| 772 | 
            +
             | 
| 773 | 
            +
            MissingProjectIdError
         | 
| 774 | 
            +
            UnsupportedServiceError
         | 
| 775 | 
            +
            BlockWithoutStreamError
         | 
| 776 | 
            +
             | 
| 777 | 
            +
            RequestError
         | 
| 778 | 
            +
            ```
         | 
| 779 | 
            +
             | 
| 576 780 | 
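The list above implies a simple hierarchy with `GeminiError` as the base class. A simplified sketch of that assumed shape (the actual definitions live in `components/errors.rb`, and `RequestError` additionally exposes `payload` and `request`, as shown in the rescuing example):

```ruby
# Simplified sketch of the error hierarchy; not the gem's actual code.
module Gemini
  module Errors
    class GeminiError < StandardError; end

    class MissingProjectIdError < GeminiError; end
    class UnsupportedServiceError < GeminiError; end
    class BlockWithoutStreamError < GeminiError; end

    class RequestError < GeminiError; end
  end
end

# Rescuing the base class catches any of the subclasses:
begin
  raise Gemini::Errors::RequestError, 'the server responded with status 500'
rescue Gemini::Errors::GeminiError => error
  puts error.class # Gemini::Errors::RequestError
end
```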
             
            ## Development
         | 
| 577 781 |  | 
| 578 782 | 
             
            ```bash
         | 
| @@ -591,7 +795,7 @@ gem build gemini-ai.gemspec | |
| 591 795 |  | 
| 592 796 | 
             
            gem signin
         | 
| 593 797 |  | 
| 594 | 
            -
            gem push gemini-ai-2. | 
| 798 | 
            +
            gem push gemini-ai-2.2.0.gem
         | 
| 595 799 | 
             
            ```
         | 
| 596 800 |  | 
| 597 801 | 
             
            ### Updating the README
         | 
    
        metadata
    CHANGED
    
    | @@ -1,7 +1,7 @@ | |
| 1 1 | 
             
            --- !ruby/object:Gem::Specification
         | 
| 2 2 | 
             
            name: gemini-ai
         | 
| 3 3 | 
             
            version: !ruby/object:Gem::Version
         | 
| 4 | 
            -
              version: 2. | 
| 4 | 
            +
              version: 2.2.0
         | 
| 5 5 | 
             
            platform: ruby
         | 
| 6 6 | 
             
            authors:
         | 
| 7 7 | 
             
            - gbaptista
         | 
| @@ -78,9 +78,11 @@ files: | |
| 78 78 | 
             
            - Gemfile.lock
         | 
| 79 79 | 
             
            - LICENSE
         | 
| 80 80 | 
             
            - README.md
         | 
| 81 | 
            +
            - components/errors.rb
         | 
| 81 82 | 
             
            - controllers/client.rb
         | 
| 82 83 | 
             
            - gemini-ai.gemspec
         | 
| 83 84 | 
             
            - ports/dsl/gemini-ai.rb
         | 
| 85 | 
            +
            - ports/dsl/gemini-ai/errors.rb
         | 
| 84 86 | 
             
            - static/gem.rb
         | 
| 85 87 | 
             
            - tasks/generate-readme.clj
         | 
| 86 88 | 
             
            - template.md
         |