friday_gemini_ai 0.1.3 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 66b53a7e6a9cdf4e1321c477ab5b151159f7b176907b620264bfcb81b4dc0955
-   data.tar.gz: 42dea151c596e24f2c96f40710bc4a61bc1448516182791af4bd6ce855a2f4b2
+   metadata.gz: eae8a647419cdfdc347552b710246c592d3cda16ca8bea9e73df6d3e11c6f4dd
+   data.tar.gz: 6fe311f701d533f22bf6a754f9aa1c8e8f5bdf1d46bb64c2ff9d8b6b40f2ecf9
  SHA512:
-   metadata.gz: dad93b02ef3f339ded728ce61d2ffd98af06f332347e5d57c707c94052f8799cb1776fd8ba4119ee1e52f46adca495303024c17b0ac134eedfd9a00373425415
-   data.tar.gz: dc31beb2d7af9c8aae31d81df9416ed67b1c7b243c8f258fedd36f523a6860160432e4ba6424f50098889eb29da4d4bf72891437d4de12c868fcf4969155c479
+   metadata.gz: 84f4aae3ebcb6acfce3eb659e98a99729a67c74ba8f2ac8310d18858369f946bc1eacabbc12e35630bfd858dec1ccf17b3bf1e931f061404bee6da9dcc4812a8
+   data.tar.gz: 198b0d1bcf660d3110d20f87430c8f858b9fb18612f413b71dfa4f1f2817f40a6c7e5eae6977809dc207cffebb1cd2823d4f2e49c2a54f493c7460a0df99c679
data/LICENSE CHANGED
@@ -1,6 +1,6 @@
  MIT License

- Copyright (c) 2025 Niladri Das
+ Copyright (c) friday_gemini_ai

  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
data/README.md CHANGED
@@ -1,104 +1,133 @@
- # Friday Gemini AI Ruby Gem
-
- ## Table of Contents
- 1. [Installation](#installation)
- 2. [Configuration](#configuration)
- 3. [Usage](#usage)
-    - [Text Generation](#text-generation)
-    - [Chat](#chat)
-    - [Error Handling](#error-handling)
- 4. [Logging](#logging)
- 5. [Security](#security)
- 6. [Testing](#testing)
- 7. [Contributing](#contributing)
+ # Friday Gemini AI
+
+ [![Gem Version](https://badge.fury.io/rb/friday_gemini_ai.svg)](https://badge.fury.io/rb/friday_gemini_ai)
+ [![CI](https://github.com/bniladridas/friday_gemini_ai/actions/workflows/ci.yml/badge.svg)](https://github.com/bniladridas/friday_gemini_ai/actions/workflows/ci.yml)
+ [![Security](https://github.com/bniladridas/friday_gemini_ai/workflows/Security/badge.svg)](https://github.com/bniladridas/friday_gemini_ai/actions/workflows/security.yml)
+ [![Release](https://github.com/bniladridas/friday_gemini_ai/workflows/Release/badge.svg)](https://github.com/bniladridas/friday_gemini_ai/actions/workflows/release.yml)
+
+ Ruby gem for Google's Gemini AI models.

  ## Installation

- ### From RubyGems
  ```bash
  gem install friday_gemini_ai
  ```

- ### Using Bundler
- Add to your Gemfile:
- ```ruby
- gem 'friday_gemini_ai'
+ Create a `.env` file and add your API key:
  ```
-
- ## Configuration
-
- Set your API key:
- ```bash
- export GEMINI_API_KEY='your_api_key_here'
+ GEMINI_API_KEY=your_api_key_here
  ```

  ## Usage

- ### Text Generation
  ```ruby
- require 'gemini_ai'
+ require_relative 'lib/gemini'

- client = GeminiAI::Client.new
- response = client.generate_text('Tell me a joke')
- puts response
- ```
-
- ### Chat
- ```ruby
- messages = [
-   { role: 'user', content: 'Hello' },
-   { role: 'model', content: 'Hi there!' },
-   { role: 'user', content: 'How are you?' }
- ]
+ GeminiAI.load_env

- response = client.chat(messages)
+ # Use the latest Gemini 2.5 Pro (default)
+ client = GeminiAI::Client.new
+ response = client.generate_text('Write a haiku about Ruby')
  puts response
- ```

- ### Error Handling
- ```ruby
- begin
-   client.generate_text('')
- rescue GeminiAI::Error => e
-   puts "Error: #{e.message}"
- end
+ # Or choose a specific model
+ flash_client = GeminiAI::Client.new(model: :flash) # Gemini 2.5 Flash
+ pro_client = GeminiAI::Client.new(model: :pro) # Gemini 2.5 Pro
  ```

- ## Logging
-
- ### Configuration
- ```ruby
- # Set logging level
- GeminiAI::Client.logger.level = Logger::INFO
-
- # Log to file
- GeminiAI::Client.logger = Logger.new('logfile.log')
+ ```bash
+ ./bin/gemini test
+ ./bin/gemini generate "Your prompt here"
+ ./bin/gemini chat
  ```

- ### Log Levels
- - DEBUG: Detailed debug information
- - INFO: General operational messages
- - WARN: Warning conditions
- - ERROR: Error conditions
-
- ## Security
-
- ### API Key Protection
- - Never hardcode API keys
- - Use environment variables
- - Automatic key masking in logs
+ ## Supported Models
+
+ | Model Key | Model ID | Description |
+ |-----------|----------|-------------|
+ | `:pro` | `gemini-2.5-pro` | Latest and most capable model |
+ | `:flash` | `gemini-2.5-flash` | Fast and efficient latest model |
+ | `:flash_2_0` | `gemini-2.0-flash` | Previous generation fast model |
+ | `:flash_lite` | `gemini-2.0-flash-lite` | Lightweight model |
+ | `:pro_1_5` | `gemini-1.5-pro` | Gemini 1.5 Pro model |
+ | `:flash_1_5` | `gemini-1.5-flash` | Gemini 1.5 Flash model |
+ | `:flash_8b` | `gemini-1.5-flash-8b` | Compact 8B parameter model |
+
+ **Model Selection Guide:**
+ - Use `:pro` for complex reasoning and analysis
+ - Use `:flash` for fast, general-purpose tasks
+ - Use `:flash_2_0` for compatibility with older workflows
+ - Use `:flash_lite` for simple, lightweight tasks
+
+ ## What You Can Do
+
+ **Text Generation**
+ - Generate creative content, stories, and articles
+ - Create explanations and educational content
+ - Write code comments and documentation
+
+ **Conversational AI**
+ - Build multi-turn chat applications
+ - Create interactive Q&A systems
+ - Develop conversational interfaces
+
+ **Image Analysis**
+ - Analyze images with text prompts
+ - Extract information from visual content
+ - Generate descriptions of images
+
+ **Quick Tasks**
+ - Test ideas and prompts via CLI
+ - Prototype AI-powered features
+ - Generate content with custom parameters
+
+ ## Features
+
+ - Text generation and chat conversations
+ - Support for Gemini 2.5, 2.0, and 1.5 model families
+ - Configurable parameters (temperature, tokens, top-p, top-k)
+ - Error handling and API key security
+ - CLI interface and .env integration
+
+ ## Documentation
+
+ **Getting Started**
+ - [Overview](docs/start/overview.md) - Features and capabilities
+ - [Quickstart](docs/start/quickstart.md) - 5-minute setup
+
+ **Reference**
+ - [API Reference](docs/reference/api.md) - Method documentation
+ - [Usage Guide](docs/reference/usage.md) - Examples and patterns
+ - [Models](docs/reference/models.md) - Gemini 2.5 Pro, Flash, and more
+ - [Capabilities](docs/reference/capabilities.md) - Text generation and chat
+ - [Cookbook](docs/reference/cookbook.md) - Code recipes
+ - [Versions](docs/reference/versions.md) - API compatibility
+
+ **Guides**
+ - [Best Practices](docs/guides/practices.md) - Security and performance
+ - [Troubleshooting](docs/guides/troubleshoot.md) - Common solutions
+ - [Workflows](docs/guides/workflows.md) - CI/CD and automation
+ - [Resources](docs/guides/resources.md) - Migration and extras
+ - [Community](docs/guides/community.md) - Contributing and support
+ - [Changelog](CHANGELOG.md) - Version history and changes
+
+ ## Examples
+
+ - `examples/basic.rb` - Text generation and chat
+ - `examples/advanced.rb` - Configuration and error handling

  ## Testing

- Run the test suite:
  ```bash
- bundle exec ruby test/gemini_ai_test.rb
+ ruby tests/runner.rb
+ ruby tests/unit/client.rb
+ ruby tests/integration/api.rb
  ```

  ## Contributing

- 1. Fork the repository
- 2. Create your feature branch
- 3. Commit your changes
- 4. Push to the branch
- 5. Create a new Pull Request
+ Fork, branch, commit, push, pull request.
+
+ ## License
+
+ MIT - see [LICENSE](LICENSE)
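The rewritten README drops the explicit chat example that shipped with 0.1.3. A minimal sketch of a multi-turn conversation, assuming the `chat` method and message format documented in the 0.1.3 README are unchanged in 0.1.5:

```ruby
require_relative 'lib/gemini'

GeminiAI.load_env

client = GeminiAI::Client.new(model: :flash) # Gemini 2.5 Flash

# Message format taken from the 0.1.3 README; assumed to still apply.
messages = [
  { role: 'user', content: 'Hello' },
  { role: 'model', content: 'Hi there!' },
  { role: 'user', content: 'How are you?' }
]

begin
  puts client.chat(messages)
rescue GeminiAI::Error => e
  puts "Error: #{e.message}"
end
```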
data/lib/gemini_ai.rb → data/lib/core/client.rb RENAMED
@@ -2,22 +2,35 @@ require 'httparty'
  require 'json'
  require 'base64'
  require 'logger'
+ require 'dotenv/load'
+ require_relative 'errors'

  module GeminiAI
-   class Error < StandardError; end
-
+   # Core client class for Gemini AI API communication
    class Client
      BASE_URL = 'https://generativelanguage.googleapis.com/v1/models'
      MODELS = {
-       pro: 'gemini-2.0-flash',
-       flash: 'gemini-2.0-flash',
-       flash_lite: 'gemini-2.0-flash-lite'
+       # Gemini 2.5 models (latest)
+       pro: 'gemini-2.5-pro',
+       flash: 'gemini-2.5-flash',
+
+       # Gemini 2.0 models
+       flash_2_0: 'gemini-2.0-flash',
+       flash_lite: 'gemini-2.0-flash-lite',
+
+       # Legacy aliases for backward compatibility
+       pro_2_0: 'gemini-2.0-flash',
+
+       # Gemini 1.5 models (for specific use cases)
+       pro_1_5: 'gemini-1.5-pro',
+       flash_1_5: 'gemini-1.5-flash',
+       flash_8b: 'gemini-1.5-flash-8b'
      }

      # Configure logging
      def self.logger
        @logger ||= Logger.new(STDOUT).tap do |log|
-         log.level = Logger::DEBUG # Changed to DEBUG for more information
+         log.level = Logger::DEBUG
          log.formatter = proc do |severity, datetime, progname, msg|
            # Mask any potential API key in logs
            masked_msg = msg.to_s.gsub(/AIza[a-zA-Z0-9_-]{35,}/, '[REDACTED]')
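Since the expanded MODELS hash is what `send_request` resolves model symbols against, any of the keys above can be passed when constructing a client. A short sketch based on the hash and the new README (run from a repository checkout so the relative require resolves):

```ruby
require_relative 'lib/gemini'

GeminiAI.load_env

pro     = GeminiAI::Client.new                    # README default: gemini-2.5-pro
compact = GeminiAI::Client.new(model: :flash_8b)  # gemini-1.5-flash-8b
legacy  = GeminiAI::Client.new(model: :pro_2_0)   # alias kept for 2.0 workflows

puts compact.generate_text('One sentence on why Ruby blocks are useful')
```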
@@ -30,6 +43,11 @@ module GeminiAI
      # Prioritize passed API key, then environment variable
      @api_key = api_key || ENV['GEMINI_API_KEY']

+     # Rate limiting - track last request time
+     @last_request_time = nil
+     # More conservative rate limiting in CI environments
+     @min_request_interval = (ENV['CI'] == 'true' || ENV['GITHUB_ACTIONS'] == 'true') ? 3.0 : 1.0
+
      # Extensive logging for debugging
      self.class.logger.debug("Initializing Client")
      self.class.logger.debug("API Key present: #{!@api_key.nil?}")
@@ -128,7 +146,10 @@ module GeminiAI
        }
      end

-     def send_request(body, model: nil)
+     def send_request(body, model: nil, retry_count: 0)
+       # Rate limiting - ensure minimum interval between requests
+       rate_limit_delay
+
        current_model = model ? MODELS.fetch(model) { MODELS[:pro] } : @model
        url = "#{BASE_URL}/#{current_model}:generateContent?key=#{@api_key}"

@@ -145,26 +166,37 @@ module GeminiAI
            'Content-Type' => 'application/json',
            'x-goog-api-client' => 'gemini_ai_ruby_gem/0.1.0'
          },
-         # Add timeout to prevent hanging
          timeout: 30
        )

        self.class.logger.debug("Response Code: #{response.code}")
        self.class.logger.debug("Response Body: #{response.body}")

-       parse_response(response)
+       parse_response(response, retry_count, body, model)
      rescue HTTParty::Error, Net::OpenTimeout => e
        self.class.logger.error("API request failed: #{e.message}")
        raise Error, "API request failed: #{e.message}"
      end
    end

-   def parse_response(response)
+   def parse_response(response, retry_count, body, model)
      case response.code
      when 200
        text = response.parsed_response
               .dig('candidates', 0, 'content', 'parts', 0, 'text')
        text || 'No response generated'
+     when 429
+       # Rate limit exceeded - implement exponential backoff
+       max_retries = 3
+       if retry_count < max_retries
+         wait_time = (2 ** retry_count) * 5 # 5, 10, 20 seconds
+         self.class.logger.warn("Rate limit hit (429). Retrying in #{wait_time}s (attempt #{retry_count + 1}/#{max_retries})")
+         sleep(wait_time)
+         return send_request(body, model: model, retry_count: retry_count + 1)
+       else
+         self.class.logger.error("Rate limit exceeded after #{max_retries} retries")
+         raise Error, "Rate limit exceeded. Please check your quota and billing details."
+       end
      else
        error_message = response.parsed_response['error']&.dig('message') || response.body
        self.class.logger.error("API Error: #{error_message}")
@@ -172,6 +204,22 @@ module GeminiAI
      end
    end

+   # Rate limiting to prevent hitting API limits
+   def rate_limit_delay
+     current_time = Time.now
+
+     if @last_request_time
+       time_since_last = current_time - @last_request_time
+       if time_since_last < @min_request_interval
+         sleep_time = @min_request_interval - time_since_last
+         self.class.logger.debug("Rate limiting: sleeping #{sleep_time.round(2)}s")
+         sleep(sleep_time)
+       end
+     end
+
+     @last_request_time = Time.now
+   end
+
    # Mask API key for logging and error reporting
    def mask_api_key(key)
      return '[REDACTED]' if key.nil?
@@ -182,4 +230,4 @@ module GeminiAI
      "#{key[0,4]}#{'*' * (key.length - 8)}#{key[-4,4]}"
    end
  end
- end
+ end
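With the retry logic above, an HTTP 429 is handled inside `send_request`/`parse_response`: the client sleeps 5, 10, then 20 seconds between attempts and only raises after the third retry fails, while `rate_limit_delay` spaces all requests by at least 1 second (3 seconds when `CI` or `GITHUB_ACTIONS` is `'true'`). A hedged sketch of what that looks like from the caller's side:

```ruby
require_relative 'lib/gemini'

GeminiAI.load_env
client = GeminiAI::Client.new(model: :flash)

begin
  # Retries on 429 happen transparently and are logged at WARN level.
  puts client.generate_text('Summarise the changes in this release')
rescue GeminiAI::Error => e
  # Raised for non-429 errors, or once the three-retry budget is exhausted.
  puts "Gemini request failed: #{e.message}"
end
```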
data/lib/core/errors.rb ADDED
@@ -0,0 +1,19 @@
+ module GeminiAI
+   # Base error class for all GeminiAI related errors
+   class Error < StandardError; end
+
+   # API related errors
+   class APIError < Error; end
+
+   # Authentication errors
+   class AuthenticationError < Error; end
+
+   # Rate limit errors
+   class RateLimitError < Error; end
+
+   # Invalid request errors
+   class InvalidRequestError < Error; end
+
+   # Network/connection errors
+   class NetworkError < Error; end
+ end
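Every new class here inherits from `GeminiAI::Error`, so existing `rescue GeminiAI::Error` blocks keep working. The client hunks shown in this diff still raise the base class; treating the subclasses as rescuable is therefore forward-looking. A sketch with that assumption made explicit:

```ruby
require_relative 'lib/gemini'

GeminiAI.load_env
client = GeminiAI::Client.new

begin
  puts client.generate_text('Explain exponential backoff in one sentence')
rescue GeminiAI::RateLimitError, GeminiAI::APIError => e
  # Assumption: nothing in this diff raises these subclasses yet,
  # but rescuing them narrows handling if a later version does.
  warn "Transient Gemini failure: #{e.message}"
rescue GeminiAI::Error => e
  # The base class is what the client in this diff actually raises.
  warn "Gemini error: #{e.message}"
end
```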
data/lib/core/version.rb ADDED
@@ -0,0 +1,3 @@
+ module GeminiAI
+   VERSION = '0.1.4'
+ end
data/lib/gemini.rb ADDED
@@ -0,0 +1,18 @@
+ # Main entry point for the GeminiAI gem
+ require_relative 'core/version'
+ require_relative 'core/errors'
+ require_relative 'core/client'
+ require_relative 'utils/loader'
+ require_relative 'utils/logger'
+
+ module GeminiAI
+   # Convenience method to create a new client
+   def self.new(api_key = nil, **options)
+     Client.new(api_key, **options)
+   end
+
+   # Load environment variables
+   def self.load_env(file_path = '.env')
+     Utils::Loader.load(file_path)
+   end
+ end
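`lib/gemini.rb` is the gem's single entry point: `GeminiAI.new` forwards straight to `Client.new`, and `GeminiAI.load_env` delegates to `Utils::Loader`. A minimal sketch, assuming it is run from a repository checkout so the relative require resolves:

```ruby
require_relative 'lib/gemini'

# Populates ENV['GEMINI_API_KEY'] from ./.env before the client reads it.
GeminiAI.load_env

# GeminiAI.new(api_key = nil, **options) forwards to GeminiAI::Client.new.
client = GeminiAI.new(model: :flash)
puts client.generate_text('Name three Ruby web frameworks')
```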
data/lib/utils/loader.rb ADDED
@@ -0,0 +1,18 @@
+ module GeminiAI
+   module Utils
+     # Utility class for loading environment variables from .env files
+     class Loader
+       def self.load(file_path = '.env')
+         return unless File.exist?(file_path)
+
+         File.readlines(file_path).each do |line|
+           line = line.strip
+           next if line.empty? || line.start_with?('#')
+
+           key, value = line.split('=', 2)
+           ENV[key] = value if key && value
+         end
+       end
+     end
+   end
+ end
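`Utils::Loader` is a small hand-rolled `.env` parser: it skips blank lines and `#` comments, splits each remaining line on the first `=`, and does not strip quotes. A sketch of the expected file and call; the file contents are illustrative:

```ruby
# .env (illustrative contents):
#   # Gemini credentials
#   GEMINI_API_KEY=your_api_key_here

require_relative 'lib/utils/loader'

GeminiAI::Utils::Loader.load('.env') # what GeminiAI.load_env calls under the hood
puts ENV['GEMINI_API_KEY'] ? 'key loaded' : 'no key found'
```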
data/lib/utils/logger.rb ADDED
@@ -0,0 +1,35 @@
+ require 'logger'
+
+ module GeminiAI
+   module Utils
+     # Centralized logging utility
+     class Logger
+       def self.instance
+         @logger ||= ::Logger.new(STDOUT).tap do |log|
+           log.level = ::Logger::INFO
+           log.formatter = proc do |severity, datetime, progname, msg|
+             # Mask any potential API key in logs
+             masked_msg = msg.to_s.gsub(/AIza[a-zA-Z0-9_-]{35,}/, '[REDACTED]')
+             "#{datetime}: #{severity} -- #{masked_msg}\n"
+           end
+         end
+       end
+
+       def self.debug(message)
+         instance.debug(message)
+       end
+
+       def self.info(message)
+         instance.info(message)
+       end
+
+       def self.warn(message)
+         instance.warn(message)
+       end
+
+       def self.error(message)
+         instance.error(message)
+       end
+     end
+   end
+ end
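`Utils::Logger` wraps a single `::Logger` on STDOUT at INFO level and applies the same `AIza…` masking regex as the client's logger, so API keys never reach log output. A minimal sketch; the key below is a fake placeholder chosen only to trigger the masking:

```ruby
require_relative 'lib/utils/logger'

fake_key = 'AIza' + 'x' * 35 # placeholder matching the masked pattern, not a real key

GeminiAI::Utils::Logger.info("Loaded key #{fake_key}")
# => e.g. "2025-08-24 ...: INFO -- Loaded key [REDACTED]"

GeminiAI::Utils::Logger.debug('not shown: default level is INFO')
```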
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: friday_gemini_ai
  version: !ruby/object:Gem::Version
-   version: 0.1.3
+   version: 0.1.5
  platform: ruby
  authors:
- - bniladridas
+ - Niladri Das
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2025-03-03 00:00:00.000000000 Z
+ date: 2025-08-24 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: httparty
@@ -75,7 +75,12 @@ extra_rdoc_files: []
  files:
  - LICENSE
  - README.md
- - lib/gemini_ai.rb
+ - lib/core/client.rb
+ - lib/core/errors.rb
+ - lib/core/version.rb
+ - lib/gemini.rb
+ - lib/utils/loader.rb
+ - lib/utils/logger.rb
  homepage: https://github.com/bniladridas/friday_gemini_ai
  licenses:
  - MIT
@@ -95,7 +100,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
  requirements: []
- rubygems_version: 3.4.10
+ rubygems_version: 3.5.22
  signing_key:
  specification_version: 4
  summary: A Ruby gem for interacting with Google's Gemini AI models