friday_gemini_ai 0.1.5 → 1.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/LICENSE +1 -1
- data/README.md +337 -82
- data/lib/core/client.rb +109 -54
- data/lib/core/errors.rb +11 -6
- data/lib/core/version.rb +7 -2
- data/lib/friday_gemini_ai.rb +7 -0
- data/lib/gemini.rb +9 -4
- data/lib/mac/README.md +23 -0
- data/lib/mac/mac_utils.rb +15 -0
- data/lib/utils/loader.rb +8 -3
- data/lib/utils/logger.rb +12 -7
- data/lib/utils/moderation.rb +33 -0
- metadata +45 -20
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e47b0608da87cc2e8ea3b07991e762e75b8a409dd62d09327a5682db07c8eb6c
+  data.tar.gz: 5f773c36a19f18e942d3c35efab546cb62c7de3c7b3706466e806bf756b04f01
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: c6aeac25b9715848574a884c754ed9ee26c56e827f5f72952b5ea5e49c0bba32431d523b377037bce1c3fa4d3fc15fbccb390ba8389884c3752f4d2012c845cf
+  data.tar.gz: 485c3b43da85910f8e29c3a02d65bd29734af133d32a513c583afac35d338fd95ebae4d613e0f4aa552da1d64abb757ec3c662a48cc8d0d5f0a35641a0c26dc1
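Both digest pairs change because the gem's `metadata.gz` and `data.tar.gz` archives were rebuilt for 1.5.0. To verify a downloaded artifact against `checksums.yaml` yourself, here is a minimal sketch using Ruby's standard `digest` library (the file paths are illustrative, not actual artifacts from this gem):

```ruby
require 'digest'
require 'tempfile'

# Compute the two digests that checksums.yaml records for a gem artifact.
def artifact_checksums(path)
  data = File.binread(path)
  {
    sha256: Digest::SHA256.hexdigest(data),
    sha512: Digest::SHA512.hexdigest(data)
  }
end

# Demonstrate on a throwaway file standing in for metadata.gz:
Tempfile.create('metadata') do |f|
  f.binmode
  f.write('example contents')
  f.flush
  puts artifact_checksums(f.path)[:sha256]
end
```

Comparing the printed digest against the value recorded in `checksums.yaml` confirms the artifact was not altered in transit.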
data/LICENSE
CHANGED
data/README.md
CHANGED
@@ -1,11 +1,13 @@
 # Friday Gemini AI
 
-[](https://badge.fury.io/rb/friday_gemini_ai)
 [](https://github.com/bniladridas/friday_gemini_ai/actions/workflows/ci.yml)
-[](https://github.com/bniladridas/friday_gemini_ai/actions/workflows/security.yml)
+[](https://github.com/bniladridas/friday_gemini_ai/actions/workflows/dependencies.yml)
+[](https://github.com/bniladridas/friday_gemini_ai/actions/workflows/harperbot.yml)
 
-Ruby gem for Google's Gemini AI models.
+Ruby gem for integrating with Google's Gemini AI models.
+
+The full API of this library can be found in [docs/reference/api.md](docs/reference/api.md).
 
 ## Installation
 
@@ -13,121 +15,374 @@ Ruby gem for Google's Gemini AI models.
 gem install friday_gemini_ai
 ```
 
-
+Set your API key in `.env`:
+
 ```
-GEMINI_API_KEY=
+GEMINI_API_KEY=your_api_key
 ```
 
+> [!NOTE]
+> Ensure your API key is kept secure and not committed to version control.
+
+## HarperBot Integration
+
+HarperBot provides automated PR code reviews using Google's Gemini AI. It supports two deployment modes:
+
+### Webhook Mode (Recommended)
+This is the preferred deployment path. You need to:
+
+- Install the [HarperBot GitHub App](https://github.com/apps/harper-new-line) and grant it access to the repositories you want to monitor.
+- Provision these secrets (in Vercel or another host) so the webhook server can authenticate with both Gemini and GitHub:
+  - `GEMINI_API_KEY`
+  - `HARPERBOT_GEMINI_API_KEY` *(optional override)*
+  - `HARPER_BOT_APP_ID`
+  - `HARPER_BOT_PRIVATE_KEY`
+  - `WEBHOOK_SECRET`
+- Subscribe the app webhooks to **Pull request** and **Issue comment** events so HarperBot receives new PRs and manual `/analyze` commands.
+- When you migrate the app to a different GitHub account, uninstall the previous installation so the retired app stops receiving webhooks and you avoid duplicate comments.
+- Regenerate the webhook secret whenever the app changes hands (or whenever you suspect it was leaked) and update the `WEBHOOK_SECRET` environment variable before resuming deployments.
+- Deploy the webhook service behind a production WSGI server (for example, Gunicorn) whenever you self-host it outside Vercel; Flask's dev server is not safe for production traffic.
+
+Once those requirements are met, the centralized HarperBot instance receives webhooks from any connected repository without per-repo secrets.
+
+**Quick checklist before you deploy:** install the HarperBot GitHub App, configure the five secrets above, and serve the webhook process through a WSGI server when not on Vercel so you stay secure.
+
+### Workflow Mode (Legacy)
+- Repository-specific GitHub Actions workflow
+- Requires secrets setup per repository
+- Automated setup: `curl -fsSL https://raw.githubusercontent.com/bniladridas/friday_gemini_ai/main/bin/setup-harperbot | bash` (use `--update` to update, `--dry-run` to preview)
+- **Note:** This is legacy mode for existing users. New installations should use Webhook Mode for better scalability and centralized management
+
+For detailed setup instructions, see [harperbot/HarperBot.md](harperbot/HarperBot.md).
+
 ## Usage
 
-
-
+The full API of this library can be found in [docs/reference/api.md](docs/reference/api.md).
+
+### Basic Setup
+
+**Security Note for Automated Setup:** The recommended `curl | bash` method downloads and executes code from the internet. For security, review the script at https://github.com/bniladridas/friday_gemini_ai/blob/main/bin/setup-harperbot before running. Alternatively, download first: `curl -O https://raw.githubusercontent.com/bniladridas/friday_gemini_ai/main/bin/setup-harperbot`, inspect, then `bash setup-harperbot`.
 
+```ruby
+require 'friday_gemini_ai'
 GeminiAI.load_env
 
-
-
-
-
+client = GeminiAI::Client.new # Default: gemini-2.5-pro
+fast_client = GeminiAI::Client.new(model: :flash)
+```
+
+### Model Reference
+
+| Key | ID | Use case |
+| ------------- | ----------------------- | ------------------------------- |
+| `:pro` | `gemini-2.5-pro` | Most capable, complex reasoning |
+| `:flash` | `gemini-2.5-flash` | Fast, general-purpose |
+| `:flash_2_0` | `gemini-2.0-flash` | Legacy support |
+| `:flash_lite` | `gemini-2.0-flash-lite` | Lightweight legacy |
+
+## Capabilities
+
+* **Text:** content generation, summaries, documentation
+* **Chat:** multi-turn Q&A and assistants
+* **Image:** image-to-text analysis
+* **CLI:** for quick prototyping and automation
+
+## Features
+
+* **Multiple Model Support:** Gemini 2.5 + 2.0 families with automatic fallback
+* **Text Generation:** configurable parameters, safety settings
+* **Image Analysis:** base64 image input, detailed descriptions
+* **Chat:** context retention, system instructions
+* **Security:** API key masking, retries, and rate limits (1s default, 3s CI)
+
+## Handling errors
+
+When the library is unable to connect to the API,
+or if the API returns a non-success status code (i.e., 4xx or 5xx response),
+a subclass of `GeminiAI::APIError` will be thrown:
+
+```ruby
+response = client.generate_text('Hello').catch do |err|
+  if err.is_a?(GeminiAI::APIError)
+    puts err.status # 400
+    puts err.name # BadRequestError
+    puts err.headers # {server: 'nginx', ...}
+  else
+    raise err
+  end
+end
+```
 
-
-
-
+Error codes are as follows:
+
+| Status Code | Error Type |
+| ----------- | -------------------------- |
+| 400 | `BadRequestError` |
+| 401 | `AuthenticationError` |
+| 403 | `PermissionDeniedError` |
+| 404 | `NotFoundError` |
+| 422 | `UnprocessableEntityError` |
+| 429 | `RateLimitError` |
+| >=500 | `InternalServerError` |
+| N/A | `APIConnectionError` |
+
+### Retries
+
+Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
+Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
+429 Rate Limit, and >=500 Internal errors will all be retried by default.
+
+You can use the `max_retries` option to configure or disable this:
+
+```ruby
+# Configure the default for all requests:
+client = GeminiAI::Client.new(max_retries: 0) # default is 2
+
+# Or, configure per-request:
+client.generate_text('Hello', max_retries: 5)
+```
+
+### Timeouts
+
+Requests time out after 60 seconds by default. You can configure this with a `timeout` option:
+
+```ruby
+# Configure the default for all requests:
+client = GeminiAI::Client.new(timeout: 20) # 20 seconds (default is 60)
+
+# Override per-request:
+client.generate_text('Hello', timeout: 5)
+```
+
+On timeout, an `APIConnectionTimeoutError` is thrown.
+
+Note that requests which time out will be [retried twice by default](#retries).
+
+## Advanced Usage
+
+### Logging
+
+> [!IMPORTANT]
+> All log messages are intended for debugging only. The format and content of log messages
+> may change between releases.
+
+#### Log levels
+
+The log level can be configured via the `GEMINI_LOG_LEVEL` environment variable or client option.
+
+Available log levels, from most to least verbose:
+
+- `'debug'` - Show debug messages, info, warnings, and errors
+- `'info'` - Show info messages, warnings, and errors
+- `'warn'` - Show warnings and errors (default)
+- `'error'` - Show only errors
+- `'off'` - Disable all logging
+
+```ruby
+require 'friday_gemini_ai'
+
+client = GeminiAI::Client.new(log_level: 'debug') # Show all log messages
 ```
 
+## Frequently Asked Questions
+
+## Semantic versioning
+
+This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
+
+1. Changes that only affect static types, without breaking runtime behavior.
+2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
+3. Changes that we do not expect to impact the vast majority of users in practice.
+
+We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
+
+We are keen for your feedback; please open an [issue](https://github.com/bniladridas/friday_gemini_ai/issues) with questions, bugs, or suggestions.
+
+## Requirements
+
+Ruby 3.0 or later is supported.
+
+The following runtimes are supported:
+
+- Ruby 3.0+
+- JRuby (compatible versions)
+- TruffleRuby (compatible versions)
+
+Note that Windows support is limited; Linux and macOS are recommended.
+
+## Migration Guide
+
+Gemini 1.5 models have been deprecated.
+Use:
+
+* `:pro` → `gemini-2.5-pro`
+* `:flash` → `gemini-2.5-flash`
+
+Legacy options (`:flash_2_0`, `:flash_lite`) remain supported for backward compatibility.
+
+## Environment Variables
+
+```bash
+# Required
+GEMINI_API_KEY=your_api_key_here
+
+# Optional
+GEMINI_LOG_LEVEL=debug # debug | info | warn | error
+```
+
+### CLI Shortcuts
+
 ```bash
 ./bin/gemini test
-./bin/gemini generate "Your prompt
+./bin/gemini generate "Your prompt"
 ./bin/gemini chat
 ```
 
-##
+## GitHub Actions Integration
 
-
-|-----------|----------|-------------|
-| `:pro` | `gemini-2.5-pro` | Latest and most capable model |
-| `:flash` | `gemini-2.5-flash` | Fast and efficient latest model |
-| `:flash_2_0` | `gemini-2.0-flash` | Previous generation fast model |
-| `:flash_lite` | `gemini-2.0-flash-lite` | Lightweight model |
-| `:pro_1_5` | `gemini-1.5-pro` | Gemini 1.5 Pro model |
-| `:flash_1_5` | `gemini-1.5-flash` | Gemini 1.5 Flash model |
-| `:flash_8b` | `gemini-1.5-flash-8b` | Compact 8B parameter model |
+Friday Gemini AI includes a built-in GitHub Actions workflow for automated PR reviews via **HarperBot**, powered by Gemini AI.
 
-**
-- Use `:pro` for complex reasoning and analysis
-- Use `:flash` for fast, general-purpose tasks
-- Use `:flash_2_0` for compatibility with older workflows
-- Use `:flash_lite` for simple, lightweight tasks
+💡 **Install the [HarperBot GitHub App](https://github.com/apps/harper-new-line)** for automated PR reviews across repositories.
 
-
+### HarperBot – Automated PR Analysis
 
-
-- Generate creative content, stories, and articles
-- Create explanations and educational content
-- Write code comments and documentation
+HarperBot provides AI-driven code review and analysis directly in pull requests.
 
-**
-- Build multi-turn chat applications
-- Create interactive Q&A systems
-- Develop conversational interfaces
+**Key Capabilities:**
 
-
-
-
-
+* Configurable focus: `all`, `security`, `performance`, `quality`
+* Code quality, documentation, and test coverage analysis
+* Security & performance issue detection
+* Inline review comments with actionable suggestions
+* Clean, minimal, and structured feedback output
 
-
-- Test ideas and prompts via CLI
-- Prototype AI-powered features
-- Generate content with custom parameters
+### Setup
 
-
+**Workflow Mode (default)**
 
-
-- Support for Gemini 2.5, 2.0, and 1.5 model families
-- Configurable parameters (temperature, tokens, top-p, top-k)
-- Error handling and API key security
-- CLI interface and .env integration
+1. Add repository secrets:
 
-
+   * `GEMINI_API_KEY`
+   * `GITHUB_TOKEN` (auto-provided by GitHub)
+2. Configure `.github/workflows/harperbot.yml`
+3. Optional: tune behavior via `harperbot/config.yaml`
+
+**Webhook Mode (Recommended)**
+
+* Deploy to Vercel (production branch)
+* Install the [HarperBot GitHub App](https://github.com/apps/harper-new-line) and grant it access to your repositories
+* Set environment variables in Vercel:
+  - `GEMINI_API_KEY`: Your Google Gemini API key
+  - `HARPER_BOT_APP_ID`: App ID from your GitHub App settings
+  - `HARPER_BOT_PRIVATE_KEY`: Private key content (paste the entire .pem file)
+  - `WEBHOOK_SECRET`: Random secret string for webhook verification
+  - `VERCEL_AUTOMATION_BYPASS_SECRET`: Automatically generated by Vercel for deployment protection bypass (managed in Vercel dashboard)
+* Configure webhook URL in GitHub App settings:
+  - Use the Vercel deployment URL (e.g., `https://your-project.vercel.app/webhook`)
+  - Append the bypass token as a query parameter (managed in Vercel dashboard, never commit to code)
+* Webhooks will handle PR events automatically (opened, reopened, synchronize)
+* Preferred for scalability and centralized management
+
+**Security Note:** The bypass token for Vercel deployment protection should be stored securely in Vercel's environment variables, not exposed in public documentation or code repositories.
 
-
-
-
-
-**
-
-
-
-
-
-
-
-
-
-
-
-
-
+### Workflow Highlights
+
+* **Pull Requests:** triggered on open, update, or reopen
+* **Push to main:** runs Gemini CLI verification
+* **Concurrency control:** cancels redundant runs for efficiency
+
+Required permissions:
+
+```yaml
+permissions:
+  contents: read
+  pull-requests: write
+  issues: write
+  statuses: write
+```
+
+## Local Development & Testing
+
+```bash
+bundle exec rake test # Run tests
+bundle exec rake rubocop # Optional lint check
+gem build *.gemspec # Verify build
+```
+
+### Test Workflows Locally
+
+Using [act](https://github.com/nektos/act):
+
+```bash
+brew install act
+act -j test --container-architecture linux/amd64
+```
 
 ## Examples
 
-
-- `examples/advanced.rb` - Configuration and error handling
+### Text Generation
 
-
+```ruby
+client = GeminiAI::Client.new
+puts client.generate_text('Write a haiku about Ruby')
+```
+
+### Image Analysis
+
+```ruby
+image_data = Base64.strict_encode64(File.binread('path/to/image.jpg'))
+puts client.generate_image_text(image_data, 'Describe this image')
+```
+
+### Chat
+
+```ruby
+messages = [
+  { role: 'user', content: 'Hello!' },
+  { role: 'model', content: 'Hi there!' },
+  { role: 'user', content: 'Tell me about Ruby.' }
+]
+puts client.chat(messages, system_instruction: 'Be helpful and concise.')
+```
+
+## Conventional Commits
+
+Consistent commit messages are enforced via a local Git hook.
+
+```bash
+cp scripts/commit-msg .git/hooks/
+chmod +x .git/hooks/commit-msg
+```
+
+**Types:** `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
+Example:
 
 ```bash
-
-ruby tests/unit/client.rb
-ruby tests/integration/api.rb
+git commit -m "feat: add user authentication"
 ```
 
+## Documentation
+
+* [Documentation](docs/index.md)
+* [Quickstart](docs/start/quickstart.md)
+* [API Reference](docs/reference/api.md)
+* [Cookbook](docs/reference/cookbook.md)
+* [Best Practices](docs/guides/practices.md)
+* [CI/CD Workflows](docs/guides/workflows.md)
+* [Changelog](docs/CHANGELOG.md)
+* [Contributing](docs/CONTRIBUTING.md)
+* [Resources](docs/guides/resources.md)
+
 ## Contributing
 
-Fork
+Fork → Branch → Commit → Pull Request.
 
 ## License
 
-MIT
+MIT – see [LICENSE](LICENSE).
+
+<div align="center">
+
+[<sup>© 2026 Friday Gemini AI • Hand-crafted for Rubyists</sup>](https://bniladridas.github.io/friday_gemini_ai/)
+
+</div>
data/lib/core/client.rb
CHANGED
|
@@ -1,37 +1,46 @@
|
|
|
1
|
+
# frozen_string_literal: true
|
|
2
|
+
# SPDX-License-Identifier: MIT
|
|
3
|
+
# Copyright (c) 2026 friday_gemini_ai
|
|
4
|
+
|
|
1
5
|
require 'httparty'
|
|
2
6
|
require 'json'
|
|
3
7
|
require 'base64'
|
|
4
8
|
require 'logger'
|
|
5
9
|
require 'dotenv/load'
|
|
6
10
|
require_relative 'errors'
|
|
11
|
+
require_relative '../utils/moderation'
|
|
7
12
|
|
|
8
13
|
module GeminiAI
|
|
9
14
|
# Core client class for Gemini AI API communication
|
|
10
15
|
class Client
|
|
11
16
|
BASE_URL = 'https://generativelanguage.googleapis.com/v1/models'
|
|
17
|
+
# Model mappings
|
|
18
|
+
# Current supported models
|
|
12
19
|
MODELS = {
|
|
13
20
|
# Gemini 2.5 models (latest)
|
|
14
21
|
pro: 'gemini-2.5-pro',
|
|
15
22
|
flash: 'gemini-2.5-flash',
|
|
16
|
-
|
|
23
|
+
|
|
17
24
|
# Gemini 2.0 models
|
|
18
25
|
flash_2_0: 'gemini-2.0-flash',
|
|
19
26
|
flash_lite: 'gemini-2.0-flash-lite',
|
|
20
|
-
|
|
27
|
+
|
|
21
28
|
# Legacy aliases for backward compatibility
|
|
22
|
-
pro_2_0: 'gemini-2.0-flash'
|
|
23
|
-
|
|
24
|
-
|
|
29
|
+
pro_2_0: 'gemini-2.0-flash'
|
|
30
|
+
}.freeze
|
|
31
|
+
|
|
32
|
+
# Deprecated models removed in this version (log warning and default to :pro)
|
|
33
|
+
DEPRECATED_MODELS = {
|
|
25
34
|
pro_1_5: 'gemini-1.5-pro',
|
|
26
35
|
flash_1_5: 'gemini-1.5-flash',
|
|
27
36
|
flash_8b: 'gemini-1.5-flash-8b'
|
|
28
|
-
}
|
|
37
|
+
}.freeze
|
|
29
38
|
|
|
30
39
|
# Configure logging
|
|
31
40
|
def self.logger
|
|
32
|
-
@logger ||= Logger.new(
|
|
41
|
+
@logger ||= Logger.new($stdout).tap do |log|
|
|
33
42
|
log.level = Logger::DEBUG
|
|
34
|
-
log.formatter = proc do |severity, datetime,
|
|
43
|
+
log.formatter = proc do |severity, datetime, _progname, msg|
|
|
35
44
|
# Mask any potential API key in logs
|
|
36
45
|
masked_msg = msg.to_s.gsub(/AIza[a-zA-Z0-9_-]{35,}/, '[REDACTED]')
|
|
37
46
|
"#{datetime}: #{severity} -- #{masked_msg}\n"
|
|
@@ -40,27 +49,26 @@ module GeminiAI
|
|
|
40
49
|
end
|
|
41
50
|
|
|
42
51
|
def initialize(api_key = nil, model: :pro)
|
|
52
|
+
puts "Initializing client with api_key: #{api_key.inspect}, model: #{model}"
|
|
43
53
|
# Prioritize passed API key, then environment variable
|
|
44
|
-
@api_key = api_key || ENV
|
|
45
|
-
|
|
54
|
+
@api_key = api_key || ENV.fetch('GEMINI_API_KEY', nil)
|
|
55
|
+
|
|
46
56
|
# Rate limiting - track last request time
|
|
47
57
|
@last_request_time = nil
|
|
48
58
|
# More conservative rate limiting in CI environments
|
|
49
|
-
@min_request_interval =
|
|
50
|
-
|
|
59
|
+
@min_request_interval = ENV['CI'] == 'true' || ENV['GITHUB_ACTIONS'] == 'true' ? 3.0 : 1.0
|
|
60
|
+
|
|
51
61
|
# Extensive logging for debugging
|
|
52
|
-
self.class.logger.debug(
|
|
62
|
+
self.class.logger.debug('Initializing Client')
|
|
53
63
|
self.class.logger.debug("API Key present: #{!@api_key.nil?}")
|
|
54
64
|
self.class.logger.debug("API Key length: #{@api_key&.length}")
|
|
55
|
-
|
|
56
|
-
# Validate API key
|
|
65
|
+
|
|
66
|
+
# Validate API key before proceeding
|
|
67
|
+
puts "About to validate API key: #{@api_key.inspect}"
|
|
57
68
|
validate_api_key!
|
|
58
|
-
|
|
59
|
-
@model =
|
|
60
|
-
|
|
61
|
-
MODELS[:pro]
|
|
62
|
-
}
|
|
63
|
-
|
|
69
|
+
|
|
70
|
+
@model = resolve_model(model)
|
|
71
|
+
|
|
64
72
|
self.class.logger.debug("Selected model: #{@model}")
|
|
65
73
|
end
|
|
66
74
|
|
|
@@ -72,23 +80,34 @@ module GeminiAI
|
|
|
72
80
|
generationConfig: build_generation_config(options)
|
|
73
81
|
}
|
|
74
82
|
|
|
75
|
-
|
|
83
|
+
# Add safety settings if provided
|
|
84
|
+
if options[:safety_settings]
|
|
85
|
+
request_body[:safetySettings] = options[:safety_settings].map do |setting|
|
|
86
|
+
{
|
|
87
|
+
category: setting[:category],
|
|
88
|
+
threshold: setting[:threshold]
|
|
89
|
+
}
|
|
90
|
+
end
|
|
91
|
+
end
|
|
92
|
+
|
|
93
|
+
apply_moderation(send_request(request_body), options)
|
|
76
94
|
end
|
|
77
95
|
|
|
78
96
|
def generate_image_text(image_base64, prompt, options = {})
|
|
79
|
-
raise Error,
|
|
80
|
-
|
|
97
|
+
raise Error, 'Image is required' if image_base64.nil? || image_base64.empty?
|
|
98
|
+
|
|
81
99
|
request_body = {
|
|
82
100
|
contents: [
|
|
83
101
|
{ parts: [
|
|
84
102
|
{ inline_data: { mime_type: 'image/jpeg', data: image_base64 } },
|
|
85
103
|
{ text: prompt }
|
|
86
|
-
]}
|
|
104
|
+
] }
|
|
87
105
|
],
|
|
88
106
|
generationConfig: build_generation_config(options)
|
|
89
107
|
}
|
|
90
108
|
|
|
91
|
-
|
|
109
|
+
# Use the pro model for image-to-text tasks
|
|
110
|
+
apply_moderation(send_request(request_body, model: :pro), options)
|
|
92
111
|
end
|
|
93
112
|
|
|
94
113
|
def chat(messages, options = {})
|
|
@@ -97,27 +116,63 @@ module GeminiAI
|
|
|
97
116
|
generationConfig: build_generation_config(options)
|
|
98
117
|
}
|
|
99
118
|
|
|
100
|
-
|
|
119
|
+
# Add system instruction if provided
|
|
120
|
+
if options[:system_instruction]
|
|
121
|
+
request_body[:systemInstruction] = {
|
|
122
|
+
parts: [
|
|
123
|
+
{ text: options[:system_instruction] }
|
|
124
|
+
]
|
|
125
|
+
}
|
|
126
|
+
end
|
|
127
|
+
|
|
128
|
+
apply_moderation(send_request(request_body), options)
|
|
101
129
|
end
|
|
102
130
|
|
|
103
131
|
private
|
|
104
132
|
|
|
133
|
+
def apply_moderation(response, options)
|
|
134
|
+
if options[:moderate]
|
|
135
|
+
moderated, warnings = Utils::Moderation.moderate_text(response)
|
|
136
|
+
warnings.each { |w| self.class.logger.warn(w) } unless warnings.empty?
|
|
137
|
+
moderated
|
|
138
|
+
else
|
|
139
|
+
response
|
|
140
|
+
end
|
|
141
|
+
end
|
|
142
|
+
|
|
143
|
+
def resolve_model(model)
|
|
144
|
+
if DEPRECATED_MODELS.key?(model)
|
|
145
|
+
self.class.logger.warn("Model #{model} (#{DEPRECATED_MODELS[model]}) is deprecated and has been removed. " \
|
|
146
|
+
'Defaulting to :pro (gemini-2.5-pro). Please update your code to use supported models.')
|
|
147
|
+
MODELS[:pro]
|
|
148
|
+
else
|
|
149
|
+
MODELS.fetch(model) do
|
|
150
|
+
self.class.logger.warn("Invalid model: #{model}, defaulting to pro")
|
|
151
|
+
MODELS[:pro]
|
|
152
|
+
end
|
|
153
|
+
end
|
|
154
|
+
end
|
|
155
|
+
|
|
105
156
|
def validate_api_key!
|
|
106
|
-
|
|
107
|
-
|
|
108
|
-
|
|
157
|
+
puts "Validating API key: #{@api_key.inspect}"
|
|
158
|
+
if @api_key.nil? || @api_key.to_s.strip.empty?
|
|
159
|
+
puts 'API key is nil or empty'
|
|
160
|
+
self.class.logger.error('API key is missing')
|
|
161
|
+
raise Error, 'API key is required. Set GEMINI_API_KEY environment variable or pass key directly.'
|
|
109
162
|
end
|
|
110
163
|
|
|
111
164
|
# Optional: Add basic API key format validation
|
|
112
165
|
unless valid_api_key_format?(@api_key)
|
|
113
|
-
|
|
114
|
-
|
|
166
|
+
puts 'API key format is invalid'
|
|
167
|
+
self.class.logger.error('Invalid API key format')
|
|
168
|
+
raise Error, 'Invalid API key format. Please check your key.'
|
|
115
169
|
end
|
|
116
170
|
|
|
117
171
|
# Optional: Check key length and complexity
|
|
118
|
-
|
|
119
|
-
|
|
120
|
-
|
|
172
|
+
return unless @api_key.length < 40
|
|
173
|
+
|
|
174
|
+
puts 'API key is too short'
|
|
175
|
+
self.class.logger.warn('Potentially weak API key detected')
|
|
121
176
|
end
|
|
122
177
|
|
|
123
178
|
def valid_api_key_format?(key)
|
|
@@ -127,14 +182,14 @@ module GeminiAI
|
|
|
127
182
|
|
|
128
183
|
def validate_prompt!(prompt)
|
|
129
184
|
if prompt.nil? || prompt.strip.empty?
|
|
130
|
-
self.class.logger.error(
|
|
131
|
-
raise Error,
|
|
185
|
+
self.class.logger.error('Empty prompt provided')
|
|
186
|
+
raise Error, 'Prompt cannot be empty'
|
|
132
187
|
end
|
|
133
188
|
|
|
134
|
-
|
|
135
|
-
|
|
136
|
-
|
|
137
|
-
|
|
189
|
+
return unless prompt.length > 8192
|
|
190
|
+
|
|
191
|
+
self.class.logger.error('Prompt exceeds maximum length')
|
|
192
|
+
raise Error, 'Prompt too long (max 8192 tokens)'
|
|
138
193
|
end
|
|
139
194
|
|
|
140
195
|
def build_generation_config(options)
|
|
@@ -149,7 +204,7 @@ module GeminiAI
|
|
|
149
204
|
def send_request(body, model: nil, retry_count: 0)
|
|
150
205
|
# Rate limiting - ensure minimum interval between requests
|
|
151
206
|
rate_limit_delay
|
|
152
|
-
|
|
207
|
+
|
|
153
208
|
current_model = model ? MODELS.fetch(model) { MODELS[:pro] } : @model
|
|
154
209
|
url = "#{BASE_URL}/#{current_model}:generateContent?key=#{@api_key}"
|
|
155
210
|
|
|
@@ -160,9 +215,9 @@ module GeminiAI
|
|
|
160
215
|
|
|
161
216
|
begin
|
|
162
217
|
response = HTTParty.post(
|
|
163
|
-
url,
|
|
218
|
+
url,
|
|
164
219
|
body: body.to_json,
|
|
165
|
-
headers: {
|
|
220
|
+
headers: {
|
|
166
221
|
'Content-Type' => 'application/json',
|
|
167
222
|
'x-goog-api-client' => 'gemini_ai_ruby_gem/0.1.0'
|
|
168
223
|
},
|
|
@@ -183,19 +238,19 @@ module GeminiAI
|
|
|
183
238
|
case response.code
|
|
184
239
|
when 200
|
|
185
240
|
text = response.parsed_response
|
|
186
|
-
|
|
241
|
+
.dig('candidates', 0, 'content', 'parts', 0, 'text')
|
|
187
242
|
text || 'No response generated'
|
|
188
243
|
when 429
|
|
189
244
|
# Rate limit exceeded - implement exponential backoff
|
|
190
245
|
max_retries = 3
|
|
191
246
|
if retry_count < max_retries
|
|
192
|
-
wait_time = (2
|
|
247
|
+
wait_time = (2**retry_count) * 5 # 5, 10, 20 seconds
|
|
193
248
|
self.class.logger.warn("Rate limit hit (429). Retrying in #{wait_time}s (attempt #{retry_count + 1}/#{max_retries})")
|
|
194
249
|
sleep(wait_time)
|
|
195
|
-
|
|
250
|
+
send_request(body, model: model, retry_count: retry_count + 1)
|
|
196
251
|
else
|
|
197
252
|
self.class.logger.error("Rate limit exceeded after #{max_retries} retries")
|
|
198
|
-
raise Error,
|
|
253
|
+
raise Error, 'Rate limit exceeded. Please check your quota and billing details.'
|
|
199
254
|
end
|
|
200
255
|
else
|
|
201
256
|
error_message = response.parsed_response['error']&.dig('message') || response.body
|
|
@@ -207,7 +262,7 @@ module GeminiAI
|
|
|
207
262
|
# Rate limiting to prevent hitting API limits
|
|
208
263
|
def rate_limit_delay
|
|
209
264
|
current_time = Time.now
|
|
210
|
-
|
|
265
|
+
|
|
211
266
|
if @last_request_time
|
|
212
267
|
time_since_last = current_time - @last_request_time
|
|
213
268
|
if time_since_last < @min_request_interval
|
|
@@ -216,18 +271,18 @@ module GeminiAI
           sleep(sleep_time)
         end
       end
-
+
       @last_request_time = Time.now
     end

     # Mask API key for logging and error reporting
     def mask_api_key(key)
       return '[REDACTED]' if key.nil?
-
+
       # Keep first 4 and last 4 characters, replace middle with asterisks
       return key if key.length <= 8
-
-      "#{key[0,4]}#{'*' * (key.length - 8)}#{key[-4,4]}"
+
+      "#{key[0, 4]}#{'*' * (key.length - 8)}#{key[-4, 4]}"
     end
   end
-end
+end
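The two behavioural changes in this hunk, exponential backoff for 429s and API-key masking, can be exercised standalone. This sketch copies the expressions out of the new client.rb; the bare `wait_time` helper is illustrative (in the gem it is a local inside `send_request`):

```ruby
# Sketch of the backoff and masking logic from client.rb, standalone.

def wait_time(retry_count)
  (2**retry_count) * 5 # 5, 10, 20 seconds for attempts 0, 1, 2
end

def mask_api_key(key)
  return '[REDACTED]' if key.nil?
  # Keep first 4 and last 4 characters, replace middle with asterisks
  return key if key.length <= 8

  "#{key[0, 4]}#{'*' * (key.length - 8)}#{key[-4, 4]}"
end

puts((0..2).map { |n| wait_time(n) }.inspect) # [5, 10, 20]
puts mask_api_key('AIzaSyFakeKey12345')
```

Note the 8-character cutoff: keys short enough that masking would reveal everything are returned unchanged rather than partially starred.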
data/lib/core/errors.rb CHANGED
@@ -1,19 +1,24 @@
+# SPDX-License-Identifier: MIT
+# Copyright (c) 2026 friday_gemini_ai
+
+# frozen_string_literal: true
+
 module GeminiAI
   # Base error class for all GeminiAI related errors
   class Error < StandardError; end
-
+
   # API related errors
   class APIError < Error; end
-
+
   # Authentication errors
   class AuthenticationError < Error; end
-
+
   # Rate limit errors
   class RateLimitError < Error; end
-
+
   # Invalid request errors
   class InvalidRequestError < Error; end
-
+
   # Network/connection errors
   class NetworkError < Error; end
-end
+end
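Because every class here inherits from `GeminiAI::Error`, callers can rescue broadly or narrowly. A minimal sketch, reproducing the classes verbatim so it runs without the gem installed:

```ruby
# Sketch: the error hierarchy from errors.rb, reproduced standalone.
module GeminiAI
  class Error < StandardError; end
  class APIError < Error; end
  class AuthenticationError < Error; end
  class RateLimitError < Error; end
  class InvalidRequestError < Error; end
  class NetworkError < Error; end
end

begin
  raise GeminiAI::RateLimitError, 'Rate limit exceeded'
rescue GeminiAI::Error => e
  # One rescue on the base class catches every gem-specific error
  puts "caught #{e.class}: #{e.message}"
end
```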
data/lib/core/version.rb
CHANGED
data/lib/gemini.rb CHANGED
@@ -1,3 +1,8 @@
+# SPDX-License-Identifier: MIT
+# Copyright (c) 2026 friday_gemini_ai
+
+# frozen_string_literal: true
+
 # Main entry point for the GeminiAI gem
 require_relative 'core/version'
 require_relative 'core/errors'
@@ -7,12 +12,12 @@ require_relative 'utils/logger'

 module GeminiAI
   # Convenience method to create a new client
-  def self.new(api_key = nil,
-    Client.new(api_key,
+  def self.new(api_key = nil, model: :pro)
+    Client.new(api_key, model: model)
   end
-
+
   # Load environment variables
   def self.load_env(file_path = '.env')
     Utils::Loader.load(file_path)
   end
-end
+end
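The rewritten `self.new` forwards a `model:` keyword (defaulting to `:pro`) straight to `Client.new`. A sketch with a stub `Client`; the stub's initializer is an assumption for illustration only, the real one also validates the key and sets up HTTP state:

```ruby
# Sketch of gemini.rb's convenience constructor, with a hypothetical
# stub Client standing in for the real HTTP client.
module GeminiAI
  class Client
    attr_reader :api_key, :model

    def initialize(api_key, model: :pro)
      @api_key = api_key
      @model = model
    end
  end

  def self.new(api_key = nil, model: :pro)
    Client.new(api_key, model: model)
  end
end

client = GeminiAI.new('fake-key', model: :flash)
puts client.model # :flash
```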
data/lib/mac/README.md ADDED
@@ -0,0 +1,23 @@
+# Mac Utils
+
+This module provides macOS-specific utilities for the friday_gemini_ai gem.
+
+## Methods
+
+### `GeminiAI::MacUtils.mac?`
+
+Returns `true` if the current platform is macOS (Darwin), `false` otherwise.
+
+### `GeminiAI::MacUtils.version`
+
+Returns the macOS version string (e.g., "14.1") if running on macOS, `nil` otherwise.
+
+## Usage
+
+```ruby
+require 'mac/mac_utils'
+
+if GeminiAI::MacUtils.mac?
+  puts "Running on macOS version #{GeminiAI::MacUtils.version}"
+end
+```
data/lib/mac/mac_utils.rb ADDED
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: MIT
+# Copyright (c) 2026 friday_gemini_ai
+
+module GeminiAI
+  module MacUtils
+    def self.mac?
+      require 'rbconfig'
+      RbConfig::CONFIG['host_os'] =~ /darwin/
+    end
+
+    def self.version
+      `/usr/bin/sw_vers -productVersion`.strip if mac?
+    end
+  end
+end
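The platform check is just a regex over RbConfig's `host_os` string. This sketch runs on any OS by testing the same `/darwin/` match against sample host strings; the `darwin_host?` wrapper name is mine, not the gem's:

```ruby
require 'rbconfig'

# Sketch: the platform test from mac_utils.rb. The gem matches
# RbConfig::CONFIG['host_os'] against /darwin/ exactly as below;
# darwin_host? is an illustrative wrapper that also accepts a sample.
def darwin_host?(host_os = RbConfig::CONFIG['host_os'])
  !(host_os =~ /darwin/).nil?
end

puts darwin_host?('darwin23')  # true  - a macOS host string
puts darwin_host?('linux-gnu') # false
puts darwin_host?              # result for the current platform
```

Note that the gem's `mac?` returns the raw `=~` result (an index or `nil`), which is truthy/falsy rather than strictly boolean.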
data/lib/utils/loader.rb CHANGED
@@ -1,18 +1,23 @@
+# SPDX-License-Identifier: MIT
+# Copyright (c) 2026 friday_gemini_ai
+
+# frozen_string_literal: true
+
 module GeminiAI
   module Utils
     # Utility class for loading environment variables from .env files
     class Loader
       def self.load(file_path = '.env')
         return unless File.exist?(file_path)
-
+
         File.readlines(file_path).each do |line|
           line = line.strip
           next if line.empty? || line.start_with?('#')
-
+
           key, value = line.split('=', 2)
           ENV[key] = value if key && value
         end
       end
     end
   end
-end
+end
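Loader's parsing rules are simple: skip blank lines and `#` comments, split on the first `=` only. This sketch copies the body of `Loader.load` into a standalone `load_env_file` helper and runs it against a throwaway temp file instead of a project `.env`:

```ruby
require 'tempfile'

# Sketch: the parsing loop from Loader.load, copied out standalone.
def load_env_file(path)
  return unless File.exist?(path)

  File.readlines(path).each do |line|
    line = line.strip
    next if line.empty? || line.start_with?('#')

    key, value = line.split('=', 2)
    ENV[key] = value if key && value
  end
end

Tempfile.create('env') do |f|
  f.write("# a comment\n\nDEMO_KEY=abc=123\n")
  f.flush
  load_env_file(f.path)
end

puts ENV['DEMO_KEY'] # "abc=123" - split('=', 2) keeps later '=' signs
```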
data/lib/utils/logger.rb CHANGED
@@ -1,3 +1,8 @@
+# SPDX-License-Identifier: MIT
+# Copyright (c) 2026 friday_gemini_ai
+
+# frozen_string_literal: true
+
 require 'logger'

 module GeminiAI
@@ -5,31 +10,31 @@ module GeminiAI
   # Centralized logging utility
   class Logger
     def self.instance
-      @
+      @instance ||= ::Logger.new($stdout).tap do |log|
        log.level = ::Logger::INFO
-        log.formatter = proc do |severity, datetime,
+        log.formatter = proc do |severity, datetime, _progname, msg|
          # Mask any potential API key in logs
          masked_msg = msg.to_s.gsub(/AIza[a-zA-Z0-9_-]{35,}/, '[REDACTED]')
          "#{datetime}: #{severity} -- #{masked_msg}\n"
        end
      end
     end
-
+
     def self.debug(message)
       instance.debug(message)
     end
-
+
     def self.info(message)
       instance.info(message)
     end
-
+
     def self.warn(message)
       instance.warn(message)
     end
-
+
     def self.error(message)
       instance.error(message)
     end
   end
 end
-end
+end
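The only non-standard step in the formatter is the key-masking `gsub`. The pattern, taken directly from logger.rb, matches Google-style keys: the literal prefix `AIza` followed by at least 35 key characters, so short lookalikes pass through untouched:

```ruby
# Sketch: the masking step the formatter applies to every log message,
# using the same regex as logger.rb. The sample key is fake - it is
# merely shaped like a real one.
def mask_keys(msg)
  msg.to_s.gsub(/AIza[a-zA-Z0-9_-]{35,}/, '[REDACTED]')
end

fake_key = "AIza#{'x' * 35}"
puts mask_keys("request failed for key #{fake_key}")
puts mask_keys('AIzashort is left alone')
```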
data/lib/utils/moderation.rb ADDED
@@ -0,0 +1,33 @@
+# frozen_string_literal: true
+
+module GeminiAI
+  module Utils
+    # Content moderation utility for filtering potentially harmful or inappropriate content
+    class Moderation
+      # Patterns for detecting potentially harmful content
+      HARMFUL_PATTERNS = [
+        /\b(hack|exploit|malware|virus|trojan|ransomware)/i,
+        /\b(illegal|unlawful|criminal)/i,
+        /\b(violence|kill|harm|attack)/i,
+        /\b(drug|weapon|nuclear)/i
+      ].freeze
+
+      # Moderate text by checking for harmful patterns and redacting them
+      # Returns [moderated_text, warnings_array]
+      def self.moderate_text(text)
+        return [text, []] unless text.is_a?(String)
+
+        moderated = text.dup
+        warnings = []
+
+        HARMFUL_PATTERNS.each do |pattern|
+          if moderated.gsub!(pattern, '[REDACTED]')
+            warnings << "Detected potentially harmful pattern: #{pattern.inspect}"
+          end
+        end
+
+        [moderated, warnings]
+      end
+    end
+  end
+end
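`moderate_text` never raises: it returns a `[text, warnings]` pair, redacting matches in place. Reproduced verbatim so the behaviour can be exercised without the gem installed:

```ruby
# Sketch: Utils::Moderation from the new file, standalone.
module GeminiAI
  module Utils
    class Moderation
      HARMFUL_PATTERNS = [
        /\b(hack|exploit|malware|virus|trojan|ransomware)/i,
        /\b(illegal|unlawful|criminal)/i,
        /\b(violence|kill|harm|attack)/i,
        /\b(drug|weapon|nuclear)/i
      ].freeze

      def self.moderate_text(text)
        return [text, []] unless text.is_a?(String)

        moderated = text.dup
        warnings = []
        HARMFUL_PATTERNS.each do |pattern|
          # gsub! returns nil when nothing matched, so this records one
          # warning per pattern that actually fired
          if moderated.gsub!(pattern, '[REDACTED]')
            warnings << "Detected potentially harmful pattern: #{pattern.inspect}"
          end
        end
        [moderated, warnings]
      end
    end
  end
end

text, warnings = GeminiAI::Utils::Moderation.moderate_text('how to hack a server')
puts text            # "how to [REDACTED] a server"
puts warnings.length # 1
```

Note the `\b` word boundaries bound only the left edge of each match, so the keyword list is a blunt instrument (e.g. "harm" also fires inside "harmful"); it is a pre-filter, not a full moderation system.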
metadata CHANGED
@@ -1,71 +1,91 @@
 --- !ruby/object:Gem::Specification
 name: friday_gemini_ai
 version: !ruby/object:Gem::Version
-  version:
+  version: 1.5.0
 platform: ruby
 authors:
 - Niladri Das
-autorequire:
+autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2026-03-16 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: httparty
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0.21'
+    - - "<"
       - !ruby/object:Gem::Version
-        version: 0.
+        version: '0.25'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - ">="
       - !ruby/object:Gem::Version
-      version: 0.21
+        version: '0.21'
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '0.25'
 - !ruby/object:Gem::Dependency
-  name:
+  name: logger
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: dotenv
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '
+        version: '3.2'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '
+        version: '3.2'
 - !ruby/object:Gem::Dependency
-  name:
+  name: minitest
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '
+        version: '5.0'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '
+        version: '5.0'
 - !ruby/object:Gem::Dependency
-  name:
+  name: rake
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version:
+        version: 13.3.1
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version:
+        version: 13.3.1
 description: Provides easy text generation capabilities using Google's Gemini AI models
 email:
 - bniladridas@gmail.com
@@ -78,14 +98,19 @@ files:
 - lib/core/client.rb
 - lib/core/errors.rb
 - lib/core/version.rb
+- lib/friday_gemini_ai.rb
 - lib/gemini.rb
+- lib/mac/README.md
+- lib/mac/mac_utils.rb
 - lib/utils/loader.rb
 - lib/utils/logger.rb
+- lib/utils/moderation.rb
 homepage: https://github.com/bniladridas/friday_gemini_ai
 licenses:
 - MIT
-metadata:
-
+metadata:
+  rubygems_mfa_required: 'true'
+post_install_message:
 rdoc_options: []
 require_paths:
 - lib
@@ -100,8 +125,8 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
   version: '0'
 requirements: []
-rubygems_version: 3.
-signing_key:
+rubygems_version: 3.0.3.1
+signing_key:
 specification_version: 4
 summary: A Ruby gem for interacting with Google's Gemini AI models
 test_files: []
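The httparty dependency is now a compound range, `>= 0.21, < 0.25`, rather than a single pin. RubyGems' own `Gem::Requirement` API can confirm which versions that range admits:

```ruby
require 'rubygems' # a no-op on modern Ruby, kept for clarity

# Checking the new httparty constraint from the gemspec metadata
# (">= 0.21", "< 0.25") against candidate versions.
req = Gem::Requirement.new('>= 0.21', '< 0.25')

puts req.satisfied_by?(Gem::Version.new('0.22.0')) # true
puts req.satisfied_by?(Gem::Version.new('0.25.0')) # false - upper bound is exclusive
puts req.satisfied_by?(Gem::Version.new('0.20.0')) # false
```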