rails_ai 0.1.0
- checksums.yaml +7 -0
- data/.rspec_status +96 -0
- data/AGENT_GUIDE.md +513 -0
- data/Appraisals +49 -0
- data/COMMERCIAL_LICENSE_TEMPLATE.md +92 -0
- data/FEATURES.md +204 -0
- data/LEGAL_PROTECTION_GUIDE.md +222 -0
- data/LICENSE +62 -0
- data/LICENSE_SUMMARY.md +74 -0
- data/MIT-LICENSE +62 -0
- data/PERFORMANCE.md +300 -0
- data/PROVIDERS.md +495 -0
- data/README.md +454 -0
- data/Rakefile +11 -0
- data/SPEED_OPTIMIZATIONS.md +217 -0
- data/STRUCTURE.md +139 -0
- data/USAGE_GUIDE.md +288 -0
- data/app/channels/ai_stream_channel.rb +33 -0
- data/app/components/ai/prompt_component.rb +25 -0
- data/app/controllers/concerns/ai/context_aware.rb +77 -0
- data/app/controllers/concerns/ai/streaming.rb +41 -0
- data/app/helpers/ai_helper.rb +164 -0
- data/app/jobs/ai/generate_embedding_job.rb +25 -0
- data/app/jobs/ai/generate_summary_job.rb +25 -0
- data/app/models/concerns/ai/embeddable.rb +38 -0
- data/app/views/rails_ai/dashboard/index.html.erb +51 -0
- data/config/routes.rb +19 -0
- data/lib/generators/rails_ai/install/install_generator.rb +38 -0
- data/lib/rails_ai/agents/agent_manager.rb +258 -0
- data/lib/rails_ai/agents/agent_team.rb +243 -0
- data/lib/rails_ai/agents/base_agent.rb +331 -0
- data/lib/rails_ai/agents/collaboration.rb +238 -0
- data/lib/rails_ai/agents/memory.rb +116 -0
- data/lib/rails_ai/agents/message_bus.rb +95 -0
- data/lib/rails_ai/agents/specialized_agents.rb +391 -0
- data/lib/rails_ai/agents/task_queue.rb +111 -0
- data/lib/rails_ai/cache.rb +14 -0
- data/lib/rails_ai/config.rb +40 -0
- data/lib/rails_ai/context.rb +7 -0
- data/lib/rails_ai/context_analyzer.rb +86 -0
- data/lib/rails_ai/engine.rb +48 -0
- data/lib/rails_ai/events.rb +9 -0
- data/lib/rails_ai/image_context.rb +110 -0
- data/lib/rails_ai/performance.rb +231 -0
- data/lib/rails_ai/provider.rb +8 -0
- data/lib/rails_ai/providers/anthropic_adapter.rb +256 -0
- data/lib/rails_ai/providers/base.rb +60 -0
- data/lib/rails_ai/providers/dummy_adapter.rb +29 -0
- data/lib/rails_ai/providers/gemini_adapter.rb +509 -0
- data/lib/rails_ai/providers/openai_adapter.rb +535 -0
- data/lib/rails_ai/providers/secure_anthropic_adapter.rb +206 -0
- data/lib/rails_ai/providers/secure_openai_adapter.rb +284 -0
- data/lib/rails_ai/railtie.rb +48 -0
- data/lib/rails_ai/redactor.rb +12 -0
- data/lib/rails_ai/security/api_key_manager.rb +82 -0
- data/lib/rails_ai/security/audit_logger.rb +46 -0
- data/lib/rails_ai/security/error_handler.rb +62 -0
- data/lib/rails_ai/security/input_validator.rb +176 -0
- data/lib/rails_ai/security/secure_file_handler.rb +45 -0
- data/lib/rails_ai/security/secure_http_client.rb +177 -0
- data/lib/rails_ai/security.rb +0 -0
- data/lib/rails_ai/version.rb +5 -0
- data/lib/rails_ai/window_context.rb +103 -0
- data/lib/rails_ai.rb +502 -0
- data/monitoring/ci_setup_guide.md +214 -0
- data/monitoring/enhanced_monitoring_script.rb +237 -0
- data/monitoring/google_alerts_setup.md +42 -0
- data/monitoring_log_20250921.txt +0 -0
- data/monitoring_script.rb +161 -0
- data/rails_ai.gemspec +54 -0
- data/scripts/security_scanner.rb +353 -0
- data/setup_monitoring.sh +163 -0
- data/wiki/API-Documentation.md +734 -0
- data/wiki/Architecture-Overview.md +672 -0
- data/wiki/Contributing-Guide.md +407 -0
- data/wiki/Development-Setup.md +532 -0
- data/wiki/Home.md +278 -0
- data/wiki/Installation-Guide.md +527 -0
- data/wiki/Quick-Start.md +186 -0
- data/wiki/README.md +135 -0
- data/wiki/Release-Process.md +467 -0
- metadata +385 -0
@@ -0,0 +1,672 @@
# Architecture Overview

This document provides a comprehensive overview of Rails AI's architecture, design decisions, and system components.

## 🏗️ High-Level Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                    Rails AI Architecture                    │
├─────────────────────────────────────────────────────────────┤
│                      Application Layer                      │
│     ┌─────────────┐   ┌─────────────┐   ┌─────────────┐     │
│     │ Controllers │   │   Models    │   │    Views    │     │
│     └─────────────┘   └─────────────┘   └─────────────┘     │
├─────────────────────────────────────────────────────────────┤
│                   Rails Integration Layer                   │
│     ┌─────────────┐   ┌─────────────┐   ┌─────────────┐     │
│     │ Generators  │   │ Components  │   │    Jobs     │     │
│     └─────────────┘   └─────────────┘   └─────────────┘     │
├─────────────────────────────────────────────────────────────┤
│                        Core AI Layer                        │
│     ┌─────────────┐   ┌─────────────┐   ┌─────────────┐     │
│     │   Context   │   │  Providers  │   │ Performance │     │
│     │   System    │   │   System    │   │   System    │     │
│     └─────────────┘   └─────────────┘   └─────────────┘     │
├─────────────────────────────────────────────────────────────┤
│                    External AI Services                     │
│     ┌─────────────┐   ┌─────────────┐   ┌─────────────┐     │
│     │   OpenAI    │   │  Anthropic  │   │   Gemini    │     │
│     │             │   │  (Claude)   │   │  (Google)   │     │
│     └─────────────┘   └─────────────┘   └─────────────┘     │
│     ┌─────────────┐   ┌─────────────┐   ┌─────────────┐     │
│     │   Custom    │   │    Dummy    │   │   Future    │     │
│     │  Providers  │   │  (Testing)  │   │  Providers  │     │
│     └─────────────┘   └─────────────┘   └─────────────┘     │
└─────────────────────────────────────────────────────────────┘
```

## 🧩 Core Components

### 1. Main Module (`RailsAi`)

The central entry point that provides a unified API for all AI operations.

```ruby
module RailsAi
  # Core AI operations (signatures only; bodies elided)
  def self.chat(prompt, **opts); end
  def self.generate_image(prompt, **opts); end
  def self.analyze_image(image, prompt, **opts); end

  # Context-aware operations
  def self.analyze_image_with_context(image, prompt, contexts); end
  def self.generate_with_context(prompt, contexts); end

  # Performance utilities
  def self.metrics; end
  def self.warmup!; end
  def self.clear_cache!; end
end
```

**Responsibilities:**
- Provide a unified API for all AI operations
- Handle request routing to the appropriate provider
- Implement caching and performance optimizations
- Manage configuration and initialization

### 2. Multi-Provider System

An abstract interface for different AI service providers with a unified API.

```ruby
module RailsAi::Providers
  class Base
    def chat!(messages:, model:, **opts)
      raise NotImplementedError
    end

    def generate_image!(prompt:, model:, **opts)
      raise NotImplementedError
    end

    def analyze_image!(image:, prompt:, model:, **opts)
      raise NotImplementedError
    end
  end

  class OpenAIAdapter < Base
    # OpenAI-specific implementation (GPT-4, DALL-E, Whisper)
  end

  class AnthropicAdapter < Base
    # Anthropic-specific implementation (Claude 3)
  end

  class GeminiAdapter < Base
    # Google Gemini-specific implementation
  end

  class DummyAdapter < Base
    # Testing and development implementation
  end
end
```

**Provider Capabilities Matrix:**

| Feature | OpenAI | Anthropic | Gemini | Dummy |
|---------|--------|-----------|--------|-------|
| **Text Generation** | ✅ | ✅ | ✅ | ✅ |
| **Text Streaming** | ✅ | ✅ | ✅ | ✅ |
| **Image Generation** | ✅ | ❌ | ❌ | ✅ |
| **Image Analysis** | ✅ | ✅ | ✅ | ✅ |
| **Video Generation** | ✅ | ❌ | ❌ | ✅ |
| **Audio Generation** | ✅ | ❌ | ❌ | ✅ |
| **Audio Transcription** | ✅ | ❌ | ❌ | ✅ |
| **Embeddings** | ✅ | ⚠️ | ⚠️ | ✅ |

**Responsibilities:**
- Abstract AI service differences
- Implement provider-specific logic
- Handle API communication
- Manage authentication and rate limiting
- Provide graceful fallbacks for unsupported operations
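
The "graceful fallbacks" responsibility can be sketched in plain Ruby: an adapter that does not support an operation raises `NotImplementedError`, and the caller rescues that to route the request to the next adapter. The class and method names below are illustrative stand-ins, not the gem's actual internals.

```ruby
# Illustrative sketch (not RailsAi internals): an adapter that lacks a
# capability raises NotImplementedError, and the caller falls back.
class TextOnlyAdapter
  def generate_image!(prompt:, **_opts)
    raise NotImplementedError, "this provider cannot generate images"
  end
end

class ImageCapableAdapter
  def generate_image!(prompt:, **_opts)
    "image-bytes-for:#{prompt}"
  end
end

def generate_image_with_fallback(adapters, prompt)
  adapters.each do |adapter|
    begin
      return adapter.generate_image!(prompt: prompt)
    rescue NotImplementedError
      next # try the next adapter in the chain
    end
  end
  raise "no configured provider supports image generation"
end
```

With the capability matrix above, this is what lets an Anthropic- or Gemini-first configuration still serve image generation through OpenAI.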

### 3. Context System

Intelligent context awareness for enhanced AI interactions.

```ruby
module RailsAi
  class UserContext
    # User-specific information
    attr_reader :id, :email, :role, :preferences, :created_at, :last_activity
  end

  class WindowContext
    # Application state information
    attr_reader :controller, :action, :params, :request_method, :request_path
  end

  class ImageContext
    # Image metadata and information
    attr_reader :source, :format, :dimensions, :file_size, :metadata
  end

  class ContextAnalyzer
    # Context-aware prompt building
    def build_context_aware_prompt(base_prompt, contexts)
    end
  end
end
```

**Responsibilities:**
- Capture user, application, and image context
- Build context-aware prompts
- Provide context to AI operations
- Optimize context for different use cases
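
A minimal sketch of what context-aware prompt building can look like: the base prompt is prefixed with whatever each captured context can say about itself. The `describe` protocol here is an assumption for illustration; the gem's actual logic lives in `lib/rails_ai/context_analyzer.rb`.

```ruby
# Illustrative sketch only; the real analyzer lives in
# lib/rails_ai/context_analyzer.rb. The `describe` protocol is assumed.
class SimpleContextAnalyzer
  # Prefix the base prompt with one line per context object.
  def build_context_aware_prompt(base_prompt, contexts)
    preamble = contexts.map { |ctx| "- #{ctx.describe}" }.join("\n")
    return base_prompt if preamble.empty?

    "Context:\n#{preamble}\n\nTask: #{base_prompt}"
  end
end

UserCtx = Struct.new(:role) do
  def describe
    "the requesting user has role #{role}"
  end
end
```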

### 4. Performance System

Comprehensive performance optimizations and monitoring.

```ruby
module RailsAi::Performance
  class SmartCache
    # Intelligent caching with compression
    def fetch(key, **opts, &block)
    end
  end

  class RequestDeduplicator
    # Concurrent request deduplication
    def deduplicate(key, &block)
    end
  end

  class ConnectionPool
    # HTTP connection pooling
    def with_connection(&block)
    end
  end

  class BatchProcessor
    # Batch processing for multiple operations
    def add_operation(operation)
    end
  end

  class PerformanceMonitor
    # Performance metrics and monitoring
    def measure(operation, &block)
    end
  end
end
```

**Responsibilities:**
- Implement caching strategies
- Optimize network operations
- Monitor performance metrics
- Manage resource usage
- Handle batch processing
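
Request deduplication can be sketched with a mutex-guarded in-flight table: the first caller with a given key starts the computation, and concurrent callers with the same key wait for and share its result. This is a simplified, thread-based sketch, not the gem's actual `RequestDeduplicator`.

```ruby
# Simplified deduplication sketch; not the gem's implementation.
class SimpleDeduplicator
  def initialize
    @mutex = Mutex.new
    @in_flight = {} # key => Thread computing the shared result
  end

  # Concurrent calls with the same key share one computation.
  def deduplicate(key, &block)
    worker = @mutex.synchronize do
      @in_flight[key] ||= Thread.new do
        begin
          block.call
        ensure
          @mutex.synchronize { @in_flight.delete(key) }
        end
      end
    end
    worker.value # blocks until the shared computation finishes
  end
end
```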

## 🔄 Data Flow

### 1. Basic AI Operation Flow

```
User Request → RailsAi Module → Provider Selection → AI Service → Response → Cache → User
```

**Detailed Flow:**
1. User calls `RailsAi.chat("Hello")`
2. RailsAi normalizes the input and checks the cache
3. If cached, the cached response is returned
4. If not cached, the appropriate provider is selected based on configuration
5. The provider makes an API call to the AI service
6. The response is cached and returned to the user
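
The cache-then-provider dispatch described above fits in a few lines of plain Ruby. The in-memory hash and callable provider here are stand-ins for the gem's real cache and adapter layers, shown only to make the flow concrete.

```ruby
# Stand-in sketch of the basic flow: check the cache, else call the
# provider and cache the result. Not the gem's actual dispatch code.
class TinyAiClient
  def initialize(provider, cache = {})
    @provider = provider
    @cache = cache
  end

  def chat(prompt, model: "default-model")
    key = [prompt, model] # cache key derived from the request
    @cache[key] ||= @provider.call(prompt, model)
  end
end
```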

### 2. Multi-Provider Operation Flow

```
User Request → Provider Selection → Provider A → Success? → Response
                                                    ↓ (failure)
                                  Fallback Provider → Provider B → Success? → Response
                                                                      ↓ (failure)
                                                    Error Handling → User Notification
```

**Detailed Flow:**
1. User calls `RailsAi.chat("Hello")`
2. The system selects the primary provider (e.g., OpenAI)
3. If the primary provider fails, the fallback is tried automatically (e.g., Anthropic)
4. If all providers fail, an error with a helpful message is returned
5. A successful response is cached and returned

### 3. Context-Aware Operation Flow

```
User Request → Context Capture → Context Analysis → Enhanced Prompt → Provider → AI Service → Response → Cache → User
```

**Detailed Flow:**
1. User calls `RailsAi.analyze_image_with_context(image, prompt, contexts)`
2. The context system captures user, window, and image context
3. `ContextAnalyzer` builds an enhanced prompt from the context
4. The enhanced prompt is sent to the appropriate provider
5. The provider makes an API call with the context-aware prompt
6. The response is cached and returned to the user

### 4. Streaming Operation Flow

```
User Request → RailsAi Module → Provider → AI Service → Stream → Action Cable → User (Real-time)
```

**Detailed Flow:**
1. User calls `RailsAi.stream("Long prompt")`
2. RailsAi routes the request to the provider with streaming enabled
3. The provider establishes a streaming connection
4. The AI service streams response tokens
5. Tokens are pushed to the user via Action Cable
6. The user receives real-time updates
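
Token streaming reduces to "the provider yields chunks, and each chunk is forwarded as it arrives". The sketch below simulates that with a plain block; in Rails AI the forwarding step would be an Action Cable broadcast from `app/channels/ai_stream_channel.rb`, and the fake provider is an invention for illustration.

```ruby
# Simulated streaming sketch; in Rails AI the `on_token` step would be
# an Action Cable broadcast rather than an array append.
class FakeStreamingProvider
  def stream!(prompt:, &on_token)
    # A real provider would yield tokens as the API delivers them.
    "streamed answer for: #{prompt}".split.each { |token| on_token.call(token) }
  end
end

def stream_to_client(provider, prompt)
  received = []
  provider.stream!(prompt: prompt) { |token| received << token } # broadcast point
  received
end
```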

## 🏛️ Design Patterns

### 1. Adapter Pattern

Used for provider abstraction:

```ruby
# All providers implement the same interface
class OpenAIAdapter < Base
  def chat!(messages:, model:, **opts)
    # OpenAI-specific implementation
  end
end

class AnthropicAdapter < Base
  def chat!(messages:, model:, **opts)
    # Anthropic-specific implementation
  end
end

class GeminiAdapter < Base
  def chat!(messages:, model:, **opts)
    # Gemini-specific implementation
  end
end
```

### 2. Strategy Pattern

Used for different AI operations and provider selection:

```ruby
# Different strategies for different operations
RailsAi.chat(prompt)           # Text strategy
RailsAi.generate_image(prompt) # Image strategy
RailsAi.generate_video(prompt) # Video strategy

# Provider selection strategy
def smart_provider_selection(operation_type)
  case operation_type
  when :image_generation then :openai
  when :text_generation then :gemini
  when :code_analysis then :anthropic
  end
end
```

### 3. Observer Pattern

Used for performance monitoring:

```ruby
# Performance monitoring observes all operations
RailsAi.performance_monitor.measure(:chat) do
  provider.chat!(messages: messages, model: model)
end
```

### 4. Factory Pattern

Used for provider creation:

```ruby
def self.provider
  case config.provider.to_sym
  when :openai then Providers::OpenAIAdapter.new
  when :anthropic then Providers::AnthropicAdapter.new
  when :gemini then Providers::GeminiAdapter.new
  when :dummy then Providers::DummyAdapter.new
  else Providers::DummyAdapter.new
  end
end
```

### 5. Chain of Responsibility Pattern

Used for fallback providers:

```ruby
def robust_ai_operation(prompt)
  providers = [:openai, :anthropic, :gemini]

  providers.each do |provider|
    begin
      RailsAi.configure { |c| c.provider = provider }
      return RailsAi.chat(prompt)
    rescue => e
      Rails.logger.warn("#{provider} failed: #{e.message}")
      next
    end
  end

  raise "All providers failed"
end
```

## 🔧 Configuration System

### Configuration Structure

```ruby
RailsAi::Config = Struct.new(
  :provider,                      # AI provider to use
  :default_model,                 # Default AI model
  :token_limit,                   # Token limit for requests
  :cache_ttl,                     # Cache time-to-live
  :stub_responses,                # Stub responses for testing
  :connection_pool_size,          # HTTP connection pool size
  :compression_threshold,         # Compression threshold
  :batch_size,                    # Batch processing size
  :flush_interval,                # Batch flush interval
  :enable_performance_monitoring, # Performance monitoring
  :enable_request_deduplication,  # Request deduplication
  :enable_compression,            # Response compression
  keyword_init: true
)
```

### Provider-Specific Configuration

```ruby
RailsAi.configure do |config|
  config.provider = :openai
  config.default_model = "gpt-4o-mini"
  config.cache_ttl = 1.hour
  config.enable_performance_monitoring = true
end

# Environment variables for different providers
# OPENAI_API_KEY=your_openai_key
# ANTHROPIC_API_KEY=your_anthropic_key
# GEMINI_API_KEY=your_gemini_key
```

## 🚀 Performance Architecture

### Caching Strategy

```
Request → Cache Check → Hit?  → Return Cached Response
                         ↓
                        Miss? → Provider → AI Service → Cache Response → Return
```
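
Because the configuration exposes a `cache_ttl`, cached entries expire. Here is a minimal TTL cache illustrating the hit/miss path above; the gem itself delegates to its `Cache` module and the Rails cache store, so this is only a sketch, with an injectable clock added to make expiry observable.

```ruby
# Minimal TTL cache sketch for the hit/miss flow; RailsAi actually
# delegates to its Cache module / the Rails cache store.
class TtlCache
  Entry = Struct.new(:value, :expires_at)

  def initialize(ttl_seconds, clock: -> { Time.now })
    @ttl = ttl_seconds
    @clock = clock # injectable clock makes expiry testable
    @store = {}
  end

  def fetch(key)
    entry = @store[key]
    return entry.value if entry && entry.expires_at > @clock.call # hit

    value = yield # miss: compute (e.g. call the provider)
    @store[key] = Entry.new(value, @clock.call + @ttl)
    value
  end
end
```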

### Connection Pooling

```
Request → Connection Pool → Available Connection? → Use Connection
                             ↓
                            None? → Wait/Queue → Use Connection
```
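
The wait/queue behaviour maps naturally onto Ruby's built-in `SizedQueue`: checkout pops a connection (blocking when none is free) and checkin pushes it back. A generic sketch, not the gem's `ConnectionPool`.

```ruby
# Generic checkout/checkin pool sketch (not the gem's ConnectionPool).
class TinyPool
  def initialize(size, &factory)
    @queue = SizedQueue.new(size)
    size.times { @queue.push(factory.call) }
  end

  # Pops a connection (blocking if none is free); always returns it.
  def with_connection
    conn = @queue.pop
    yield conn
  ensure
    @queue.push(conn) if conn
  end
end
```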

### Request Deduplication

```
Request → Deduplication Check → Duplicate? → Wait for Existing Request
                                 ↓
                                Unique? → Process Request → Cache Result
```

### Batch Processing

```
Multiple Requests → Batch Queue → Batch Size Reached? → Process Batch → Return Results
                                   ↓
                                  Timeout? → Process Partial Batch → Return Results
```
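
The size-triggered half of that flow fits in a few lines; the timeout-triggered flush would add a timer thread on top. Illustrative only, not the gem's `BatchProcessor`.

```ruby
# Size-triggered batch flush sketch (the timeout path would add a
# timer thread); not the gem's BatchProcessor.
class TinyBatcher
  def initialize(batch_size, &process_batch)
    @batch_size = batch_size
    @process_batch = process_batch
    @queue = []
  end

  def add_operation(op)
    @queue << op
    flush if @queue.size >= @batch_size
  end

  def flush
    return if @queue.empty?
    batch = @queue
    @queue = []
    @process_batch.call(batch) # e.g. one API call for many embeddings
  end
end
```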

## 🔒 Security Architecture

### Content Redaction

```ruby
module RailsAi::Redactor
  EMAIL = /\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b/i
  PHONE = /\+?\d[\d\s().-]{7,}\d/

  def self.call(text)
    text.to_s.gsub(EMAIL, "[email]").gsub(PHONE, "[phone]")
  end
end
```

### Parameter Sanitization

```ruby
def sanitized_params
  params.except('password', 'password_confirmation', 'token', 'secret', 'key')
end
```

### Provider-Specific Security

```ruby
# Each provider handles its own security concerns
class OpenAIAdapter < Base
  def initialize
    @api_key = ENV.fetch("OPENAI_API_KEY")
    # Additional OpenAI-specific security measures
  end
end

class AnthropicAdapter < Base
  def initialize
    @api_key = ENV.fetch("ANTHROPIC_API_KEY")
    # Additional Anthropic-specific security measures
  end
end
```

## 📊 Monitoring Architecture

### Performance Metrics

```ruby
{
  chat: {
    count: 100,
    total_duration: 5.2,
    avg_duration: 0.052,
    min_duration: 0.001,
    max_duration: 0.5,
    total_memory: 1024
  },
  generate_image: {
    count: 50,
    total_duration: 12.3,
    avg_duration: 0.246,
    min_duration: 0.1,
    max_duration: 2.0,
    total_memory: 2048
  }
}
```

### Provider-Specific Metrics

```ruby
{
  providers: {
    openai: { requests: 100, errors: 2, avg_latency: 0.5 },
    anthropic: { requests: 50, errors: 1, avg_latency: 0.7 },
    gemini: { requests: 75, errors: 0, avg_latency: 0.6 }
  }
}
```

### Event Logging

```ruby
RailsAi::Events.log!(
  kind: :image_analysis,
  name: "completed",
  payload: {
    user_id: 1,
    image_format: "png",
    provider: "gemini",
    model: "gemini-1.5-pro"
  },
  latency_ms: 1500
)
```

## 🔄 Extension Points

### Adding New Providers

1. Create a provider class inheriting from `Base`
2. Implement the required methods
3. Add it to the provider selection logic
4. Add tests and documentation
5. Update the capability matrix

```ruby
# Example: Adding a new provider
class MyCustomProvider < RailsAi::Providers::Base
  def initialize
    unless defined?(::MyCustomGem)
      raise LoadError, "my-custom gem is required for MyCustom provider"
    end
    super
  end

  def chat!(messages:, model:, **opts)
    # Custom implementation
  end
end
```

### Adding New Operations

1. Add a method to the main `RailsAi` module
2. Implement it in all providers
3. Add caching if appropriate
4. Add performance monitoring
5. Add tests and documentation
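
As a sketch of steps 1–2, here is what a hypothetical `summarize` operation could look like: a module-level entry point delegating to whichever provider is configured. `summarize` and the surrounding names are invented for illustration; it is not an operation the gem ships.

```ruby
# Hypothetical new operation, for illustration only: `summarize` is
# not an operation Rails AI ships.
module MiniAi
  class << self
    attr_accessor :provider

    # Step 1: module-level entry point
    def summarize(text, **opts)
      provider.summarize!(text: text, **opts)
    end
  end

  # Step 2: implement in each provider (one shown)
  class EchoProvider
    def summarize!(text:, max_words: 5)
      text.split.first(max_words).join(" ")
    end
  end
end
```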

### Adding New Context Types

1. Create a context class
2. Add it to `ContextAnalyzer`
3. Update the context-aware methods
4. Add tests and documentation
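
Step 1 amounts to a small value object that can render itself into a prompt fragment. `LocaleContext` below is an invented example of such a class, and `to_prompt_fragment` is an assumed protocol name; the analyzer's real interface is defined in `lib/rails_ai/context_analyzer.rb`.

```ruby
# Invented example of a new context type; the `to_prompt_fragment`
# protocol is an assumption, not the gem's actual interface.
class LocaleContext
  attr_reader :locale, :time_zone

  def initialize(locale:, time_zone:)
    @locale = locale
    @time_zone = time_zone
  end

  # Rendered into the context-aware prompt by the analyzer (step 2).
  def to_prompt_fragment
    "Respond in locale #{locale}; the user's time zone is #{time_zone}."
  end
end
```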

## 🎯 Design Principles

### 1. Simplicity

- Simple, intuitive API
- Minimal configuration required
- Clear error messages
- Comprehensive documentation

### 2. Performance

- Intelligent caching
- Connection pooling
- Request deduplication
- Performance monitoring
- Batch processing

### 3. Flexibility

- Multiple provider support
- Configurable options
- Extensible architecture
- Plugin system
- Graceful fallbacks

### 4. Reliability

- Comprehensive testing
- Error handling
- Graceful degradation
- Monitoring and alerting
- Provider fallbacks

### 5. Security

- Content redaction
- Parameter sanitization
- Secure credential handling
- Rate limiting
- Provider-specific security

## 📈 Scalability Considerations

### Horizontal Scaling

- Stateless design
- Connection pooling
- Caching strategies
- Load balancing support
- Provider distribution

### Vertical Scaling

- Memory optimization
- CPU efficiency
- I/O optimization
- Resource management
- Performance monitoring

### Performance Scaling

- Batch processing
- Async operations
- Streaming support
- Background jobs
- Provider load balancing

## 🔮 Future Architecture Considerations

### Planned Enhancements

1. **Additional Providers**
   - Azure OpenAI
   - AWS Bedrock
   - Cohere
   - Local models (Ollama)

2. **Advanced Features**
   - Multi-modal streaming
   - Real-time collaboration
   - Advanced caching strategies
   - A/B testing framework

3. **Enterprise Features**
   - Multi-tenant support
   - Advanced monitoring
   - Compliance features
   - Custom model support

### Architecture Evolution

The current architecture is designed to be:
- **Extensible** - Easy to add new providers and features
- **Maintainable** - Clear separation of concerns
- **Testable** - Comprehensive test coverage
- **Scalable** - Handles growth gracefully
- **Future-proof** - Adapts to new AI capabilities

---

This architecture provides a solid foundation for building AI-powered Rails applications with excellent performance, reliability, and maintainability across multiple AI providers. 🚀