ruby_llm 0.1.0.pre38 → 0.1.0.pre39

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,301 +0,0 @@
- ---
- layout: default
- title: Error Handling
- parent: Guides
- nav_order: 8
- permalink: /guides/error-handling
- ---
-
- # Error Handling
-
- Proper error handling is crucial when working with AI services. RubyLLM provides a comprehensive error handling system that helps you build robust applications.
-
- ## Error Hierarchy
-
- RubyLLM uses a structured error hierarchy:
-
- ```ruby
- RubyLLM::Error                   # Base error class
- RubyLLM::BadRequestError         # Invalid request parameters (400)
- RubyLLM::UnauthorizedError       # API key issues (401)
- RubyLLM::PaymentRequiredError    # Billing issues (402)
- RubyLLM::RateLimitError          # Rate limit exceeded (429)
- RubyLLM::ServerError             # Provider server error (500)
- RubyLLM::ServiceUnavailableError # Service unavailable (503)
- RubyLLM::ModelNotFoundError      # Invalid model ID
- RubyLLM::InvalidRoleError        # Invalid message role
- ```
-
- ## Basic Error Handling
-
- Wrap your AI interactions in `begin/rescue` blocks:
-
- ```ruby
- begin
-   chat = RubyLLM.chat
-   response = chat.ask "What's the capital of France?"
-   puts response.content
- rescue RubyLLM::Error => e
-   puts "AI interaction failed: #{e.message}"
- end
- ```
-
- ## Handling Specific Errors
-
- Rescue specific error types for more precise handling, listing the most specific classes first and the base class last:
-
- ```ruby
- begin
-   chat = RubyLLM.chat
-   response = chat.ask "Generate a detailed analysis"
- rescue RubyLLM::UnauthorizedError
-   puts "Please check your API credentials"
- rescue RubyLLM::PaymentRequiredError
-   puts "Payment required - please check your account balance"
- rescue RubyLLM::RateLimitError
-   puts "Rate limit exceeded - please try again later"
- rescue RubyLLM::ServiceUnavailableError
-   puts "Service temporarily unavailable - please try again later"
- rescue RubyLLM::BadRequestError => e
-   puts "Bad request: #{e.message}"
- rescue RubyLLM::Error => e
-   puts "Other error: #{e.message}"
- end
- ```
-
- ## API Response Details
-
- Each `RubyLLM::Error` carries the original HTTP response, allowing for detailed error inspection:
-
- ```ruby
- begin
-   chat = RubyLLM.chat
-   chat.ask "Some question"
- rescue RubyLLM::Error => e
-   puts "Error: #{e.message}"
-   puts "Status: #{e.response.status}"
-   puts "Body: #{e.response.body}"
- end
- ```
-
- ## Error Handling with Streaming
-
- When streaming, an error can be raised partway through the stream; rescue it the same way:
-
- ```ruby
- begin
-   chat = RubyLLM.chat
-   chat.ask "Generate a long response" do |chunk|
-     print chunk.content
-   end
- rescue RubyLLM::Error => e
-   puts "\nStreaming error: #{e.message}"
- end
- ```
-
- ## Handling Tool Errors
-
- When using tools, errors can be handled within the tool or in the calling code:
-
- ```ruby
- # Error handling within tools
- class Calculator < RubyLLM::Tool
-   description "Performs calculations"
-
-   param :expression,
-     type: :string,
-     desc: "Math expression to evaluate"
-
-   def execute(expression:)
-     # NOTE: eval is unsafe with untrusted input; use a proper
-     # expression parser in production code
-     eval(expression).to_s
-   rescue StandardError => e
-     # Return the error as structured data the model can read
-     { error: "Calculation error: #{e.message}" }
-   end
- end
-
- # Error handling when using tools
- begin
-   chat = RubyLLM.chat.with_tool(Calculator)
-   chat.ask "What's 1/0?"
- rescue RubyLLM::Error => e
-   puts "Error using tools: #{e.message}"
- end
- ```
-
- ## Automatic Retries
-
- RubyLLM automatically retries on certain transient errors:
-
- ```ruby
- # Configure retry behavior
- RubyLLM.configure do |config|
-   config.max_retries = 5 # Maximum number of retries
- end
- ```
-
- The following errors trigger automatic retries (your rescue blocks only fire once these retries are exhausted; see the sketch after this list):
- - Network timeouts
- - Connection failures
- - Rate limit errors (429)
- - Server errors (500, 502, 503, 504)
-
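- A minimal sketch of how the retry configuration and your rescue blocks interact (the retry count here is illustrative):
-
- ```ruby
- RubyLLM.configure { |config| config.max_retries = 3 }
-
- begin
-   RubyLLM.chat.ask "Summarize this document"
- rescue RubyLLM::RateLimitError => e
-   # Raised only after the automatic retries have been used up
-   puts "Still rate limited after retries: #{e.message}"
- end
- ```
-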
- ## Provider-Specific Errors
-
- Each provider may return slightly different error messages. RubyLLM normalizes these into standard error types, but the original error details are preserved:
-
- ```ruby
- begin
-   chat = RubyLLM.chat
-   chat.ask "Some question"
- rescue RubyLLM::Error => e
-   if e.response.body.to_s.include?("organization_quota_exceeded")
-     puts "Your organization's quota has been exceeded"
-   else
-     puts "Error: #{e.message}"
-   end
- end
- ```
-
- ## Error Handling in Rails
-
- When using RubyLLM in a Rails application, you can handle errors at different levels:
-
- ### Controller Level
-
- ```ruby
- class ChatController < ApplicationController
-   rescue_from RubyLLM::Error, with: :handle_ai_error
-
-   def create
-     @chat = Chat.create!(chat_params)
-     @chat.ask(params[:message])
-     redirect_to @chat
-   end
-
-   private
-
-   def handle_ai_error(exception)
-     flash[:error] = "AI service error: #{exception.message}"
-     redirect_to chats_path
-   end
- end
- ```
-
- ### Background Job Level
-
- ```ruby
- class AiChatJob < ApplicationJob
-   retry_on RubyLLM::RateLimitError, RubyLLM::ServiceUnavailableError,
-            wait: :exponentially_longer, attempts: 5
-
-   discard_on RubyLLM::UnauthorizedError, RubyLLM::BadRequestError
-
-   def perform(chat_id, message)
-     chat = Chat.find(chat_id)
-     chat.ask(message)
-   rescue RubyLLM::Error => e
-     # Log the error and notify the user
-     ErrorNotifier.notify(chat.user, "AI chat error: #{e.message}")
-   end
- end
- ```
-
- ## Monitoring Errors
-
- For production applications, monitor AI service errors:
-
- ```ruby
- # Custom error handler
- module AiErrorMonitoring
-   def self.track_error(error, context = {})
-     # Record the error in your monitoring system
-     Sentry.capture_exception(error, extra: context)
-
-     # Log details
-     Rails.logger.error "[AI Error] #{error.class}: #{error.message}"
-     Rails.logger.error "Context: #{context.inspect}"
-
-     # Return or re-raise as needed
-     error
-   end
- end
-
- # Usage
- begin
-   chat.ask "Some question"
- rescue RubyLLM::Error => e
-   AiErrorMonitoring.track_error(e, {
-     model: chat.model.id,
-     tokens: chat.messages.sum { |m| m.input_tokens.to_i }
-   })
-
-   # Show an appropriate message to the user
-   flash[:error] = "Sorry, we encountered an issue with our AI service"
- end
- ```
-
- ## Graceful Degradation
-
- For critical applications, implement fallback strategies:
-
- ```ruby
- def get_ai_response(question, fallback_message = nil)
-   chat = RubyLLM.chat
-   chat.ask(question).content
- rescue RubyLLM::Error => e
-   Rails.logger.error "AI error: #{e.message}"
-
-   # Fall back to an alternative model
-   begin
-     fallback_chat = RubyLLM.chat(model: 'gpt-3.5-turbo')
-     fallback_chat.ask(question).content
-   rescue RubyLLM::Error => e2
-     Rails.logger.error "Fallback AI error: #{e2.message}"
-     fallback_message || "Sorry, our AI service is currently unavailable"
-   end
- end
- ```
-
- ## Best Practices
-
- 1. **Always wrap AI calls in error handling** - Don't assume AI services will always be available
- 2. **Implement timeouts** - Configure appropriate request timeouts (see the sketch after this list)
- 3. **Use background jobs** - Process AI requests asynchronously when possible
- 4. **Set up monitoring** - Track error rates and response times
- 5. **Have fallback content** - Prepare fallback responses for when AI services fail
- 6. **Gracefully degrade** - Implement multiple fallback strategies
- 7. **Communicate to users** - Provide clear error messages when AI services are unavailable
-
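- A minimal timeout configuration sketch; `request_timeout` is an assumed setting here, so check the configuration options your RubyLLM version actually exposes:
-
- ```ruby
- RubyLLM.configure do |config|
-   config.request_timeout = 30 # seconds; assumed option, verify before use
-   config.max_retries = 3
- end
- ```
-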
- ## Error Recovery
-
- When dealing with errors, consider recovery strategies:
-
- ```ruby
- MAX_RETRIES = 3
-
- def ask_with_recovery(chat, question, retries = 0)
-   chat.ask(question)
- rescue RubyLLM::RateLimitError, RubyLLM::ServiceUnavailableError => e
-   if retries < MAX_RETRIES
-     # Exponential backoff
-     sleep_time = 2 ** retries
-     puts "Error: #{e.message}. Retrying in #{sleep_time} seconds..."
-     sleep sleep_time
-     ask_with_recovery(chat, question, retries + 1)
-   else
-     raise
-   end
- end
- ```
-
- ## Next Steps
-
- Now that you understand error handling in RubyLLM, you might want to explore:
-
- - [Rails Integration]({% link guides/rails.md %}) for using RubyLLM in Rails applications
- - [Tools]({% link guides/tools.md %}) for using tools with error handling
@@ -1,164 +0,0 @@
- ---
- layout: default
- title: Getting Started
- parent: Guides
- nav_order: 1
- permalink: /guides/getting-started
- ---
-
- # Getting Started with RubyLLM
-
- This guide will help you get up and running with RubyLLM, showing you the basics of chatting with AI models, generating images, and creating embeddings.
-
- ## Prerequisites
-
- Before starting, make sure you have:
-
- 1. The RubyLLM gem installed (see the [Installation guide]({% link installation.md %}))
- 2. At least one API key from a supported provider (OpenAI, Anthropic, Google, or DeepSeek)
-
- ## Basic Configuration
-
- Let's start by setting up RubyLLM with your API keys:
-
- ```ruby
- require 'ruby_llm'
-
- RubyLLM.configure do |config|
-   # Add the API keys you have available
-   config.openai_api_key = ENV['OPENAI_API_KEY']
-   config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
-   config.gemini_api_key = ENV['GEMINI_API_KEY']
-   config.deepseek_api_key = ENV['DEEPSEEK_API_KEY']
- end
- ```
-
- ## Your First Chat
-
- Let's start with a simple chat interaction:
-
- ```ruby
- # Create a chat (uses the default model)
- chat = RubyLLM.chat
-
- # Ask a question
- response = chat.ask "What's the capital of France?"
- puts response.content
- # => "The capital of France is Paris."
-
- # Continue the conversation
- response = chat.ask "What's the population of that city?"
- puts response.content
- # => "Paris has a population of approximately 2.1 million people..."
- ```
-
- ### Using a Specific Model
-
- You can specify which model you want to use:
-
- ```ruby
- # Use Claude
- claude_chat = RubyLLM.chat(model: 'claude-3-5-sonnet-20241022')
- claude_chat.ask "Tell me about the Ruby programming language"
-
- # Use Gemini
- gemini_chat = RubyLLM.chat(model: 'gemini-2.0-flash')
- gemini_chat.ask "What are the best Ruby gems for machine learning?"
- ```
-
- ## Exploring Available Models
-
- RubyLLM gives you access to models from multiple providers. You can see what's available:
-
- ```ruby
- # List all models
- all_models = RubyLLM.models.all
- puts "Total models: #{all_models.count}"
-
- # List chat models
- chat_models = RubyLLM.models.chat_models
- puts "Chat models:"
- chat_models.each do |model|
-   puts "- #{model.id} (#{model.provider})"
- end
-
- # List embedding models
- RubyLLM.models.embedding_models.each do |model|
-   puts "- #{model.id} (#{model.provider})"
- end
-
- # Find info about a specific model
- gpt = RubyLLM.models.find('gpt-4o-mini')
- puts "Context window: #{gpt.context_window}"
- puts "Max tokens: #{gpt.max_tokens}"
- puts "Pricing: $#{gpt.input_price_per_million} per million input tokens"
- ```
-
- ## Generating Images
-
- RubyLLM makes it easy to generate images with DALL-E:
-
- ```ruby
- # Generate an image
- image = RubyLLM.paint("a sunset over mountains")
-
- # The URL where you can view/download the image
- puts image.url
-
- # How the model interpreted your prompt
- puts image.revised_prompt
-
- # Generate a larger image
- large_image = RubyLLM.paint(
-   "a cyberpunk city at night with neon lights",
-   size: "1792x1024"
- )
- ```
-
- ## Creating Embeddings
-
- Embeddings are vector representations of text that can be used for semantic search, classification, and more:
-
- ```ruby
- # Create an embedding for a single text
- embedding = RubyLLM.embed("Ruby is a programmer's best friend")
-
- # The vector representation
- vector = embedding.vectors
- puts "Vector dimension: #{vector.length}"
-
- # Create embeddings for multiple texts
- texts = ["Ruby", "Python", "JavaScript"]
- embeddings = RubyLLM.embed(texts)
-
- # Each text gets its own vector
- puts "Number of vectors: #{embeddings.vectors.length}"
- ```
-
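- For a taste of semantic search, you can compare embeddings with cosine similarity. The `cosine_similarity` helper below is hand-rolled for illustration, not part of RubyLLM:
-
- ```ruby
- # Cosine similarity between two embedding vectors (arrays of floats)
- def cosine_similarity(a, b)
-   dot = a.zip(b).sum { |x, y| x * y }
-   dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
- end
-
- query_vector = RubyLLM.embed("web development frameworks").vectors
- documents = ["Rails is a web framework", "Paris is the capital of France"]
- scores = RubyLLM.embed(documents).vectors.map { |v| cosine_similarity(query_vector, v) }
- # The document with the higher score is the closer semantic match
- ```
-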
- ## Working with Conversations
-
- Here's how to have a multi-turn conversation:
-
- ```ruby
- chat = RubyLLM.chat
-
- # First message
- chat.ask "What are the benefits of Ruby on Rails?"
-
- # Follow-up questions
- chat.ask "How does that compare to Django?"
- chat.ask "Which one would you recommend for a new web project?"
-
- # You can check all messages in the conversation
- chat.messages.each do |message|
-   puts "#{message.role}: #{message.content[0..100]}..."
- end
- ```
-
- ## What's Next?
-
- Now that you've got the basics down, you're ready to explore more advanced features:
-
- - [Chatting with AI]({% link guides/chat.md %}) - Learn more about chat capabilities
- - [Using Tools]({% link guides/tools.md %}) - Let AI use your Ruby code
- - [Rails Integration]({% link guides/rails.md %}) - Persist chats in your Rails apps