ruby_llm 0.1.0.pre38 → 0.1.0.pre39

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,274 +0,0 @@
---
layout: default
title: Image Generation
parent: Guides
nav_order: 6
permalink: /guides/image-generation
---

# Image Generation

RubyLLM makes it easy to generate images using AI models like DALL-E. This guide explains how to create images and customize the generation process.

## Basic Image Generation

The simplest way to generate an image is with the global `paint` method:

```ruby
# Generate an image with DALL-E
image = RubyLLM.paint("a sunset over mountains")

# The URL where you can view/download the image
puts image.url # => "https://..."

# How the model interpreted your prompt
puts image.revised_prompt # => "A breathtaking sunset painting the sky with warm..."
```

The `paint` method handles the details of connecting to the right provider and processing the request.

## Choosing Models

By default, RubyLLM uses DALL-E 3, but you can specify a different model:

```ruby
# Use a specific model
image = RubyLLM.paint(
  "a cyberpunk city at night",
  model: "dall-e-3"
)

# Model aliases also resolve
image = RubyLLM.paint(
  "a cyberpunk city at night",
  model: "dalle-3" # resolves to the same model
)
```

You can configure the default model globally:

```ruby
RubyLLM.configure do |config|
  config.default_image_model = "dall-e-3"
end
```

## Image Sizes

You can control the size of the generated image:

```ruby
# Standard size (1024x1024)
image = RubyLLM.paint("a white siamese cat")

# Landscape (1792x1024)
landscape = RubyLLM.paint(
  "a panoramic mountain landscape",
  size: "1792x1024"
)

# Portrait (1024x1792)
portrait = RubyLLM.paint(
  "a tall redwood tree",
  size: "1024x1792"
)

# Square, with the size given explicitly
square = RubyLLM.paint(
  "a perfect square mandala",
  size: "1024x1024" # standard square
)
```

Available sizes depend on the model. DALL-E 3 supports:
- `"1024x1024"` - Standard square (default)
- `"1792x1024"` - Wide landscape
- `"1024x1792"` - Tall portrait

## Working with Generated Images

The `Image` object returned by `paint` contains information about the generated image:

```ruby
image = RubyLLM.paint("a cyberpunk cityscape")

# URL to the generated image (temporary; expires after some time)
image_url = image.url

# How the model interpreted/enhanced your prompt
enhanced_prompt = image.revised_prompt

# The model used to generate the image
model_used = image.model_id
```

### Saving Images Locally

To save the generated image to a local file:

```ruby
require 'open-uri'

# Generate an image
image = RubyLLM.paint("a sunset over mountains")

# Save to a file
File.open("sunset.png", "wb") do |file|
  file.write(URI.open(image.url).read)
end
```
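Since image URLs are temporary, you will often save many generated files at once. A small helper (the name and approach are ours, not part of RubyLLM) can derive a filesystem-safe filename from the prompt:

```ruby
# Turn a prompt into a filesystem-safe filename for saving generated images.
def filename_for(prompt, ext: "png")
  slug = prompt.downcase
               .gsub(/[^a-z0-9]+/, "-") # collapse anything non-alphanumeric
               .gsub(/\A-|-\z/, "")     # trim leading/trailing hyphens
  "#{slug[0, 50]}.#{ext}"               # keep filenames reasonably short
end

filename_for("A sunset over mountains!") # => "a-sunset-over-mountains.png"
```
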

## Prompt Engineering for Images

Crafting effective prompts is crucial for getting the best results:

```ruby
# Basic prompt
image = RubyLLM.paint("cat")

# Detailed prompt
image = RubyLLM.paint(
  "A fluffy orange tabby cat sitting on a windowsill, " \
  "looking out at a rainy day. Soft lighting, detailed fur, " \
  "photorealistic style."
)
```

### Tips for Better Prompts

1. **Be specific** about subject, setting, lighting, style, and perspective
2. **Specify artistic style** (e.g., "oil painting", "digital art", "photorealistic")
3. **Include lighting details** ("soft morning light", "dramatic sunset")
4. **Add composition details** ("close-up", "wide angle", "overhead view")
5. **Specify mood or atmosphere** ("serene", "mysterious", "cheerful")

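The tips above can also be applied mechanically. As a sketch (the helper is ours, not part of RubyLLM), you might assemble prompts from those five elements:

```ruby
# Build a detailed image prompt from the elements listed above.
# Omitted elements are simply left out.
def build_image_prompt(subject:, style: nil, lighting: nil, composition: nil, mood: nil)
  [subject, style, lighting, composition, mood].compact.join(", ")
end

build_image_prompt(
  subject: "a fluffy orange tabby cat on a windowsill",
  style: "photorealistic",
  lighting: "soft morning light",
  composition: "close-up",
  mood: "serene"
)
# => "a fluffy orange tabby cat on a windowsill, photorealistic, soft morning light, close-up, serene"
```

The returned string can then be passed to `RubyLLM.paint`.
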
## Error Handling

Handle errors that may occur during image generation:

```ruby
begin
  image = RubyLLM.paint("a sunset over mountains")
  puts "Image generated: #{image.url}"
rescue RubyLLM::UnauthorizedError
  puts "Please check your API key"
rescue RubyLLM::BadRequestError => e
  puts "Invalid request: #{e.message}"
rescue RubyLLM::Error => e
  puts "Error generating image: #{e.message}"
end
```

Common errors include:
- `UnauthorizedError` - Invalid API key
- `BadRequestError` - Content policy violation
- `RateLimitError` - Too many requests
- `ServiceUnavailableError` - Service temporarily unavailable

## Content Safety

Image generation models have built-in safety filters. If your prompt violates content policies, you'll receive an error.

To avoid content policy violations:
- Avoid requesting violent, adult, or disturbing content
- Don't ask for images of real public figures
- Avoid copyrighted characters and content
- Be mindful of sensitive subject matter

## Performance Considerations

Image generation typically takes 5-15 seconds. Consider these best practices:

1. **Handle asynchronously** - Don't block your application while waiting
2. **Implement timeouts** - Set appropriate request timeouts
3. **Cache results** - Save images to your server rather than regenerating
4. **Implement retries** - Retry on temporary failures

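Practice 4 can be handled with a small generic helper. This is a sketch of our own, not a RubyLLM API; `with_retries` and its parameters are illustrative:

```ruby
# Retry a block with exponential backoff when a transient error is raised.
def with_retries(max_attempts: 3, base_delay: 1, retry_on: StandardError)
  attempts = 0
  begin
    yield
  rescue *Array(retry_on)
    attempts += 1
    raise if attempts >= max_attempts
    sleep(base_delay * (2**(attempts - 1))) # back off: 1s, 2s, 4s, ...
    retry
  end
end
```

You might then wrap a call as `with_retries(retry_on: RubyLLM::RateLimitError) { RubyLLM.paint(prompt) }`, letting non-transient errors such as `BadRequestError` fail fast.
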
## Rails Integration

In a Rails application, you might implement image generation like this:

```ruby
class ImagesController < ApplicationController
  def create
    GenerateImageJob.perform_later(
      prompt: params[:prompt],
      user_id: current_user.id
    )

    redirect_to images_path, notice: "Your image is being generated..."
  end
end

require 'open-uri' # URI.open below needs the open-uri stdlib

class GenerateImageJob < ApplicationJob
  queue_as :default

  def perform(prompt:, user_id:)
    user = User.find(user_id)

    begin
      image = RubyLLM.paint(prompt)

      # Download and store the image
      image_file = URI.open(image.url)

      # Create record in your database
      user.images.create!(
        prompt: prompt,
        revised_prompt: image.revised_prompt,
        file: image_file,
        model: image.model_id
      )

      # Notify user
      UserMailer.image_ready(user, prompt).deliver_later
    rescue RubyLLM::Error => e
      ErrorLogger.log(e, context: { prompt: prompt, user_id: user_id })
      UserMailer.image_failed(user, prompt, e.message).deliver_later
    end
  end
end
```

## Example Use Cases

### Product Visualization

```ruby
def visualize_product(product_name, description)
  prompt = "#{product_name}: #{description}. " \
           "Professional product photography on white background, " \
           "detailed, commercial quality."

  RubyLLM.paint(prompt, size: "1024x1024")
end

image = visualize_product(
  "Ergonomic Office Chair",
  "Modern mesh back office chair with adjustable armrests and lumbar support"
)
```

### Art Generation for Content

```ruby
def generate_article_header(title, style)
  prompt = "Header image for an article titled '#{title}'. " \
           "Style: #{style}. Wide format, suitable for a blog header."

  RubyLLM.paint(prompt, size: "1792x1024")
end

image = generate_article_header(
  "The Future of Renewable Energy",
  "Minimalist digital illustration with clean lines and a blue-green color palette"
)
```

## Next Steps

Now that you understand image generation, you might want to explore:

- [Embeddings]({% link guides/embeddings.md %}) for vector representations
- [Chat with Images]({% link guides/chat.md %}#working-with-images) to analyze images with AI
- [Error Handling]({% link guides/error-handling.md %}) for robust applications
data/docs/guides/index.md DELETED
@@ -1,45 +0,0 @@
---
layout: default
title: Guides
nav_order: 3
has_children: true
permalink: /guides/
---

# RubyLLM Guides

This section contains detailed guides to help you make the most of RubyLLM. Each guide focuses on a specific aspect of the library and provides practical examples and best practices.

## Available Guides

### [Getting Started]({% link guides/getting-started.md %})
Learn the basics of RubyLLM and get up and running quickly with simple examples.

### [Chat]({% link guides/chat.md %})
Explore the chat interface, which is the primary way to interact with AI models through RubyLLM.

### [Tools]({% link guides/tools.md %})
Learn how to extend AI capabilities by creating tools that let models call your Ruby code.

### [Streaming]({% link guides/streaming.md %})
Understand how to use streaming responses for real-time interactions.

### [Rails Integration]({% link guides/rails.md %})
See how to integrate RubyLLM with Rails applications, including ActiveRecord persistence.

### [Image Generation]({% link guides/image-generation.md %})
Learn how to generate images using DALL-E and other providers.

### [Embeddings]({% link guides/embeddings.md %})
Explore how to create vector embeddings for semantic search and other applications.

### [Error Handling]({% link guides/error-handling.md %})
Master the techniques for robust error handling in AI applications.

## Getting Help

If you can't find what you're looking for in these guides, consider:

1. Checking the [API Documentation]() for detailed information about specific classes and methods
2. Looking at the [GitHub repository](https://github.com/yourusername/ruby_llm) for examples and the latest updates
3. Filing an issue on GitHub if you find a bug or have a feature request
data/docs/guides/rails.md DELETED
@@ -1,401 +0,0 @@
---
layout: default
title: Rails Integration
parent: Guides
nav_order: 5
permalink: /guides/rails
---

# Rails Integration

RubyLLM provides seamless integration with Rails through ActiveRecord models. This allows you to easily persist chats, messages, and tool calls in your database.

## Setup

### 1. Create Migrations

First, create the necessary tables in your database:

```ruby
# db/migrate/YYYYMMDDHHMMSS_create_chats.rb
class CreateChats < ActiveRecord::Migration[8.0]
  def change
    create_table :chats do |t|
      t.string :model_id
      t.timestamps
    end
  end
end

# db/migrate/YYYYMMDDHHMMSS_create_messages.rb
class CreateMessages < ActiveRecord::Migration[8.0]
  def change
    create_table :messages do |t|
      t.references :chat, null: false, foreign_key: true
      t.string :role
      t.text :content
      t.string :model_id
      t.integer :input_tokens
      t.integer :output_tokens
      t.references :tool_call
      t.timestamps
    end
  end
end

# db/migrate/YYYYMMDDHHMMSS_create_tool_calls.rb
class CreateToolCalls < ActiveRecord::Migration[8.0]
  def change
    create_table :tool_calls do |t|
      t.references :message, null: false, foreign_key: true
      t.string :tool_call_id, null: false
      t.string :name, null: false
      t.jsonb :arguments, default: {}
      t.timestamps
    end

    add_index :tool_calls, :tool_call_id
  end
end
```

Run the migrations:

```bash
rails db:migrate
```

### 2. Set Up Models

Create the model classes:

```ruby
# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat
end

# app/models/message.rb
class Message < ApplicationRecord
  acts_as_message
end

# app/models/tool_call.rb
class ToolCall < ApplicationRecord
  acts_as_tool_call
end
```

### 3. Configure RubyLLM

In an initializer (e.g., `config/initializers/ruby_llm.rb`):

```ruby
RubyLLM.configure do |config|
  config.openai_api_key = ENV['OPENAI_API_KEY']
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
  config.gemini_api_key = ENV['GEMINI_API_KEY']
  config.deepseek_api_key = ENV['DEEPSEEK_API_KEY']
end
```

## Basic Usage

Once your models are set up, you can use them like any other Rails model:

```ruby
# Create a new chat
chat = Chat.create!(model_id: 'gpt-4o-mini')

# Ask a question
chat.ask "What's the capital of France?"

# The response is automatically persisted
puts chat.messages.last.content

# Continue the conversation
chat.ask "Tell me more about that city"

# All messages are stored in the database
chat.messages.order(:created_at).each do |message|
  puts "#{message.role}: #{message.content}"
end
```

## Streaming Responses

You can stream responses while still persisting the final result:

```ruby
chat = Chat.create!(model_id: 'gpt-4o-mini')

chat.ask "Write a short poem about Ruby" do |chunk|
  # Stream content to the user
  ActionCable.server.broadcast "chat_#{chat.id}", { content: chunk.content }
end

# The complete message is saved in the database
puts chat.messages.last.content
```

## Using with Hotwire

RubyLLM's Rails integration works seamlessly with Hotwire for real-time updates:

```ruby
# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat

  # Add broadcast capabilities
  broadcasts_to ->(chat) { "chat_#{chat.id}" }
end
```

In your controller:

```ruby
# app/controllers/chats_controller.rb
class ChatsController < ApplicationController
  def show
    @chat = Chat.find(params[:id])
  end

  def ask
    @chat = Chat.find(params[:id])

    # Use a background job to avoid blocking
    ChatJob.perform_later(@chat.id, params[:message])

    # Let the user know we're working on it
    respond_to do |format|
      format.turbo_stream
      format.html { redirect_to @chat }
    end
  end
end
```

Create a background job:

```ruby
# app/jobs/chat_job.rb
class ChatJob < ApplicationJob
  queue_as :default

  def perform(chat_id, message)
    chat = Chat.find(chat_id)

    # Start with a "typing" indicator
    Turbo::StreamsChannel.broadcast_append_to(
      chat,
      target: "messages",
      partial: "messages/typing"
    )

    first_chunk = true

    chat.ask(message) do |chunk|
      # Remove the typing indicator once the first chunk arrives
      if first_chunk
        Turbo::StreamsChannel.broadcast_remove_to(
          chat,
          target: "typing"
        )
        first_chunk = false
      end

      # Update the streaming message
      Turbo::StreamsChannel.broadcast_replace_to(
        chat,
        target: "assistant_message_#{chat.messages.last.id}",
        partial: "messages/message",
        locals: { message: chat.messages.last, content: chunk.content }
      )
    end
  end
end
```

In your views:

```erb
<!-- app/views/chats/show.html.erb -->
<%= turbo_stream_from @chat %>

<div id="messages">
  <%= render @chat.messages %>
</div>

<%= form_with(url: ask_chat_path(@chat), method: :post) do |f| %>
  <%= f.text_area :message %>
  <%= f.submit "Send" %>
<% end %>
```

## Using Tools

Tools work seamlessly with the Rails integration:

```ruby
class Calculator < RubyLLM::Tool
  description "Performs arithmetic calculations"

  param :expression,
        type: :string,
        desc: "Math expression to evaluate"

  def execute(expression:)
    # NOTE: eval runs arbitrary Ruby; use a proper expression parser
    # instead before exposing this outside of examples.
    eval(expression).to_s
  rescue StandardError => e
    "Error: #{e.message}"
  end
end

# Add the tool to your chat
chat = Chat.create!(model_id: 'gpt-4o-mini')
chat.with_tool(Calculator)

# Ask a question that requires calculation
chat.ask "What's 123 * 456?"

# Tool calls are persisted
tool_call = chat.messages.find_by(role: 'assistant').tool_calls.first
puts "Tool: #{tool_call.name}"
puts "Arguments: #{tool_call.arguments}"
```

## Customizing Models

You can customize the behavior of your models:

```ruby
class Chat < ApplicationRecord
  acts_as_chat

  # Add custom behavior
  belongs_to :user
  has_many :tags

  # Add custom scopes
  scope :recent, -> { order(created_at: :desc).limit(10) }
  scope :by_model, ->(model_id) { where(model_id: model_id) }

  # Add custom methods
  def summarize
    self.ask "Please summarize our conversation so far."
  end

  def token_count
    messages.sum { |m| (m.input_tokens || 0) + (m.output_tokens || 0) }
  end
end
```

## Message Content Customization

You can customize how message content is stored or extracted:

```ruby
class Message < ApplicationRecord
  acts_as_message

  # Override content handling
  def extract_content
    # For example, compress or expand content
    JSON.parse(content) rescue content
  end
end
```
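The `JSON.parse(content) rescue content` line above returns structured data when the stored content happens to be JSON, and the raw string otherwise. The same tolerant pattern in plain Ruby, with the rescue made explicit (the helper name is ours):

```ruby
require 'json'

# Parse content as JSON when possible, otherwise return it unchanged.
def tolerant_parse(content)
  JSON.parse(content)
rescue JSON::ParserError, TypeError
  content
end

tolerant_parse('{"a": 1}')   # => {"a"=>1}
tolerant_parse("plain text") # => "plain text"
```
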

## Advanced Integration

### User Association

Associate chats with users:

```ruby
# Migration
add_reference :chats, :user, foreign_key: true

# Model
class Chat < ApplicationRecord
  acts_as_chat
  belongs_to :user
end

# Usage
user.chats.create!(model_id: 'gpt-4o-mini').ask("Hello!")
```

### Metadata and Tagging

Add metadata to chats:

```ruby
# Migration
add_column :chats, :metadata, :jsonb, default: {}

# Model
class Chat < ApplicationRecord
  acts_as_chat
end

# Usage
chat = Chat.create!(
  model_id: 'gpt-4o-mini',
  metadata: {
    purpose: 'customer_support',
    category: 'billing',
    priority: 'high'
  }
)
```

### Scoping and Filtering

Create scopes for easier querying:

```ruby
class Chat < ApplicationRecord
  acts_as_chat

  scope :using_gpt, -> { where("model_id LIKE ?", "gpt-%") }
  scope :using_claude, -> { where("model_id LIKE ?", "claude-%") }
  scope :recent, -> { order(created_at: :desc).limit(10) }
  scope :with_high_token_count, -> {
    joins(:messages)
      .group(:id)
      .having("SUM(COALESCE(messages.input_tokens, 0) + COALESCE(messages.output_tokens, 0)) > ?", 10000)
  }
end
```

The `COALESCE` calls guard against `NULL` token counts, which would otherwise drop those messages from the sum.

## Performance Considerations

For high-volume applications:

1. **Background Processing**: Use background jobs for AI requests
2. **Connection Pooling**: Ensure your database connection pool is sized appropriately
3. **Pagination**: Use pagination when showing chat histories
4. **Archiving**: Consider archiving old chats to maintain performance

```ruby
# Example background job
class AskAiJob < ApplicationJob
  queue_as :ai_requests

  def perform(chat_id, message)
    chat = Chat.find(chat_id)
    chat.ask(message)
  end
end

# Usage
AskAiJob.perform_later(chat.id, "Tell me about Ruby")
```

## Next Steps

Now that you've integrated RubyLLM with Rails, you might want to explore:

- [Using Tools]({% link guides/tools.md %}) to add capabilities to your chats
- [Streaming Responses]({% link guides/streaming.md %}) for a better user experience
- [Error Handling]({% link guides/error-handling.md %}) to handle AI service issues gracefully