ruby_llm 0.1.0.pre37 → 0.1.0.pre39
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.github/workflows/cicd.yml +6 -0
- data/.rspec_status +38 -7
- data/README.md +2 -0
- data/lib/ruby_llm/models.json +11 -125
- data/lib/ruby_llm/providers/deepseek/capabilities.rb +2 -2
- data/lib/ruby_llm/providers/openai/capabilities.rb +1 -2
- data/lib/ruby_llm/version.rb +1 -1
- data/ruby_llm.gemspec +1 -1
- metadata +1 -16
- data/docs/.gitignore +0 -7
- data/docs/Gemfile +0 -11
- data/docs/_config.yml +0 -43
- data/docs/_data/navigation.yml +0 -25
- data/docs/guides/chat.md +0 -206
- data/docs/guides/embeddings.md +0 -325
- data/docs/guides/error-handling.md +0 -301
- data/docs/guides/getting-started.md +0 -164
- data/docs/guides/image-generation.md +0 -274
- data/docs/guides/index.md +0 -45
- data/docs/guides/rails.md +0 -401
- data/docs/guides/streaming.md +0 -242
- data/docs/guides/tools.md +0 -247
- data/docs/index.md +0 -53
- data/docs/installation.md +0 -98
data/docs/guides/getting-started.md
DELETED
@@ -1,164 +0,0 @@
---
layout: default
title: Getting Started
parent: Guides
nav_order: 1
permalink: /guides/getting-started
---

# Getting Started with RubyLLM

This guide will help you get up and running with RubyLLM, showing you the basics of chatting with AI models, generating images, and creating embeddings.

## Prerequisites

Before starting, make sure you have:

1. Installed the RubyLLM gem (see the [Installation guide]({% link installation.md %}))
2. At least one API key from a supported provider (OpenAI, Anthropic, Google, or DeepSeek)

## Basic Configuration

Let's start by setting up RubyLLM with your API keys:

```ruby
require 'ruby_llm'

RubyLLM.configure do |config|
  # Add the API keys you have available
  config.openai_api_key = ENV['OPENAI_API_KEY']
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
  config.gemini_api_key = ENV['GEMINI_API_KEY']
  config.deepseek_api_key = ENV['DEEPSEEK_API_KEY']
end
```

## Your First Chat

Let's start with a simple chat interaction:

```ruby
# Create a chat (uses the default model)
chat = RubyLLM.chat

# Ask a question
response = chat.ask "What's the capital of France?"
puts response.content
# => "The capital of France is Paris."

# Continue the conversation
response = chat.ask "What's the population of that city?"
puts response.content
# => "Paris has a population of approximately 2.1 million people..."
```

### Using a Specific Model

You can specify which model you want to use:

```ruby
# Use Claude
claude_chat = RubyLLM.chat(model: 'claude-3-5-sonnet-20241022')
claude_chat.ask "Tell me about Ruby programming language"

# Use Gemini
gemini_chat = RubyLLM.chat(model: 'gemini-2.0-flash')
gemini_chat.ask "What are the best Ruby gems for machine learning?"
```

## Exploring Available Models

RubyLLM gives you access to models from multiple providers. You can see what's available:

```ruby
# List all models
all_models = RubyLLM.models.all
puts "Total models: #{all_models.count}"

# List chat models
chat_models = RubyLLM.models.chat_models
puts "Chat models:"
chat_models.each do |model|
  puts "- #{model.id} (#{model.provider})"
end

# List embedding models
RubyLLM.models.embedding_models.each do |model|
  puts "- #{model.id} (#{model.provider})"
end

# Find info about a specific model
gpt = RubyLLM.models.find('gpt-4o-mini')
puts "Context window: #{gpt.context_window}"
puts "Max tokens: #{gpt.max_tokens}"
puts "Pricing: $#{gpt.input_price_per_million} per million input tokens"
```

## Generating Images

RubyLLM makes it easy to generate images with DALL-E:

```ruby
# Generate an image
image = RubyLLM.paint("a sunset over mountains")

# The URL where you can view/download the image
puts image.url

# How the model interpreted your prompt
puts image.revised_prompt

# Generate a larger image
large_image = RubyLLM.paint(
  "a cyberpunk city at night with neon lights",
  size: "1792x1024"
)
```

## Creating Embeddings

Embeddings are vector representations of text that can be used for semantic search, classification, and more:

```ruby
# Create an embedding for a single text
embedding = RubyLLM.embed("Ruby is a programmer's best friend")

# The vector representation
vector = embedding.vectors
puts "Vector dimension: #{vector.length}"

# Create embeddings for multiple texts
texts = ["Ruby", "Python", "JavaScript"]
embeddings = RubyLLM.embed(texts)

# Each text gets its own vector
puts "Number of vectors: #{embeddings.vectors.length}"
```
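
As a quick illustration of what these vectors are good for, here is a small semantic-search sketch. The `cosine_similarity` helper is hand-rolled for this example rather than part of RubyLLM, and the only assumption is the one shown above: `RubyLLM.embed(...).vectors` returns an array of floats for a single input.

```ruby
# Hand-rolled helper for this example - not a RubyLLM method.
def cosine_similarity(a, b)
  dot   = a.zip(b).sum { |x, y| x * y }
  norms = Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x })
  dot / norms
end

query = RubyLLM.embed("object-oriented scripting languages").vectors

candidates = {
  "Ruby was created by Matz"       => RubyLLM.embed("Ruby was created by Matz").vectors,
  "Paris is the capital of France" => RubyLLM.embed("Paris is the capital of France").vectors
}

# Rank candidate texts by similarity to the query
candidates
  .sort_by { |_text, vec| -cosine_similarity(query, vec) }
  .each { |text, vec| puts "#{format('%.3f', cosine_similarity(query, vec))}  #{text}" }
```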

## Working with Conversations

Here's how to have a multi-turn conversation:

```ruby
chat = RubyLLM.chat

# First message
chat.ask "What are the benefits of Ruby on Rails?"

# Follow-up questions
chat.ask "How does that compare to Django?"
chat.ask "Which one would you recommend for a new web project?"

# You can check all messages in the conversation
chat.messages.each do |message|
  puts "#{message.role}: #{message.content[0..100]}..."
end
```

## What's Next?

Now that you've got the basics down, you're ready to explore more advanced features:

- [Chatting with AI]({% link guides/chat.md %}) - Learn more about chat capabilities
- [Using Tools]({% link guides/tools.md %}) - Let AI use your Ruby code
- [Rails Integration]({% link guides/rails.md %}) - Persist chats in your Rails apps
data/docs/guides/image-generation.md
DELETED
@@ -1,274 +0,0 @@
---
layout: default
title: Image Generation
parent: Guides
nav_order: 6
permalink: /guides/image-generation
---

# Image Generation

RubyLLM makes it easy to generate images using AI models like DALL-E. This guide explains how to create images and customize the generation process.

## Basic Image Generation

The simplest way to generate an image is using the global `paint` method:

```ruby
# Generate an image with DALL-E
image = RubyLLM.paint("a sunset over mountains")

# The URL where you can view/download the image
puts image.url # => "https://..."

# How the model interpreted your prompt
puts image.revised_prompt # => "A breathtaking sunset painting the sky with warm..."
```

The `paint` method handles all the complexities of connecting to the right provider and processing the request.

## Choosing Models

By default, RubyLLM uses DALL-E 3, but you can specify a different model:

```ruby
# Use a specific model
image = RubyLLM.paint(
  "a cyberpunk city at night",
  model: "dall-e-3"
)

# Alternate syntax
image = RubyLLM.paint(
  "a cyberpunk city at night",
  model: "dalle-3" # with or without hyphen
)
```

You can configure the default model globally:

```ruby
RubyLLM.configure do |config|
  config.default_image_model = "dall-e-3"
end
```

## Image Sizes

You can control the size of the generated image:

```ruby
# Standard size (1024x1024)
image = RubyLLM.paint("a white siamese cat")

# Landscape (1792x1024)
landscape = RubyLLM.paint(
  "a panoramic mountain landscape",
  size: "1792x1024"
)

# Portrait (1024x1792)
portrait = RubyLLM.paint(
  "a tall redwood tree",
  size: "1024x1792"
)

# Square with custom size
square = RubyLLM.paint(
  "a perfect square mandala",
  size: "1024x1024" # standard square
)
```

Available sizes depend on the model. DALL-E 3 supports:
- `"1024x1024"` - Standard square (default)
- `"1792x1024"` - Wide landscape
- `"1024x1792"` - Tall portrait

## Working with Generated Images

The `Image` object returned by `paint` contains information about the generated image:

```ruby
image = RubyLLM.paint("a cyberpunk cityscape")

# URL to the generated image (temporary, expires after some time)
image_url = image.url

# How the model interpreted/enhanced your prompt
enhanced_prompt = image.revised_prompt

# The model used to generate the image
model_used = image.model_id
```

### Saving Images Locally

To save the generated image to a local file:

```ruby
require 'open-uri'

# Generate an image
image = RubyLLM.paint("a sunset over mountains")

# Save to a file
File.open("sunset.png", "wb") do |file|
  file.write(URI.open(image.url).read)
end
```

## Prompt Engineering for Images

Crafting effective prompts is crucial for getting the best results:

```ruby
# Basic prompt
image = RubyLLM.paint("cat")

# Detailed prompt
image = RubyLLM.paint(
  "A fluffy orange tabby cat sitting on a windowsill, " \
  "looking out at a rainy day. Soft lighting, detailed fur, " \
  "photorealistic style."
)
```

### Tips for Better Prompts

1. **Be specific** about subject, setting, lighting, style, and perspective
2. **Specify artistic style** (e.g., "oil painting", "digital art", "photorealistic")
3. **Include lighting details** ("soft morning light", "dramatic sunset")
4. **Add composition details** ("close-up", "wide angle", "overhead view")
5. **Specify mood or atmosphere** ("serene", "mysterious", "cheerful")
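
Putting those tips together, a single prompt can name the subject and setting, artistic style, lighting, composition, and mood explicitly. The prompt below is only an illustration; any descriptive text works the same way:

```ruby
# One prompt that applies the tips above: subject and setting,
# artistic style, lighting, composition, and mood.
image = RubyLLM.paint(
  "A lighthouse on a rocky coast at dawn, oil painting style, " \
  "soft golden morning light, wide-angle composition, serene atmosphere."
)

puts image.revised_prompt
```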

## Error Handling

Handle errors that may occur during image generation:

```ruby
begin
  image = RubyLLM.paint("a sunset over mountains")
  puts "Image generated: #{image.url}"
rescue RubyLLM::UnauthorizedError
  puts "Please check your API key"
rescue RubyLLM::BadRequestError => e
  puts "Invalid request: #{e.message}"
rescue RubyLLM::Error => e
  puts "Error generating image: #{e.message}"
end
```

Common errors include:
- `UnauthorizedError` - Invalid API key
- `BadRequestError` - Content policy violation
- `RateLimitError` - Too many requests
- `ServiceUnavailableError` - Service temporarily unavailable

## Content Safety

Image generation models have built-in safety filters. If your prompt violates content policies, you'll receive an error.

To avoid content policy violations:
- Avoid requesting violent, adult, or disturbing content
- Don't ask for images of real public figures
- Avoid copyrighted characters and content
- Be mindful of sensitive subject matter

## Performance Considerations

Image generation typically takes 5-15 seconds. Consider these best practices:

1. **Handle asynchronously** - Don't block your application while waiting
2. **Implement timeouts** - Set appropriate request timeouts
3. **Cache results** - Save images to your server rather than regenerating
4. **Implement retries** - Retry on temporary failures
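
For the retry point in particular, a small wrapper around `paint` can be enough. The sketch below is illustrative rather than a RubyLLM feature: `paint_with_retries` and its backoff values are made up for this example, and it retries only on the temporary errors listed under Error Handling above.

```ruby
# Illustrative retry wrapper - not a RubyLLM feature. It retries only on
# the temporary errors listed above, with a simple exponential backoff.
MAX_ATTEMPTS = 3

def paint_with_retries(prompt, **options)
  attempts = 0
  begin
    attempts += 1
    RubyLLM.paint(prompt, **options)
  rescue RubyLLM::RateLimitError, RubyLLM::ServiceUnavailableError
    raise if attempts >= MAX_ATTEMPTS

    sleep(2**attempts) # back off: 2s, then 4s
    retry
  end
end

image = paint_with_retries("a sunset over mountains", size: "1024x1024")
puts image.url
```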

## Rails Integration

In a Rails application, you might implement image generation like this:

```ruby
class ImagesController < ApplicationController
  def create
    GenerateImageJob.perform_later(
      prompt: params[:prompt],
      user_id: current_user.id
    )

    redirect_to images_path, notice: "Your image is being generated..."
  end
end

class GenerateImageJob < ApplicationJob
  queue_as :default

  def perform(prompt:, user_id:)
    user = User.find(user_id)

    begin
      image = RubyLLM.paint(prompt)

      # Download and store the image
      image_file = URI.open(image.url)

      # Create record in your database
      user.images.create!(
        prompt: prompt,
        revised_prompt: image.revised_prompt,
        file: image_file,
        model: image.model_id
      )

      # Notify user
      UserMailer.image_ready(user, prompt).deliver_later
    rescue RubyLLM::Error => e
      ErrorLogger.log(e, context: { prompt: prompt, user_id: user_id })
      UserMailer.image_failed(user, prompt, e.message).deliver_later
    end
  end
end
```

## Example Use Cases

### Product Visualization

```ruby
def visualize_product(product_name, description)
  prompt = "#{product_name}: #{description}. " \
           "Professional product photography on white background, " \
           "detailed, commercial quality."

  RubyLLM.paint(prompt, size: "1024x1024")
end

image = visualize_product(
  "Ergonomic Office Chair",
  "Modern mesh back office chair with adjustable armrests and lumbar support"
)
```

### Art Generation for Content

```ruby
def generate_article_header(title, style)
  prompt = "Header image for an article titled '#{title}'. " \
           "Style: #{style}. Wide format, suitable for a blog header."

  RubyLLM.paint(prompt, size: "1792x1024")
end

image = generate_article_header(
  "The Future of Renewable Energy",
  "Minimalist digital illustration with clean lines and a blue-green color palette"
)
```

## Next Steps

Now that you understand image generation, you might want to explore:

- [Embeddings]({% link guides/embeddings.md %}) for vector representations
- [Chat with Images]({% link guides/chat.md %}#working-with-images) to analyze images with AI
- [Error Handling]({% link guides/error-handling.md %}) for robust applications
data/docs/guides/index.md
DELETED
@@ -1,45 +0,0 @@
---
layout: default
title: Guides
nav_order: 3
has_children: true
permalink: /guides/
---

# RubyLLM Guides

This section contains detailed guides to help you make the most of RubyLLM. Each guide focuses on a specific aspect of the library and provides practical examples and best practices.

## Available Guides

### [Getting Started]({% link guides/getting-started.md %})
Learn the basics of RubyLLM and get up and running quickly with simple examples.

### [Chat]({% link guides/chat.md %})
Explore the chat interface, which is the primary way to interact with AI models through RubyLLM.

### [Tools]({% link guides/tools.md %})
Learn how to extend AI capabilities by creating tools that let models call your Ruby code.

### [Streaming]({% link guides/streaming.md %})
Understand how to use streaming responses for real-time interactions.

### [Rails Integration]({% link guides/rails.md %})
See how to integrate RubyLLM with Rails applications, including ActiveRecord persistence.

### [Image Generation]({% link guides/image-generation.md %})
Learn how to generate images using DALL-E and other providers.

### [Embeddings]({% link guides/embeddings.md %})
Explore how to create vector embeddings for semantic search and other applications.

### [Error Handling]({% link guides/error-handling.md %})
Master the techniques for robust error handling in AI applications.

## Getting Help

If you can't find what you're looking for in these guides, consider:

1. Checking the [API Documentation]() for detailed information about specific classes and methods
2. Looking at the [GitHub repository](https://github.com/yourusername/ruby_llm) for examples and the latest updates
3. Filing an issue on GitHub if you find a bug or have a feature request