ai-chat 0.0.8 → 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/{LICENSE.txt → LICENSE} +6 -6
- data/README.md +410 -94
- data/ai-chat.gemspec +33 -0
- data/lib/ai/chat.rb +158 -148
- data/lib/ai/response.rb +12 -0
- data/lib/ai-chat.rb +1 -11
- data/lib/ai_chat.rb +6 -0
- metadata +45 -78
- data/.config/rubocop/config.yml +0 -2
- data/.reek.yml +0 -7
- data/.ruby-version +0 -1
- data/CHANGELOG.md +0 -28
- data/CODE_OF_CONDUCT.md +0 -84
- data/Gemfile +0 -6
- data/Rakefile +0 -10
- data/ai_chat.gemspec +0 -48
- data/lib/ai/chat/version.rb +0 -7
- data/test_program/Gemfile +0 -4
- data/test_program/test_ai_chat.rb +0 -157
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a820e2eb8c832c3f7b0ba877f8eb9418d9b681ba96af0b40c6aea0c9a31c598e
+  data.tar.gz: e2705f17dda7e4ee70f91ec78d5a114e6b4d12bc7e5194b5d6652dfac61c35c7
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: '07239a1c4c435d6877995f3915a0b8f167c885e4d4b02f9b5768abc182c29727c4daa2948237c36daec9874d58eca617457794233a65be15555ba4a86b0a4e0c'
+  data.tar.gz: 4e240275ae5ed5a3bc15b0618a5090c5be37e48e4609f2c58bca6a3938159934a12875f4a2aa06dc6b6004995813c378ec507ea0a552c1bb20f2264ccad5ba29
data/{LICENSE.txt → LICENSE}
RENAMED
@@ -1,6 +1,6 @@
-
+MIT License
 
-Copyright (c)
+Copyright (c) 2024 Raghu Betina
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
@@ -9,13 +9,13 @@ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 copies of the Software, and to permit persons to whom the Software is
 furnished to do so, subject to the following conditions:
 
-The above copyright notice and this permission notice shall be included in
-
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
 
 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
data/README.md
CHANGED
@@ -1,6 +1,6 @@
-#
+# OpenAI Chat
 
-This gem provides a class called `
+This gem provides a class called `OpenAI::Chat` that is intended to make it as easy as possible to use OpenAI's cutting-edge generative AI models.
 
 ## Installation
 
@@ -9,7 +9,7 @@ This gem provides a class called `AI::Chat` that is intended to make it as easy
 Add this line to your application's Gemfile:
 
 ```ruby
-gem "
+gem "openai-chat", "< 1.0.0"
 ```
 
 And then, at a command prompt:
@@ -23,7 +23,7 @@ bundle install
 Or, install it directly with:
 
 ```
-gem install
+gem install openai-chat
 ```
 
 ## Simplest usage
@@ -31,47 +31,176 @@ gem install ai-chat
 In your Ruby program:
 
 ```ruby
-require "
+require "openai/chat"
 
-# Create an instance of
-
+# Create an instance of OpenAI::Chat
+a = OpenAI::Chat.new
 
-#
-
+# Build up your conversation by adding messages
+a.add("If the Ruby community had an official motto, what might it be?")
 
-#
-
+# See the convo so far - it's just an array of hashes!
+pp a.messages
+# => [{:role=>"user", :content=>"If the Ruby community had an official motto, what might it be?"}]
 
-#
-
-# => "Greetings, good sir or madam! How dost thou fare on this fine day? Pray, tell me how I may be of service to thee."
+# Generate the next message using AI
+a.generate! # => "Matz is nice and so we are nice" (or similar)
 
-#
-
-
-#
+# Your array now includes the assistant's response
+pp a.messages
+# => [
+#   {:role=>"user", :content=>"If the Ruby community had an official motto, what might it be?"},
+#   {:role=>"assistant", :content=>"Matz is nice and so we are nice", :response => #<OpenAI::Chat::Response id=resp_abc... model=gpt-4.1-nano tokens=12>}
+# ]
+
+# Continue the conversation
+a.add("What about Rails?")
+a.generate! # => "Convention over configuration."
+```
+
+## Understanding the Data Structure
+
+Every OpenAI chat is just an array of hashes. Each hash needs:
+- `:role`: who's speaking ("system", "user", or "assistant")
+- `:content`: what they're saying
+
+That's it! You're building something like this:
+
+```ruby
+[
+  {:role => "system", :content => "You are a helpful assistant"},
+  {:role => "user", :content => "Hello!"},
+  {:role => "assistant", :content => "Hi there! How can I help you today?", :response => #<OpenAI::Chat::Response id=resp_abc... model=gpt-4.1-nano tokens=12>}
+]
+```
+
+That last bit, under `:response`, is an object that represents the JSON that the OpenAI API sent back to us. It contains information about the number of tokens consumed, as well as a response ID that we can use later if we want to pick up the conversation at that point. More on that later.
+
+## Adding Different Types of Messages
+
+```ruby
+require "openai/chat"
+
+b = OpenAI::Chat.new
+
+# Add system instructions
+b.add("You are a helpful assistant that talks like Shakespeare.", role: "system")
+
+# Add a user message (role defaults to "user")
+b.add("If the Ruby community had an official motto, what might it be?")
+
+# Check what we've built
+pp b.messages
+# => [
+#   {:role=>"system", :content=>"You are a helpful assistant that talks like Shakespeare."},
+#   {:role=>"user", :content=>"If the Ruby community had an official motto, what might it be?"}
+# ]
+
+# Generate a response
+b.generate! # => "Methinks 'tis 'Ruby doth bring joy to all who craft with care'"
+```
+
+### Convenience Methods
+
+Instead of always specifying the role, you can use these shortcuts:
+
+```ruby
+c = OpenAI::Chat.new
+
+# These are equivalent:
+c.add("You are helpful", role: "system")
+c.system("You are helpful")
+
+# These are equivalent:
+c.add("Hello there!")
+c.user("Hello there!")
+
+# These are equivalent:
+c.add("Hi! How can I help?", role: "assistant")
+c.assistant("Hi! How can I help?")
+```
+
+## Why This Design?
+
+We use the `add` method (and its shortcuts) to build up an array because:
+
+1. **It's educational**: You can see exactly what data structure you're building
+2. **It's debuggable**: Use `pp a.messages` anytime to inspect your conversation
+3. **It's flexible**: The same pattern works when loading existing conversations:
+
+```ruby
+# In a Rails app, you might do:
+d = OpenAI::Chat.new
+d.messages = @conversation.messages # Load existing messages
+d.user("What should I do next?") # Add a new question
+d.generate! # Generate a response
 ```
 
 ## Configuration
 
-
+### Model
+
+By default, the gem uses OpenAI's `gpt-4.1-nano` model. If you want to use a different model, you can set it:
 
 ```ruby
-
+e = OpenAI::Chat.new
+e.model = "o4-mini"
 ```
 
+As of 2025-07-29, the list of chat models that you probably want to choose from are:
+
+#### Foundation models
+
+- gpt-4.1-nano
+- gpt-4.1-mini
+- gpt-4.1
+
+#### Reasoning models
+
+- o4-mini
+- o3
+
+### API key
+
 The gem by default looks for an environment variable called `OPENAI_API_KEY` and uses that if it finds it.
 
 You can specify a different environment variable name:
 
 ```ruby
-
+f = OpenAI::Chat.new(api_key_env_var: "MY_OPENAI_TOKEN")
 ```
 
 Or, you can pass an API key in directly:
 
 ```ruby
-
+g = OpenAI::Chat.new(api_key: "your-api-key-goes-here")
+```
+
+## Inspecting Your Conversation
+
+You can call `.messages` to get an array containing the conversation so far:
+
+```ruby
+h = OpenAI::Chat.new
+h.system("You are a helpful cooking assistant")
+h.user("How do I boil an egg?")
+h.generate!
+
+# See the whole conversation
+pp h.messages
+# => [
+#   {:role=>"system", :content=>"You are a helpful cooking assistant"},
+#   {:role=>"user", :content=>"How do I boil an egg?"},
+#   {:role=>"assistant", :content=>"Here's how to boil an egg..."}
+# ]
+
+# Get just the last response
+h.messages.last[:content]
+# => "Here's how to boil an egg..."
+
+# Or use the convenient shortcut
+h.last
+# => "Here's how to boil an egg..."
 ```
 
 ## Structured Output
@@ -79,129 +208,316 @@ x = AI::Chat.new(api_key: "your-api-key-goes-here")
 Get back Structured Output by setting the `schema` attribute (I suggest using [OpenAI's handy tool for generating the JSON Schema](https://platform.openai.com/docs/guides/structured-outputs)):
 
 ```ruby
-
+i = OpenAI::Chat.new
 
-
+i.system("You are an expert nutritionist. The user will describe a meal. Estimate the calories, carbs, fat, and protein.")
 
-
+# The schema should be a JSON string (use OpenAI's tool to generate: https://platform.openai.com/docs/guides/structured-outputs)
+i.schema = '{"name": "nutrition_values","strict": true,"schema": {"type": "object","properties": {"fat": {"type": "number","description": "The amount of fat in grams."},"protein": {"type": "number","description": "The amount of protein in grams."},"carbs": {"type": "number","description": "The amount of carbohydrates in grams."},"total_calories": {"type": "number","description": "The total calories calculated based on fat, protein, and carbohydrates."}},"required": ["fat","protein","carbs","total_calories"],"additionalProperties": false}}'
 
-
+i.user("1 slice of pizza")
 
-
-# => {
+response = i.generate!
+# => {:fat=>15, :protein=>12, :carbs=>35, :total_calories=>285}
+
+# The response is parsed JSON, not a string!
+response[:total_calories] # => 285
+```
+
+You can also provide the equivalent Ruby `Hash` rather than a `String` containing JSON.
+
+```ruby
+# Equivalent to assigning the String above
+i.schema = {
+  name: "nutrition_values",
+  strict: true,
+  schema: {
+    type: "object",
+    properties: {
+      fat: { type: "number", description: "The amount of fat in grams." },
+      protein: { type: "number", description: "The amount of protein in grams." },
+      carbs: { type: "number", description: "The amount of carbohydrates in grams." },
+      total_calories: { type: "number", description:
+        "The total calories calculated based on fat, protein, and carbohydrates." }
+    },
+    required: [:fat, :protein, :carbs, :total_calories],
+    additionalProperties: false
+  }
+}
 ```
 
-
+The keys can be `String`s or `Symbol`s.
+
+## Including Images
 
 You can include images in your chat messages using the `user` method with the `image` or `images` parameter:
 
 ```ruby
+j = OpenAI::Chat.new
+
 # Send a single image
-
+j.user("What's in this image?", image: "path/to/local/image.jpg")
+j.generate! # => "I can see a sunset over the ocean..."
 
 # Send multiple images
-
+j.user("Compare these images", images: ["image1.jpg", "image2.jpg"])
+j.generate! # => "The first image shows... while the second..."
+
+# Mix URLs and local files
+j.user("What's the difference?", images: [
+  "local_photo.jpg",
+  "https://example.com/remote_photo.jpg"
+])
+j.generate!
 ```
 
 The gem supports three types of image inputs:
 
-- URLs
-- File paths
-- File-like objects
+- **URLs**: Pass an image URL starting with `http://` or `https://`
+- **File paths**: Pass a string with a path to a local image file
+- **File-like objects**: Pass an object that responds to `read` (like `File.open("image.jpg")` or Rails uploaded files)
+
+## Web Search
 
-
+To give the model access to real-time information from the internet, you can enable the `web_search` feature. This uses OpenAI's built-in `web_search_preview` tool.
 
 ```ruby
-
-
-
-
-{"text" => "What is in the above image? What is in the below image?"},
-{"image" => "https://upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Elephant_Diversity.jpg/305px-Elephant_Diversity.jpg"},
-{"text" => "What are the differences between the images?"}
-]
-)
-z.assistant!
+m = OpenAI::Chat.new
+m.web_search = true
+m.user("What are the latest developments in the Ruby language?")
+m.generate! # This may use web search to find current information
 ```
 
-
+**Note:** This feature requires a model that supports the `web_search_preview` tool, such as `gpt-4o` or `gpt-4o-mini`. The gem will attempt to use a compatible model if you have `web_search` enabled.
+
+## Building Conversations Without API Calls
+
+You can manually add assistant messages without making API calls, which is useful when reconstructing a past conversation:
 
 ```ruby
-
-
-
-
-
-
-
-
-
-
+# Create a new chat instance
+k = OpenAI::Chat.new
+
+# Add previous messages
+k.system("You are a helpful assistant who provides information about planets.")
+
+k.user("Tell me about Mars.")
+k.assistant("Mars is the fourth planet from the Sun....")
+
+k.user("What's the atmosphere like?")
+k.assistant("Mars has a very thin atmosphere compared to Earth....")
+
+k.user("Could it support human life?")
+k.assistant("Mars currently can't support human life without....")
+
+# Now continue the conversation with an API-generated response
+k.user("Are there any current missions to go there?")
+response = k.generate!
+puts response
 ```
 
-
+With this, you can loop through any conversation's history (perhaps after retrieving it from your database), recreate an `OpenAI::Chat`, and then continue it.
+
+## Reasoning Models
 
-
+When using reasoning models like `o3` or `o4-mini`, you can specify a reasoning effort level to control how much reasoning the model does before producing its final response:
+
+```ruby
+l = OpenAI::Chat.new
+l.model = "o3-mini"
+l.reasoning_effort = "medium" # Can be "low", "medium", or "high"
 
-
-
+l.user("What does this error message mean? <insert error message>")
+l.generate!
 ```
 
-
+The `reasoning_effort` parameter guides the model on how many reasoning tokens to generate before creating a response to the prompt. Options are:
+- `"low"`: Favors speed and economical token usage.
+- `"medium"`: (Default) Balances speed and reasoning accuracy.
+- `"high"`: Favors more complete reasoning.
 
-
+Setting to `nil` disables the reasoning parameter.
 
-
-- TODO: Setting `.messages` will replace the conversation with the provided array.
+## Advanced: Response Details
 
-
+When you call `generate!` or `generate!`, the gem stores additional information about the API response:
 
-
+```ruby
+t = OpenAI::Chat.new
+t.user("Hello!")
+t.generate!
+
+# Each assistant message includes a response object
+pp t.messages.last
+# => {
+#   :role => "assistant",
+#   :content => "Hello! How can I help you today?",
+#   :response => #<OpenAI::Chat::Response id=resp_abc... model=gpt-4.1-nano tokens=12>
+# }
+
+# Access detailed information
+response = t.last_response
+response.id # => "resp_abc123..."
+response.model # => "gpt-4.1-nano"
+response.usage # => {:prompt_tokens=>5, :completion_tokens=>7, :total_tokens=>12}
+
+# Helper methods
+t.last_response_id # => "resp_abc123..."
+t.last_usage # => {:prompt_tokens=>5, :completion_tokens=>7, :total_tokens=>12}
+t.total_tokens # => 12
+```
 
-
-
+This information is useful for:
+
+- Debugging and monitoring token usage.
+- Understanding which model was actually used.
+- Future features like cost tracking.
+
+You can also, if you know a response ID, pick up an old conversation at that point in time:
+
+```ruby
+t = OpenAI::Chat.new
+t.user("Hello!")
+t.generate!
+old_id = t.last_response_id # => "resp_abc123..."
+
+# Some time in the future...
+
+u = OpenAI::Chat.new
+u.pick_up_from("resp_abc123...")
+u.messages # => [
+#   {:role=>"assistant", :response => #<OpenAI::Chat::Response id=resp_abc...}
+# ]
+u.user("What should we do next?")
+u.generate!
 ```
-
-
+
+Unless you've stored the previous messages somewhere yourself, this technique won't bring them back. But OpenAI remembers what they were, so that you can at least continue the conversation. (If you're using a reasoning model, this technique also preserves all of the model's reasoning.)
+
+## Setting messages directly
+
+You can use `.messages=()` to assign an `Array` of `Hashes`. Each `Hash` must have keys `:role` and `:content`, and optionally `:image` or `:images`:
+
+```ruby
+# Using the planet example with array of hashes
+p = OpenAI::Chat.new
+
+# Set all messages at once instead of calling methods sequentially
+p.messages = [
+  { role: "system", content: "You are a helpful assistant who provides information about planets." },
+  { role: "user", content: "Tell me about Mars." },
+  { role: "assistant", content: "Mars is the fourth planet from the Sun...." },
+  { role: "user", content: "What's the atmosphere like?" },
+  { role: "assistant", content: "Mars has a very thin atmosphere compared to Earth...." },
+  { role: "user", content: "Could it support human life?" },
+  { role: "assistant", content: "Mars currently can't support human life without...." }
+]
+
+# Now continue the conversation with an API-generated response
+p.user("Are there any current missions to go there?")
+response = p.generate!
+puts response
 ```
-3. Install dependencies: `bundle install`
-4. Run the test program: `ruby test_ai_chat.rb`
 
-
+You can still include images:
 
-
+```ruby
+# Create a new chat instance
+q = OpenAI::Chat.new
+
+# With images
+q.messages = [
+  { role: "system", content: "You are a helpful assistant." },
+  { role: "user", content: "What's in this image?", image: "path/to/image.jpg" },
+]
+
+# With multiple images
+q.messages = [
+  { role: "system", content: "You are a helpful assistant." },
+  { role: "user", content: "Compare these images", images: ["image1.jpg", "image2.jpg"] }
+]
+```
 
-
+## Assigning `ActiveRecord::Relation`s
+
+If your chat history is contained in an `ActiveRecord::Relation`, you can assign it directly:
 
 ```ruby
-
-
-x.reasoning_effort = "medium" # Can be "low", "medium", or "high"
+# Load from ActiveRecord
+@thread = Thread.find(42)
 
-
-
+r = OpenAI::Chat.new
+r.messages = @thread.posts.order(:created_at)
+r.user("What should we discuss next?")
+r.generate! # Creates a new post record, too
 ```
 
-
-- `"low"`: Favors speed and economical token usage
-- `"medium"`: (Default) Balances speed and reasoning accuracy
-- `"high"`: Favors more complete reasoning
+### Requirements
 
-
+In order for the above to "magically" work, there are a few requirements. Your ActiveRecord model must have:
+
+- `.role` method that returns "system", "user", or "assistant"
+- `.content` method that returns the message text
+- `.image` method (optional) for single images - can return URLs, file paths, or Active Storage attachments
+- `.images` method (optional) for multiple images
+
+### Custom Column Names
+
+If your columns have different names:
+
+```ruby
+s = OpenAI::Chat.new
+s.configure_message_attributes(
+  role: :message_type, # Your column for role
+  content: :message_body, # Your column for content
+  image: :attachment # Your column/association for images
+)
+s.messages = @conversation.messages
+```
+
+### Saving Responses with Metadata
 
-
+To preserve response metadata, add an `openai_response` column to your messages table:
 
-
-
+```ruby
+# In your migration
+add_column :messages, :openai_response, :text
 
-
+# In your model
+class Message < ApplicationRecord
+  serialize :openai_response, OpenAI::Chat::Response
+end
 
-
+# Usage
+@thread = Thread.find(42)
 
-
+t = OpenAI::Chat.new
+t.posts = @thread.messages
+t.user("Hello!")
+t.generate!
 
-The
+# The saved message will include token usage, model info, etc.
+last_message = @thread.messages.last
+last_message.openai_response.usage # => {:prompt_tokens=>10, ...}
+```
 
-##
+## Other Features Being Considered
+
+- **Session management**: Save and restore conversations by ID
+- **Streaming responses**: Real-time streaming as the AI generates its response
+- **Cost tracking**: Automatic calculation and tracking of API costs
+
+## Testing with Real API Calls
 
-
+While this gem includes specs, they use mocked API responses. To test with real API calls:
+
+1. Navigate to the test program directory: `cd demo`
+2. Create a `.env` file in the test_program directory with your API credentials:
+   ```
+   # Your OpenAI API key
+   OPENAI_API_KEY=your_openai_api_key_here
+   ```
+3. Install dependencies: `bundle install`
+4. Run the test program: `ruby demo.rb`
+
+This test program runs through all the major features of the gem, making real API calls to OpenAI.