ai-chat 0.0.8 → 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: f3f66755a7b1d08ff7fe7e502e69140e11e193dfd238a83abcd032ca83bbd37d
- data.tar.gz: 4bea8f4f0d9c08ed7f42163e6ab16198f217cf67452075c0e119a0b76144b011
+ metadata.gz: c3ac8b911bbb322ec7470c226ed15f524b520ca927ccb50eb53d1beba9579e21
+ data.tar.gz: e15f7f7c3f8f06819357b16e853eb8937eb543c8d8f5079e61a5e2ce3599448e
  SHA512:
- metadata.gz: 5b9108a61768cc64b5027bab0a918022bb4fba3641406d6ee3ef67f8bec7a0718510392240f72eb56e39a11c412660d25301436b9f625020326d86a845ab7f07
- data.tar.gz: bf82ac462f4879fc6c923d616377cd4d402b2b1fa1b02c62064f29b764e2246a669e5b57f51af262eb5e66f3130be48625a78576dbfb1fb2034057fcbbae6a07
+ metadata.gz: e78959fc4366d03d9cbc96e218f77283001af9e9a49b1753c9944040bb8de1fd7c25635cbfb40543a7abead39614cfd68a4ad9d3958907b56023651c44d395e3
+ data.tar.gz: 448d2cdc7892504079edff0028f820bbe384a440e06fc109ac8b4d131e903c196c5ca1f3ff1103615dfeb4b80c3cc6e8b8a08ddbfb5ab044d84c4eb7aa580563
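The digests above can be checked against a locally downloaded file. A minimal sketch using Ruby's standard `digest` library; the filename and expected value are placeholders, so substitute the real values from checksums.yaml:

```ruby
require "digest"

# Sketch: verify a downloaded file against a published SHA256 digest.
# The filename and expected value below are placeholders - take the
# real ones from checksums.yaml.
file = "data.tar.gz"
expected = "e15f7f7c3f8f06819357b16e853eb8937eb543c8d8f5079e61a5e2ce3599448e"

if File.exist?(file)
  actual = Digest::SHA256.file(file).hexdigest
  puts(actual == expected ? "checksum OK" : "checksum MISMATCH")
end
```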
@@ -1,6 +1,6 @@
- The MIT License (MIT)
+ MIT License

- Copyright (c) 2025 Jelani Woods
+ Copyright (c) 2024 Raghu Betina

  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
@@ -9,13 +9,13 @@ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
  copies of the Software, and to permit persons to whom the Software is
  furnished to do so, subject to the following conditions:

- The above copyright notice and this permission notice shall be included in
- all copies or substantial portions of the Software.
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.

  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
- THE SOFTWARE.
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
data/README.md CHANGED
@@ -1,6 +1,6 @@
- # AI::Chat
+ # OpenAI Chat

- This gem provides a class called `AI::Chat` that is intended to make it as easy as possible to use cutting-edge Large Language Models.
+ This gem provides a class called `OpenAI::Chat` that is intended to make it as easy as possible to use OpenAI's cutting-edge generative AI models.

  ## Installation

@@ -9,7 +9,7 @@ This gem provides a class called `AI::Chat` that is intended to make it as easy
  Add this line to your application's Gemfile:

  ```ruby
- gem "ai-chat", "< 1.0.0"
+ gem "openai-chat", "< 1.0.0"
  ```

  And then, at a command prompt:
@@ -23,7 +23,7 @@ bundle install
  Or, install it directly with:

  ```
- gem install ai-chat
+ gem install openai-chat
  ```

  ## Simplest usage
@@ -31,47 +31,176 @@ gem install ai-chat
  In your Ruby program:

  ```ruby
- require "ai-chat"
+ require "openai/chat"

- # Create an instance of AI::Chat
- x = AI::Chat.new
+ # Create an instance of OpenAI::Chat
+ a = OpenAI::Chat.new

- # Add system-level instructions
- x.system("You are a helpful assistant that speaks like Shakespeare.")
+ # Build up your conversation by adding messages
+ a.add("If the Ruby community had an official motto, what might it be?")

- # Add a user message to the chat
- x.user("Hi there!")
+ # See the convo so far - it's just an array of hashes!
+ pp a.messages
+ # => [{:role=>"user", :content=>"If the Ruby community had an official motto, what might it be?"}]

- # Get the next message from the model
- x.assistant!
- # => "Greetings, good sir or madam! How dost thou fare on this fine day? Pray, tell me how I may be of service to thee."
+ # Generate the next message using AI
+ a.generate! # => "Matz is nice and so we are nice" (or similar)

- # Rinse and repeat
- x.user("What's the best pizza in Chicago?")
- x.assistant!
- # => "Ah, the fair and bustling city of Chicago, renowned for its deep-dish delight that hath captured hearts and stomachs aplenty. Amongst the many offerings of this great city, 'tis often said that Lou Malnati's and Giordano's...."
+ # Your array now includes the assistant's response
+ pp a.messages
+ # => [
+ #   {:role=>"user", :content=>"If the Ruby community had an official motto, what might it be?"},
+ #   {:role=>"assistant", :content=>"Matz is nice and so we are nice", :response => #<OpenAI::Chat::Response id=resp_abc... model=gpt-4.1-nano tokens=12>}
+ # ]
+
+ # Continue the conversation
+ a.add("What about Rails?")
+ a.generate! # => "Convention over configuration."
+ ```
+
+ ## Understanding the Data Structure
+
+ Every OpenAI chat is just an array of hashes. Each hash needs:
+
+ - `:role`: who's speaking ("system", "user", or "assistant")
+ - `:content`: what they're saying
+
+ That's it! You're building something like this:
+
+ ```ruby
+ [
+   {:role => "system", :content => "You are a helpful assistant"},
+   {:role => "user", :content => "Hello!"},
+   {:role => "assistant", :content => "Hi there! How can I help you today?", :response => #<OpenAI::Chat::Response id=resp_abc... model=gpt-4.1-nano tokens=12>}
+ ]
+ ```
+
+ That last bit, under `:response`, is an object that represents the JSON that the OpenAI API sent back to us. It contains information about the number of tokens consumed, as well as a response ID that we can use later if we want to pick up the conversation at that point. More on that later.
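Because the conversation is plain Ruby data, it also round-trips through JSON, which is handy if you want to store a chat yourself. A small illustrative sketch using only the standard library, not part of the gem (`:response` objects would need separate handling):

```ruby
require "json"

# The conversation is just an array of hashes, so persisting it is
# ordinary serialization - no gem-specific machinery needed.
messages = [
  { role: "system", content: "You are a helpful assistant" },
  { role: "user", content: "Hello!" }
]

stored = JSON.generate(messages)                     # e.g. save to a text column
restored = JSON.parse(stored, symbolize_names: true) # load it back

restored == messages # => true
```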
+
+ ## Adding Different Types of Messages
+
+ ```ruby
+ require "openai/chat"
+
+ b = OpenAI::Chat.new
+
+ # Add system instructions
+ b.add("You are a helpful assistant that talks like Shakespeare.", role: "system")
+
+ # Add a user message (role defaults to "user")
+ b.add("If the Ruby community had an official motto, what might it be?")
+
+ # Check what we've built
+ pp b.messages
+ # => [
+ #   {:role=>"system", :content=>"You are a helpful assistant that talks like Shakespeare."},
+ #   {:role=>"user", :content=>"If the Ruby community had an official motto, what might it be?"}
+ # ]
+
+ # Generate a response
+ b.generate! # => "Methinks 'tis 'Ruby doth bring joy to all who craft with care'"
+ ```
+
+ ### Convenience Methods
+
+ Instead of always specifying the role, you can use these shortcuts:
+
+ ```ruby
+ c = OpenAI::Chat.new
+
+ # These are equivalent:
+ c.add("You are helpful", role: "system")
+ c.system("You are helpful")
+
+ # These are equivalent:
+ c.add("Hello there!")
+ c.user("Hello there!")
+
+ # These are equivalent:
+ c.add("Hi! How can I help?", role: "assistant")
+ c.assistant("Hi! How can I help?")
+ ```
+
+ ## Why This Design?
+
+ We use the `add` method (and its shortcuts) to build up an array because:
+
+ 1. **It's educational**: You can see exactly what data structure you're building
+ 2. **It's debuggable**: Use `pp a.messages` anytime to inspect your conversation
+ 3. **It's flexible**: The same pattern works when loading existing conversations:
+
+ ```ruby
+ # In a Rails app, you might do:
+ d = OpenAI::Chat.new
+ d.messages = @conversation.messages # Load existing messages
+ d.user("What should I do next?")    # Add a new question
+ d.generate!                         # Generate a response
  ```

  ## Configuration

- By default, the gem uses OpenAI's `gpt-4.1-mini` model. If you want to use a different model, you can set it:
+ ### Model
+
+ By default, the gem uses OpenAI's `gpt-4.1-nano` model. If you want to use a different model, you can set it:

  ```ruby
- x.model = "o3"
+ e = OpenAI::Chat.new
+ e.model = "o4-mini"
  ```

+ As of 2025-07-29, the chat models that you probably want to choose from are:
+
+ #### Foundation models
+
+ - gpt-4.1-nano
+ - gpt-4.1-mini
+ - gpt-4.1
+
+ #### Reasoning models
+
+ - o4-mini
+ - o3
+
+ ### API key
+
  The gem by default looks for an environment variable called `OPENAI_API_KEY` and uses that if it finds it.

  You can specify a different environment variable name:

  ```ruby
- x = AI::Chat.new(api_key_env_var: "OPENAI_TOKEN")
+ f = OpenAI::Chat.new(api_key_env_var: "MY_OPENAI_TOKEN")
  ```

  Or, you can pass an API key in directly:

  ```ruby
- x = AI::Chat.new(api_key: "your-api-key-goes-here")
+ g = OpenAI::Chat.new(api_key: "your-api-key-goes-here")
+ ```
+
+ ## Inspecting Your Conversation
+
+ You can call `.messages` to get an array containing the conversation so far:
+
+ ```ruby
+ h = OpenAI::Chat.new
+ h.system("You are a helpful cooking assistant")
+ h.user("How do I boil an egg?")
+ h.generate!
+
+ # See the whole conversation
+ pp h.messages
+ # => [
+ #   {:role=>"system", :content=>"You are a helpful cooking assistant"},
+ #   {:role=>"user", :content=>"How do I boil an egg?"},
+ #   {:role=>"assistant", :content=>"Here's how to boil an egg..."}
+ # ]
+
+ # Get just the last response
+ h.messages.last[:content]
+ # => "Here's how to boil an egg..."
+
+ # Or use the convenient shortcut
+ h.last
+ # => "Here's how to boil an egg..."
  ```

  ## Structured Output
@@ -79,129 +208,316 @@ x = AI::Chat.new(api_key: "your-api-key-goes-here")
  Get back Structured Output by setting the `schema` attribute (I suggest using [OpenAI's handy tool for generating the JSON Schema](https://platform.openai.com/docs/guides/structured-outputs)):

  ```ruby
- x = AI::Chat.new
+ i = OpenAI::Chat.new

- x.system("You are an expert nutritionist. The user will describe a meal. Estimate the calories, carbs, fat, and protein.")
+ i.system("You are an expert nutritionist. The user will describe a meal. Estimate the calories, carbs, fat, and protein.")

- x.schema = '{"name": "nutrition_values","strict": true,"schema": {"type": "object","properties": { "fat": { "type": "number", "description": "The amount of fat in grams." }, "protein": { "type": "number", "description": "The amount of protein in grams." }, "carbs": { "type": "number", "description": "The amount of carbohydrates in grams." }, "total_calories": { "type": "number", "description": "The total calories calculated based on fat, protein, and carbohydrates." }},"required": [ "fat", "protein", "carbs", "total_calories"],"additionalProperties": false}}'
+ # The schema should be a JSON string (use OpenAI's tool to generate: https://platform.openai.com/docs/guides/structured-outputs)
+ i.schema = '{"name": "nutrition_values","strict": true,"schema": {"type": "object","properties": {"fat": {"type": "number","description": "The amount of fat in grams."},"protein": {"type": "number","description": "The amount of protein in grams."},"carbs": {"type": "number","description": "The amount of carbohydrates in grams."},"total_calories": {"type": "number","description": "The total calories calculated based on fat, protein, and carbohydrates."}},"required": ["fat","protein","carbs","total_calories"],"additionalProperties": false}}'

- x.user("1 slice of pizza")
+ i.user("1 slice of pizza")

- x.assistant!
- # => {"fat"=>15, "protein"=>5, "carbs"=>50, "total_calories"=>350}
+ response = i.generate!
+ # => {:fat=>15, :protein=>12, :carbs=>35, :total_calories=>285}
+
+ # The response is parsed JSON, not a string!
+ response[:total_calories] # => 285
+ ```
+
+ You can also provide the equivalent Ruby `Hash` rather than a `String` containing JSON.
+
+ ```ruby
+ # Equivalent to assigning the String above
+ i.schema = {
+   name: "nutrition_values",
+   strict: true,
+   schema: {
+     type: "object",
+     properties: {
+       fat: { type: "number", description: "The amount of fat in grams." },
+       protein: { type: "number", description: "The amount of protein in grams." },
+       carbs: { type: "number", description: "The amount of carbohydrates in grams." },
+       total_calories: { type: "number", description:
+         "The total calories calculated based on fat, protein, and carbohydrates." }
+     },
+     required: [:fat, :protein, :carbs, :total_calories],
+     additionalProperties: false
+   }
+ }
  ```

- ## Include images
+ The keys can be `String`s or `Symbol`s.
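Since the schema is ordinary JSON, a string schema can be sanity-checked before it is assigned. An illustrative pre-flight check using the standard `json` library, not part of the gem (the abbreviated schema below mirrors the nutrition example):

```ruby
require "json"

# Illustrative pre-flight check for a structured-output schema string.
# This is an abbreviated version of the nutrition_values schema above.
schema = '{"name": "nutrition_values", "strict": true, "schema": {"type": "object", "properties": {"fat": {"type": "number"}}, "required": ["fat"], "additionalProperties": false}}'

parsed = JSON.parse(schema)
raise "schema must have a name" unless parsed["name"]
raise "schema body must be an object" unless parsed["schema"].is_a?(Hash)
# Every required key should actually be declared under properties.
raise "undeclared required key" unless (parsed["schema"]["required"] - parsed["schema"]["properties"].keys).empty?
puts "schema looks well-formed"
```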
+
+ ## Including Images

  You can include images in your chat messages using the `user` method with the `image` or `images` parameter:

  ```ruby
+ j = OpenAI::Chat.new
+
  # Send a single image
- x.user("What's in this image?", image: "path/to/local/image.jpg")
+ j.user("What's in this image?", image: "path/to/local/image.jpg")
+ j.generate! # => "I can see a sunset over the ocean..."

  # Send multiple images
- x.user("What are these images showing?", images: ["path/to/image1.jpg", "https://example.com/image2.jpg"])
+ j.user("Compare these images", images: ["image1.jpg", "image2.jpg"])
+ j.generate! # => "The first image shows... while the second..."
+
+ # Mix URLs and local files
+ j.user("What's the difference?", images: [
+   "local_photo.jpg",
+   "https://example.com/remote_photo.jpg"
+ ])
+ j.generate!
  ```

  The gem supports three types of image inputs:

- - URLs: Pass an image URL starting with `http://` or `https://`.
- - File paths: Pass a string with a path to a local image file.
- - File-like objects: Pass an object that responds to `read` (like `File.open("image.jpg")` or a Rails uploaded file).
+ - **URLs**: Pass an image URL starting with `http://` or `https://`
+ - **File paths**: Pass a string with a path to a local image file
+ - **File-like objects**: Pass an object that responds to `read` (like `File.open("image.jpg")` or Rails uploaded files)
+
+ ## Web Search

- You can send multiple images, and place them between bits of text, in a single user message:
+ To give the model access to real-time information from the internet, you can enable the `web_search` feature. This uses OpenAI's built-in `web_search_preview` tool.

  ```ruby
- z = AI::Chat.new
- z.user(
-   [
-     {"image" => "https://upload.wikimedia.org/wikipedia/commons/thumb/6/6a/Eubalaena_glacialis_with_calf.jpg/215px-Eubalaena_glacialis_with_calf.jpg"},
-     {"text" => "What is in the above image? What is in the below image?"},
-     {"image" => "https://upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Elephant_Diversity.jpg/305px-Elephant_Diversity.jpg"},
-     {"text" => "What are the differences between the images?"}
-   ]
- )
- z.assistant!
+ m = OpenAI::Chat.new
+ m.web_search = true
+ m.user("What are the latest developments in the Ruby language?")
+ m.generate! # This may use web search to find current information
  ```

- Both string and symbol keys are supported for the hash items:
+ **Note:** This feature requires a model that supports the `web_search_preview` tool, such as `gpt-4o` or `gpt-4o-mini`. The gem will attempt to use a compatible model if you have `web_search` enabled.
+
+ ## Building Conversations Without API Calls
+
+ You can manually add assistant messages without making API calls, which is useful when reconstructing a past conversation:

  ```ruby
- z = AI::Chat.new
- z.user(
-   [
-     {image: "https://upload.wikimedia.org/wikipedia/commons/thumb/6/6a/Eubalaena_glacialis_with_calf.jpg/215px-Eubalaena_glacialis_with_calf.jpg"},
-     {text: "What is in the above image? What is in the below image?"},
-     {image: "https://upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Elephant_Diversity.jpg/305px-Elephant_Diversity.jpg"},
-     {text: "What are the differences between the images?"}
-   ]
- )
- z.assistant!
+ # Create a new chat instance
+ k = OpenAI::Chat.new
+
+ # Add previous messages
+ k.system("You are a helpful assistant who provides information about planets.")
+
+ k.user("Tell me about Mars.")
+ k.assistant("Mars is the fourth planet from the Sun....")
+
+ k.user("What's the atmosphere like?")
+ k.assistant("Mars has a very thin atmosphere compared to Earth....")
+
+ k.user("Could it support human life?")
+ k.assistant("Mars currently can't support human life without....")
+
+ # Now continue the conversation with an API-generated response
+ k.user("Are there any current missions to go there?")
+ response = k.generate!
+ puts response
  ```

- ## Set assistant messages manually
+ With this, you can loop through any conversation's history (perhaps after retrieving it from your database), recreate an `OpenAI::Chat`, and then continue it.
+
+ ## Reasoning Models

- You can manually add assistant messages:
+ When using reasoning models like `o3` or `o4-mini`, you can specify a reasoning effort level to control how much reasoning the model does before producing its final response:
+
+ ```ruby
+ l = OpenAI::Chat.new
+ l.model = "o3-mini"
+ l.reasoning_effort = "medium" # Can be "low", "medium", or "high"

- ```rb
- x.assistant("Greetings, good sir or madam! How dost thou fare on this fine day? Pray, tell me how I may be of service to thee.")
+ l.user("What does this error message mean? <insert error message>")
+ l.generate!
  ```

- Useful if you are reconstructing a chat that has already happened.
+ The `reasoning_effort` parameter guides the model on how many reasoning tokens to generate before creating a response to the prompt. Options are:
+
+ - `"low"`: Favors speed and economical token usage.
+ - `"medium"`: (Default) Balances speed and reasoning accuracy.
+ - `"high"`: Favors more complete reasoning.

- ## Getting and setting messages directly
+ Setting to `nil` disables the reasoning parameter.

- - You can call `.messages` to get an array containing the conversation so far.
- - TODO: Setting `.messages` will replace the conversation with the provided array.
+ ## Advanced: Response Details

- ## Testing with Real API Calls
+ When you call `generate!`, the gem stores additional information about the API response:

- While this gem includes specs, they use mocked API responses. To test with real API calls:
+ ```ruby
+ t = OpenAI::Chat.new
+ t.user("Hello!")
+ t.generate!
+
+ # Each assistant message includes a response object
+ pp t.messages.last
+ # => {
+ #   :role => "assistant",
+ #   :content => "Hello! How can I help you today?",
+ #   :response => #<OpenAI::Chat::Response id=resp_abc... model=gpt-4.1-nano tokens=12>
+ # }
+
+ # Access detailed information
+ response = t.last_response
+ response.id # => "resp_abc123..."
+ response.model # => "gpt-4.1-nano"
+ response.usage # => {:prompt_tokens=>5, :completion_tokens=>7, :total_tokens=>12}
+
+ # Helper methods
+ t.last_response_id # => "resp_abc123..."
+ t.last_usage # => {:prompt_tokens=>5, :completion_tokens=>7, :total_tokens=>12}
+ t.total_tokens # => 12
+ ```

- 1. Navigate to the test program directory: `cd test_program`
- 2. Create a `.env` file in the test_program directory with your API credentials:
+ This information is useful for:
+
+ - Debugging and monitoring token usage.
+ - Understanding which model was actually used.
+ - Future features like cost tracking.
+
+ You can also, if you know a response ID, pick up an old conversation at that point in time:
+
+ ```ruby
+ t = OpenAI::Chat.new
+ t.user("Hello!")
+ t.generate!
+ old_id = t.last_response_id # => "resp_abc123..."
+
+ # Some time in the future...
+
+ u = OpenAI::Chat.new
+ u.pick_up_from("resp_abc123...")
+ u.messages # => [
+ #   {:role=>"assistant", :response => #<OpenAI::Chat::Response id=resp_abc...>}
+ # ]
+ u.user("What should we do next?")
+ u.generate!
  ```
- # Your OpenAI API key
- OPENAI_API_KEY=your_openai_api_key_here
+
+ Unless you've stored the previous messages somewhere yourself, this technique won't bring them back. But OpenAI remembers what they were, so that you can at least continue the conversation. (If you're using a reasoning model, this technique also preserves all of the model's reasoning.)
+
+ ## Setting messages directly
+
+ You can use `.messages=()` to assign an `Array` of `Hashes`. Each `Hash` must have keys `:role` and `:content`, and optionally `:image` or `:images`:
+
+ ```ruby
+ # Using the planet example with an array of hashes
+ p = OpenAI::Chat.new
+
+ # Set all messages at once instead of calling methods sequentially
+ p.messages = [
+   { role: "system", content: "You are a helpful assistant who provides information about planets." },
+   { role: "user", content: "Tell me about Mars." },
+   { role: "assistant", content: "Mars is the fourth planet from the Sun...." },
+   { role: "user", content: "What's the atmosphere like?" },
+   { role: "assistant", content: "Mars has a very thin atmosphere compared to Earth...." },
+   { role: "user", content: "Could it support human life?" },
+   { role: "assistant", content: "Mars currently can't support human life without...." }
+ ]
+
+ # Now continue the conversation with an API-generated response
+ p.user("Are there any current missions to go there?")
+ response = p.generate!
+ puts response
  ```
- 3. Install dependencies: `bundle install`
- 4. Run the test program: `ruby test_ai_chat.rb`

- This test program runs through all the major features of the gem, making real API calls to OpenAI.
+ You can still include images:

- ## Reasoning Effort
+ ```ruby
+ # Create a new chat instance
+ q = OpenAI::Chat.new
+
+ # With images
+ q.messages = [
+   { role: "system", content: "You are a helpful assistant." },
+   { role: "user", content: "What's in this image?", image: "path/to/image.jpg" },
+ ]
+
+ # With multiple images
+ q.messages = [
+   { role: "system", content: "You are a helpful assistant." },
+   { role: "user", content: "Compare these images", images: ["image1.jpg", "image2.jpg"] }
+ ]
+ ```
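Since `.messages=` expects a particular shape, it can be worth validating hashes you assembled yourself before assigning them. An illustrative helper, not part of the gem (it ignores the optional `:image`/`:images` keys):

```ruby
VALID_ROLES = ["system", "user", "assistant"].freeze

# Illustrative helper (not part of the gem): check that an array is
# shaped the way `.messages=` expects before assigning it.
def valid_messages?(messages)
  return false unless messages.is_a?(Array)
  messages.all? do |m|
    m.is_a?(Hash) && VALID_ROLES.include?(m[:role]) && m[:content].is_a?(String)
  end
end

valid_messages?([{ role: "user", content: "Hi" }])  # => true
valid_messages?([{ role: "robot", content: "Hi" }]) # => false
```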

- When using reasoning models like `o3` or `o4-mini`, you can specify a reasoning effort level to control how much reasoning the model does before producing its final response:
+ ## Assigning `ActiveRecord::Relation`s
+
+ If your chat history is contained in an `ActiveRecord::Relation`, you can assign it directly:

  ```ruby
- x = AI::Chat.new
- x.model = "o4-mini"
- x.reasoning_effort = "medium" # Can be "low", "medium", or "high"
+ # Load from ActiveRecord
+ @thread = Thread.find(42)

- x.user("Write a bash script that transposes a matrix represented as '[1,2],[3,4],[5,6]'")
- x.assistant!
+ r = OpenAI::Chat.new
+ r.messages = @thread.posts.order(:created_at)
+ r.user("What should we discuss next?")
+ r.generate! # Creates a new post record, too
  ```

- The `reasoning_effort` parameter guides the model on how many reasoning tokens to generate before creating a response to the prompt. Options are:
- - `"low"`: Favors speed and economical token usage
- - `"medium"`: (Default) Balances speed and reasoning accuracy
- - `"high"`: Favors more complete reasoning
+ ### Requirements

- Setting to `nil` disables the reasoning parameter.
+ In order for the above to "magically" work, there are a few requirements. Your ActiveRecord model must have:
+
+ - `.role` method that returns "system", "user", or "assistant"
+ - `.content` method that returns the message text
+ - `.image` method (optional) for single images - can return URLs, file paths, or Active Storage attachments
+ - `.images` method (optional) for multiple images
+
+ ### Custom Column Names
+
+ If your columns have different names:
+
+ ```ruby
+ s = OpenAI::Chat.new
+ s.configure_message_attributes(
+   role: :message_type,    # Your column for role
+   content: :message_body, # Your column for content
+   image: :attachment      # Your column/association for images
+ )
+ s.messages = @conversation.messages
+ ```
+
+ ### Saving Responses with Metadata

- ## TODOs
+ To preserve response metadata, add an `openai_response` column to your messages table:

- - Add the ability to set all messages at once, ideally with an ActiveRecord Relation.
- - Add a way to access the whole API response body (rather than just the message content).
+ ```ruby
+ # In your migration
+ add_column :messages, :openai_response, :text

- ## Contributing
+ # In your model
+ class Message < ApplicationRecord
+   serialize :openai_response, OpenAI::Chat::Response
+ end

- Bug reports and pull requests are welcome on GitHub at https://github.com/firstdraft/ai-chat. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/firstdraft/ai-chat/blob/main/CODE_OF_CONDUCT.md).
+ # Usage
+ @thread = Thread.find(42)

- ## License
+ t = OpenAI::Chat.new
+ t.messages = @thread.messages
+ t.user("Hello!")
+ t.generate!

- The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+
+ # The saved message will include token usage, model info, etc.
+ last_message = @thread.messages.last
+ last_message.openai_response.usage # => {:prompt_tokens=>10, ...}
+ ```

- ## Code of Conduct
+ ## Other Features Being Considered
+
+ - **Session management**: Save and restore conversations by ID
+ - **Streaming responses**: Real-time streaming as the AI generates its response
+ - **Cost tracking**: Automatic calculation and tracking of API costs
+
+ ## Testing with Real API Calls

- Everyone interacting in the AI Chat project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/firstdraft/ai-chat/blob/main/CODE_OF_CONDUCT.md).
+ While this gem includes specs, they use mocked API responses. To test with real API calls:
+
+ 1. Navigate to the demo directory: `cd demo`
+ 2. Create a `.env` file in the demo directory with your API credentials:
+ ```
+ # Your OpenAI API key
+ OPENAI_API_KEY=your_openai_api_key_here
+ ```
+ 3. Install dependencies: `bundle install`
+ 4. Run the test program: `ruby demo.rb`
+
+ This test program runs through all the major features of the gem, making real API calls to OpenAI.
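The `.env` file in step 2 is conventionally loaded with the `dotenv` gem, but a minimal loader is easy to sketch if you'd rather avoid the dependency. Illustrative only; the real `dotenv` handles quoting and other edge cases:

```ruby
# Minimal .env loader (illustrative - the dotenv gem is more robust).
# Reads KEY=value lines, skips blanks and comments, populates ENV.
def load_env(path = ".env")
  return unless File.exist?(path)
  File.readlines(path).each do |line|
    line = line.strip
    next if line.empty? || line.start_with?("#")
    key, value = line.split("=", 2)
    ENV[key] ||= value
  end
end

load_env
```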