ai-chat 0.3.1 → 0.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +101 -88
- data/ai-chat.gemspec +2 -2
- data/lib/ai/amazing_print.rb +3 -3
- data/lib/ai/chat.rb +81 -97
- data/lib/ai/http.rb +1 -1
- metadata +4 -4
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 975e7f80044ac46d72ad1c08e290d7fa71b43048fe76d68784d3961c36efde95
+  data.tar.gz: cf0a2b5fcee3e6ee413580419c992efe6c50e5935be875f2e82d8a740c7d15fb
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: fe727e64a0388922db85c3085dea85c6622b88c886cbb25fc660dc9bad8291205a1904a19f611457ac83cbe673a299db4d97f0b658e236de0e3ccccb46f44aac
+  data.tar.gz: 4be9d9a80ea39e20ef2c11d8142f9403b7b43f46c8f4ccc5a80f88a027980be11b1e882165e67e0380d2a8982e0e4bbd3489585cac3c4a0a291d46a75813d53d
data/README.md
CHANGED

@@ -26,19 +26,19 @@ The `examples/` directory contains focused examples for specific features:
 
 - `01_quick.rb` - Quick overview of key features
 - `02_core.rb` - Core functionality (basic chat, messages, responses)
-- `
-- `
-- `
-- `
-- `
-- `
-- `
-- `
-- `
-- `
-- `
-- `
-- `
+- `03_multimodal.rb` - Basic file and image handling
+- `04_file_handling_comprehensive.rb` - Advanced file handling (PDFs, text files, Rails uploads)
+- `05_structured_output.rb` - Basic structured output with schemas
+- `06_structured_output_comprehensive.rb` - All 6 supported schema formats
+- `07_edge_cases.rb` - Error handling and edge cases
+- `08_additional_patterns.rb` - Less common usage patterns (direct add method, web search + schema, etc.)
+- `09_mixed_content.rb` - Combining text and images in messages
+- `10_image_generation.rb` - Using the image generation tool
+- `11_code_interpreter.rb` - Using the code interpreter tool
+- `12_background_mode.rb` - Running responses in background mode
+- `13_conversation_features_comprehensive.rb` - Conversation features (auto-creation, continuity, inspection)
+- `14_schema_generation.rb` - Generate JSON schemas from natural language
+- `15_proxy.rb` - Proxy support for student accounts
 
 Each example is self-contained and can be run individually:
 ```bash
@@ -93,7 +93,7 @@ a.generate! # => { :role => "assistant", :content => "Matz is nice and so we are
 pp a.messages
 # => [
 #      {:role=>"user", :content=>"If the Ruby community had an official motto, what might it be?"},
-#      {:role=>"assistant", :content=>"Matz is nice and so we are nice", :response => { id=resp_abc... model=gpt-
+#      {:role=>"assistant", :content=>"Matz is nice and so we are nice", :response => { id=resp_abc... model=gpt-5.1 tokens=12 } }
 # ]
 
 # Continue the conversation
@@ -113,7 +113,7 @@ That's it! You're building something like this:
 [
   {:role => "system", :content => "You are a helpful assistant"},
   {:role => "user", :content => "Hello!"},
-  {:role => "assistant", :content => "Hi there! How can I help you today?", :response => { id=resp_abc... model=gpt-
+  {:role => "assistant", :content => "Hi there! How can I help you today?", :response => { id=resp_abc... model=gpt-5.1 tokens=12 } }
 ]
 ```
 
@@ -183,25 +183,14 @@ d.generate! # Generate a response
 
 ### Model
 
-By default, the gem uses OpenAI's `gpt-
+By default, the gem uses OpenAI's `gpt-5.1` model. If you want to use a different model, you can set it:
 
 ```ruby
 e = AI::Chat.new
-e.model = "
+e.model = "gpt-4o"
 ```
 
-
-
-#### Foundation models
-
-- gpt-4.1-nano
-- gpt-4.1-mini
-- gpt-4.1
-
-#### Reasoning models
-
-- o4-mini
-- o3
+See [OpenAI's model documentation](https://platform.openai.com/docs/models) for available models.
 
 ### API key
 
@@ -248,7 +237,7 @@ h.last[:content]
 
 ## Web Search
 
-To give the model access to real-time information from the internet, you can enable web searching. This uses OpenAI's built-in `
+To give the model access to real-time information from the internet, you can enable web searching. This uses OpenAI's built-in `web_search` tool.
 
 ```ruby
 m = AI::Chat.new
@@ -257,17 +246,6 @@ m.user("What are the latest developments in the Ruby language?")
 m.generate! # This may use web search to find current information
 ```
 
-**Note:** This feature requires a model that supports the `web_search_preview` tool, such as `gpt-4o` or `gpt-4o-mini`. The gem will attempt to use a compatible model if you have `web_search` enabled.
-
-If you don't want the model to use web search, set `web_search` to `false` (this is the default):
-
-```ruby
-m = AI::Chat.new
-m.web_search = false
-m.user("What are the latest developments in the Ruby language?")
-m.generate! # This definitely won't use web search to find current information
-```
-
 ## Structured Output
 
 Get back Structured Output by setting the `schema` attribute (I suggest using [OpenAI's handy tool for generating the JSON Schema](https://platform.openai.com/docs/guides/structured-outputs)):
@@ -362,6 +340,40 @@ i.schema = '{"name":"nutrition_values","strict":true,"schema":{...}}'
 i.schema = '{"type":"object","properties":{...}}'
 ```
 
+### Generating a Schema
+
+You can call the class method `AI::Chat.generate_schema!` to have OpenAI generate a JSON schema for you from a `String` describing the schema you want.
+
+```rb
+AI::Chat.generate_schema!("A user profile with name (required), email (required), age (number), and bio (optional text).")
+# => "{ ... }"
+```
+
+This method returns a String containing the JSON schema. The schema is also written (overwriting any existing file) to `schema.json` at the root of the project.
+
+As when generating messages with `AI::Chat` objects, this class method assumes you have an API key in an environment variable called `OPENAI_API_KEY`. You can also pass the API key directly, or name a different environment variable for it to use:
+
+```rb
+# Passing the API key directly
+AI::Chat.generate_schema!("A user with full name (required), first_name (required), and last_name (required).", api_key: "MY_SECRET_API_KEY")
+
+# Choosing a different API key name
+AI::Chat.generate_schema!("A user with full name (required), first_name (required), and last_name (required).", api_key_env_var: "CUSTOM_KEY")
+```
+
+You can choose where the schema is saved by using the `location` keyword argument:
+
+```rb
+AI::Chat.generate_schema!("A user with full name (required), first_name (required), and last_name (required).", location: "my_schemas/user.json")
+```
+
+If you don't want to write the output to a file, pass `false` as `location`:
+
+```rb
+AI::Chat.generate_schema!("A user with full name (required), first_name (required), and last_name (required).", location: false)
+# => { ... }
+```
+
 ### Schema Notes
 
 - The keys can be `String`s or `Symbol`s.
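A note on the new `schema_file=` setter added in this release's chat.rb (shown later in this diff): it reads a schema from disk and assigns it through `schema=`, so it pairs naturally with `generate_schema!`'s file output. A minimal sketch of that two-step workflow (the pairing itself is an assumption, not documented in the README):

```rb
# Generate once; by default this writes schema.json at the project root.
AI::Chat.generate_schema!("A recipe with title (required) and steps (array of strings).")

# Later, load the saved schema into a chat.
chat = AI::Chat.new
chat.schema_file = "schema.json" # reads the file and assigns it via schema=
chat.user("Give me a simple pancake recipe.")
chat.generate!
```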
@@ -440,27 +452,6 @@ l.generate!
 
 **Note**: Images should use `image:`/`images:` parameters, while documents should use `file:`/`files:` parameters.
 
-## Re-sending old images and files
-
-Note: if you generate another API request using the same chat, old images and files in the conversation history will not be re-sent by default. If you really want to re-send old images and files, then you must set `previous_response_id` to `nil`:
-
-```ruby
-a = AI::Chat.new
-a.user("What color is the object in this photo?", image: "thing.png")
-a.generate! # => "Red"
-a.user("What is the object in the photo?")
-a.generate! # => { :content => "I don't see a photo", ... }
-
-b = AI::Chat.new
-b.user("What color is the object in this photo?", image: "thing.png")
-b.generate! # => "Red"
-b.user("What is the object in the photo?")
-b.previous_response_id = nil
-b.generate! # => { :content => "An apple", ... }
-```
-
-If you don't set `previous_response_id` to `nil`, the model won't have the old image(s) to work with.
-
 ## Image generation
 
 You can enable OpenAI's image generation tool:
@@ -574,25 +565,24 @@ puts response
 
 With this, you can loop through any conversation's history (perhaps after retrieving it from your database), recreate an `AI::Chat`, and then continue it.
 
-## Reasoning
+## Reasoning Effort
 
-
+You can control how much reasoning the model does before producing its response:
 
 ```ruby
 l = AI::Chat.new
-l.
-l.reasoning_effort = "medium" # Can be "low", "medium", or "high"
+l.reasoning_effort = "low" # Can be "low", "medium", or "high"
 
 l.user("What does this error message mean? <insert error message>")
 l.generate!
 ```
 
-The `reasoning_effort` parameter guides the model on how many reasoning tokens to generate
+The `reasoning_effort` parameter guides the model on how many reasoning tokens to generate. Options are:
 - `"low"`: Favors speed and economical token usage.
-- `"medium"`:
+- `"medium"`: Balances speed and reasoning accuracy.
 - `"high"`: Favors more complete reasoning.
 
-
+By default, `reasoning_effort` is `nil`, which means no reasoning parameter is sent to the API. For `gpt-5.1` (the default model), this is equivalent to `"none"` reasoning.
 
 ## Advanced: Response Details
 
@@ -608,13 +598,13 @@ pp t.messages.last
 # => {
 #      :role => "assistant",
 #      :content => "Hello! How can I help you today?",
-#      :response => { id=resp_abc... model=gpt-
+#      :response => { id=resp_abc... model=gpt-5.1 tokens=12 }
 # }
 
 # Access detailed information
 response = t.last[:response]
 response[:id] # => "resp_abc123..."
-response[:model] # => "gpt-
+response[:model] # => "gpt-5.1"
 response[:usage] # => {:prompt_tokens=>5, :completion_tokens=>7, :total_tokens=>12}
 ```
 
@@ -624,26 +614,24 @@ This information is useful for:
 - Understanding which model was actually used.
 - Future features like cost tracking.
 
-
+### Last Response ID
+
+In addition to the `response` object inside each message, the `AI::Chat` instance also provides a convenient reader, `last_response_id`, which always holds the ID of the most recent response.
 
 ```ruby
-
-
-
-old_id = t.last[:response][:id] # => "resp_abc123..."
+chat = AI::Chat.new
+chat.user("Hello")
+chat.generate!
 
-
+puts chat.last_response_id # => "resp_abc123..."
 
-
-
-
-
-# ]
-u.user("What should we do next?")
-u.generate!
+chat.user("Goodbye")
+chat.generate!
+
+puts chat.last_response_id # => "resp_xyz789..." (a new ID)
 ```
 
-
+This is particularly useful for managing background tasks. When you make a request in background mode, you can immediately get the `last_response_id` to track, retrieve, or cancel that specific job later from a different process.
 
 ### Automatic Conversation Management
 
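The background-task use of `last_response_id` described in the hunk above might look like the following sketch. Only the `background` accessor and the `last_response_id` reader are confirmed by this diff; the later retrieval step is illustrative (OpenAI's standard `GET /v1/responses/{id}` endpoint, which the gem's proxy code also targets):

```rb
chat = AI::Chat.new
chat.background = true
chat.user("Summarize this 500-page report...")
chat.generate!                  # returns without waiting for completion

job_id = chat.last_response_id  # e.g. "resp_abc123..." - persist this

# Later, from another process, the saved ID identifies the job to poll,
# retrieve, or cancel (e.g. via GET /v1/responses/{id}).
```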
@@ -673,8 +661,6 @@ chat.user("Continue our discussion")
 chat.generate! # Uses the loaded conversation
 ```
 
-**Note on forking:** If you want to "fork" a conversation (create a branch), you can still use `previous_response_id`. If both `conversation_id` and `previous_response_id` are set, the gem will use `previous_response_id` and warn you.
-
 ## Inspecting Conversation Details
 
 The gem provides two methods to inspect what happened during a conversation:
@@ -778,9 +764,36 @@ q.messages = [
 
 ## Other Features Being Considered
 
-- **Session management**: Save and restore conversations by ID
 - **Streaming responses**: Real-time streaming as the AI generates its response
 - **Cost tracking**: Automatic calculation and tracking of API costs
+- **Token usage helpers**: Convenience methods like `total_tokens` to sum usage across all responses in a conversation
+
+## TODO: Missing Test Coverage
+
+The following gem-specific logic would benefit from additional RSpec test coverage:
+
+1. **Schema format normalization** - The `wrap_schema_if_needed` method detects and wraps 3 different input formats (raw, named, already-wrapped). This complex conditional logic could silently regress.
+
+2. **Multimodal content array building** - The `add` method builds nested structures when images/files are provided, handling `image`/`images` and `file`/`files` parameters with specific ordering (text → images → files).
+
+3. **File classification and processing** - `classify_obj` and `process_file_input` distinguish URLs vs file paths vs file-like objects, with MIME type detection determining encoding behavior.
+
+4. **Message preparation after response** - `prepare_messages_for_api` has slicing logic that only sends messages after the last response, preventing re-sending of the entire conversation history.
+
+These are all gem-specific transformations (not just OpenAI pass-through) that could regress without proper test coverage.
+
+## TODO: Code Quality
+
+Address Reek warnings (`bundle exec reek`). There are currently 29 warnings for code smells such as:
+
+- `TooManyStatements` in several methods
+- `DuplicateMethodCall` in `extract_and_save_files`, `verbose`, etc.
+- `RepeatedConditional` for `proxy` checks
+- `FeatureEnvy` in `parse_response` and `wait_for_response`
+
+These don't affect functionality but indicate areas for refactoring.
+
+Then, add `quality` back as a CI check.
 
 ## Testing with Real API Calls
 
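To make the README's first test-coverage TODO concrete, here is a hedged RSpec sketch for schema format normalization. The wrapped-envelope assertion is an assumption based on the README's schema documentation, not taken from the gem's actual specs:

```rb
# spec/schema_normalization_spec.rb - illustrative sketch only.
require "ai/chat"

RSpec.describe AI::Chat do
  it "parses and wraps a raw JSON schema string" do
    chat = described_class.new(api_key: "test-key")
    chat.schema = '{"type":"object","properties":{"name":{"type":"string"}}}'

    # Assumption: schema= parses the String and wraps it in the
    # structured-output envelope the gem sends as parameters[:text].
    expect(chat.schema).to be_a(Hash)
    expect(chat.schema.to_s).to include("json_schema")
  end
end
```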
data/ai-chat.gemspec
CHANGED

@@ -2,7 +2,7 @@
 
 Gem::Specification.new do |spec|
 spec.name = "ai-chat"
-spec.version = "0.3.1"
+spec.version = "0.4.0"
 spec.authors = ["Raghu Betina"]
 spec.email = ["raghu@firstdraft.com"]
 spec.homepage = "https://github.com/firstdraft/ai-chat"
@@ -21,7 +21,7 @@ Gem::Specification.new do |spec|
 spec.required_ruby_version = "~> 3.2"
 spec.add_runtime_dependency "openai", "~> 0.34"
 spec.add_runtime_dependency "marcel", "~> 1.0"
-spec.add_runtime_dependency "base64",
+spec.add_runtime_dependency "base64", "~> 0.1", "> 0.1.1"
 spec.add_runtime_dependency "json", "~> 2.0"
 spec.add_runtime_dependency "ostruct", "~> 0.2"
 spec.add_runtime_dependency "tty-spinner", "~> 0.9.3"
data/lib/ai/amazing_print.rb
CHANGED

@@ -33,7 +33,7 @@ module AmazingPrint
 # :reek:TooManyStatements
 def format_ai_chat(chat)
 vars = []
-
+
 # Format messages with truncation
 if chat.instance_variable_defined?(:@messages)
 messages = chat.instance_variable_get(:@messages).map do |msg|
@@ -45,7 +45,7 @@ module AmazingPrint
 end
 vars << ["@messages", messages]
 end
-
+
 # Add other variables (except sensitive ones)
 skip_vars = [:@api_key, :@client, :@messages]
 chat.instance_variables.sort.each do |var|
@@ -68,7 +68,7 @@ module AmazingPrint
 if @options[:multiline]
 "#<#{object.class}\n#{data.map { |line| " #{line}" }.join("\n")}\n>"
 else
-"#<#{object.class} #{data.join(
+"#<#{object.class} #{data.join(", ")}>"
 end
 end
 end
data/lib/ai/chat.rb
CHANGED

@@ -22,36 +22,35 @@ module AI
 # :reek:IrresponsibleModule
 class Chat
 # :reek:Attribute
-attr_accessor :background, :code_interpreter, :conversation_id, :image_generation, :image_folder, :messages, :model, :proxy, :
-attr_reader :
+attr_accessor :background, :code_interpreter, :conversation_id, :image_generation, :image_folder, :messages, :model, :proxy, :reasoning_effort, :web_search
+attr_reader :client, :last_response_id, :schema, :schema_file
 
-
-PROXY_URL = "https://prepend.me/".freeze
+PROXY_URL = "https://prepend.me/"
 
 def initialize(api_key: nil, api_key_env_var: "OPENAI_API_KEY")
 @api_key = api_key || ENV.fetch(api_key_env_var)
 @messages = []
 @reasoning_effort = nil
-@model = "gpt-
+@model = "gpt-5.1"
 @client = OpenAI::Client.new(api_key: @api_key)
-@
+@last_response_id = nil
 @proxy = false
 @image_generation = false
 @image_folder = "./images"
 end
 
-def self.generate_schema!(description, api_key: nil, api_key_env_var: "OPENAI_API_KEY", proxy: false)
+def self.generate_schema!(description, location: "schema.json", api_key: nil, api_key_env_var: "OPENAI_API_KEY", proxy: false)
 api_key ||= ENV.fetch(api_key_env_var)
 prompt_path = File.expand_path("../prompts/schema_generator.md", __dir__)
-system_prompt = File.
+system_prompt = File.read(prompt_path)
 
 json = if proxy
 uri = URI(PROXY_URL + "api.openai.com/v1/responses")
 parameters = {
-model: "
+model: "gpt-5.1",
 input: [
 {role: :system, content: system_prompt},
-{role: :user, content: description}
+{role: :user, content: description}
 ],
 text: {format: {type: "json_object"}},
 reasoning: {effort: "high"}
@@ -61,7 +60,7 @@ module AI
 else
 client = OpenAI::Client.new(api_key: api_key)
 response = client.responses.create(
-model: "
+model: "gpt-5.1",
 input: [
 {role: :system, content: system_prompt},
 {role: :user, content: description}
@@ -73,7 +72,13 @@ module AI
 output_text = response.output_text
 JSON.parse(output_text)
 end
-JSON.pretty_generate(json)
+content = JSON.pretty_generate(json)
+if location
+path = Pathname.new(location)
+FileUtils.mkdir_p(path.dirname) if path.dirname != "."
+File.binwrite(location, content)
+end
+content
 end
 
 # :reek:TooManyStatements
@@ -146,7 +151,7 @@ module AI
 response = create_response
 parse_response(response)
 
-
+@last_response_id = last.dig(:response, :id)
 last
 end
 
@@ -158,29 +163,11 @@ module AI
 response = if wait
 wait_for_response(timeout)
 else
-retrieve_response(
+retrieve_response(last_response_id)
 end
 parse_response(response)
 end
 
-# :reek:NilCheck
-# :reek:TooManyStatements
-def reasoning_effort=(value)
-if value.nil?
-@reasoning_effort = nil
-return
-end
-
-normalized_value = value.to_sym
-
-if VALID_REASONING_EFFORTS.include?(normalized_value)
-@reasoning_effort = normalized_value
-else
-valid_values = VALID_REASONING_EFFORTS.map { |valid_value| ":#{valid_value} or \"#{valid_value}\"" }.join(", ")
-raise ArgumentError, "Invalid reasoning_effort value: '#{value}'. Must be one of: #{valid_values}"
-end
-end
-
 def schema=(value)
 if value.is_a?(String)
 parsed = JSON.parse(value, symbolize_names: true)
@@ -192,6 +179,12 @@ module AI
 end
 end
 
+def schema_file=(path)
+content = File.read(path)
+@schema_file = path
+self.schema = content
+end
+
 def last
 messages.last
 end
@@ -200,12 +193,12 @@ module AI
 raise "No conversation_id set. Call generate! first to create a conversation." unless conversation_id
 
 if proxy
-uri = URI(PROXY_URL + "api.openai.com/v1/conversations/#{conversation_id}/items?order=#{order
+uri = URI(PROXY_URL + "api.openai.com/v1/conversations/#{conversation_id}/items?order=#{order}")
 response_hash = send_request(uri, content_type: "json", method: "get")
 
 if response_hash.key?(:data)
 response_hash.dig(:data).map do |hash|
-# Transform values to allow expected symbols that non-proxied request returns
+# Transform values to allow expected symbols that non-proxied request returns
 
 hash.transform_values! do |value|
 if hash.key(value) == :type
@@ -283,6 +276,7 @@ module AI
 private
 
 class InputClassificationError < StandardError; end
+
 class WrongAPITokenUsedError < StandardError; end
 
 # :reek:FeatureEnvy
@@ -320,16 +314,8 @@ module AI
 parameters[:text] = schema if schema
 parameters[:reasoning] = {effort: reasoning_effort} if reasoning_effort
 
-
-
-parameters[:previous_response_id] = previous_response_id
-elsif previous_response_id
-parameters[:previous_response_id] = previous_response_id
-elsif conversation_id
-parameters[:conversation] = conversation_id
-else
-create_conversation
-end
+create_conversation unless conversation_id
+parameters[:conversation] = conversation_id
 
 messages_to_send = prepare_messages_for_api
 parameters[:input] = strip_responses(messages_to_send) unless messages_to_send.empty?
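This hunk replaces the old `previous_response_id` branching with unconditional conversation use: a conversation is created on the first request and reused afterwards. A hedged sketch of the resulting behavior (the `conv_...` ID format is illustrative):

```rb
chat = AI::Chat.new
chat.conversation_id # => nil
chat.user("Hello!")
chat.generate!       # auto-creates a conversation on the first request
chat.conversation_id # => e.g. "conv_abc123...", reused on later calls
```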
@@ -367,7 +353,7 @@ module AI
 if response.key?(:conversation)
 self.conversation_id = response.dig(:conversation, :id)
 end
-else
+else
 text_response = response.output_text
 response_id = response.id
 response_status = response.status
@@ -419,16 +405,16 @@ module AI
 end
 
 def cancel_request
-client.responses.cancel(
+client.responses.cancel(last_response_id)
 end
 
 def prepare_messages_for_api
-return messages unless
+return messages unless last_response_id
 
-
+last_response_index = messages.find_index { |message| message.dig(:response, :id) == last_response_id }
 
-if
-messages[(
+if last_response_index
+messages[(last_response_index + 1)..] || []
 else
 messages
 end
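The slicing in `prepare_messages_for_api` is easier to see with a toy array. A standalone sketch of the same logic (hypothetical data, not the gem's internals):

```rb
messages = [
  {role: "user", content: "Hi"},
  {role: "assistant", content: "Hello!", response: {id: "resp_1"}},
  {role: "user", content: "And now?"}
]
last_response_id = "resp_1"

# Find the message carrying the last response, then send only what follows it.
index = messages.find_index { |m| m.dig(:response, :id) == last_response_id }
to_send = index ? (messages[(index + 1)..] || []) : messages
# => [{role: "user", content: "And now?"}]
```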
@@ -564,7 +550,7 @@ module AI
 def tools
 tools_list = []
 if web_search
-tools_list << {type: "
+tools_list << {type: "web_search"}
 end
 if image_generation
 tools_list << {type: "image_generation"}
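For context, these tool entries are driven by plain accessors on the chat object (see the `attr_accessor` list earlier in this file). A hedged usage sketch:

```rb
chat = AI::Chat.new
chat.web_search = true       # adds {type: "web_search"} to the tools list
chat.image_generation = true # adds {type: "image_generation"}
chat.user("Find a current news story and illustrate it.")
chat.generate!
```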
@@ -605,12 +591,12 @@ module AI
 def extract_and_save_images(response)
 image_filenames = []
 
-if proxy
-
+image_outputs = if proxy
+response.dig(:output).select { |output|
 output.dig(:type) == "image_generation_call"
 }
-else
-
+else
+response.output.select { |output|
 output.respond_to?(:type) && output.type == :image_generation_call
 }
 end
@@ -694,7 +680,7 @@ module AI
 message_outputs = response.dig(:output).select do |output|
 output.dig(:type) == "message"
 end
-
+
 outputs_with_annotations = message_outputs.map do |message|
 message.dig(:content).find do |content|
 content.dig(:annotations).length.positive?
@@ -704,7 +690,7 @@ module AI
 message_outputs = response.output.select do |output|
 output.respond_to?(:type) && output.type == :message
 end
-
+
 outputs_with_annotations = message_outputs.map do |message|
 message.content.find do |content|
 content.respond_to?(:annotations) && content.annotations.length.positive?
@@ -723,12 +709,12 @@ module AI
 annotation.key?(:filename)
 end
 end.compact
-
+
 annotations.each do |annotation|
 container_id = annotation.dig(:container_id)
 file_id = annotation.dig(:file_id)
 filename = annotation.dig(:filename)
-
+
 warn_if_file_fails_to_save do
 file_content = retrieve_file(file_id, container_id: container_id)
 file_path = File.join(subfolder_path, filename)
@@ -742,18 +728,16 @@ module AI
 annotation.respond_to?(:filename)
 end
 end.compact
-
+
 annotations.each do |annotation|
 container_id = annotation.container_id
 file_id = annotation.file_id
 filename = annotation.filename
-
+
 warn_if_file_fails_to_save do
 file_content = retrieve_file(file_id, container_id: container_id)
 file_path = File.join(subfolder_path, filename)
-File.
-file.write(file_content.read)
-end
+File.binwrite(file_path, file_content.read)
 filenames << file_path
 end
 end
@@ -776,53 +760,53 @@ module AI
 yield
 end
 rescue Timeout::Error
-client.responses.cancel(
+client.responses.cancel(last_response_id)
 end
 
 # :reek:DuplicateMethodCall
 # :reek:TooManyStatements
 def wait_for_response(timeout)
-
-
-
-
-
+spinner = TTY::Spinner.new("[:spinner] Thinking ...", format: :dots)
+spinner.auto_spin
+api_response = retrieve_response(last_response_id)
+number_of_times_polled = 0
+response = timeout_request(timeout) do
+status = if api_response.respond_to?(:status)
+api_response.status
+else
+api_response.dig(:status)&.to_sym
+end
+
+while status != :completed
+some_amount_of_seconds = calculate_wait(number_of_times_polled)
+sleep some_amount_of_seconds
+number_of_times_polled += 1
+api_response = retrieve_response(last_response_id)
 status = if api_response.respond_to?(:status)
 api_response.status
-else
+else
 api_response.dig(:status)&.to_sym
 end
-
-while status != :completed
-some_amount_of_seconds = calculate_wait(number_of_times_polled)
-sleep some_amount_of_seconds
-number_of_times_polled += 1
-api_response = retrieve_response(previous_response_id)
-status = if api_response.respond_to?(:status)
-api_response.status
-else
-api_response.dig(:status)&.to_sym
-end
-end
-api_response
-end
-
-status = if api_response.respond_to?(:status)
-api_response.status
-else
-api_response.dig(:status).to_sym
 end
-
-
-
+api_response
+end
+
+status = if api_response.respond_to?(:status)
+api_response.status
+else
+api_response.dig(:status).to_sym
+end
+exit_message = (status == :cancelled) ? "request timed out" : "done!"
+spinner.stop(exit_message)
+response
 end
 
-def retrieve_response(
+def retrieve_response(response_id)
 if proxy
-uri = URI(PROXY_URL + "api.openai.com/v1/responses/#{
+uri = URI(PROXY_URL + "api.openai.com/v1/responses/#{response_id}")
 send_request(uri, content_type: "json", method: "get")
 else
-client.responses.retrieve(
+client.responses.retrieve(response_id)
 end
 end
@@ -832,7 +816,7 @@ module AI
 send_request(uri, method: "get")
 else
 container_content = client.containers.files.content
-
+container_content.retrieve(file_id, container_id: container_id)
 end
 end
 end
data/lib/ai/http.rb
CHANGED

@@ -1,7 +1,7 @@
 require "net/http"
 module AI
 module Http
-def send_request(uri, content_type: nil, parameters: nil
+def send_request(uri, method:, content_type: nil, parameters: nil)
 Net::HTTP.start(uri.host, 443, use_ssl: true) do |http|
 headers = {
 "Authorization" => "Bearer #{@api_key}"
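`send_request` now takes the HTTP verb as a required keyword argument. The chat.rb call sites above show the pattern; a minimal sketch (the URI shape is taken from the proxy code in this diff):

```rb
uri = URI("https://prepend.me/api.openai.com/v1/responses/resp_abc123")

# JSON GET, as used when retrieving a proxied response:
send_request(uri, content_type: "json", method: "get")

# Plain GET without a JSON content type, as used for file downloads:
send_request(uri, method: "get")
```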
metadata
CHANGED

@@ -1,13 +1,13 @@
 --- !ruby/object:Gem::Specification
 name: ai-chat
 version: !ruby/object:Gem::Version
-  version: 0.3.1
+  version: 0.4.0
 platform: ruby
 authors:
 - Raghu Betina
 bindir: bin
 cert_chain: []
-date:
+date: 1980-01-02 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: openai
@@ -146,8 +146,8 @@ email:
 executables: []
 extensions: []
 extra_rdoc_files:
-- README.md
 - LICENSE
+- README.md
 files:
 - LICENSE
 - README.md
@@ -181,7 +181,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
   version: '0'
 requirements: []
-rubygems_version: 3.
+rubygems_version: 3.7.1
 specification_version: 4
 summary: A beginner-friendly Ruby interface for OpenAI's API
 test_files: []