ai-chat 0.2.4 → 0.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +122 -65
- data/ai-chat.gemspec +7 -5
- data/lib/ai/chat.rb +310 -66
- data/lib/ai/http.rb +45 -0
- data/lib/prompts/schema_generator.md +123 -0
- metadata +44 -12
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: b21d17972572c7c6282aa2a40c539f48967b47caad6ac0f80f99046337436576
+  data.tar.gz: b27033cec74910347d8f965e7aa4928b888a0d42aa8f14c48bc229014522cf7d
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 5fd677c3e077c29a9777c1c1fa57108798d39ad09fed0093db12ce0819c9a4a204bbc8cbbfd84b4236df48b9ee7e3a0ea0469d29745188d1a22d3f5789f5c65a
+  data.tar.gz: 4edb6917f52330c5a8dd27e825e1cc27201da82ab8cc7e4988d0514b3ed92969f0b8369be4ceec44fbaa3684545774060430ca110730e5dfe753686c4f292526
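The values above are SHA256/SHA512 digests of the two archives inside the released gem. As a quick sketch of how such a checksum can be recomputed locally (the `.gem` filename here is an assumption, not a path from this diff), Ruby's stdlib `Digest` is enough:

```ruby
require "digest"

# Hypothetical local copy of the released package; the filename is an assumption.
gem_path = "ai-chat-0.3.0.gem"

if File.exist?(gem_path)
  # Digest::SHA256.file streams the file, so large archives are fine.
  puts Digest::SHA256.file(gem_path).hexdigest
end

# The same digest API works on an in-memory string:
puts Digest::SHA256.hexdigest("hello")
```

Comparing the printed digest against the registry's published value verifies the download was not corrupted or tampered with.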
data/README.md CHANGED
@@ -34,6 +34,11 @@ The `examples/` directory contains focused examples for specific features:
 - `08_advanced_usage.rb` - Advanced patterns (chaining, web search)
 - `09_edge_cases.rb` - Error handling and edge cases
 - `10_additional_patterns.rb` - Less common usage patterns (direct add method, web search + schema, etc.)
+- `11_mixed_content.rb` - Combining text and images in messages
+- `12_image_generation.rb` - Using the image generation tool
+- `13_code_interpreter.rb` - Using the code interpreter tool
+- `14_background_mode.rb` - Running responses in background mode
+- `15_conversation_features_comprehensive.rb` - All conversation features (auto-creation, inspection, loading, forking)
 
 Each example is self-contained and can be run individually:
 ```bash
@@ -243,17 +248,18 @@ h.last[:content]
 
 ## Web Search
 
-To give the model access to real-time information from the internet,
+To give the model access to real-time information from the internet, you can enable web searching. This uses OpenAI's built-in `web_search_preview` tool.
 
 ```ruby
 m = AI::Chat.new
+m.web_search = true
 m.user("What are the latest developments in the Ruby language?")
 m.generate! # This may use web search to find current information
 ```
 
 **Note:** This feature requires a model that supports the `web_search_preview` tool, such as `gpt-4o` or `gpt-4o-mini`. The gem will attempt to use a compatible model if you have `web_search` enabled.
 
-If you don't want the model to use web search, set `web_search` to `false
+If you don't want the model to use web search, set `web_search` to `false` (this is the default):
 
 ```ruby
 m = AI::Chat.new
@@ -525,6 +531,21 @@ y.user("Plot y = 2x*3 when x is -5 to 5.")
 y.generate! # => {:content => "Here is the graph.", ... }
 ```
 
+## Proxying Through prepend.me
+
+You can proxy API calls through [prepend.me](https://prepend.me/).
+
+```rb
+chat = AI::Chat.new
+chat.proxy = true
+chat.user("Tell me a story")
+chat.generate!
+puts chat.last[:content]
+# => "Once upon a time..."
+```
+
+When proxy is enabled, **you must use the API key provided by prepend.me** in place of a real OpenAI API key. Refer to [the section on API keys](#api-key) for options on how to set your key.
+
 ## Building Conversations Without API Calls
 
 You can manually add assistant messages without making API calls, which is useful when reconstructing a past conversation:
@@ -624,6 +645,93 @@ u.generate!
 
 Unless you've stored the previous messages somewhere yourself, this technique won't bring them back. But OpenAI remembers what they were, so that you can at least continue the conversation. (If you're using a reasoning model, this technique also preserves all of the model's reasoning.)
 
+### Automatic Conversation Management
+
+Starting with your first `generate!` call, the gem automatically creates and manages a conversation with OpenAI. This conversation is stored server-side and tracks all messages, tool calls, reasoning, and other items.
+
+```ruby
+chat = AI::Chat.new
+chat.user("Hello")
+chat.generate!
+
+# Conversation ID is automatically set
+puts chat.conversation_id # => "conv_abc123..."
+
+# Continue the conversation - context is automatically maintained
+chat.user("What did I just say?")
+chat.generate! # Uses the same conversation automatically
+```
+
+You can also load an existing conversation from your database:
+
+```ruby
+# Load stored conversation_id from your database
+chat = AI::Chat.new
+chat.conversation_id = @thread.conversation_id # From your database
+
+chat.user("Continue our discussion")
+chat.generate! # Uses the loaded conversation
+```
+
+**Note on forking:** If you want to "fork" a conversation (create a branch), you can still use `previous_response_id`. If both `conversation_id` and `previous_response_id` are set, the gem will use `previous_response_id` and warn you.
+
+## Inspecting Conversation Details
+
+The gem provides two methods to inspect what happened during a conversation:
+
+### `items` - Programmatic Access
+
+Returns the raw conversation items for programmatic use (displaying in views, filtering, etc.):
+
+```ruby
+chat = AI::Chat.new
+chat.web_search = true
+chat.user("Search for Ruby tutorials")
+chat.generate!
+
+# Get all conversation items (chronological order by default)
+page = chat.items
+
+# Access item data
+page.data.each do |item|
+  case item.type
+  when :message
+    puts "#{item.role}: #{item.content.first.text}"
+  when :web_search_call
+    puts "Web search: #{item.action.query}"
+    puts "Results: #{item.results.length}"
+  when :reasoning
+    puts "Reasoning: #{item.summary.first.text}"
+  end
+end
+
+# For long conversations, you can request reverse chronological order
+# (useful for pagination to get most recent items first)
+recent_items = chat.items(order: :desc)
+```
+
+### `verbose` - Terminal Output
+
+Pretty-prints the entire conversation with all details for debugging and learning:
+
+```ruby
+chat.verbose
+
+# Output:
+# ┌────────────────────────────────────────────────────────────────────────────┐
+# │ Conversation: conv_6903c1eea6cc819695af3a1b1ebf9b390c3db5e8ec021c9a        │
+# │ Items: 3                                                                   │
+# └────────────────────────────────────────────────────────────────────────────┘
+#
+# [detailed colorized output of all items including web searches,
+#  reasoning, tool calls, messages, etc.]
+```
+
+This is useful for:
+- **Learning** how the model uses tools (web search, code interpreter, etc.)
+- **Debugging** why the model made certain decisions
+- **Understanding** the full context beyond just the final response
+
 ## Setting messages directly
 
 You can use `.messages=()` to assign an `Array` of `Hashes`. Each `Hash` must have keys `:role` and `:content`, and optionally `:image` or `:images`:
@@ -668,69 +776,6 @@ q.messages = [
 ]
 ```
 
-## Assigning `ActiveRecord::Relation`s
-
-If your chat history is contained in an `ActiveRecord::Relation`, you can assign it directly:
-
-```ruby
-# Load from ActiveRecord
-@thread = Thread.find(42)
-
-r = AI::Chat.new
-r.messages = @thread.posts.order(:created_at)
-r.user("What should we discuss next?")
-r.generate! # Creates a new post record, too
-```
-
-### Requirements
-
-In order for the above to "magically" work, there are a few requirements. Your ActiveRecord model must have:
-
-- `.role` method that returns "system", "user", or "assistant"
-- `.content` method that returns the message text
-- `.image` method (optional) for single images - can return URLs, file paths, or Active Storage attachments
-- `.images` method (optional) for multiple images
-
-### Custom Column Names
-
-If your columns have different names:
-
-```ruby
-s = AI::Chat.new
-s.configure_message_attributes(
-  role: :message_type, # Your column for role
-  content: :message_body, # Your column for content
-  image: :attachment # Your column/association for images
-)
-s.messages = @conversation.messages
-```
-
-### Saving Responses with Metadata
-
-To preserve response metadata, add an `openai_response` column to your messages table:
-
-```ruby
-# In your migration
-add_column :messages, :openai_response, :text
-
-# In your model
-class Message < ApplicationRecord
-  serialize :openai_response, AI::Chat::Response
-end
-
-# Usage
-@thread = Thread.find(42)
-
-t = AI::Chat.new
-t.posts = @thread.messages
-t.user("Hello!")
-t.generate!
-
-# The saved message will include token usage, model info, etc.
-last_message = @thread.messages.last
-last_message.openai_response.usage # => {:prompt_tokens=>10, ...}
-```
-
 ## Other Features Being Considered
 
 - **Session management**: Save and restore conversations by ID
@@ -750,3 +795,15 @@ While this gem includes specs, they use mocked API responses. To test with real
 3. Run the examples: `bundle exec ruby examples/all.rb`
 
 This test program runs through all the major features of the gem, making real API calls to OpenAI.
+
+## Contributing
+
+When contributing to this project:
+
+1. **Code Style**: This project uses StandardRB for linting. Run `bundle exec standardrb --fix` before committing to automatically fix style issues.
+
+2. **Testing**: Ensure all specs pass with `bundle exec rspec`.
+
+3. **Examples**: If adding a feature, consider adding an example in the `examples/` directory.
+
+4. **Documentation**: Update the README if your changes affect the public API.
data/ai-chat.gemspec CHANGED
@@ -2,7 +2,7 @@
 
 Gem::Specification.new do |spec|
   spec.name = "ai-chat"
-  spec.version = "0.
+  spec.version = "0.3.0"
   spec.authors = ["Raghu Betina"]
   spec.email = ["raghu@firstdraft.com"]
   spec.homepage = "https://github.com/firstdraft/ai-chat"
@@ -12,20 +12,22 @@ Gem::Specification.new do |spec|
   spec.metadata = {
     "bug_tracker_uri" => "https://github.com/firstdraft/ai-chat/issues",
     "changelog_uri" => "https://github.com/firstdraft/ai-chat/blob/main/CHANGELOG.md",
-    "homepage_uri" => "https://
+    "homepage_uri" => "https://rubygems.org/gems/ai-chat",
     "label" => "AI Chat",
     "rubygems_mfa_required" => "true",
     "source_code_uri" => "https://github.com/firstdraft/ai-chat"
   }
 
   spec.required_ruby_version = "~> 3.2"
-  spec.add_runtime_dependency "openai", "~> 0.
+  spec.add_runtime_dependency "openai", "~> 0.34"
   spec.add_runtime_dependency "marcel", "~> 1.0"
-  spec.add_runtime_dependency "base64", "> 0.1.1"
+  spec.add_runtime_dependency "base64", "~> 0.1", "> 0.1.1"
   spec.add_runtime_dependency "json", "~> 2.0"
+  spec.add_runtime_dependency "ostruct", "~> 0.2"
   spec.add_runtime_dependency "tty-spinner", "~> 0.9.3"
+  spec.add_runtime_dependency "amazing_print", "~> 1.8"
 
-  spec.add_development_dependency "dotenv"
+  spec.add_development_dependency "dotenv", ">= 1.0.0"
   spec.add_development_dependency "refinements", "~> 11.1"
 
   spec.extra_rdoc_files = Dir["README*", "LICENSE*"]
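The updated `base64` line stacks two requirements that must both hold. A minimal sketch of how RubyGems evaluates the compound constraint, using the stdlib `Gem::Requirement` class (the sample versions are illustrative):

```ruby
require "rubygems"

# "~> 0.1" means >= 0.1 and < 1.0 (pessimistic operator);
# "> 0.1.1" additionally excludes 0.1.1 itself and anything older.
req = Gem::Requirement.new("~> 0.1", "> 0.1.1")

puts req.satisfied_by?(Gem::Version.new("0.1.1"))  # excluded by "> 0.1.1"
puts req.satisfied_by?(Gem::Version.new("0.1.2"))  # allowed
puts req.satisfied_by?(Gem::Version.new("0.9.0"))  # allowed, still within "~> 0.1"
puts req.satisfied_by?(Gem::Version.new("1.0.0"))  # excluded by "~> 0.1"
```

The old constraint (`"> 0.1.1"` alone) had no upper bound, so a hypothetical `base64 2.0` would have satisfied it; the added `"~> 0.1"` caps the range.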
data/lib/ai/chat.rb CHANGED
@@ -4,12 +4,16 @@ require "base64"
 require "json"
 require "marcel"
 require "openai"
+require "ostruct"
 require "pathname"
 require "stringio"
 require "fileutils"
 require "tty-spinner"
 require "timeout"
 
+require_relative "http"
+include AI::Http
+
 module AI
   # :reek:MissingSafeMethod { exclude: [ generate! ] }
   # :reek:TooManyMethods
@@ -18,22 +22,60 @@ module AI
   # :reek:IrresponsibleModule
   class Chat
     # :reek:Attribute
-    attr_accessor :background, :code_interpreter, :image_generation, :image_folder, :messages, :model, :previous_response_id, :web_search
+    attr_accessor :background, :code_interpreter, :conversation_id, :image_generation, :image_folder, :messages, :model, :proxy, :previous_response_id, :web_search
     attr_reader :reasoning_effort, :client, :schema
 
     VALID_REASONING_EFFORTS = [:low, :medium, :high].freeze
+    PROXY_URL = "https://prepend.me/".freeze
 
     def initialize(api_key: nil, api_key_env_var: "OPENAI_API_KEY")
-      api_key
+      @api_key = api_key || ENV.fetch(api_key_env_var)
       @messages = []
       @reasoning_effort = nil
       @model = "gpt-4.1-nano"
-      @client = OpenAI::Client.new(api_key: api_key)
+      @client = OpenAI::Client.new(api_key: @api_key)
       @previous_response_id = nil
+      @proxy = false
       @image_generation = false
       @image_folder = "./images"
     end
 
+    def self.generate_schema!(description, api_key: nil, api_key_env_var: "OPENAI_API_KEY", proxy: false)
+      @api_key ||= ENV.fetch(api_key_env_var)
+      prompt_path = File.expand_path("../prompts/schema_generator.md", __dir__)
+      system_prompt = File.open(prompt_path).read
+
+      json = if proxy
+        uri = URI(PROXY_URL + "api.openai.com/v1/responses")
+        parameters = {
+          model: "o4-mini",
+          input: [
+            {role: :system, content: system_prompt},
+            {role: :user, content: description},
+          ],
+          text: {format: {type: "json_object"}},
+          reasoning: {effort: "high"}
+        }
+
+        send_request(uri, content_type: "json", parameters: parameters, method: "post")
+      else
+        client = OpenAI::Client.new(api_key: api_key)
+        response = client.responses.create(
+          model: "o4-mini",
+          input: [
+            {role: :system, content: system_prompt},
+            {role: :user, content: description}
+          ],
+          text: {format: {type: "json_object"}},
+          reasoning: {effort: "high"}
+        )
+
+        output_text = response.output_text
+        JSON.parse(output_text)
+      end
+      JSON.pretty_generate(json)
+    end
+
     # :reek:TooManyStatements
     # :reek:NilCheck
     def add(content, role: "user", response: nil, status: nil, image: nil, images: nil, file: nil, files: nil)
@@ -100,10 +142,11 @@
     # :reek:NilCheck
     # :reek:TooManyStatements
     def generate!
+      validate_api_key
      response = create_response
       parse_response(response)
 
-      self.previous_response_id = last.dig(:response, :id)
+      self.previous_response_id = last.dig(:response, :id) unless (conversation_id && !background)
       last
     end
 
@@ -115,7 +158,7 @@
       response = if wait
         wait_for_response(timeout)
       else
-
+        retrieve_response(previous_response_id)
       end
       parse_response(response)
     end
@@ -153,6 +196,50 @@
       messages.last
     end
 
+    def items(order: :asc)
+      raise "No conversation_id set. Call generate! first to create a conversation." unless conversation_id
+
+      if proxy
+        uri = URI(PROXY_URL + "api.openai.com/v1/conversations/#{conversation_id}/items?order=#{order.to_s}")
+        response_hash = send_request(uri, content_type: "json", method: "get")
+
+        if response_hash.key?(:data)
+          response_hash.dig(:data).map do |hash|
+            # Transform values to allow expected symbols that non-proxied request returns
+
+            hash.transform_values! do |value|
+              if hash.key(value) == :type
+                value.to_sym
+              else
+                value
+              end
+            end
+          end
+          response_hash
+        end
+        # Convert to Struct to allow same interface as non-proxied request
+        create_deep_struct(response_hash)
+      else
+        client.conversations.items.list(conversation_id, order: order)
+      end
+    end
+
+    def verbose
+      page = items
+
+      box_width = 78
+      inner_width = box_width - 4
+
+      puts
+      puts "┌#{"─" * (box_width - 2)}┐"
+      puts "│ Conversation: #{conversation_id.ljust(inner_width - 14)} │"
+      puts "│ Items: #{page.data.length.to_s.ljust(inner_width - 7)} │"
+      puts "└#{"─" * (box_width - 2)}┘"
+      puts
+
+      ap page.data, limit: 10, indent: 2
+    end
+
     def inspect
       "#<#{self.class.name} @messages=#{messages.inspect} @model=#{@model.inspect} @schema=#{@schema.inspect} @reasoning_effort=#{@reasoning_effort.inspect}>"
     end
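The proxied branch of `items` symbolizes the `:type` value by reverse lookup: `Hash#key(value)` returns the first key that maps to the given value. A minimal standalone sketch of that transform on a hypothetical item hash (note the caveat that any other value equal to the `:type` value would also be symbolized, since the lookup is by value, not by key):

```ruby
# Hypothetical proxied conversation item, already symbol-keyed.
item = {type: "message", role: "user"}

# Mirrors the transform in the diff: Hash#key does a reverse value-to-key
# lookup, so only the value stored under :type is converted to a Symbol.
item.transform_values! do |value|
  item.key(value) == :type ? value.to_sym : value
end

puts item[:type].inspect  # the :type value is now a Symbol
puts item[:role].inspect  # other values are left as Strings
```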
@@ -164,9 +251,9 @@
     # :reek:DuplicateMethodCall
     # :reek:UncommunicativeParameterName
     def pretty_print(q)
-      q.group(1, "#<#{self.class}",
+      q.group(1, "#<#{self.class}", ">") do
         q.breakable
-
+
         # Show messages with truncation
         q.text "@messages="
         truncated_messages = @messages.map do |msg|
@@ -177,7 +264,7 @@
           truncated_msg
         end
         q.pp truncated_messages
-
+
         # Show other instance variables (except sensitive ones)
         skip_vars = [:@messages, :@api_key, :@client]
         instance_variables.sort.each do |var|
@@ -196,6 +283,7 @@
     private
 
     class InputClassificationError < StandardError; end
+    class WrongAPITokenUsedError < StandardError; end
 
     # :reek:FeatureEnvy
     # :reek:ManualDispatch
@@ -210,6 +298,17 @@
       end
     end
 
+    def create_conversation
+      self.conversation_id = if proxy
+        uri = URI(PROXY_URL + "api.openai.com/v1/conversations")
+        response = send_request(uri, content_type: "json", method: "post")
+        response.dig(:id)
+      else
+        conversation = client.conversations.create
+        conversation.id
+      end
+    end
+
     # :reek:TooManyStatements
     def create_response
       parameters = {
@@ -220,27 +319,72 @@
       parameters[:tools] = tools unless tools.empty?
       parameters[:text] = schema if schema
       parameters[:reasoning] = {effort: reasoning_effort} if reasoning_effort
-
+
+      if previous_response_id && conversation_id
+        warn "Both conversation_id and previous_response_id are set. Using previous_response_id for forking. Only set one."
+        parameters[:previous_response_id] = previous_response_id
+      elsif previous_response_id
+        parameters[:previous_response_id] = previous_response_id
+      elsif conversation_id
+        parameters[:conversation] = conversation_id
+      else
+        create_conversation
+      end
 
       messages_to_send = prepare_messages_for_api
       parameters[:input] = strip_responses(messages_to_send) unless messages_to_send.empty?
 
-
+      if proxy
+        uri = URI(PROXY_URL + "api.openai.com/v1/responses")
+        send_request(uri, content_type: "json", parameters: parameters, method: "post")
+      else
+        client.responses.create(**parameters)
+      end
     end
 
     # :reek:NilCheck
     # :reek:TooManyStatements
     def parse_response(response)
-
-
-
-
+      if proxy && response.is_a?(Hash)
+        response_messages = response.dig(:output).select do |output|
+          output.dig(:type) == "message"
+        end
+
+        message_contents = response_messages.map do |message|
+          message.dig(:content)
+        end.flatten
+
+        output_texts = message_contents.select do |content|
+          content[:type] == "output_text"
+        end
+
+        text_response = output_texts.map { |output| output[:text] }.join
+        response_id = response.dig(:id)
+        response_status = response.dig(:status).to_sym
+        response_model = response.dig(:model)
+        response_usage = response.dig(:usage)&.slice(:input_tokens, :output_tokens, :total_tokens)
+
+        if response.key?(:conversation)
+          self.conversation_id = response.dig(:conversation, :id)
+        end
+      else
+        text_response = response.output_text
+        response_id = response.id
+        response_status = response.status
+        response_model = response.model
+        response_usage = response.usage.to_h.slice(:input_tokens, :output_tokens, :total_tokens)
+
+        if response.conversation
+          self.conversation_id = response.conversation.id
+        end
+      end
+      image_filenames = extract_and_save_images(response) + extract_and_save_files(response)
 
       chat_response = {
         id: response_id,
-        model:
-        usage: response_usage,
-        total_tokens: response_usage
+        model: response_model,
+        usage: response_usage || {},
+        total_tokens: response_usage&.fetch(:total_tokens, 0),
         images: image_filenames
       }.compact
 
@@ -261,7 +405,7 @@
         role: "assistant",
         content: response_content,
         response: chat_response,
-        status:
+        status: response_status
       }
 
       message.store(:images, image_filenames) unless image_filenames.empty?
@@ -461,19 +605,31 @@
     def extract_and_save_images(response)
       image_filenames = []
 
-
-
-
+      if proxy
+        image_outputs = response.dig(:output).select { |output|
+          output.dig(:type) == "image_generation_call"
+        }
+      else
+        image_outputs = response.output.select { |output|
+          output.respond_to?(:type) && output.type == :image_generation_call
+        }
+      end
 
       return image_filenames if image_outputs.empty?
 
-
+      response_id = proxy ? response.dig(:id) : response.id
+      subfolder_path = create_images_folder(response_id)
 
       image_outputs.each_with_index do |output, index|
-
+        if proxy
+          next unless output.key?(:result) && output.dig(:result)
+        else
+          next unless output.respond_to?(:result) && output.result
+        end
 
         warn_if_file_fails_to_save do
-
+          result = proxy ? output.dig(:result) : output.result
+          image_data = Base64.strict_decode64(result)
 
           filename = "#{(index + 1).to_s.rjust(3, "0")}.png"
           file_path = File.join(subfolder_path, filename)
@@ -498,10 +654,32 @@
     end
 
     def warn_if_file_fails_to_save
-
-
-
-
+      yield
+    rescue => error
+      warn "Failed to save image: #{error.message}"
+    end
+
+    def validate_api_key
+      openai_api_key_used = @api_key.start_with?("sk-proj")
+      proxy_api_key_used = !openai_api_key_used
+      proxy_enabled = proxy
+      proxy_disabled = !proxy
+
+      if openai_api_key_used && proxy_enabled
+        raise WrongAPITokenUsedError, <<~STRING
+          It looks like you're using an official API key from OpenAI with proxying enabled. When proxying is enabled you must use an OpenAI API key from prepend.me. Please disable proxy or update your API key before generating a response.
+        STRING
+      elsif proxy_api_key_used && proxy_disabled
+        raise WrongAPITokenUsedError, <<~STRING
+          It looks like you're using an unofficial OpenAI API key from prepend.me. When using an unofficial API key you must enable proxy before generating a response. Proxying is currently disabled, please enable it before generating a response.
+
+          Example:
+
+            chat = AI::Chat.new
+            chat.proxy = true
+            chat.user(...)
+            chat.generate!
+        STRING
       end
     end
 
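`validate_api_key` decides key type purely by prefix: anything starting with `sk-proj` is treated as an official OpenAI key, everything else as a prepend.me key. A sketch of the mismatch rule in isolation; the helper name and sample key strings are made up for illustration:

```ruby
# Illustrative re-statement of the validate_api_key rule: a mismatch is an
# official key with proxying enabled, or a non-official key without it.
def key_mismatch?(api_key, proxy_enabled)
  official = api_key.start_with?("sk-proj")
  (official && proxy_enabled) || (!official && !proxy_enabled)
end

puts key_mismatch?("sk-proj-abc123", true)   # official key + proxy on: mismatch
puts key_mismatch?("pm-example-key", false)  # proxy-style key + proxy off: mismatch
puts key_mismatch?("sk-proj-abc123", false)  # official key, proxy off: valid
puts key_mismatch?("pm-example-key", true)   # proxy key, proxy on: valid
```

In the gem itself a mismatch raises `WrongAPITokenUsedError` before any request is made, which fails fast instead of surfacing an opaque HTTP authentication error.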
@@ -512,41 +690,74 @@
     def extract_and_save_files(response)
       filenames = []
 
-
-
-
-
-
-
-
+      if proxy
+        message_outputs = response.dig(:output).select do |output|
+          output.dig(:type) == "message"
+        end
+
+        outputs_with_annotations = message_outputs.map do |message|
+          message.dig(:content).find do |content|
+            content.dig(:annotations).length.positive?
+          end
+        end.compact
+      else
+        message_outputs = response.output.select do |output|
+          output.respond_to?(:type) && output.type == :message
         end
-
+
+        outputs_with_annotations = message_outputs.map do |message|
+          message.content.find do |content|
+            content.respond_to?(:annotations) && content.annotations.length.positive?
+          end
+        end.compact
+      end
 
       return filenames if outputs_with_annotations.empty?
 
-
-
-        output.annotations.find do |annotation|
-          annotation.respond_to?(:filename)
-        end
-      end.compact
-
-      annotations.each do |annotation|
-        container_id = annotation.container_id
-        file_id = annotation.file_id
-        filename = annotation.filename
+      response_id = proxy ? response.dig(:id) : response.id
+      subfolder_path = create_images_folder(response_id)
 
-
-
-
-
-
-
+      if proxy
+        annotations = outputs_with_annotations.map do |output|
+          output.dig(:annotations).find do |annotation|
+            annotation.key?(:filename)
+          end
+        end.compact
+
+        annotations.each do |annotation|
+          container_id = annotation.dig(:container_id)
+          file_id = annotation.dig(:file_id)
+          filename = annotation.dig(:filename)
+
+          warn_if_file_fails_to_save do
+            file_content = retrieve_file(file_id, container_id: container_id)
+            file_path = File.join(subfolder_path, filename)
+            File.binwrite(file_path, file_content)
+            filenames << file_path
+          end
+        end
+      else
+        annotations = outputs_with_annotations.map do |output|
+          output.annotations.find do |annotation|
+            annotation.respond_to?(:filename)
+          end
+        end.compact
+
+        annotations.each do |annotation|
+          container_id = annotation.container_id
+          file_id = annotation.file_id
+          filename = annotation.filename
+
+          warn_if_file_fails_to_save do
+            file_content = retrieve_file(file_id, container_id: container_id)
+            file_path = File.join(subfolder_path, filename)
+            File.open(file_path, "wb") do |file|
+              file.write(file_content.read)
+            end
+            filenames << file_path
           end
-        filenames << file_path
         end
       end
-
       filenames
     end
 
@@ -561,13 +772,11 @@
     end
 
     def timeout_request(duration)
-
-
-      yield
-    end
-    rescue Timeout::Error
-    client.responses.cancel(previous_response_id)
+      Timeout.timeout(duration) do
+        yield
       end
+    rescue Timeout::Error
+      client.responses.cancel(previous_response_id)
     end
 
     # :reek:DuplicateMethodCall
@@ -575,21 +784,56 @@ module AI
     def wait_for_response(timeout)
       spinner = TTY::Spinner.new("[:spinner] Thinking ...", format: :dots)
       spinner.auto_spin
-      api_response =
+      api_response = retrieve_response(previous_response_id)
       number_of_times_polled = 0
       response = timeout_request(timeout) do
-
+        status = if api_response.respond_to?(:status)
+          api_response.status
+        else
+          api_response.dig(:status)&.to_sym
+        end
+
+        while status != :completed
           some_amount_of_seconds = calculate_wait(number_of_times_polled)
           sleep some_amount_of_seconds
           number_of_times_polled += 1
-          api_response =
+          api_response = retrieve_response(previous_response_id)
+          status = if api_response.respond_to?(:status)
+            api_response.status
+          else
+            api_response.dig(:status)&.to_sym
+          end
         end
         api_response
       end
-
-
+
+      status = if api_response.respond_to?(:status)
+        api_response.status
+      else
+        api_response.dig(:status).to_sym
+      end
+      exit_message = status == :cancelled ? "request timed out" : "done!"
       spinner.stop(exit_message)
       response
     end
+
+    def retrieve_response(previous_response_id)
+      if proxy
+        uri = URI(PROXY_URL + "api.openai.com/v1/responses/#{previous_response_id}")
+        send_request(uri, content_type: "json", method: "get")
+      else
+        client.responses.retrieve(previous_response_id)
+      end
+    end
+
+    def retrieve_file(file_id, container_id: nil)
+      if proxy
+        uri = URI(PROXY_URL + "api.openai.com/v1/containers/#{container_id}/files/#{file_id}/content")
+        send_request(uri, method: "get")
+      else
+        container_content = client.containers.files.content
+        file_content = container_content.retrieve(file_id, container_id: container_id)
+      end
+    end
   end
 end
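The polling loop above sleeps for `calculate_wait(number_of_times_polled)` seconds between retrievals; that method is not part of this diff. A plausible capped exponential-backoff sketch (an assumption, including the `base:` and `cap:` parameters, not the gem's actual code):

```ruby
# Hypothetical calculate_wait: double the delay on each poll, capped so
# long-running background responses are not polled too rarely.
def calculate_wait(number_of_times_polled, base: 0.5, cap: 8.0)
  [base * (2**number_of_times_polled), cap].min
end

waits = (0..5).map { |n| calculate_wait(n) }
p waits  # => [0.5, 1.0, 2.0, 4.0, 8.0, 8.0]
```

The cap keeps total poll traffic bounded while the first few polls stay responsive for fast completions.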
data/lib/ai/http.rb
ADDED
@@ -0,0 +1,45 @@
+require "net/http"
+module AI
+  module Http
+    def send_request(uri, content_type: nil, parameters: nil, method:)
+      Net::HTTP.start(uri.host, 443, use_ssl: true) do |http|
+        headers = {
+          "Authorization" => "Bearer #{@api_key}"
+        }
+        if content_type
+          headers.store("Content-Type", "application/json")
+        end
+        net_http_method = "Net::HTTP::#{method.downcase.capitalize}"
+        client = Kernel.const_get(net_http_method)
+        request = client.new(uri, headers)
+
+        if parameters
+          request.body = parameters.to_json
+        end
+        response = http.request(request)
+
+        # Handle proxy server 503 HTML response
+        begin
+          if content_type
+            return JSON.parse(response.body, symbolize_names: true)
+          else
+            return response.body
+          end
+        rescue JSON::ParserError, TypeError => e
+          raise JSON::ParserError, "Failed to parse response from proxy: #{e.message}"
+        end
+      end
+    end
+
+    def create_deep_struct(value)
+      case value
+      when Hash
+        OpenStruct.new(value.transform_values { |hash_value| send __method__, hash_value })
+      when Array
+        value.map { |element| send __method__, element }
+      else
+        value
+      end
+    end
+  end
+end
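The new `create_deep_struct` helper recursively converts parsed-JSON hashes (as returned by the proxy path of `send_request`) into dot-accessible objects. A standalone copy of the same method shows the effect:

```ruby
require "ostruct"

# Same recursion as the helper added in lib/ai/http.rb: hashes become
# OpenStructs, arrays are mapped element-wise, scalars pass through.
def create_deep_struct(value)
  case value
  when Hash
    OpenStruct.new(value.transform_values { |hash_value| send __method__, hash_value })
  when Array
    value.map { |element| send __method__, element }
  else
    value
  end
end

response = create_deep_struct({status: :completed, output: [{type: "message"}]})
puts response.status             # => completed
puts response.output.first.type  # => message
```

This is what lets the proxy code path respond to the same `api_response.status`-style calls as the official client's response objects.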
data/lib/prompts/schema_generator.md
ADDED
@@ -0,0 +1,123 @@
+You are an expert at creating JSON Schemas for OpenAI's Structured Outputs feature.
+
+Generate a valid JSON Schema that follows these strict rules:
+
+## OUTPUT FORMAT
+Return a JSON object with this root structure:
+- "name": a short snake_case identifier for the schema
+- "strict": must be true
+- "schema": the actual JSON Schema object
+
+## SCHEMA REQUIREMENTS
+
+### Critical Rules:
+1. Root schema must be "type": "object" (not anyOf)
+2. Set "additionalProperties": false on ALL objects (including nested ones)
+3. ALL properties must be in "required" arrays (no optional fields unless using union types)
+4. Always specify "items" for arrays
+
+### Supported Types:
+- string, number, boolean, integer, object, array, enum, anyOf
+
+### Optional Fields:
+To make a field optional, use union types:
+- "type": ["string", "null"] for optional string
+- "type": ["number", "null"] for optional number
+- etc.
+
+### String Properties (use when appropriate):
+- "pattern": regex pattern (e.g., "^@[a-zA-Z0-9_]+$" for usernames)
+- "format": predefined formats (date-time, time, date, duration, email, hostname, ipv4, ipv6, uuid)
+- Example: {"type": "string", "format": "email", "description": "User's email address"}
+
+### Number Properties (use when appropriate):
+- "minimum": minimum value (inclusive)
+- "maximum": maximum value (inclusive)
+- "exclusiveMinimum": minimum value (exclusive)
+- "exclusiveMaximum": maximum value (exclusive)
+- "multipleOf": must be multiple of this value
+- Example: {"type": "number", "minimum": -130, "maximum": 130, "description": "Temperature in degrees"}
+
+### Array Properties (use when appropriate):
+- "minItems": minimum number of items
+- "maxItems": maximum number of items
+- Example: {"type": "array", "items": {...}, "minItems": 1, "maxItems": 10}
+
+### Enum Values:
+Use enums for fixed sets of values:
+- Example: {"type": "string", "enum": ["draft", "published", "archived"]}
+
+### Nested Objects:
+All nested objects MUST have:
+- "additionalProperties": false
+- Complete "required" arrays
+- Clear "description" fields
+
+### Recursive Schemas:
+Support recursion using "$ref":
+- Root recursion: {"$ref": "#"}
+- Definition reference: {"$ref": "#/$defs/node_name"}
+
+### Descriptions:
+Add clear, helpful "description" fields for all properties to guide the model.
+
+## CONSTRAINTS
+- Max 5000 properties total, 10 levels of nesting
+- Max 1000 enum values across all enums
+- Total string length of all names/values cannot exceed 120,000 chars
+
+## EXAMPLE OUTPUTS
+
+Simple example:
+{
+  "name": "user_profile",
+  "strict": true,
+  "schema": {
+    "type": "object",
+    "properties": {
+      "name": {"type": "string", "description": "User's full name"},
+      "age": {"type": "integer", "minimum": 0, "maximum": 150, "description": "User's age in years"},
+      "email": {"type": "string", "format": "email", "description": "User's email address"}
+    },
+    "required": ["name", "age", "email"],
+    "additionalProperties": false
+  }
+}
+
+Complex example with arrays and nesting:
+{
+  "name": "recipe_collection",
+  "strict": true,
+  "schema": {
+    "type": "object",
+    "properties": {
+      "recipes": {
+        "type": "array",
+        "items": {
+          "type": "object",
+          "properties": {
+            "name": {"type": "string", "description": "Recipe name"},
+            "ingredients": {
+              "type": "array",
+              "items": {
+                "type": "object",
+                "properties": {
+                  "name": {"type": "string"},
+                  "quantity": {"type": "string"}
+                },
+                "required": ["name", "quantity"],
+                "additionalProperties": false
+              }
+            }
+          },
+          "required": ["name", "ingredients"],
+          "additionalProperties": false
+        }
+      }
+    },
+    "required": ["recipes"],
+    "additionalProperties": false
+  }
+}
+
+Return ONLY the JSON object, no additional text or explanation.
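The strict-mode rules in the prompt file above are mechanical: every object sets `"additionalProperties": false` and lists every property in `"required"`. A short Ruby sketch that builds the prompt's own `user_profile` example and checks those two invariants:

```ruby
require "json"

# The simple example schema from the prompt, as a Ruby hash.
schema = {
  name: "user_profile",
  strict: true,
  schema: {
    type: "object",
    properties: {
      name: {type: "string", description: "User's full name"},
      age: {type: "integer", minimum: 0, maximum: 150, description: "User's age in years"},
      email: {type: "string", format: "email", description: "User's email address"}
    },
    required: ["name", "age", "email"],
    additionalProperties: false
  }
}

inner = schema[:schema]
# Strict-mode invariants: every declared property is required, and
# additionalProperties is explicitly false.
all_required = inner[:required].sort == inner[:properties].keys.map(&:to_s).sort
closed = inner[:additionalProperties] == false
puts all_required && closed  # => true
```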
metadata
CHANGED
@@ -1,14 +1,13 @@
 --- !ruby/object:Gem::Specification
 name: ai-chat
 version: !ruby/object:Gem::Version
-  version: 0.
+  version: 0.3.0
 platform: ruby
 authors:
 - Raghu Betina
-autorequire:
 bindir: bin
 cert_chain: []
-date: 2025-
+date: 2025-11-13 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: openai
@@ -16,14 +15,14 @@ dependencies:
     requirements:
     - - "~>"
      - !ruby/object:Gem::Version
-       version: '0.
+       version: '0.34'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
      - !ruby/object:Gem::Version
-       version: '0.
+       version: '0.34'
 - !ruby/object:Gem::Dependency
   name: marcel
   requirement: !ruby/object:Gem::Requirement
@@ -42,6 +41,9 @@ dependencies:
   name: base64
   requirement: !ruby/object:Gem::Requirement
     requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '0.1'
     - - ">"
       - !ruby/object:Gem::Version
         version: 0.1.1
@@ -49,6 +51,9 @@ dependencies:
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '0.1'
     - - ">"
       - !ruby/object:Gem::Version
         version: 0.1.1
@@ -66,6 +71,20 @@ dependencies:
     - - "~>"
       - !ruby/object:Gem::Version
         version: '2.0'
+- !ruby/object:Gem::Dependency
+  name: ostruct
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '0.2'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '0.2'
 - !ruby/object:Gem::Dependency
   name: tty-spinner
   requirement: !ruby/object:Gem::Requirement
@@ -80,20 +99,34 @@ dependencies:
     - - "~>"
      - !ruby/object:Gem::Version
        version: 0.9.3
+- !ruby/object:Gem::Dependency
+  name: amazing_print
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1.8'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '1.8'
 - !ruby/object:Gem::Dependency
   name: dotenv
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
-        version:
+        version: 1.0.0
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
-        version:
+        version: 1.0.0
 - !ruby/object:Gem::Dependency
   name: refinements
   requirement: !ruby/object:Gem::Requirement
@@ -108,7 +141,6 @@ dependencies:
     - - "~>"
      - !ruby/object:Gem::Version
        version: '11.1'
-description:
 email:
 - raghu@firstdraft.com
 executables: []
@@ -123,17 +155,18 @@ files:
 - lib/ai-chat.rb
 - lib/ai/amazing_print.rb
 - lib/ai/chat.rb
+- lib/ai/http.rb
+- lib/prompts/schema_generator.md
 homepage: https://github.com/firstdraft/ai-chat
 licenses:
 - MIT
 metadata:
   bug_tracker_uri: https://github.com/firstdraft/ai-chat/issues
   changelog_uri: https://github.com/firstdraft/ai-chat/blob/main/CHANGELOG.md
-  homepage_uri: https://
+  homepage_uri: https://rubygems.org/gems/ai-chat
   label: AI Chat
   rubygems_mfa_required: 'true'
   source_code_uri: https://github.com/firstdraft/ai-chat
-post_install_message:
 rdoc_options: []
 require_paths:
 - lib
@@ -148,8 +181,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
   version: '0'
 requirements: []
-rubygems_version: 3.
-signing_key:
+rubygems_version: 3.6.2
 specification_version: 4
 summary: A beginner-friendly Ruby interface for OpenAI's API
 test_files: []
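The metadata changes above (new runtime dependencies `ostruct` and `amazing_print`, a pinned `dotenv` development dependency) come from gemspec declarations. A sketch of the corresponding declarations (the actual `ai-chat.gemspec` is not part of this chunk, so names other than the dependencies listed above are illustrative):

```ruby
require "rubygems"

# Build an in-memory specification mirroring the dependency changes in the
# metadata diff; this is a sketch, not the gem's real gemspec file.
spec = Gem::Specification.new do |s|
  s.name = "ai-chat"
  s.version = "0.3.0"
  s.summary = "A beginner-friendly Ruby interface for OpenAI's API"
  s.add_dependency "ostruct", "~> 0.2"
  s.add_dependency "amazing_print", "~> 1.8"
  s.add_development_dependency "dotenv", ">= 1.0.0"
end

runtime = spec.dependencies.select { |d| d.type == :runtime }.map(&:name)
p runtime  # => ["ostruct", "amazing_print"]
```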