ai-chat 0.3.2 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 4dd73c65bb0fa8183801233a76038c8e9a32f86268ab113311317a47f72504db
- data.tar.gz: 8a6541039268eee87a035f7547d767919962b8f20573e06618700e05ea72ffe6
+ metadata.gz: 975e7f80044ac46d72ad1c08e290d7fa71b43048fe76d68784d3961c36efde95
+ data.tar.gz: cf0a2b5fcee3e6ee413580419c992efe6c50e5935be875f2e82d8a740c7d15fb
  SHA512:
- metadata.gz: b2564b32f5f1c69e8749eb079339d120cbfbf71ce9c5870cfb7b47ea81891ec11dbc9290b59f2b49b35830f59d7f0817d3dcaa558bc5ef1d97309a0a26da3733
- data.tar.gz: ae9ebf5fd4b0a1001e6fb047585c852055c43cfc2b7af7fc3385ed8df355aec260e2ba6e80a2443b9104648a49356124974353317d3818449413fe923ab27a0a
+ metadata.gz: fe727e64a0388922db85c3085dea85c6622b88c886cbb25fc660dc9bad8291205a1904a19f611457ac83cbe673a299db4d97f0b658e236de0e3ccccb46f44aac
+ data.tar.gz: 4be9d9a80ea39e20ef2c11d8142f9403b7b43f46c8f4ccc5a80f88a027980be11b1e882165e67e0380d2a8982e0e4bbd3489585cac3c4a0a291d46a75813d53d
data/README.md CHANGED
@@ -26,19 +26,19 @@ The `examples/` directory contains focused examples for specific features:
 
  - `01_quick.rb` - Quick overview of key features
  - `02_core.rb` - Core functionality (basic chat, messages, responses)
- - `03_configuration.rb` - Configuration options (API keys, models, reasoning effort)
- - `04_multimodal.rb` - Basic file and image handling
- - `05_file_handling_comprehensive.rb` - Advanced file handling (PDFs, text files, Rails uploads)
- - `06_structured_output.rb` - Basic structured output with schemas
- - `07_structured_output_comprehensive.rb` - All 6 supported schema formats
- - `08_advanced_usage.rb` - Advanced patterns (chaining, web search)
- - `09_edge_cases.rb` - Error handling and edge cases
- - `10_additional_patterns.rb` - Less common usage patterns (direct add method, web search + schema, etc.)
- - `11_mixed_content.rb` - Combining text and images in messages
- - `12_image_generation.rb` - Using the image generation tool
- - `13_code_interpreter.rb` - Using the code interpreter tool
- - `14_background_mode.rb` - Running responses in background mode
- - `15_conversation_features_comprehensive.rb` - All conversation features (auto-creation, inspection, loading, forking)
+ - `03_multimodal.rb` - Basic file and image handling
+ - `04_file_handling_comprehensive.rb` - Advanced file handling (PDFs, text files, Rails uploads)
+ - `05_structured_output.rb` - Basic structured output with schemas
+ - `06_structured_output_comprehensive.rb` - All 6 supported schema formats
+ - `07_edge_cases.rb` - Error handling and edge cases
+ - `08_additional_patterns.rb` - Less common usage patterns (direct add method, web search + schema, etc.)
+ - `09_mixed_content.rb` - Combining text and images in messages
+ - `10_image_generation.rb` - Using the image generation tool
+ - `11_code_interpreter.rb` - Using the code interpreter tool
+ - `12_background_mode.rb` - Running responses in background mode
+ - `13_conversation_features_comprehensive.rb` - Conversation features (auto-creation, continuity, inspection)
+ - `14_schema_generation.rb` - Generate JSON schemas from natural language
+ - `15_proxy.rb` - Proxy support for student accounts
 
  Each example is self-contained and can be run individually:
  ```bash
@@ -93,7 +93,7 @@ a.generate! # => { :role => "assistant", :content => "Matz is nice and so we are
  pp a.messages
  # => [
  # {:role=>"user", :content=>"If the Ruby community had an official motto, what might it be?"},
- # {:role=>"assistant", :content=>"Matz is nice and so we are nice", :response => { id=resp_abc... model=gpt-4.1-nano tokens=12 } }
+ # {:role=>"assistant", :content=>"Matz is nice and so we are nice", :response => { id=resp_abc... model=gpt-5.1 tokens=12 } }
  # ]
 
  # Continue the conversation
@@ -113,7 +113,7 @@ That's it! You're building something like this:
  [
  {:role => "system", :content => "You are a helpful assistant"},
  {:role => "user", :content => "Hello!"},
- {:role => "assistant", :content => "Hi there! How can I help you today?", :response => { id=resp_abc... model=gpt-4.1-nano tokens=12 } }
+ {:role => "assistant", :content => "Hi there! How can I help you today?", :response => { id=resp_abc... model=gpt-5.1 tokens=12 } }
  ]
  ```
 
@@ -183,25 +183,14 @@ d.generate! # Generate a response
 
  ### Model
 
- By default, the gem uses OpenAI's `gpt-4.1-nano` model. If you want to use a different model, you can set it:
+ By default, the gem uses OpenAI's `gpt-5.1` model. If you want to use a different model, you can set it:
 
  ```ruby
  e = AI::Chat.new
- e.model = "o4-mini"
+ e.model = "gpt-4o"
  ```
 
- As of 2025-07-29, the list of chat models that you probably want to choose from are:
-
- #### Foundation models
-
- - gpt-4.1-nano
- - gpt-4.1-mini
- - gpt-4.1
-
- #### Reasoning models
-
- - o4-mini
- - o3
+ See [OpenAI's model documentation](https://platform.openai.com/docs/models) for available models.
 
  ### API key
 
@@ -248,7 +237,7 @@ h.last[:content]
 
  ## Web Search
 
- To give the model access to real-time information from the internet, you can enable web searching. This uses OpenAI's built-in `web_search_preview` tool.
+ To give the model access to real-time information from the internet, you can enable web searching. This uses OpenAI's built-in `web_search` tool.
 
  ```ruby
  m = AI::Chat.new
@@ -257,17 +246,6 @@ m.user("What are the latest developments in the Ruby language?")
  m.generate! # This may use web search to find current information
  ```
 
- **Note:** This feature requires a model that supports the `web_search_preview` tool, such as `gpt-4o` or `gpt-4o-mini`. The gem will attempt to use a compatible model if you have `web_search` enabled.
-
- If you don't want the model to use web search, set `web_search` to `false` (this is the default):
-
- ```ruby
- m = AI::Chat.new
- m.web_search = false
- m.user("What are the latest developments in the Ruby language?")
- m.generate! # This definitely won't use web search to find current information
- ```
-
  ## Structured Output
 
  Get back Structured Output by setting the `schema` attribute (I suggest using [OpenAI's handy tool for generating the JSON Schema](https://platform.openai.com/docs/guides/structured-outputs)):
@@ -474,27 +452,6 @@ l.generate!
 
  **Note**: Images should use `image:`/`images:` parameters, while documents should use `file:`/`files:` parameters.
 
- ## Re-sending old images and files
-
- Note: if you generate another API request using the same chat, old images and files in the conversation history will not be re-sent by default. If you really want to re-send old images and files, then you must set `previous_response_id` to `nil`:
-
- ```ruby
- a = AI::Chat.new
- a.user("What color is the object in this photo?", image: "thing.png")
- a.generate! # => "Red"
- a.user("What is the object in the photo?")
- a.generate! # => { :content => "I don't see a photo", ... }
-
- b = AI::Chat.new
- b.user("What color is the object in this photo?", image: "thing.png")
- b.generate! # => "Red"
- b.user("What is the object in the photo?")
- b.previous_response_id = nil
- b.generate! # => { :content => "An apple", ... }
- ```
-
- If you don't set `previous_response_id` to `nil`, the model won't have the old image(s) to work with.
-
  ## Image generation
 
  You can enable OpenAI's image generation tool:
@@ -608,25 +565,24 @@ puts response
 
  With this, you can loop through any conversation's history (perhaps after retrieving it from your database), recreate an `AI::Chat`, and then continue it.
 
- ## Reasoning Models
+ ## Reasoning Effort
 
- When using reasoning models like `o3` or `o4-mini`, you can specify a reasoning effort level to control how much reasoning the model does before producing its final response:
+ You can control how much reasoning the model does before producing its response:
 
  ```ruby
  l = AI::Chat.new
- l.model = "o3-mini"
- l.reasoning_effort = "medium" # Can be "low", "medium", or "high"
+ l.reasoning_effort = "low" # Can be "low", "medium", or "high"
 
  l.user("What does this error message mean? <insert error message>")
  l.generate!
  ```
 
- The `reasoning_effort` parameter guides the model on how many reasoning tokens to generate before creating a response to the prompt. Options are:
+ The `reasoning_effort` parameter guides the model on how many reasoning tokens to generate. Options are:
  - `"low"`: Favors speed and economical token usage.
- - `"medium"`: (Default) Balances speed and reasoning accuracy.
+ - `"medium"`: Balances speed and reasoning accuracy.
  - `"high"`: Favors more complete reasoning.
 
- Setting to `nil` disables the reasoning parameter.
+ By default, `reasoning_effort` is `nil`, which means no reasoning parameter is sent to the API. For `gpt-5.1` (the default model), this is equivalent to `"none"` reasoning.
 
  ## Advanced: Response Details
 
@@ -642,13 +598,13 @@ pp t.messages.last
  # => {
  # :role => "assistant",
  # :content => "Hello! How can I help you today?",
- # :response => { id=resp_abc... model=gpt-4.1-nano tokens=12 }
+ # :response => { id=resp_abc... model=gpt-5.1 tokens=12 }
  # }
 
  # Access detailed information
  response = t.last[:response]
  response[:id] # => "resp_abc123..."
- response[:model] # => "gpt-4.1-nano"
+ response[:model] # => "gpt-5.1"
  response[:usage] # => {:prompt_tokens=>5, :completion_tokens=>7, :total_tokens=>12}
  ```
 
@@ -658,26 +614,24 @@ This information is useful for:
  - Understanding which model was actually used.
  - Future features like cost tracking.
 
- You can also, if you know a response ID, continue an old conversation by setting the `previous_response_id`:
+ ### Last Response ID
+
+ In addition to the `response` object inside each message, the `AI::Chat` instance also provides a convenient reader, `last_response_id`, which always holds the ID of the most recent response.
 
  ```ruby
- t = AI::Chat.new
- t.user("Hello!")
- t.generate!
- old_id = t.last[:response][:id] # => "resp_abc123..."
+ chat = AI::Chat.new
+ chat.user("Hello")
+ chat.generate!
 
- # Some time in the future...
+ puts chat.last_response_id # => "resp_abc123..."
 
- u = AI::Chat.new
- u.previous_response_id = "resp_abc123..."
- u.user("What did I just say?")
- u.generate! # Will have context from the previous conversation}
- # ]
- u.user("What should we do next?")
- u.generate!
+ chat.user("Goodbye")
+ chat.generate!
+
+ puts chat.last_response_id # => "resp_xyz789..." (a new ID)
 
  ```
 
- Unless you've stored the previous messages somewhere yourself, this technique won't bring them back. But OpenAI remembers what they were, so that you can at least continue the conversation. (If you're using a reasoning model, this technique also preserves all of the model's reasoning.)
+ This is particularly useful for managing background tasks. When you make a request in background mode, you can immediately get the `last_response_id` to track, retrieve, or cancel that specific job later from a different process.
 
  ### Automatic Conversation Management
 
@@ -707,8 +661,6 @@ chat.user("Continue our discussion")
  chat.generate! # Uses the loaded conversation
  ```
 
- **Note on forking:** If you want to "fork" a conversation (create a branch), you can still use `previous_response_id`. If both `conversation_id` and `previous_response_id` are set, the gem will use `previous_response_id` and warn you.
-
  ## Inspecting Conversation Details
 
  The gem provides two methods to inspect what happened during a conversation:
@@ -812,9 +764,36 @@ q.messages = [
 
  ## Other Features Being Considered
 
- - **Session management**: Save and restore conversations by ID
  - **Streaming responses**: Real-time streaming as the AI generates its response
  - **Cost tracking**: Automatic calculation and tracking of API costs
+ - **Token usage helpers**: Convenience methods like `total_tokens` to sum usage across all responses in a conversation
+
+ ## TODO: Missing Test Coverage
+
+ The following gem-specific logic would benefit from additional RSpec test coverage:
+
+ 1. **Schema format normalization** - The `wrap_schema_if_needed` method detects and wraps 3 different input formats (raw, named, already-wrapped). This complex conditional logic could silently regress.
+
+ 2. **Multimodal content array building** - The `add` method builds nested structures when images/files are provided, handling `image`/`images` and `file`/`files` parameters with specific ordering (text → images → files).
+
+ 3. **File classification and processing** - `classify_obj` and `process_file_input` distinguish URLs vs file paths vs file-like objects, with MIME type detection determining encoding behavior.
+
+ 4. **Message preparation after response** - `prepare_messages_for_api` has slicing logic that only sends messages after the last response, preventing re-sending entire conversation history.
+
+ These are all gem-specific transformations (not just OpenAI pass-through) that could regress without proper test coverage.
+
+ ## TODO: Code Quality
+
+ Address Reek warnings (`bundle exec reek`). Currently 29 warnings for code smells like:
+
+ - `TooManyStatements` in several methods
+ - `DuplicateMethodCall` in `extract_and_save_files`, `verbose`, etc.
+ - `RepeatedConditional` for `proxy` checks
+ - `FeatureEnvy` in `parse_response` and `wait_for_response`
+
+ These don't affect functionality but indicate areas for refactoring.
+
+ Then, add `quality` back as a CI check.
 
  ## Testing with Real API Calls
 
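**Note:** Pulling together the README changes in this release, a minimal 0.4.0 usage sketch (assembled from the examples in this diff; the response ID and output values are illustrative):

```ruby
require "ai-chat"

chat = AI::Chat.new            # 0.4.0 defaults to gpt-5.1
chat.web_search = true         # now backed by OpenAI's web_search tool
chat.reasoning_effort = "low"  # nil by default; "low", "medium", or "high"
chat.user("What are the latest developments in the Ruby language?")
chat.generate!
puts chat.last_response_id     # => "resp_abc123..." (illustrative)
```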
data/ai-chat.gemspec CHANGED
@@ -2,7 +2,7 @@
 
  Gem::Specification.new do |spec|
  spec.name = "ai-chat"
- spec.version = "0.3.2"
+ spec.version = "0.4.0"
  spec.authors = ["Raghu Betina"]
  spec.email = ["raghu@firstdraft.com"]
  spec.homepage = "https://github.com/firstdraft/ai-chat"
@@ -21,7 +21,7 @@ Gem::Specification.new do |spec|
  spec.required_ruby_version = "~> 3.2"
  spec.add_runtime_dependency "openai", "~> 0.34"
  spec.add_runtime_dependency "marcel", "~> 1.0"
- spec.add_runtime_dependency "base64", "~> 0.1", "> 0.1.1"
+ spec.add_runtime_dependency "base64", "~> 0.1", "> 0.1.1"
  spec.add_runtime_dependency "json", "~> 2.0"
  spec.add_runtime_dependency "ostruct", "~> 0.2"
  spec.add_runtime_dependency "tty-spinner", "~> 0.9.3"
@@ -33,7 +33,7 @@ module AmazingPrint
  # :reek:TooManyStatements
  def format_ai_chat(chat)
  vars = []
-
+
  # Format messages with truncation
  if chat.instance_variable_defined?(:@messages)
  messages = chat.instance_variable_get(:@messages).map do |msg|
@@ -45,7 +45,7 @@ module AmazingPrint
  end
  end
  vars << ["@messages", messages]
-
+
  # Add other variables (except sensitive ones)
  skip_vars = [:@api_key, :@client, :@messages]
  chat.instance_variables.sort.each do |var|
@@ -68,7 +68,7 @@ module AmazingPrint
  if @options[:multiline]
  "#<#{object.class}\n#{data.map { |line| " #{line}" }.join("\n")}\n>"
  else
- "#<#{object.class} #{data.join(', ')}>"
+ "#<#{object.class} #{data.join(", ")}>"
  end
  end
  end
data/lib/ai/chat.rb CHANGED
@@ -22,19 +22,18 @@ module AI
  # :reek:IrresponsibleModule
  class Chat
  # :reek:Attribute
- attr_accessor :background, :code_interpreter, :conversation_id, :image_generation, :image_folder, :messages, :model, :proxy, :previous_response_id, :web_search
- attr_reader :reasoning_effort, :client, :schema, :schema_file
+ attr_accessor :background, :code_interpreter, :conversation_id, :image_generation, :image_folder, :messages, :model, :proxy, :reasoning_effort, :web_search
+ attr_reader :client, :last_response_id, :schema, :schema_file
 
- VALID_REASONING_EFFORTS = [:low, :medium, :high].freeze
- PROXY_URL = "https://prepend.me/".freeze
+ PROXY_URL = "https://prepend.me/"
 
  def initialize(api_key: nil, api_key_env_var: "OPENAI_API_KEY")
  @api_key = api_key || ENV.fetch(api_key_env_var)
  @messages = []
  @reasoning_effort = nil
- @model = "gpt-4.1-nano"
+ @model = "gpt-5.1"
  @client = OpenAI::Client.new(api_key: @api_key)
- @previous_response_id = nil
+ @last_response_id = nil
  @proxy = false
  @image_generation = false
  @image_folder = "./images"
@@ -43,7 +42,7 @@ module AI
  def self.generate_schema!(description, location: "schema.json", api_key: nil, api_key_env_var: "OPENAI_API_KEY", proxy: false)
  api_key ||= ENV.fetch(api_key_env_var)
  prompt_path = File.expand_path("../prompts/schema_generator.md", __dir__)
- system_prompt = File.open(prompt_path).read
+ system_prompt = File.read(prompt_path)
 
  json = if proxy
  uri = URI(PROXY_URL + "api.openai.com/v1/responses")
@@ -51,7 +50,7 @@
  model: "gpt-5.1",
  input: [
  {role: :system, content: system_prompt},
- {role: :user, content: description},
+ {role: :user, content: description}
  ],
  text: {format: {type: "json_object"}},
  reasoning: {effort: "high"}
@@ -77,9 +76,7 @@
  if location
  path = Pathname.new(location)
  FileUtils.mkdir_p(path.dirname) if path.dirname != "."
- File.open(location, "wb") do |file|
- file.write(content)
- end
+ File.binwrite(location, content)
  end
  content
  end
@@ -154,7 +151,7 @@
  response = create_response
  parse_response(response)
 
- self.previous_response_id = last.dig(:response, :id) unless (conversation_id && !background)
+ @last_response_id = last.dig(:response, :id)
  last
  end
@@ -166,29 +163,11 @@ module AI
  response = if wait
  wait_for_response(timeout)
  else
- retrieve_response(previous_response_id)
+ retrieve_response(last_response_id)
  end
  parse_response(response)
  end
 
- # :reek:NilCheck
- # :reek:TooManyStatements
- def reasoning_effort=(value)
- if value.nil?
- @reasoning_effort = nil
- return
- end
-
- normalized_value = value.to_sym
-
- if VALID_REASONING_EFFORTS.include?(normalized_value)
- @reasoning_effort = normalized_value
- else
- valid_values = VALID_REASONING_EFFORTS.map { |valid_value| ":#{valid_value} or \"#{valid_value}\"" }.join(", ")
- raise ArgumentError, "Invalid reasoning_effort value: '#{value}'. Must be one of: #{valid_values}"
- end
- end
-
  def schema=(value)
  if value.is_a?(String)
  parsed = JSON.parse(value, symbolize_names: true)
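**Note:** With this validated writer removed and `reasoning_effort` now a plain `attr_accessor` (first hunk of this file), values are no longer normalized to symbols or checked client-side. A sketch of the behavioral difference (illustrative value):

```ruby
chat = AI::Chat.new
chat.reasoning_effort = "low"   # accepted in both versions; sent as {effort: "low"}
chat.reasoning_effort = :bogus  # 0.3.2: raised ArgumentError; 0.4.0: passed through to the API
```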
@@ -201,7 +180,7 @@
  end
 
  def schema_file=(path)
- content = File.open(path).read
+ content = File.read(path)
  @schema_file = path
  self.schema = content
  end
@@ -214,12 +193,12 @@
  raise "No conversation_id set. Call generate! first to create a conversation." unless conversation_id
 
  if proxy
- uri = URI(PROXY_URL + "api.openai.com/v1/conversations/#{conversation_id}/items?order=#{order.to_s}")
+ uri = URI(PROXY_URL + "api.openai.com/v1/conversations/#{conversation_id}/items?order=#{order}")
  response_hash = send_request(uri, content_type: "json", method: "get")
 
  if response_hash.key?(:data)
  response_hash.dig(:data).map do |hash|
- # Transform values to allow expected symbols that non-proxied request returns
+ # Transform values to allow expected symbols that non-proxied request returns
 
  hash.transform_values! do |value|
  if hash.key(value) == :type
@@ -297,6 +276,7 @@
  private
 
  class InputClassificationError < StandardError; end
+
  class WrongAPITokenUsedError < StandardError; end
 
  # :reek:FeatureEnvy
@@ -334,16 +314,8 @@
  parameters[:text] = schema if schema
  parameters[:reasoning] = {effort: reasoning_effort} if reasoning_effort
 
- if previous_response_id && conversation_id
- warn "Both conversation_id and previous_response_id are set. Using previous_response_id for forking. Only set one."
- parameters[:previous_response_id] = previous_response_id
- elsif previous_response_id
- parameters[:previous_response_id] = previous_response_id
- elsif conversation_id
- parameters[:conversation] = conversation_id
- else
- create_conversation
- end
+ create_conversation unless conversation_id
+ parameters[:conversation] = conversation_id
 
  messages_to_send = prepare_messages_for_api
  parameters[:input] = strip_responses(messages_to_send) unless messages_to_send.empty?
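**Note:** This hunk changes the request-building strategy: instead of choosing between `previous_response_id` (forking) and a conversation, every request now attaches a conversation, creating one on demand. A sketch of the parameter hash the new branch produces (names from this diff; values illustrative, and `create_conversation` is assumed to populate `conversation_id`):

```ruby
parameters = {model: "gpt-5.1"}
parameters[:reasoning] = {effort: "low"}                 # only when reasoning_effort is set
parameters[:conversation] = "conv_123"                   # always attached in 0.4.0
parameters[:input] = [{role: "user", content: "Hello"}]  # only messages not yet sent
```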
@@ -381,7 +353,7 @@
  if response.key?(:conversation)
  self.conversation_id = response.dig(:conversation, :id)
  end
- else
+ else
  text_response = response.output_text
  response_id = response.id
  response_status = response.status
433
405
  end
434
406
 
435
407
  def cancel_request
436
- client.responses.cancel(previous_response_id)
408
+ client.responses.cancel(last_response_id)
437
409
  end
438
410
 
439
411
  def prepare_messages_for_api
440
- return messages unless previous_response_id
412
+ return messages unless last_response_id
441
413
 
442
- previous_response_index = messages.find_index { |message| message.dig(:response, :id) == previous_response_id }
414
+ last_response_index = messages.find_index { |message| message.dig(:response, :id) == last_response_id }
443
415
 
444
- if previous_response_index
445
- messages[(previous_response_index + 1)..] || []
416
+ if last_response_index
417
+ messages[(last_response_index + 1)..] || []
446
418
  else
447
419
  messages
448
420
  end
@@ -578,7 +550,7 @@
  def tools
  tools_list = []
  if web_search
- tools_list << {type: "web_search_preview"}
+ tools_list << {type: "web_search"}
  end
  if image_generation
  tools_list << {type: "image_generation"}
@@ -619,12 +591,12 @@
  def extract_and_save_images(response)
  image_filenames = []
 
- if proxy
- image_outputs = response.dig(:output).select { |output|
+ image_outputs = if proxy
+ response.dig(:output).select { |output|
  output.dig(:type) == "image_generation_call"
  }
- else
- image_outputs = response.output.select { |output|
+ else
+ response.output.select { |output|
  output.respond_to?(:type) && output.type == :image_generation_call
  }
  end
@@ -708,7 +680,7 @@
  message_outputs = response.dig(:output).select do |output|
  output.dig(:type) == "message"
  end
-
+
  outputs_with_annotations = message_outputs.map do |message|
  message.dig(:content).find do |content|
  content.dig(:annotations).length.positive?
@@ -718,7 +690,7 @@
  message_outputs = response.output.select do |output|
  output.respond_to?(:type) && output.type == :message
  end
-
+
  outputs_with_annotations = message_outputs.map do |message|
  message.content.find do |content|
  content.respond_to?(:annotations) && content.annotations.length.positive?
@@ -737,12 +709,12 @@
  annotation.key?(:filename)
  end
  end.compact
-
+
  annotations.each do |annotation|
  container_id = annotation.dig(:container_id)
  file_id = annotation.dig(:file_id)
  filename = annotation.dig(:filename)
-
+
  warn_if_file_fails_to_save do
  file_content = retrieve_file(file_id, container_id: container_id)
  file_path = File.join(subfolder_path, filename)
@@ -756,18 +728,16 @@
  annotation.respond_to?(:filename)
  end
  end.compact
-
+
  annotations.each do |annotation|
  container_id = annotation.container_id
  file_id = annotation.file_id
  filename = annotation.filename
-
+
  warn_if_file_fails_to_save do
  file_content = retrieve_file(file_id, container_id: container_id)
  file_path = File.join(subfolder_path, filename)
- File.open(file_path, "wb") do |file|
- file.write(file_content.read)
- end
+ File.binwrite(file_path, file_content.read)
  filenames << file_path
  end
  end
@@ -790,53 +760,53 @@
  yield
  end
  rescue Timeout::Error
- client.responses.cancel(previous_response_id)
+ client.responses.cancel(last_response_id)
  end
 
  # :reek:DuplicateMethodCall
  # :reek:TooManyStatements
  def wait_for_response(timeout)
- spinner = TTY::Spinner.new("[:spinner] Thinking ...", format: :dots)
- spinner.auto_spin
- api_response = retrieve_response(previous_response_id)
- number_of_times_polled = 0
- response = timeout_request(timeout) do
+ spinner = TTY::Spinner.new("[:spinner] Thinking ...", format: :dots)
+ spinner.auto_spin
+ api_response = retrieve_response(last_response_id)
+ number_of_times_polled = 0
+ response = timeout_request(timeout) do
+ status = if api_response.respond_to?(:status)
+ api_response.status
+ else
+ api_response.dig(:status)&.to_sym
+ end
+
+ while status != :completed
+ some_amount_of_seconds = calculate_wait(number_of_times_polled)
+ sleep some_amount_of_seconds
+ number_of_times_polled += 1
+ api_response = retrieve_response(last_response_id)
  status = if api_response.respond_to?(:status)
  api_response.status
- else
+ else
  api_response.dig(:status)&.to_sym
  end
-
- while status != :completed
- some_amount_of_seconds = calculate_wait(number_of_times_polled)
- sleep some_amount_of_seconds
- number_of_times_polled += 1
- api_response = retrieve_response(previous_response_id)
- status = if api_response.respond_to?(:status)
- api_response.status
- else
- api_response.dig(:status)&.to_sym
- end
- end
- api_response
- end
-
- status = if api_response.respond_to?(:status)
- api_response.status
- else
- api_response.dig(:status).to_sym
  end
- exit_message = status == :cancelled ? "request timed out" : "done!"
- spinner.stop(exit_message)
- response
+ api_response
+ end
+
+ status = if api_response.respond_to?(:status)
+ api_response.status
+ else
+ api_response.dig(:status).to_sym
+ end
+ exit_message = (status == :cancelled) ? "request timed out" : "done!"
+ spinner.stop(exit_message)
+ response
  end
 
- def retrieve_response(previous_response_id)
+ def retrieve_response(response_id)
  if proxy
- uri = URI(PROXY_URL + "api.openai.com/v1/responses/#{previous_response_id}")
+ uri = URI(PROXY_URL + "api.openai.com/v1/responses/#{response_id}")
  send_request(uri, content_type: "json", method: "get")
  else
- client.responses.retrieve(previous_response_id)
+ client.responses.retrieve(response_id)
  end
  end
 
@@ -846,7 +816,7 @@
  send_request(uri, method: "get")
  else
  container_content = client.containers.files.content
- file_content = container_content.retrieve(file_id, container_id: container_id)
+ container_content.retrieve(file_id, container_id: container_id)
  end
  end
  end
data/lib/ai/http.rb CHANGED
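**Note:** The only change below reorders `send_request`'s keyword parameters so the required `method:` comes first. This is stylistic; Ruby keyword arguments are order-independent at the call site, so existing callers are unaffected (illustrative URI):

```ruby
# Both invocations behave identically before and after this change.
send_request(URI("https://api.openai.com/v1/responses"), method: "get")
send_request(URI("https://api.openai.com/v1/responses"), content_type: "json", method: "get")
```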
@@ -1,7 +1,7 @@
  require "net/http"
  module AI
  module Http
- def send_request(uri, content_type: nil, parameters: nil, method:)
+ def send_request(uri, method:, content_type: nil, parameters: nil)
  Net::HTTP.start(uri.host, 443, use_ssl: true) do |http|
  headers = {
  "Authorization" => "Bearer #{@api_key}"
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: ai-chat
  version: !ruby/object:Gem::Version
- version: 0.3.2
+ version: 0.4.0
  platform: ruby
  authors:
  - Raghu Betina
@@ -181,7 +181,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.6.7
+ rubygems_version: 3.7.1
  specification_version: 4
  summary: A beginner-friendly Ruby interface for OpenAI's API
  test_files: []