raix 0.9.2 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 5464c46a401957877b024ec9a1140b6f2624add3f21c172253663988c74baa49
-  data.tar.gz: a556464db7a06866cf6ecd27530b7ddcbbc372fa8e4f96f7ba84f0c6467de3b1
+  metadata.gz: 56a4f5695ac0b5685715aa48e1bbb926efd05c5bd15b2f2198df3acda3ef30e4
+  data.tar.gz: e46d6284e3d4201fa6a8bc5b59403b41434d9e92ce131944d5d20d896925402f
 SHA512:
-  metadata.gz: c99d540f8f8c7c0c35a57628d322ff0eef31803c0bee7244476633bd092320bd7040ceebb4b20126b55529bfe0fb373fee01e4977871a3eedf1e135331e9b4e2
-  data.tar.gz: 53e124ebdf99b0b176fb0c30d15c40e9a3156e210686d705237aa43fc02e2ca0e71a63e5efd75deffaf1f5565a34837ccd6d7f1b1dd6f2cbf6b6d9d5c3f137d2
+  metadata.gz: 07224f3ff3b8ea20b01d77bd149f7aa3afc597c145db34cf65ecf188a4e2fb2f366729cd8b8c671783346ce743106a234bbf8e551b2243f5a5f8fb9f652ed28b
+  data.tar.gz: ed03422c1168de36a21729ed75281ff6dcfabead31232cefc22201fb670a18bae57877fc4e440aa8bffaf0705c1288511369f0e040757e92198e702922c407be
data/.rubocop.yml CHANGED
@@ -1,7 +1,14 @@
 AllCops:
+  NewCops: enable
   SuggestExtensions: false
   TargetRubyVersion: 3.2.1
 
+Gemspec/RequireMFA:
+  Enabled: false
+
+Style/OpenStructUse:
+  Enabled: false
+
 Style/StringLiterals:
   Enabled: true
   EnforcedStyle: double_quotes
data/CHANGELOG.md CHANGED
@@ -1,3 +1,23 @@
+## [1.0.0] - 2025-06-04
+### Breaking Changes
+- **Deprecated `loop` parameter in ChatCompletion** - The system now automatically continues conversations after tool calls until the AI provides a text response. The `loop` parameter shows a deprecation warning but still works for backwards compatibility.
+- **Tool-based completions now return strings instead of arrays** - When functions are called, the final response is a string containing the AI's text response, not an array of function results.
+- **`stop_looping!` renamed to `stop_tool_calls_and_respond!`** - Better reflects the new automatic continuation behavior.
+
+### Added
+- **Automatic conversation continuation** - Chat completions automatically continue after tool execution without needing the `loop` parameter.
+- **`max_tool_calls` parameter** - Controls the maximum number of tool invocations to prevent infinite loops (default: 25).
+- **Configuration for `max_tool_calls`** - Added `max_tool_calls` to the Configuration class with sensible defaults.
+
+### Changed
+- ChatCompletion handles continuation after tool function calls automatically.
+- Improved CI/CD workflow to use `bundle exec rake ci` for consistent testing.
+
+### Fixed
+- Resolved conflict between `loop` attribute and Ruby's `Kernel.loop` method (fixes #11).
+- Fixed various RuboCop warnings using keyword argument forwarding.
+- Improved error handling with proper warning messages instead of puts.
+
 ## [0.9.2] - 2025-06-03
 ### Fixed
 - Fixed OpenAI chat completion compatibility
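The automatic-continuation behavior described in this changelog can be illustrated with a self-contained sketch. Nothing below is Raix's actual implementation: `fake_complete` stands in for the model round-trip, `dispatch_tool_function` for the `FunctionDispatch` machinery, and the scripted responses are invented.

```ruby
# Scripted model turns: two tool-call rounds, then a plain text answer.
RESPONSES = [
  { tool_calls: [{ name: :check_weather, arguments: { location: "Zipolite" } }] },
  { tool_calls: [{ name: :check_weather, arguments: { location: "Oaxaca" } }] },
  { content: "It is hot and sunny." }
].freeze

# Stand-in for the API call: pop the next scripted turn.
def fake_complete(queue)
  queue.shift
end

# Stand-in for FunctionDispatch: return a fake tool result string.
def dispatch_tool_function(name, arguments)
  "#{name} result for #{arguments[:location]}"
end

queue = RESPONSES.map(&:dup)
transcript = [{ user: "What is the weather in Zipolite, Oaxaca?" }]
max_tool_calls = 25 # the new configurable budget (default 25 in 1.0.0)
tool_call_count = 0
final = nil

loop do
  response = fake_complete(queue)
  tool_calls = response[:tool_calls] || []
  if tool_calls.any?
    tool_call_count += tool_calls.size
    break if tool_call_count > max_tool_calls # budget exhausted: stop continuing

    # Add each tool result to the transcript, then go around again.
    tool_calls.each do |tc|
      transcript << { function: dispatch_tool_function(tc[:name], tc[:arguments]) }
    end
  else
    final = response[:content] # a plain text response ends the continuation
    transcript << { assistant: final }
    break
  end
end
```

The key contrast with 0.9.2: the caller never passes `loop: true`; the continuation runs until a text turn arrives or the tool budget runs out, and the final value is a string rather than an array of function results.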
data/CLAUDE.md ADDED
@@ -0,0 +1,13 @@
+This is a Ruby gem called Raix. Its purpose is to facilitate chat completion style AI text generation using LLMs provided by OpenAI and OpenRouter.
+
+- When running all tests just do `bundle exec rake` since it automatically runs the linter with autocorrect
+- Documentation: Include method/class documentation with examples when appropriate
+- Add runtime dependencies to `raix.gemspec`.
+- Add development dependencies to `Gemfile`.
+- Don't ever test private methods directly. Specs should test behavior, not implementation.
+- Never add test-specific code embedded in production code
+- **Do not use require_relative**
+- Require statements should always be in alphabetical order
+- Always leave a blank line after module includes and before the rest of the class
+- Do not decide unilaterally to leave code for the sake of "backwards compatibility"... always run those decisions by me first.
+- Don't ever commit and push changes unless directly told to do so
data/README.llm CHANGED
@@ -42,7 +42,7 @@ class WhatIsTheWeather
 end
 ```
 
-If the AI calls multiple functions at once, Raix handles them in sequence and returns an array of results. Call `stop_looping!` inside a function to end the loop.
+If the AI calls multiple functions at once, Raix handles them in sequence and returns an array of results. Call `stop_tool_calls_and_respond!` inside a function to end the loop.
 
 ## Prompt Declarations
 Include `Raix::PromptDeclarations` to define a chain of prompts in order. Each prompt can be inline text or a callable class that also includes `ChatCompletion`.
data/README.md CHANGED
@@ -107,9 +107,15 @@ When using JSON mode with non-OpenAI providers, Raix automatically sets the `req
 
 ### Use of Tools/Functions
 
-The second (optional) module that you can add to your Ruby classes after `ChatCompletion` is `FunctionDispatch`. It lets you declare and implement functions to be called at the AI's discretion as part of a chat completion "loop" in a declarative, Rails-like "DSL" fashion.
+The second (optional) module that you can add to your Ruby classes after `ChatCompletion` is `FunctionDispatch`. It lets you declare and implement functions to be called at the AI's discretion in a declarative, Rails-like "DSL" fashion.
 
-Most end-user facing AI components that include functions should be invoked using `chat_completion(loop: true)`, so that the results of each function call are added to the transcript and chat completion is triggered again. The looping will continue until the AI generates a plain text response.
+When the AI responds with tool function calls instead of a text message, Raix automatically:
+1. Executes the requested tool functions
+2. Adds the function results to the conversation transcript
+3. Sends the updated transcript back to the AI for another completion
+4. Repeats this process until the AI responds with a regular text message
+
+This automatic continuation ensures that tool calls are seamlessly integrated into the conversation flow. The AI can use tool results to formulate its final response to the user. You can limit the number of tool calls using the `max_tool_calls` parameter to prevent excessive function invocations.
 
 ```ruby
 class WhatIsTheWeather
@@ -126,9 +132,9 @@ end
 RSpec.describe WhatIsTheWeather do
   subject { described_class.new }
 
-  it "can call a function and loop to provide text response" do
+  it "provides a text response after automatically calling weather function" do
     subject.transcript << { user: "What is the weather in Zipolite, Oaxaca?" }
-    response = subject.chat_completion(openai: "gpt-4o", loop: true)
+    response = subject.chat_completion(openai: "gpt-4o")
     expect(response).to include("hot and sunny")
   end
 end
@@ -264,9 +270,23 @@ This is particularly useful for:
 - Resource-intensive computations
 - Functions with deterministic outputs for the same inputs
 
-#### Manually Stopping a Loop
+#### Limiting Tool Calls
 
-To loop AI components that don't interact with end users, at least one function block should invoke `stop_looping!` whenever you're ready to stop processing.
+You can control the maximum number of tool calls before the AI must provide a text response:
+
+```ruby
+# Limit to 5 tool calls (default is 25)
+response = my_ai.chat_completion(max_tool_calls: 5)
+
+# Configure globally
+Raix.configure do |config|
+  config.max_tool_calls = 10
+end
+```
+
+#### Manually Stopping Tool Calls
+
+For AI components that process tasks without end-user interaction, you can use `stop_tool_calls_and_respond!` within a function to force the AI to provide a text response without making additional tool calls.
 
 ```ruby
 class OrderProcessor
@@ -285,8 +305,8 @@ class OrderProcessor
   end
 
   def perform
-    # will continue looping until `stop_looping!` is called
-    chat_completion(loop: true)
+    # will automatically continue after tool calls until finished_processing is called
+    chat_completion
   end
 
@@ -317,7 +337,8 @@ class OrderProcessor
 
   function :finished_processing do
     order.update!(transcript:, processed_at: Time.current)
-    stop_looping!
+    stop_tool_calls_and_respond!
+    "Order processing completed successfully"
   end
 end
 ```
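The budget arithmetic behind `max_tool_calls` can be sketched in isolation. This is a hypothetical toy, not the gem's API: each recursive continuation receives only the remaining budget, in the spirit of the `max_tool_calls: self.max_tool_calls - tool_call_count` line that appears later in this diff.

```ruby
# Walk scripted turns; recurse with the leftover budget after each tool round.
def continue(turns, budget)
  turn = turns.shift
  calls = turn[:tool_calls] || []
  return turn[:content] if calls.empty? # plain text ends the recursion

  used = calls.size
  return "budget exceeded" if used > budget

  continue(turns, budget - used) # pass down only what is left
end

turns = [
  { tool_calls: %i[a b] },
  { tool_calls: %i[c] },
  { content: "done" }
]
result = continue(turns, 5)
```

With a budget of 5, the two tool rounds (2 calls, then 1) leave room for the final text turn; with a budget of 2, the first three-call round would trip the limit.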
data/Rakefile CHANGED
@@ -7,6 +7,12 @@ RSpec::Core::RakeTask.new(:spec)
 
 require "rubocop/rake_task"
 
-RuboCop::RakeTask.new
+RuboCop::RakeTask.new(:rubocop_ci)
+
+task ci: %i[spec rubocop_ci]
+
+RuboCop::RakeTask.new(:rubocop) do |task|
+  task.options = ["--autocorrect"]
+end
 
 task default: %i[spec rubocop]
@@ -257,9 +257,7 @@ module Raix
 
       if event[:error]
         raise ProtocolError, "SSE error: #{event[:error].message}"
-      elsif event[:id] == request_id && event[:result]
-        return event[:result]
-      elsif event[:result] && !event[:id]
+      elsif event[:result] && (event[:id] == request_id || !event[:id])
         return event[:result]
       else
         @event_queue << event
@@ -11,32 +11,39 @@ require_relative "message_adapters/base"
 module Raix
   class UndeclaredToolError < StandardError; end
 
-  # The `ChatCompletion`` module is a Rails concern that provides a way to interact
+  # The `ChatCompletion` module is a Rails concern that provides a way to interact
   # with the OpenRouter Chat Completion API via its client. The module includes a few
   # methods that allow you to build a transcript of messages and then send them to
   # the API for completion. The API will return a response that you can use however
   # you see fit.
   #
-  # If the response includes a function call, the module will dispatch the function
-  # call and return the result. Which implies that function calls need to be defined
-  # on the class that includes this module. The `FunctionDispatch` module provides a
-  # Rails-like DSL for declaring and implementing tool functions at the top of your
-  # class instead of having to manually implement them as instance methods. The
-  # primary benefit of using the `FunctionDispatch` module is that it handles
-  # adding the function call results to the ongoing conversation transcript for you.
-  # It also triggers a new chat completion automatically if you've set the `loop`
-  # option to `true`, which is useful for implementing conversational chatbots that
-  # include tool calls.
+  # When the AI responds with tool function calls instead of a text message, this
+  # module automatically:
+  # 1. Executes the requested tool functions
+  # 2. Adds the function results to the conversation transcript
+  # 3. Sends the updated transcript back to the AI for another completion
+  # 4. Repeats this process until the AI responds with a regular text message
   #
-  # Note that some AI models can make more than a single tool function call in a
-  # single response. When that happens, the module will dispatch all of the function
-  # calls sequentially and return an array of results.
+  # This automatic continuation ensures that tool calls are seamlessly integrated
+  # into the conversation flow. The AI can use tool results to formulate its final
+  # response to the user. You can limit the number of tool calls using the
+  # `max_tool_calls` parameter to prevent excessive function invocations.
+  #
+  # Tool functions must be defined on the class that includes this module. The
+  # `FunctionDispatch` module provides a Rails-like DSL for declaring these
+  # functions at the class level, which is cleaner than implementing them as
+  # instance methods.
+  #
+  # Note that some AI models can make multiple tool function calls in a single
+  # response. When that happens, the module executes all requested functions
+  # before continuing the conversation.
   module ChatCompletion
     extend ActiveSupport::Concern
 
     attr_accessor :cache_at, :frequency_penalty, :logit_bias, :logprobs, :loop, :min_p, :model, :presence_penalty,
                   :prediction, :repetition_penalty, :response_format, :stream, :temperature, :max_completion_tokens,
-                  :max_tokens, :seed, :stop, :top_a, :top_k, :top_logprobs, :top_p, :tools, :available_tools, :tool_choice, :provider
+                  :max_tokens, :seed, :stop, :top_a, :top_k, :top_logprobs, :top_p, :tools, :available_tools, :tool_choice, :provider,
+                  :max_tool_calls, :stop_tool_calls_and_respond
 
     class_methods do
       # Returns the current configuration of this class. Falls back to global configuration for unset values.
@@ -58,14 +65,15 @@ module Raix
     # This method performs chat completion based on the provided transcript and parameters.
     #
     # @param params [Hash] The parameters for chat completion.
-    # @option loop [Boolean] :loop (false) Whether to loop the chat completion after function calls.
+    # @option loop [Boolean] :loop (false) DEPRECATED - The system now automatically continues after tool calls.
     # @option params [Boolean] :json (false) Whether to return the parse the response as a JSON object. Will search for <json> tags in the response first, then fall back to the default JSON parsing of the entire response.
     # @option params [String] :openai (nil) If non-nil, use OpenAI with the model specified in this param.
     # @option params [Boolean] :raw (false) Whether to return the raw response or dig the text content.
     # @option params [Array] :messages (nil) An array of messages to use instead of the transcript.
     # @option tools [Array|false] :available_tools (nil) Tools to pass to the LLM. Ignored if nil (default). If false, no tools are passed. If an array, only declared tools in the array are passed.
+    # @option max_tool_calls [Integer] :max_tool_calls Maximum number of tool calls before forcing a text response. Defaults to the configured value.
     # @return [String|Hash] The completed chat response.
-    def chat_completion(params: {}, loop: false, json: false, raw: false, openai: nil, save_response: true, messages: nil, available_tools: nil)
+    def chat_completion(params: {}, loop: false, json: false, raw: false, openai: nil, save_response: true, messages: nil, available_tools: nil, max_tool_calls: nil)
       # set params to default values if not provided
       params[:cache_at] ||= cache_at.presence
       params[:frequency_penalty] ||= frequency_penalty.presence
@@ -108,8 +116,19 @@ module Raix
         end
       end
 
-      # used by FunctionDispatch
-      self.loop = loop
+      # Deprecation warning for loop parameter
+      if loop
+        warn "\n\nWARNING: The 'loop' parameter is DEPRECATED and will be ignored.\nChat completions now automatically continue after tool calls until the AI provides a text response.\nUse 'max_tool_calls' to limit the number of tool calls (default: #{configuration.max_tool_calls}).\n\n"
+      end
+
+      # Set max_tool_calls from parameter or configuration default
+      self.max_tool_calls = max_tool_calls || configuration.max_tool_calls
+
+      # Reset stop_tool_calls_and_respond flag
+      @stop_tool_calls_and_respond = false
+
+      # Track tool call count
+      tool_call_count = 0
 
       # set the model to the default if not provided
       self.model ||= configuration.model
@@ -143,14 +162,71 @@ module Raix
 
       tool_calls = response.dig("choices", 0, "message", "tool_calls") || []
       if tool_calls.any?
-        return tool_calls.map do |tool_call|
+        tool_call_count += tool_calls.size
+
+        # Check if we've exceeded max_tool_calls
+        if tool_call_count > self.max_tool_calls
+          # Add system message about hitting the limit
+          messages << { role: "system", content: "Maximum tool calls (#{self.max_tool_calls}) exceeded. Please provide a final response to the user without calling any more tools." }
+
+          # Force a final response without tools
+          params[:tools] = nil
+          response = if openai
+                       openai_request(params:, model: openai, messages:)
+                     else
+                       openrouter_request(params:, model:, messages:)
+                     end
+
+          # Process the final response
+          content = response.dig("choices", 0, "message", "content")
+          transcript << { assistant: content } if save_response
+          return raw ? response : content.strip
+        end
+
+        # Dispatch tool calls
+        tool_calls.each do |tool_call| # TODO: parallelize this?
           # dispatch the called function
-          arguments = JSON.parse(tool_call["function"]["arguments"].presence || "{}")
           function_name = tool_call["function"]["name"]
+          arguments = JSON.parse(tool_call["function"]["arguments"].presence || "{}")
           raise "Unauthorized function call: #{function_name}" unless self.class.functions.map { |f| f[:name].to_sym }.include?(function_name.to_sym)
 
           dispatch_tool_function(function_name, arguments.with_indifferent_access)
         end
+
+        # After executing tool calls, we need to continue the conversation
+        # to let the AI process the results and provide a text response.
+        # We continue until the AI responds with a regular assistant message
+        # (not another tool call request), unless stop_tool_calls_and_respond! was called.
+
+        # Use the updated transcript for the next call, not the original messages
+        updated_messages = transcript.flatten.compact
+        last_message = updated_messages.last
+
+        if !@stop_tool_calls_and_respond && (last_message[:role] != "assistant" || last_message[:tool_calls].present?)
+          # Send the updated transcript back to the AI
+          return chat_completion(
+            params:,
+            json:,
+            raw:,
+            openai:,
+            save_response:,
+            messages: nil, # Use transcript instead
+            available_tools:,
+            max_tool_calls: self.max_tool_calls - tool_call_count
+          )
+        elsif @stop_tool_calls_and_respond
+          # If stop_tool_calls_and_respond was set, force a final response without tools
+          params[:tools] = nil
+          response = if openai
+                       openai_request(params:, model: openai, messages:)
+                     else
+                       openrouter_request(params:, model:, messages:)
+                     end
+
+          content = response.dig("choices", 0, "message", "content")
+          transcript << { assistant: content } if save_response
+          return raw ? response : content.strip
+        end
       end
 
       response.tap do |res|
@@ -170,7 +246,7 @@ module Raix
       end
     rescue JSON::ParserError => e
       if e.message.include?("not a valid") # blank JSON
-        puts "Retrying blank JSON response... (#{retry_count} attempts) #{e.message}"
+        warn "Retrying blank JSON response... (#{retry_count} attempts) #{e.message}"
         retry_count += 1
         sleep 1 * retry_count # backoff
         retry if retry_count < 3
@@ -178,11 +254,11 @@ module Raix
         raise e # just fail if we can't get content after 3 attempts
       end
 
-      puts "Bad JSON received!!!!!!: #{content}"
+      warn "Bad JSON received!!!!!!: #{content}"
       raise e
     rescue Faraday::BadRequestError => e
       # make sure we see the actual error message on console or Honeybadger
-      puts "Chat completion failed!!!!!!!!!!!!!!!!: #{e.response[:body]}"
+      warn "Chat completion failed!!!!!!!!!!!!!!!!: #{e.response[:body]}"
       raise e
     end
   end
@@ -257,7 +333,7 @@ module Raix
       configuration.openrouter_client.complete(messages, model:, extras: params.compact, stream:)
     rescue OpenRouter::ServerError => e
       if e.message.include?("retry")
-        puts "Retrying OpenRouter request... (#{retry_count} attempts) #{e.message}"
+        warn "Retrying OpenRouter request... (#{retry_count} attempts) #{e.message}"
         retry_count += 1
         sleep 1 * retry_count # backoff
         retry if retry_count < 5
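The warn-plus-linear-backoff retry pattern these hunks switch to (replacing `puts`) can be exercised without any network call. A minimal sketch, with an invented list of flaky inputs standing in for blank API responses, and `sleep 0` in place of the real `sleep 1 * retry_count` backoff:

```ruby
require "json"

inputs = ["", "", '{"ok":true}'] # blank twice, then valid JSON
retry_count = 0

result = begin
  text = inputs[retry_count]
  raise JSON::ParserError, "unexpected token: not a valid JSON string" if text.empty?

  JSON.parse(text)
rescue JSON::ParserError => e
  # re-raise anything that isn't the blank-JSON case, or after the retry cap
  raise unless e.message.include?("not a valid") && retry_count < 2

  warn "Retrying blank JSON response... (#{retry_count} attempts) #{e.message}"
  retry_count += 1
  sleep 0 # the real code backs off with `sleep 1 * retry_count`
  retry
end
```

Using `warn` instead of `puts` sends these messages to stderr, so they reach logs and error monitors without polluting stdout.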
@@ -36,10 +36,15 @@ module Raix
     # The openai_client option determines the OpenAI client to use for communication.
     attr_accessor_with_fallback :openai_client
 
+    # The max_tool_calls option determines the maximum number of tool calls
+    # before forcing a text response to prevent excessive function invocations.
+    attr_accessor_with_fallback :max_tool_calls
+
     DEFAULT_MAX_TOKENS = 1000
     DEFAULT_MAX_COMPLETION_TOKENS = 16_384
     DEFAULT_MODEL = "meta-llama/llama-3.3-8b-instruct:free"
     DEFAULT_TEMPERATURE = 0.0
+    DEFAULT_MAX_TOOL_CALLS = 25
 
     # Initializes a new instance of the Configuration class with default values.
     def initialize(fallback: nil)
@@ -47,6 +52,7 @@ module Raix
       self.max_completion_tokens = DEFAULT_MAX_COMPLETION_TOKENS
       self.max_tokens = DEFAULT_MAX_TOKENS
       self.model = DEFAULT_MODEL
+      self.max_tool_calls = DEFAULT_MAX_TOOL_CALLS
       self.fallback = fallback
     end
 
@@ -100,7 +100,9 @@ module Raix
         }
       ]
 
-      chat_completion(**chat_completion_args) if loop
+      # Return the content - ChatCompletion will automatically continue
+      # the conversation after tool execution to get a final response
+      content
     end
   end
 end
@@ -114,11 +116,11 @@ module Raix
       super
     end
 
-    # Stops the looping of chat completion after function calls.
-    # Useful for manually halting processing in workflow components
-    # that do not require a final text response to an end user.
-    def stop_looping!
-      self.loop = false
+    # Stops the automatic continuation of chat completions after this function call.
+    # Useful when you want to halt processing within a function and force the AI
+    # to provide a text response without making additional tool calls.
+    def stop_tool_calls_and_respond!
+      @stop_tool_calls_and_respond = true
     end
 
     def tools
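The renamed helper's semantics can be modeled in miniature. This is a toy, not Raix internals: the `Processor` class and its scripted turns are invented; the point is that a function flips an instance flag, and the continuation loop checks that flag before starting another round.

```ruby
class Processor
  attr_reader :transcript

  def initialize(turns)
    @turns = turns # scripted tool calls standing in for model output
    @transcript = []
    @stop_tool_calls_and_respond = false
  end

  # 1.0.0 sets an instance flag instead of toggling the old `loop` attribute
  def stop_tool_calls_and_respond!
    @stop_tool_calls_and_respond = true
  end

  def run
    @turns.each do |tool|
      @transcript << { function: tool.to_s }
      stop_tool_calls_and_respond! if tool == :finished_processing
      break if @stop_tool_calls_and_respond # no further continuation
    end
    "final text response" # the forced non-tool reply
  end
end

processor = Processor.new(%i[check_order finished_processing never_reached])
result = processor.run
```

The third scripted tool is never dispatched: once the flag is set, the loop exits and a plain text response is produced, mirroring the documented behavior.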
data/lib/raix/mcp.rb CHANGED
@@ -102,7 +102,7 @@ module Raix
       filtered_tools.each do |tool|
         remote_name = tool.name
         # TODO: Revisit later whether this much context is needed in the function name
-        local_name = "#{remote_name}_#{client.unique_key}".to_sym
+        local_name = :"#{remote_name}_#{client.unique_key}"
 
         description = tool.description
         input_schema = tool.input_schema || {}
@@ -154,9 +154,8 @@ module Raix
         }
       ]
 
-      # Continue the chat loop if requested (same semantics as FunctionDispatch)
-      chat_completion(**chat_completion_args) if loop
-
+      # Return the content - ChatCompletion will automatically continue
+      # the conversation after tool execution
       content_text
     end
   end
@@ -170,7 +170,7 @@ module PromptDeclarations
   protected
 
   # workaround for super.chat_completion, which is not available in ruby
-  def chat_completion_from_superclass(*args, **kargs)
-    method(:chat_completion).super_method.call(*args, **kargs)
+  def chat_completion_from_superclass(*, **kargs)
+    method(:chat_completion).super_method.call(*, **kargs)
   end
 end
data/lib/raix/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Raix
-  VERSION = "0.9.2"
+  VERSION = "1.0.0"
 end
data/raix.gemspec ADDED
@@ -0,0 +1,36 @@
+# frozen_string_literal: true
+
+require_relative "lib/raix/version"
+
+Gem::Specification.new do |spec|
+  spec.name = "raix"
+  spec.version = Raix::VERSION
+  spec.authors = ["Obie Fernandez"]
+  spec.email = ["obiefernandez@gmail.com"]
+
+  spec.summary = "Ruby AI eXtensions"
+  spec.homepage = "https://github.com/OlympiaAI/raix"
+  spec.license = "MIT"
+  spec.required_ruby_version = ">= 3.2.2"
+
+  spec.metadata["homepage_uri"] = spec.homepage
+  spec.metadata["source_code_uri"] = "https://github.com/OlympiaAI/raix"
+  spec.metadata["changelog_uri"] = "https://github.com/OlympiaAI/raix/blob/main/CHANGELOG.md"
+
+  # Specify which files should be added to the gem when it is released.
+  # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
+  spec.files = Dir.chdir(__dir__) do
+    `git ls-files -z`.split("\x0").reject do |f|
+      (File.expand_path(f) == __FILE__) || f.start_with?(*%w[bin/ test/ spec/ features/ .git .circleci appveyor])
+    end
+  end
+  spec.bindir = "exe"
+  spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
+  spec.require_paths = ["lib"]
+
+  spec.add_dependency "activesupport", ">= 6.0"
+  spec.add_dependency "faraday-retry", "~> 2.0"
+  spec.add_dependency "open_router", "~> 0.2"
+  spec.add_dependency "ostruct"
+  spec.add_dependency "ruby-openai", "~> 7"
+end
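The gemspec's file-selection filter can be reproduced in isolation. The candidate paths below are invented for illustration, while the prefix list is copied from the diff; note that as written, the `.git` prefix also matches `.github/`, so workflow files are excluded as well.

```ruby
candidates = %w[
  lib/raix.rb
  lib/raix/version.rb
  spec/chat_completion_spec.rb
  test/helper.rb
  .github/workflows/main.yml
  bin/console
  README.md
]

# Same reject logic as the gemspec (minus the __FILE__ self-exclusion, which
# only matters when running inside the gem's own directory).
files = candidates.reject do |f|
  f.start_with?(*%w[bin/ test/ spec/ features/ .git .circleci appveyor])
end
```

Only library sources and top-level docs survive the filter; everything under the test, CI, and executable-stub prefixes is dropped from the packaged gem.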
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: raix
 version: !ruby/object:Gem::Version
-  version: 0.9.2
+  version: 1.0.0
 platform: ruby
 authors:
 - Obie Fernandez
@@ -89,6 +89,7 @@ files:
 - ".rubocop.yml"
 - ".ruby-version"
 - CHANGELOG.md
+- CLAUDE.md
 - CODE_OF_CONDUCT.md
 - Gemfile
 - Gemfile.lock
@@ -110,6 +111,7 @@ files:
 - lib/raix/prompt_declarations.rb
 - lib/raix/response_format.rb
 - lib/raix/version.rb
+- raix.gemspec
 - sig/raix.rbs
 homepage: https://github.com/OlympiaAI/raix
 licenses: