raix 0.9.2 → 1.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.rubocop.yml +7 -0
- data/CHANGELOG.md +30 -0
- data/CLAUDE.md +13 -0
- data/Gemfile.lock +1 -1
- data/README.llm +1 -1
- data/README.md +31 -10
- data/Rakefile +7 -1
- data/lib/mcp/sse_client.rb +1 -3
- data/lib/raix/chat_completion.rb +101 -25
- data/lib/raix/configuration.rb +6 -0
- data/lib/raix/function_dispatch.rb +8 -6
- data/lib/raix/mcp.rb +3 -4
- data/lib/raix/prompt_declarations.rb +136 -146
- data/lib/raix/version.rb +1 -1
- metadata +2 -1
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 5413df57139a529bf573642931910e4109cd260beae531c3fdce54fdc3961485
+  data.tar.gz: 1feb3f57924489378b7029981bc9a5ad42b131848b6a378c7a1860b1456c4fc5
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7988de746caf67aafc4e21c1f9da3d1129c3b595723a28d41383dd4208bd7c84414d973c7530275920ff7c0732951e1cddc650aeb4eecfc99f28e3c32c86126a
+  data.tar.gz: b7cff4d03a83f584b4d0b414850b9eb25db02277e1b17e5a22c2415398e01135edd190f7ee249658616921d4ff93a25bb57dbab736b2936cf886763e4d975030
data/.rubocop.yml
CHANGED
data/CHANGELOG.md
CHANGED
@@ -1,3 +1,33 @@
+## [1.0.1] - 2025-06-04
+### Fixed
+- Fixed PromptDeclarations module namespace - now properly namespaced under Raix
+- Removed Rails.logger dependencies from PromptDeclarations for non-Rails environments
+- Fixed documentation example showing incorrect `openai: true` usage (should be model string)
+- Added comprehensive tests for PromptDeclarations module
+
+### Changed
+- Improved error handling in PromptDeclarations to catch StandardError instead of generic rescue
+
+## [1.0.0] - 2025-06-04
+### Breaking Changes
+- **Deprecated `loop` parameter in ChatCompletion** - The system now automatically continues conversations after tool calls until the AI provides a text response. The `loop` parameter shows a deprecation warning but still works for backwards compatibility.
+- **Tool-based completions now return strings instead of arrays** - When functions are called, the final response is a string containing the AI's text response, not an array of function results.
+- **`stop_looping!` renamed to `stop_tool_calls_and_respond!`** - Better reflects the new automatic continuation behavior.
+
+### Added
+- **Automatic conversation continuation** - Chat completions automatically continue after tool execution without needing the `loop` parameter.
+- **`max_tool_calls` parameter** - Controls the maximum number of tool invocations to prevent infinite loops (default: 25).
+- **Configuration for `max_tool_calls`** - Added `max_tool_calls` to the Configuration class with sensible defaults.
+
+### Changed
+- ChatCompletion handles continuation after tool function calls automatically.
+- Improved CI/CD workflow to use `bundle exec rake ci` for consistent testing.
+
+### Fixed
+- Resolved conflict between `loop` attribute and Ruby's `Kernel.loop` method (fixes #11).
+- Fixed various RuboCop warnings using keyword argument forwarding.
+- Improved error handling with proper warning messages instead of puts.
+
 ## [0.9.2] - 2025-06-03
 ### Fixed
 - Fixed OpenAI chat completion compatibility
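The 1.0 changes are easiest to see side by side. Below is a minimal before/after sketch; the `WeatherBot` class, its function body, and the transcript contents are hypothetical illustrations of the documented API change, not code from the gem:

```ruby
require "raix"

# Hypothetical class that includes both Raix modules.
class WeatherBot
  include Raix::ChatCompletion
  include Raix::FunctionDispatch

  function :check_weather, "Look up the weather", location: { type: "string" } do |arguments|
    "The weather in #{arguments[:location]} is hot and sunny"
  end
end

bot = WeatherBot.new
bot.transcript << { user: "What is the weather in Zipolite?" }

# 0.9.x: continuation required loop: true and returned an array of function results:
#   bot.chat_completion(openai: "gpt-4o", loop: true)
#
# 1.0.x: continuation is automatic, the result is the AI's final text response,
# and max_tool_calls (default 25) bounds how many tool invocations may occur.
response = bot.chat_completion(openai: "gpt-4o", max_tool_calls: 5)
```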
data/CLAUDE.md
ADDED
@@ -0,0 +1,13 @@
+This is a Ruby gem called Raix. Its purpose is to facilitate chat completion style AI text generation using LLMs provided by OpenAI and OpenRouter.
+
+- When running all tests just do `bundle exec rake` since it automatically runs the linter with autocorrect
+- Documentation: Include method/class documentation with examples when appropriate
+- Add runtime dependencies to `raix.gemspec`.
+- Add development dependencies to `Gemfile`.
+- Don't ever test private methods directly. Specs should test behavior, not implementation.
+- Never add test-specific code embedded in production code
+- **Do not use require_relative**
+- Require statements should always be in alphabetical order
+- Always leave a blank line after module includes and before the rest of the class
+- Do not decide unilaterally to leave code for the sake of "backwards compatibility"... always run those decisions by me first.
+- Don't ever commit and push changes unless directly told to do so
data/Gemfile.lock
CHANGED
data/README.llm
CHANGED
@@ -42,7 +42,7 @@ class WhatIsTheWeather
 end
 ```
 
-If the AI calls multiple functions at once, Raix handles them in sequence and returns an array of results. Call `stop_looping!` inside a function to end the loop.
+If the AI calls multiple functions at once, Raix handles them in sequence and returns an array of results. Call `stop_tool_calls_and_respond!` inside a function to end the loop.
 
 ## Prompt Declarations
 Include `Raix::PromptDeclarations` to define a chain of prompts in order. Each prompt can be inline text or a callable class that also includes `ChatCompletion`.
data/README.md
CHANGED
@@ -107,9 +107,15 @@ When using JSON mode with non-OpenAI providers, Raix automatically sets the `req
 
 ### Use of Tools/Functions
 
-The second (optional) module that you can add to your Ruby classes after `ChatCompletion` is `FunctionDispatch`. It lets you declare and implement functions to be called at the AI's discretion
+The second (optional) module that you can add to your Ruby classes after `ChatCompletion` is `FunctionDispatch`. It lets you declare and implement functions to be called at the AI's discretion in a declarative, Rails-like "DSL" fashion.
 
-
+When the AI responds with tool function calls instead of a text message, Raix automatically:
+1. Executes the requested tool functions
+2. Adds the function results to the conversation transcript
+3. Sends the updated transcript back to the AI for another completion
+4. Repeats this process until the AI responds with a regular text message
+
+This automatic continuation ensures that tool calls are seamlessly integrated into the conversation flow. The AI can use tool results to formulate its final response to the user. You can limit the number of tool calls using the `max_tool_calls` parameter to prevent excessive function invocations.
 
 ```ruby
 class WhatIsTheWeather
@@ -126,9 +132,9 @@ end
 RSpec.describe WhatIsTheWeather do
   subject { described_class.new }
 
-  it "
+  it "provides a text response after automatically calling weather function" do
     subject.transcript << { user: "What is the weather in Zipolite, Oaxaca?" }
-    response = subject.chat_completion(openai: "gpt-4o"
+    response = subject.chat_completion(openai: "gpt-4o")
     expect(response).to include("hot and sunny")
   end
 end
@@ -264,9 +270,23 @@ This is particularly useful for:
 - Resource-intensive computations
 - Functions with deterministic outputs for the same inputs
 
-####
+#### Limiting Tool Calls
 
-
+You can control the maximum number of tool calls before the AI must provide a text response:
+
+```ruby
+# Limit to 5 tool calls (default is 25)
+response = my_ai.chat_completion(max_tool_calls: 5)
+
+# Configure globally
+Raix.configure do |config|
+  config.max_tool_calls = 10
+end
+```
+
+#### Manually Stopping Tool Calls
+
+For AI components that process tasks without end-user interaction, you can use `stop_tool_calls_and_respond!` within a function to force the AI to provide a text response without making additional tool calls.
 
 ```ruby
 class OrderProcessor
@@ -285,8 +305,8 @@ class OrderProcessor
   end
 
   def perform
-    # will continue
-    chat_completion
+    # will automatically continue after tool calls until finished_processing is called
+    chat_completion
   end
 
 
@@ -317,7 +337,8 @@ class OrderProcessor
 
   function :finished_processing do
     order.update!(transcript:, processed_at: Time.current)
-
+    stop_tool_calls_and_respond!
+    "Order processing completed successfully"
   end
 end
 ```
@@ -439,7 +460,7 @@ class CustomPromptExample
 
   # Prompt using OpenAI directly
   prompt text: "Use OpenAI",
-         openai: true
+         openai: "gpt-4o"
 end
 ```
 
data/Rakefile
CHANGED
@@ -7,6 +7,12 @@ RSpec::Core::RakeTask.new(:spec)
 
 require "rubocop/rake_task"
 
-RuboCop::RakeTask.new
+RuboCop::RakeTask.new(:rubocop_ci)
+
+task ci: %i[spec rubocop_ci]
+
+RuboCop::RakeTask.new(:rubocop) do |task|
+  task.options = ["--autocorrect"]
+end
 
 task default: %i[spec rubocop]
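In practice this split means `bundle exec rake` (the default task) runs the specs and then RuboCop with `--autocorrect`, matching the CLAUDE.md guidance above, while `bundle exec rake ci` runs the specs plus a plain `rubocop_ci` pass that reports offenses without modifying files, which is what the changelog's CI workflow entry refers to.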
data/lib/mcp/sse_client.rb
CHANGED
@@ -257,9 +257,7 @@ module Raix
 
         if event[:error]
           raise ProtocolError, "SSE error: #{event[:error].message}"
-        elsif event[:id] == request_id
-          return event[:result]
-        elsif event[:result] && !event[:id]
+        elsif event[:result] && (event[:id] == request_id || !event[:id])
          return event[:result]
         else
           @event_queue << event
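The merged condition is a behavioral tightening, not just a cleanup: the old first branch returned `event[:result]` whenever the id matched, even when no result was present, while the new branch only returns when a result exists. A standalone sketch of the predicate (the event hashes are hypothetical):

```ruby
# Mirrors the merged elsif above: a result must be present, and the event
# must either match the request id or carry no id at all.
def returns_result?(event, request_id)
  !!(event[:result] && (event[:id] == request_id || !event[:id]))
end

returns_result?({ id: 7, result: "ok" }, 7)  # => true
returns_result?({ result: "ok" }, 7)         # => true  (no id)
returns_result?({ id: 7 }, 7)                # => false (old code returned nil here)
returns_result?({ id: 8, result: "ok" }, 7)  # => false (queued for later)
```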
data/lib/raix/chat_completion.rb
CHANGED
@@ -11,32 +11,39 @@ require_relative "message_adapters/base"
 module Raix
   class UndeclaredToolError < StandardError; end
 
-  # The `ChatCompletion
+  # The `ChatCompletion` module is a Rails concern that provides a way to interact
   # with the OpenRouter Chat Completion API via its client. The module includes a few
   # methods that allow you to build a transcript of messages and then send them to
   # the API for completion. The API will return a response that you can use however
   # you see fit.
   #
-  #
-  #
-  #
-  #
-  #
-  #
-  # adding the function call results to the ongoing conversation transcript for you.
-  # It also triggers a new chat completion automatically if you've set the `loop`
-  # option to `true`, which is useful for implementing conversational chatbots that
-  # include tool calls.
+  # When the AI responds with tool function calls instead of a text message, this
+  # module automatically:
+  # 1. Executes the requested tool functions
+  # 2. Adds the function results to the conversation transcript
+  # 3. Sends the updated transcript back to the AI for another completion
+  # 4. Repeats this process until the AI responds with a regular text message
   #
-  #
-  #
-  #
+  # This automatic continuation ensures that tool calls are seamlessly integrated
+  # into the conversation flow. The AI can use tool results to formulate its final
+  # response to the user. You can limit the number of tool calls using the
+  # `max_tool_calls` parameter to prevent excessive function invocations.
+  #
+  # Tool functions must be defined on the class that includes this module. The
+  # `FunctionDispatch` module provides a Rails-like DSL for declaring these
+  # functions at the class level, which is cleaner than implementing them as
+  # instance methods.
+  #
+  # Note that some AI models can make multiple tool function calls in a single
+  # response. When that happens, the module executes all requested functions
+  # before continuing the conversation.
   module ChatCompletion
     extend ActiveSupport::Concern
 
     attr_accessor :cache_at, :frequency_penalty, :logit_bias, :logprobs, :loop, :min_p, :model, :presence_penalty,
                   :prediction, :repetition_penalty, :response_format, :stream, :temperature, :max_completion_tokens,
-                  :max_tokens, :seed, :stop, :top_a, :top_k, :top_logprobs, :top_p, :tools, :available_tools, :tool_choice, :provider
+                  :max_tokens, :seed, :stop, :top_a, :top_k, :top_logprobs, :top_p, :tools, :available_tools, :tool_choice, :provider,
+                  :max_tool_calls, :stop_tool_calls_and_respond
 
     class_methods do
       # Returns the current configuration of this class. Falls back to global configuration for unset values.
@@ -58,14 +65,15 @@ module Raix
     # This method performs chat completion based on the provided transcript and parameters.
     #
     # @param params [Hash] The parameters for chat completion.
-    # @option loop [Boolean] :loop (false)
+    # @option loop [Boolean] :loop (false) DEPRECATED - The system now automatically continues after tool calls.
     # @option params [Boolean] :json (false) Whether to parse the response as a JSON object. Will search for <json> tags in the response first, then fall back to the default JSON parsing of the entire response.
     # @option params [String] :openai (nil) If non-nil, use OpenAI with the model specified in this param.
     # @option params [Boolean] :raw (false) Whether to return the raw response or dig the text content.
     # @option params [Array] :messages (nil) An array of messages to use instead of the transcript.
     # @option tools [Array|false] :available_tools (nil) Tools to pass to the LLM. Ignored if nil (default). If false, no tools are passed. If an array, only declared tools in the array are passed.
+    # @option max_tool_calls [Integer] :max_tool_calls Maximum number of tool calls before forcing a text response. Defaults to the configured value.
     # @return [String|Hash] The completed chat response.
-    def chat_completion(params: {}, loop: false, json: false, raw: false, openai: nil, save_response: true, messages: nil, available_tools: nil)
+    def chat_completion(params: {}, loop: false, json: false, raw: false, openai: nil, save_response: true, messages: nil, available_tools: nil, max_tool_calls: nil)
       # set params to default values if not provided
       params[:cache_at] ||= cache_at.presence
       params[:frequency_penalty] ||= frequency_penalty.presence
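Putting the documented keywords together, a call might look like the following sketch; the `bot` receiver and the values are hypothetical, but the keywords are the ones documented above:

```ruby
# bot is any instance of a class that includes Raix::ChatCompletion.
bot.transcript << { user: "Summarize today's orders" }

response = bot.chat_completion(
  openai: "gpt-4o",  # non-nil routes the request to OpenAI with this model
  json: false,       # set true to parse the response as a JSON object
  max_tool_calls: 5  # cap tool invocations before forcing a text answer
)
```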
@@ -108,8 +116,19 @@ module Raix
         end
       end
 
-      #
-
+      # Deprecation warning for loop parameter
+      if loop
+        warn "\n\nWARNING: The 'loop' parameter is DEPRECATED and will be ignored.\nChat completions now automatically continue after tool calls until the AI provides a text response.\nUse 'max_tool_calls' to limit the number of tool calls (default: #{configuration.max_tool_calls}).\n\n"
+      end
+
+      # Set max_tool_calls from parameter or configuration default
+      self.max_tool_calls = max_tool_calls || configuration.max_tool_calls
+
+      # Reset stop_tool_calls_and_respond flag
+      @stop_tool_calls_and_respond = false
+
+      # Track tool call count
+      tool_call_count = 0
 
       # set the model to the default if not provided
       self.model ||= configuration.model
@@ -143,14 +162,71 @@ module Raix
 
       tool_calls = response.dig("choices", 0, "message", "tool_calls") || []
       if tool_calls.any?
-
+        tool_call_count += tool_calls.size
+
+        # Check if we've exceeded max_tool_calls
+        if tool_call_count > self.max_tool_calls
+          # Add system message about hitting the limit
+          messages << { role: "system", content: "Maximum tool calls (#{self.max_tool_calls}) exceeded. Please provide a final response to the user without calling any more tools." }
+
+          # Force a final response without tools
+          params[:tools] = nil
+          response = if openai
+                       openai_request(params:, model: openai, messages:)
+                     else
+                       openrouter_request(params:, model:, messages:)
+                     end
+
+          # Process the final response
+          content = response.dig("choices", 0, "message", "content")
+          transcript << { assistant: content } if save_response
+          return raw ? response : content.strip
+        end
+
+        # Dispatch tool calls
+        tool_calls.each do |tool_call| # TODO: parallelize this?
           # dispatch the called function
-          arguments = JSON.parse(tool_call["function"]["arguments"].presence || "{}")
           function_name = tool_call["function"]["name"]
+          arguments = JSON.parse(tool_call["function"]["arguments"].presence || "{}")
           raise "Unauthorized function call: #{function_name}" unless self.class.functions.map { |f| f[:name].to_sym }.include?(function_name.to_sym)
 
           dispatch_tool_function(function_name, arguments.with_indifferent_access)
         end
+
+        # After executing tool calls, we need to continue the conversation
+        # to let the AI process the results and provide a text response.
+        # We continue until the AI responds with a regular assistant message
+        # (not another tool call request), unless stop_tool_calls_and_respond! was called.
+
+        # Use the updated transcript for the next call, not the original messages
+        updated_messages = transcript.flatten.compact
+        last_message = updated_messages.last
+
+        if !@stop_tool_calls_and_respond && (last_message[:role] != "assistant" || last_message[:tool_calls].present?)
+          # Send the updated transcript back to the AI
+          return chat_completion(
+            params:,
+            json:,
+            raw:,
+            openai:,
+            save_response:,
+            messages: nil, # Use transcript instead
+            available_tools:,
+            max_tool_calls: self.max_tool_calls - tool_call_count
+          )
+        elsif @stop_tool_calls_and_respond
+          # If stop_tool_calls_and_respond was set, force a final response without tools
+          params[:tools] = nil
+          response = if openai
+                       openai_request(params:, model: openai, messages:)
+                     else
+                       openrouter_request(params:, model:, messages:)
+                     end
+
+          content = response.dig("choices", 0, "message", "content")
+          transcript << { assistant: content } if save_response
+          return raw ? response : content.strip
+        end
       end
 
       response.tap do |res|
@@ -170,7 +246,7 @@ module Raix
       end
     rescue JSON::ParserError => e
       if e.message.include?("not a valid") # blank JSON
-
+        warn "Retrying blank JSON response... (#{retry_count} attempts) #{e.message}"
         retry_count += 1
         sleep 1 * retry_count # backoff
         retry if retry_count < 3
@@ -178,11 +254,11 @@ module Raix
         raise e # just fail if we can't get content after 3 attempts
       end
 
-
+      warn "Bad JSON received!!!!!!: #{content}"
       raise e
     rescue Faraday::BadRequestError => e
       # make sure we see the actual error message on console or Honeybadger
-
+      warn "Chat completion failed!!!!!!!!!!!!!!!!: #{e.response[:body]}"
       raise e
     end
   end
@@ -257,7 +333,7 @@ module Raix
       configuration.openrouter_client.complete(messages, model:, extras: params.compact, stream:)
     rescue OpenRouter::ServerError => e
       if e.message.include?("retry")
-
+        warn "Retrying OpenRouter request... (#{retry_count} attempts) #{e.message}"
         retry_count += 1
         sleep 1 * retry_count # backoff
         retry if retry_count < 5
data/lib/raix/configuration.rb
CHANGED
@@ -36,10 +36,15 @@ module Raix
     # The openai_client option determines the OpenAI client to use for communication.
     attr_accessor_with_fallback :openai_client
 
+    # The max_tool_calls option determines the maximum number of tool calls
+    # before forcing a text response to prevent excessive function invocations.
+    attr_accessor_with_fallback :max_tool_calls
+
     DEFAULT_MAX_TOKENS = 1000
     DEFAULT_MAX_COMPLETION_TOKENS = 16_384
     DEFAULT_MODEL = "meta-llama/llama-3.3-8b-instruct:free"
     DEFAULT_TEMPERATURE = 0.0
+    DEFAULT_MAX_TOOL_CALLS = 25
 
     # Initializes a new instance of the Configuration class with default values.
     def initialize(fallback: nil)
@@ -47,6 +52,7 @@ module Raix
       self.max_completion_tokens = DEFAULT_MAX_COMPLETION_TOKENS
       self.max_tokens = DEFAULT_MAX_TOKENS
       self.model = DEFAULT_MODEL
+      self.max_tool_calls = DEFAULT_MAX_TOOL_CALLS
       self.fallback = fallback
     end
 
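Since `initialize` now seeds `max_tool_calls` with the new constant, a freshly built configuration exposes the default, and the global configuration can override it the same way the README shows. A small sketch, assuming the gem is loaded:

```ruby
require "raix"

config = Raix::Configuration.new
config.max_tool_calls # => 25 (DEFAULT_MAX_TOOL_CALLS)

# Global override, as in the README example:
Raix.configure do |config|
  config.max_tool_calls = 10
end
```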
data/lib/raix/function_dispatch.rb
CHANGED
@@ -100,7 +100,9 @@ module Raix
         }
       ]
 
-
+      # Return the content - ChatCompletion will automatically continue
+      # the conversation after tool execution to get a final response
+      content
     end
   end
 end
@@ -114,11 +116,11 @@ module Raix
       super
     end
 
-    # Stops the
-    # Useful
-    #
-    def
-
+    # Stops the automatic continuation of chat completions after this function call.
+    # Useful when you want to halt processing within a function and force the AI
+    # to provide a text response without making additional tool calls.
+    def stop_tool_calls_and_respond!
+      @stop_tool_calls_and_respond = true
     end
 
     def tools
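The two additions work as a pair: whatever a dispatched function returns is handed back to ChatCompletion as the tool result, and `stop_tool_calls_and_respond!` flips the flag that ChatCompletion consults before continuing. A hedged sketch of the observable behavior; the class, function, and return values are hypothetical:

```ruby
require "raix"

class InventoryCheck
  include Raix::ChatCompletion
  include Raix::FunctionDispatch

  function :count_stock, "Count items in stock", sku: { type: "string" } do |arguments|
    stop_tool_calls_and_respond!     # force a text answer after this result
    "42 units of #{arguments[:sku]}" # returned to the AI as the tool result
  end
end

checker = InventoryCheck.new
checker.transcript << { user: "How many RAIX-100 units do we have?" }

# 0.9.x returned an array of function results; 1.0.x returns the AI's final
# text response as a String, e.g. "You have 42 units of RAIX-100 in stock."
checker.chat_completion(openai: "gpt-4o")
```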
data/lib/raix/mcp.rb
CHANGED
@@ -102,7 +102,7 @@ module Raix
       filtered_tools.each do |tool|
         remote_name = tool.name
         # TODO: Revisit later whether this much context is needed in the function name
-        local_name = "#{remote_name}_#{client.unique_key}"
+        local_name = :"#{remote_name}_#{client.unique_key}"
 
         description = tool.description
         input_schema = tool.input_schema || {}
@@ -154,9 +154,8 @@ module Raix
         }
       ]
 
-      #
-
-
+      # Return the content - ChatCompletion will automatically continue
+      # the conversation after tool execution
       content_text
     end
   end
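The `local_name` change swaps a String for a Symbol literal, which lines up with how ChatCompletion authorizes calls by comparing declared function names as symbols (`functions.map { |f| f[:name].to_sym }` in chat_completion.rb above). For reference, Ruby's interpolated-symbol syntax:

```ruby
remote_name = "get_weather" # hypothetical MCP tool name
unique_key  = "a1b2c3"      # hypothetical client key

"#{remote_name}_#{unique_key}"  # => "get_weather_a1b2c3" (String)
:"#{remote_name}_#{unique_key}" # => :get_weather_a1b2c3  (Symbol)
```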
data/lib/raix/prompt_declarations.rb
CHANGED
@@ -5,172 +5,162 @@ require "ostruct"
 # This module provides a way to chain prompts and handle
 # user responses in a serialized manner, with support for
 # functions if the FunctionDispatch module is also included.
-module
-
+module Raix
+  # The PromptDeclarations module provides a way to chain prompts and handle
+  # user responses in a serialized manner, with support for
+  # functions if the FunctionDispatch module is also included.
+  module PromptDeclarations
+    extend ActiveSupport::Concern
+
+    module ClassMethods # rubocop:disable Style/Documentation
+      # Adds a prompt to the list of prompts. At minimum, provide a `text` or `call` parameter.
+      #
+      # @param system [Proc] A lambda that generates the system message.
+      # @param call [ChatCompletion] A callable class that includes ChatCompletion. Will be passed a context object when initialized.
+      # @param text Accepts 1) a lambda that returns the prompt text, 2) a string, or 3) a symbol that references a method.
+      # @param stream [Proc] A lambda stream handler
+      # @param success [Proc] The block of code to execute when the prompt is answered.
+      # @param params [Hash] Additional parameters for the completion API call
+      # @param if [Proc] A lambda that determines if the prompt should be executed.
+      def prompt(system: nil, call: nil, text: nil, stream: nil, success: nil, params: {}, if: nil, unless: nil, until: nil)
+        name = Digest::SHA256.hexdigest(text.inspect)[0..7]
+        prompts << OpenStruct.new({ name:, system:, call:, text:, stream:, success:, if:, unless:, until:, params: })
+
+        define_method(name) do |response|
+          return response if success.nil?
+          return send(success, response) if success.is_a?(Symbol)
+
+          instance_exec(response, &success)
+        end
+      end
 
-
-
-  #
-  # @param system [Proc] A lambda that generates the system message.
-  # @param call [ChatCompletion] A callable class that includes ChatCompletion. Will be passed a context object when initialized.
-  # @param text Accepts 1) a lambda that returns the prompt text, 2) a string, or 3) a symbol that references a method.
-  # @param stream [Proc] A lambda stream handler
-  # @param success [Proc] The block of code to execute when the prompt is answered.
-  # @param params [Hash] Additional parameters for the completion API call
-  # @param if [Proc] A lambda that determines if the prompt should be executed.
-  def prompt(system: nil, call: nil, text: nil, stream: nil, success: nil, params: {}, if: nil, unless: nil, until: nil)
-    name = Digest::SHA256.hexdigest(text.inspect)[0..7]
-    prompts << OpenStruct.new({ name:, system:, call:, text:, stream:, success:, if:, unless:, until:, params: })
-
-    define_method(name) do |response|
-      puts "_" * 80
-      puts "PromptDeclarations#response:"
-      puts "#{text&.source_location} (#{name})"
-      puts response
-      puts "_" * 80
-
-      return response if success.nil?
-      return send(success, response) if success.is_a?(Symbol)
-
-      instance_exec(response, &success)
+      def prompts
+        @prompts ||= []
       end
     end
 
-
-
-
-  end
+    attr_reader :current_prompt, :last_response
+
+    MAX_LOOP_COUNT = 5
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-    current_prompt.call.new(self).call(input).tap do |response|
-      if response.present?
-        transcript << { assistant: response }
-        @last_response = send(current_prompt.name, response)
+    # Executes the chat completion process based on the class-level declared prompts.
+    # The response to each prompt is added to the transcript automatically and returned.
+    #
+    # Raises an error if there are not enough prompts defined.
+    #
+    # Uses system prompt in following order of priority:
+    # - system lambda specified in the prompt declaration
+    # - system_prompt instance method if defined
+    # - system_prompt class-level declaration if defined
+    #
+    # Prompts require a text lambda to be defined at minimum.
+    # TODO: shortcut syntax passes just a string prompt if no other options are needed.
+    #
+    # @raise [RuntimeError] If no prompts are defined.
+    #
+    # @param prompt [String] The prompt to use for the chat completion.
+    # @param params [Hash] Parameters for the chat completion.
+    # @param raw [Boolean] Whether to return the raw response.
+    #
+    # TODO: SHOULD NOT HAVE A DIFFERENT INTERFACE THAN PARENT
+    def chat_completion(prompt = nil, params: {}, raw: false, openai: false)
+      raise "No prompts defined" unless self.class.prompts.present?
+
+      loop_count = 0
+
+      current_prompts = self.class.prompts.clone
+
+      while (@current_prompt = current_prompts.shift)
+        next if @current_prompt.if.present? && !instance_exec(&@current_prompt.if)
+        next if @current_prompt.unless.present? && instance_exec(&@current_prompt.unless)
+
+        input = case current_prompt.text
+                when Proc
+                  instance_exec(&current_prompt.text)
+                when String
+                  current_prompt.text
+                when Symbol
+                  send(current_prompt.text)
+                else
+                  last_response.presence || prompt
+                end
+
+        if current_prompt.call.present?
+          current_prompt.call.new(self).call(input).tap do |response|
+            if response.present?
+              transcript << { assistant: response }
+              @last_response = send(current_prompt.name, response)
+            end
           end
+        else
+          __system_prompt = instance_exec(&current_prompt.system) if current_prompt.system.present? # rubocop:disable Lint/UnderscorePrefixedVariableName
+          __system_prompt ||= system_prompt if respond_to?(:system_prompt)
+          __system_prompt ||= self.class.system_prompt.presence
+          transcript << { system: __system_prompt } if __system_prompt
+          transcript << { user: instance_exec(&current_prompt.text) } # text is required
+
+          params = current_prompt.params.merge(params)
+
+          # set the stream if necessary
+          self.stream = instance_exec(&current_prompt.stream) if current_prompt.stream.present?
+
+          execute_ai_request(params:, raw:, openai:, transcript:, loop_count:)
         end
-      else
-        __system_prompt = instance_exec(&current_prompt.system) if current_prompt.system.present? # rubocop:disable Lint/UnderscorePrefixedVariableName
-        __system_prompt ||= system_prompt if respond_to?(:system_prompt)
-        __system_prompt ||= self.class.system_prompt.presence
-        transcript << { system: __system_prompt } if __system_prompt
-        transcript << { user: instance_exec(&current_prompt.text) } # text is required
 
-
+        next unless current_prompt.until.present? && !instance_exec(&current_prompt.until)
 
-
-
+        if loop_count >= MAX_LOOP_COUNT
+          warn "Max loop count reached in chat_completion. Forcing return."
 
-
+          return last_response
+        else
+          current_prompts.unshift(@current_prompt) # put it back at the front
+          loop_count += 1
+        end
       end
 
-
-
-      if loop_count >= MAX_LOOP_COUNT
-        Honeybadger.notify(
-          "Max loop count reached in chat_completion. Forcing return.",
-          context: {
-            current_prompts:,
-            prompt:,
-            usage_subject: usage_subject.inspect,
-            last_response: Current.or_response
-          }
-        )
-
-        return last_response
-      else
-        current_prompts.unshift(@current_prompt) # put it back at the front
-        loop_count += 1
-      end
+      last_response
     end
 
-
-
+    def execute_ai_request(params:, raw:, openai:, transcript:, loop_count:)
+      chat_completion_from_superclass(params:, raw:, openai:).then do |response|
+        transcript << { assistant: response }
+        @last_response = send(current_prompt.name, response)
+        self.stream = nil # clear it again so it's not used for the next prompt
+      end
+    rescue StandardError => e
+      # Bubbles the error up the stack if no loops remain
+      raise e if loop_count >= MAX_LOOP_COUNT
 
-
-      chat_completion_from_superclass(params:, raw:, openai:).then do |response|
-        transcript << { assistant: response }
-        @last_response = send(current_prompt.name, response)
-        self.stream = nil # clear it again so it's not used for the next prompt
+      sleep 1 # Wait before continuing
      end
-    rescue Conversation::StreamError => e
-      # Bubbles the error up the stack if no loops remain
-      raise Faraday::ServerError.new(nil, { status: e.status, body: e.response }) if loop_count >= MAX_LOOP_COUNT
-
-      sleep 1.second # Wait before continuing
-    end
 
-
-
-
-
-
-
+    # Returns the model parameter of the current prompt or the default model.
+    #
+    # @return [Object] The model parameter of the current prompt or the default model.
+    def model
+      @current_prompt.params[:model] || super
+    end
 
-
-
-
-
-
-
+    # Returns the temperature parameter of the current prompt or the default temperature.
+    #
+    # @return [Float] The temperature parameter of the current prompt or the default temperature.
+    def temperature
+      @current_prompt.params[:temperature] || super
+    end
 
-
-
-
-
-
-
+    # Returns the max_tokens parameter of the current prompt or the default max_tokens.
+    #
+    # @return [Integer] The max_tokens parameter of the current prompt or the default max_tokens.
+    def max_tokens
+      @current_prompt.params[:max_tokens] || super
+    end
 
-
+    protected
 
-
-
-
+    # workaround for super.chat_completion, which is not available in ruby
+    def chat_completion_from_superclass(*, **kargs)
+      method(:chat_completion).super_method.call(*, **kargs)
+    end
   end
 end
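To make the newly namespaced module concrete, here is a hedged sketch of a two-step prompt chain. The class and prompt texts are invented; lambdas are used for `text` because the non-`call` branch above `instance_exec`s the text, and per-prompt `params` override the defaults via the `#model`/`#temperature`/`#max_tokens` accessors shown:

```ruby
require "raix"

class StoryPipeline
  include Raix::ChatCompletion
  include Raix::PromptDeclarations

  # Prompts run in order; each response is appended to the transcript,
  # so later prompts can build on earlier answers.
  prompt text: -> { "Write a one-paragraph story about a lighthouse keeper" }
  prompt text: -> { "Rewrite the story above as a news report" },
         params: { temperature: 0.7 } # per-prompt override, per the #temperature accessor
end

StoryPipeline.new.chat_completion # => the last response in the chain
```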
data/lib/raix/version.rb
CHANGED
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: raix
 version: !ruby/object:Gem::Version
-  version: 0.9.2
+  version: 1.0.1
 platform: ruby
 authors:
 - Obie Fernandez
@@ -89,6 +89,7 @@ files:
 - ".rubocop.yml"
 - ".ruby-version"
 - CHANGELOG.md
+- CLAUDE.md
 - CODE_OF_CONDUCT.md
 - Gemfile
 - Gemfile.lock