raix 0.5 → 0.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 7537dc5f13d858d6fdd7e5fcdeb60e1d5fd99e0f2aa84e618096f959f2c44ed1
-  data.tar.gz: 8af2e5eb06b2fdaf2c4da8e9b9df28750c6c449d63e5eec25478d61068de2742
+  metadata.gz: f6783ed9671864edec5d734f8003822accee2dd7b89d0453a2c6973d35ecf2bf
+  data.tar.gz: 90eac19ba012daaeed4ce898a2be3476d1dac9c6df2cbb66fbe88a5620b971c6
 SHA512:
-  metadata.gz: a76e3c1ac764735f05b603c83f6410c5d83274e0d8ef5b80e6a08dfb4ca2338ed324b14db3b4383399ebbd9a88d76615deed4ad8533592ac3dee2ccc84f1ab6d
-  data.tar.gz: c4c89a3d2feac21a80dd1be7f4c0626f48a09756af55b0f38dd0abc769ec4cdb989ae210baabcbbb896f869ec79dd1e26454cecd5df28ee3ff016661d612e7c3
+  metadata.gz: 6753d494dd65e960373308189d7324e973c347f627ed82463b9ae3d7079446a87b7e012949c5a69adf734a1e9f37e8684b51a04f3a7b39b6b9772b501908949a
+  data.tar.gz: 07f8d142ca423979013c17da2f8758a7113fc583fc2ee47b10cf64072a7be12aa7b53447821340239033866e8c108b1bea76ffc202fcd24e02cf814607df5989
data/.rubocop.yml CHANGED
@@ -10,8 +10,10 @@ Style/StringLiteralsInInterpolation:
   Enabled: true
   EnforcedStyle: double_quotes
 
+Style/IfUnlessModifier:
+  Enabled: false
 Layout/LineLength:
-  Max: 180
+  Enabled: false
 
 Metrics/BlockLength:
   Enabled: false
@@ -30,3 +32,6 @@ Metrics/CyclomaticComplexity:
 
 Metrics/PerceivedComplexity:
   Enabled: false
+
+Metrics/ParameterLists:
+  Enabled: false
data/CHANGELOG.md CHANGED
@@ -1,4 +1,27 @@
-## [Unreleased]
+## [0.7] - 2024-04-02
+- adds support for `until` condition in `PromptDeclarations` to control prompt looping
+- adds support for `if` and `unless` conditions in `PromptDeclarations` to control prompt execution
+- adds support for `success` callback in `PromptDeclarations` to handle prompt responses
+- adds support for `stream` handler in `PromptDeclarations` to control response streaming
+- adds support for `params` in `PromptDeclarations` to customize API parameters per prompt
+- adds support for `system` directive in `PromptDeclarations` to set per-prompt system messages
+- adds support for `call` in `PromptDeclarations` to delegate to callable prompt objects
+- adds support for `text` in `PromptDeclarations` to specify prompt content via lambda, string, or symbol
+- adds support for `raw` parameter in `PromptDeclarations` to return raw API responses
+- adds support for `openai` parameter in `PromptDeclarations` to use OpenAI directly
+- adds support for `prompt` parameter in `PromptDeclarations` to specify initial prompt
+- adds support for `last_response` in `PromptDeclarations` to access previous prompt responses
+- adds support for `current_prompt` in `PromptDeclarations` to access current prompt context
+- adds support for `MAX_LOOP_COUNT` in `PromptDeclarations` to prevent infinite loops
+- adds support for `execute_ai_request` in `PromptDeclarations` to handle API calls
+- adds support for `chat_completion_from_superclass` in `PromptDeclarations` to handle superclass calls
+- adds support for `model`, `temperature`, and `max_tokens` in `PromptDeclarations` to access prompt parameters
+- fixes function return values in `FunctionDispatch` to properly return results from tool calls (thanks @ttilberg)
+- Make automatic JSON parsing available to non-OpenAI providers that don't support the response_format parameter by scanning for json XML tags
+
+## [0.6.0] - 2024-11-12
+- adds `save_response` option to `chat_completion` to control transcript updates
+- fixes potential race conditions in transcript handling
 
 ## [0.1.0] - 2024-04-03
 
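
The `save_response` option recorded above for 0.6.0 is documented in the README diff below; as quick orientation, here is a minimal sketch (class and prompt are illustrative, and a configured default client/model is assumed) of managing the transcript yourself when it is disabled:

```ruby
require "raix"

class Summarizer
  include Raix::ChatCompletion
end

ai = Summarizer.new
ai.transcript << { user: "Summarize: Ruby is optimized for programmer happiness." }

# With save_response: false, Raix does not append the assistant reply,
# so the caller decides what (if anything) goes back into the transcript.
draft = ai.chat_completion(save_response: false)
ai.transcript << { assistant: draft } if draft
```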
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    raix (0.5)
+    raix (0.7)
     activesupport (>= 6.0)
     open_router (~> 0.2)
     ruby-openai (~> 7.0)
data/README.llm ADDED
@@ -0,0 +1,106 @@
+# Raix (Ruby AI eXtensions)
+Raix adds LLM-based AI functionality to Ruby classes. It supports OpenAI or OpenRouter as providers and can work in non-Rails apps if you include ActiveSupport.
+
+## Chat Completion
+You must include `Raix::ChatCompletion`. It gives you a `transcript` array for messages and a `chat_completion` method that sends them to the AI.
+
+```ruby
+class MeaningOfLife
+  include Raix::ChatCompletion
+end
+
+ai = MeaningOfLife.new
+ai.transcript << { user: "What is the meaning of life?" }
+puts ai.chat_completion
+```
+
+You can add messages using either `{ user: "..." }` or `{ role: "user", content: "..." }`.
+
+### Predicted Outputs
+Pass `prediction` to support [Predicted Outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs):
+```ruby
+ai.chat_completion(openai: "gpt-4o", params: { prediction: "..." })
+```
+
+### Prompt Caching
+When using Anthropic models, you can specify `cache_at`. Messages above that size get sent as ephemeral multipart segments.
+```ruby
+ai.chat_completion(params: { cache_at: 1000 })
+```
+
+## Function Dispatch
+Include `Raix::FunctionDispatch` to declare functions AI can call in a chat loop. Use `chat_completion(loop: true)` so the AI can call functions and generate more messages until it outputs a final text response.
+
+```ruby
+class WhatIsTheWeather
+  include Raix::ChatCompletion
+  include Raix::FunctionDispatch
+
+  function :check_weather, "Check the weather for a location", location: { type: "string" } do |args|
+    "The weather in #{args[:location]} is hot and sunny"
+  end
+end
+```
+
+If the AI calls multiple functions at once, Raix handles them in sequence and returns an array of results. Call `stop_looping!` inside a function to end the loop.
+
+## Prompt Declarations
+Include `Raix::PromptDeclarations` to define a chain of prompts in order. Each prompt can be inline text or a callable class that also includes `ChatCompletion`.
+
+```ruby
+class PromptSubscriber
+  include Raix::ChatCompletion
+  include Raix::PromptDeclarations
+
+  prompt call: FetchUrlCheck
+  prompt call: MemoryScan
+  prompt text: -> { user_message.content }
+
+  def message_created(user_message)
+    chat_completion(loop: true, openai: "gpt-4o")
+  end
+end
+```
+
+## Predicate Module
+Include `Raix::Predicate` to handle yes/no/maybe questions. Define blocks with the `yes?`, `no?`, and `maybe?` methods.
+
+```ruby
+class Question
+  include Raix::Predicate
+
+  yes? { |explanation| puts "Affirmative: #{explanation}" }
+  no? { |explanation| puts "Negative: #{explanation}" }
+
+end
+```
+
+## ResponseFormat (Experimental)
+Use `Raix::ResponseFormat` to enforce JSON schemas for structured responses.
+
+```ruby
+format = Raix::ResponseFormat.new("PersonInfo", {
+  name: { type: "string" },
+  age: { type: "integer" }
+})
+
+class StructuredResponse
+  include Raix::ChatCompletion
+
+  def analyze_person(name)
+    chat_completion(response_format: format)
+  end
+end
+```
+
+## Installation
+Add `gem "raix"` to your Gemfile or run `gem install raix`. Configure an OpenRouter or OpenAI client in an initializer:
+
+```ruby
+# config/initializers/raix.rb
+Raix.configure do |config|
+  config.openrouter_client = OpenRouter::Client.new
+end
+```
+Make sure you have valid API tokens for your chosen provider.
+```
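
The two transcript message forms mentioned in README.llm are interchangeable; a minimal sketch (assumes a configured default client and model):

```ruby
require "raix"

class Oracle
  include Raix::ChatCompletion
end

ai = Oracle.new
# Shorthand form: the role is the hash key.
ai.transcript << { user: "Name three Ruby web frameworks." }
# Explicit form: an OpenAI-style message hash with role and content keys.
ai.transcript << { role: "user", content: "Answer in one line." }
puts ai.chat_completion
```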
data/README.md CHANGED
@@ -24,6 +24,10 @@ end
 => "The question of the meaning of life is one of the most profound and enduring inquiries in philosophy, religion, and science.
 Different perspectives offer various answers..."
 
+By default, Raix will automatically add the AI's response to the transcript. This behavior can be controlled with the `save_response` parameter, which defaults to `true`. You may want to set it to `false` when making multiple chat completion calls during the lifecycle of a single object (whether sequentially or in parallel) and want to manage the transcript updates yourself:
+
+```ruby
+>> ai.chat_completion(save_response: false)
 ```
 
 #### Transcript Format
@@ -127,6 +131,24 @@ results = example.chat_completion(openai: "gpt-4o")
 # => ["Result from first tool", "Result from second tool"]
 ```
 
+Note that as of version 0.6.1, function return values are properly returned from tool calls, whether in single or multiple tool call scenarios. This means you can use the return values from your functions in your application logic.
+
+```ruby
+class DiceRoller
+  include Raix::ChatCompletion
+  include Raix::FunctionDispatch
+
+  function :roll_dice, "Roll a die with specified number of faces", faces: { type: :integer } do |arguments|
+    rand(1..arguments[:faces])
+  end
+end
+
+roller = DiceRoller.new
+roller.transcript << { user: "Roll a d20 twice" }
+results = roller.chat_completion(openai: "gpt-4o")
+# => [15, 7] # Actual random dice roll results
+```
+
 #### Manually Stopping a Loop
 
 To loop AI components that don't interact with end users, at least one function block should invoke `stop_looping!` whenever you're ready to stop processing.
@@ -222,7 +244,6 @@ class PromptSubscriber
 end
 
 ...
-
 end
 
 class FetchUrlCheck
@@ -264,6 +285,80 @@ Notably, Olympia does not use the `FunctionDispatch` module in its primary conve
 
 Streaming of the AI's response to the end user is handled by the `ReplyStream` class, passed to the final prompt declaration as its `stream` parameter. [Patterns of Application Development Using AI](https://leanpub.com/patterns-of-application-development-using-ai) devotes a whole chapter to describing how to write your own `ReplyStream` class.
 
+#### Additional PromptDeclarations Options
+
+The `PromptDeclarations` module supports several additional options that can be used to customize prompt behavior:
+
+```ruby
+class CustomPromptExample
+  include Raix::ChatCompletion
+  include Raix::PromptDeclarations
+
+  # Basic prompt with text
+  prompt text: "Process this input"
+
+  # Prompt with system directive
+  prompt system: "You are a helpful assistant",
+         text: "Analyze this text"
+
+  # Prompt with conditions
+  prompt text: "Process this input",
+         if: -> { some_condition },
+         unless: -> { some_other_condition }
+
+  # Prompt with success callback
+  prompt text: "Process this input",
+         success: ->(response) { handle_response(response) }
+
+  # Prompt with custom parameters
+  prompt text: "Process with custom settings",
+         params: { temperature: 0.7, max_tokens: 1000 }
+
+  # Prompt with until condition for looping
+  prompt text: "Keep processing until complete",
+         until: -> { processing_complete? }
+
+  # Prompt with raw response
+  prompt text: "Get raw response",
+         raw: true
+
+  # Prompt using OpenAI directly
+  prompt text: "Use OpenAI",
+         openai: true
+end
+```
+
+The available options include:
+
+- `system`: Set a system directive for the prompt
+- `if`/`unless`: Control prompt execution with conditions
+- `success`: Handle prompt responses with callbacks
+- `params`: Customize API parameters per prompt
+- `until`: Control prompt looping
+- `raw`: Get raw API responses
+- `openai`: Use OpenAI directly
+- `stream`: Control response streaming
+- `call`: Delegate to callable prompt objects
+
+You can also access the current prompt context and previous responses:
+
+```ruby
+class ContextAwarePrompt
+  include Raix::ChatCompletion
+  include Raix::PromptDeclarations
+
+  def process_with_context
+    # Access current prompt
+    current_prompt.params[:temperature]
+
+    # Access previous response
+    last_response
+
+    chat_completion
+  end
+end
+```
+
 ## Predicate Module
 
 The `Raix::Predicate` module provides a simple way to handle yes/no/maybe questions using AI chat completion. It allows you to define blocks that handle different types of responses with their explanations. It is one of the concrete patterns described in the "Discrete Components" chapter of [Patterns of Application Development Using AI](https://leanpub.com/patterns-of-application-development-using-ai).
@@ -418,7 +513,7 @@ class StructuredResponse
     })
 
     transcript << { user: "Analyze the person named #{name}" }
-    chat_completion(response_format: format)
+    chat_completion(params: { response_format: format })
   end
 end
 
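
The final README.md hunk above moves `response_format` under `params:`; a minimal sketch of the updated call style (schema fields are illustrative):

```ruby
require "raix"

class StructuredResponse
  include Raix::ChatCompletion

  def analyze_person(name)
    format = Raix::ResponseFormat.new("PersonInfo", {
      name: { type: "string" },
      age: { type: "integer" }
    })

    transcript << { user: "Analyze the person named #{name}" }
    # response_format now travels inside params:, alongside any other
    # API parameters, instead of as a top-level keyword argument.
    chat_completion(params: { response_format: format })
  end
end
```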
data/lib/raix/chat_completion.rb CHANGED
@@ -40,11 +40,11 @@ module Raix
     #
     # @param params [Hash] The parameters for chat completion.
     # @option loop [Boolean] :loop (false) Whether to loop the chat completion after function calls.
-    # @option params [Boolean] :json (false) Whether to return the parse the response as a JSON object.
+    # @option params [Boolean] :json (false) Whether to return the parse the response as a JSON object. Will search for <json> tags in the response first, then fall back to the default JSON parsing of the entire response.
     # @option params [Boolean] :openai (false) Whether to use OpenAI's API instead of OpenRouter's.
     # @option params [Boolean] :raw (false) Whether to return the raw response or dig the text content.
     # @return [String|Hash] The completed chat response.
-    def chat_completion(params: {}, loop: false, json: false, raw: false, openai: false)
+    def chat_completion(params: {}, loop: false, json: false, raw: false, openai: false, save_response: true)
       # set params to default values if not provided
       params[:cache_at] ||= cache_at.presence
       params[:frequency_penalty] ||= frequency_penalty.presence
@@ -84,7 +84,10 @@ module Raix
       self.model ||= Raix.configuration.model
 
       adapter = MessageAdapters::Base.new(self)
-      messages = transcript.flatten.compact.map { |msg| adapter.transform(msg) }
+
+      # duplicate the transcript to avoid race conditions in situations where
+      # chat_completion is called multiple times in parallel
+      messages = transcript.flatten.compact.map { |msg| adapter.transform(msg) }.dup
       raise "Can't complete an empty transcript" if messages.blank?
 
       begin
@@ -119,8 +122,14 @@ module Raix
 
       response.tap do |res|
         content = res.dig("choices", 0, "message", "content")
+
+        transcript << { assistant: content } if save_response
+        content = content.squish
+
         if json
-          content = content.squish
+          # Make automatic JSON parsing available to non-OpenAI providers that don't support the response_format parameter
+          content = content.match(%r{<json>(.*?)</json>}m)[1] if content.include?("<json>")
+
           return JSON.parse(content)
         end
 
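
A standalone sketch of the `<json>` fallback added above, exercising the same regex against a canned provider response:

```ruby
require "json"

# Non-OpenAI providers that ignore response_format can still emit JSON
# wrapped in <json> tags; chat_completion(json: true) extracts it like this.
content = 'Model output follows. <json>{"answer": 42}</json>'
content = content.match(%r{<json>(.*?)</json>}m)[1] if content.include?("<json>")
puts JSON.parse(content) # => {"answer"=>42}
```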
data/lib/raix/function_dispatch.rb CHANGED
@@ -70,7 +70,7 @@ module Raix
           }
         ]
       }
-      instance_exec(arguments, &block).tap do |content|
+      result = instance_exec(arguments, &block).tap do |content|
         transcript << {
           role: "tool",
           tool_call_id: id,
@@ -81,6 +81,10 @@ module Raix
       end
 
       chat_completion(**chat_completion_args) if loop
+
+      # Return the result of the function call in case that's what the caller wants
+      # https://github.com/OlympiaAI/raix/issues/16
+      result
     end
   end
 end
data/lib/raix/prompt_declarations.rb CHANGED
@@ -1,119 +1,176 @@
 # frozen_string_literal: true
 
 require "ostruct"
-require "active_support/core_ext/string/filters"
-
-module Raix
-  # The PromptDeclarations module provides a way to chain prompts and handle
-  # user responses in a serialized manner (in the order they were defined),
-  # with support for functions if the FunctionDispatch module is also included.
-  module PromptDeclarations
-    extend ActiveSupport::Concern
-    extend ChatCompletion
-
-    module ClassMethods # rubocop:disable Style/Documentation
-      # Adds a prompt to the list of prompts.
-      #
-      # @param system [Proc] A lambda that generates the system message.
-      # @param text [Proc] A lambda that generates the prompt text. (Required)
-      # @param success [Proc] The block of code to execute when the prompt is answered.
-      # @param parameters [Hash] Additional parameters for the completion API call
-      # @param stream [Boolean] Whether to stream the response.
-      def prompt(text:, system: nil, success: nil, params: {}, stream: false)
-        name = Digest::SHA256.hexdigest(text.inspect)[0..7]
-        prompts << begin
-          OpenStruct.new({ name:, system:, text:, success:, params:, stream: })
-        end
-
-        define_method(name) do |response|
-          if Rails.env.local?
-            puts "_" * 80
-            puts "PromptDeclarations#response:"
-            puts "#{text.source_location} (#{name})"
-            puts response
-            puts "_" * 80
-          end
-
-          return response if success.nil?
-          return send(success, response) if success.is_a?(Symbol)
-
-          instance_exec(response, &success)
-        end
-      end
 
-      # the list of prompts declared at class level
-      def prompts
-        @prompts ||= []
-      end
+# This module provides a way to chain prompts and handle
+# user responses in a serialized manner, with support for
+# functions if the FunctionDispatch module is also included.
+module PromptDeclarations
+  extend ActiveSupport::Concern
 
-      # getter/setter for system prompt declared at class level
-      def system_prompt(prompt = nil)
-        prompt ? @system_prompt = prompt.squish : @system_prompt
+  module ClassMethods # rubocop:disable Style/Documentation
+    # Adds a prompt to the list of prompts. At minimum, provide a `text` or `call` parameter.
+    #
+    # @param system [Proc] A lambda that generates the system message.
+    # @param call [ChatCompletion] A callable class that includes ChatCompletion. Will be passed a context object when initialized.
+    # @param text Accepts 1) a lambda that returns the prompt text, 2) a string, or 3) a symbol that references a method.
+    # @param stream [Proc] A lambda stream handler
+    # @param success [Proc] The block of code to execute when the prompt is answered.
+    # @param params [Hash] Additional parameters for the completion API call
+    # @param if [Proc] A lambda that determines if the prompt should be executed.
+    def prompt(system: nil, call: nil, text: nil, stream: nil, success: nil, params: {}, if: nil, unless: nil, until: nil)
+      name = Digest::SHA256.hexdigest(text.inspect)[0..7]
+      prompts << OpenStruct.new({ name:, system:, call:, text:, stream:, success:, if:, unless:, until:, params: })
+
+      define_method(name) do |response|
+        puts "_" * 80
+        puts "PromptDeclarations#response:"
+        puts "#{text&.source_location} (#{name})"
+        puts response
+        puts "_" * 80
+
+        return response if success.nil?
+        return send(success, response) if success.is_a?(Symbol)
+
+        instance_exec(response, &success)
       end
     end
 
-    # Executes the chat completion process based on the class-level declared prompts.
-    # The response to each prompt is added to the transcript automatically and returned.
-    #
-    # Prompts require at least a `text` lambda parameter.
-    #
-    # @param params [Hash] Parameters for the chat completion override those defined in the current prompt.
-    # @option params [Boolean] :raw (false) Whether to return the raw response or dig the text content.
-    #
-    # Uses system prompt in following order of priority:
-    # - system lambda specified in the prompt declaration
-    # - system_prompt instance method if defined
-    # - system_prompt class-level declaration if defined
-    #
-    # TODO: shortcut syntax passes just a string prompt if no other options are needed.
-    #
-    # @raise [RuntimeError] If no prompts are defined.
-    #
-    def chat_completion(params: {}, raw: false)
-      raise "No prompts defined" unless self.class.prompts.present?
-
-      current_prompts = self.class.prompts.clone
+    def prompts
+      @prompts ||= []
+    end
+  end
 
-      while (@current_prompt = current_prompts.shift)
-        __system_prompt = instance_exec(&@current_prompt.system) if @current_prompt.system.present? # rubocop:disable Lint/UnderscorePrefixedVariableName
+  attr_reader :current_prompt, :last_response
+
+  MAX_LOOP_COUNT = 5
+
+  # Executes the chat completion process based on the class-level declared prompts.
+  # The response to each prompt is added to the transcript automatically and returned.
+  #
+  # Raises an error if there are not enough prompts defined.
+  #
+  # Uses system prompt in following order of priority:
+  # - system lambda specified in the prompt declaration
+  # - system_prompt instance method if defined
+  # - system_prompt class-level declaration if defined
+  #
+  # Prompts require a text lambda to be defined at minimum.
+  # TODO: shortcut syntax passes just a string prompt if no other options are needed.
+  #
+  # @raise [RuntimeError] If no prompts are defined.
+  #
+  # @param prompt [String] The prompt to use for the chat completion.
+  # @param params [Hash] Parameters for the chat completion.
+  # @param raw [Boolean] Whether to return the raw response.
+  #
+  # TODO: SHOULD NOT HAVE A DIFFERENT INTERFACE THAN PARENT
+  def chat_completion(prompt = nil, params: {}, raw: false, openai: false)
+    raise "No prompts defined" unless self.class.prompts.present?
+
+    loop_count = 0
+
+    current_prompts = self.class.prompts.clone
+
+    while (@current_prompt = current_prompts.shift)
+      next if @current_prompt.if.present? && !instance_exec(&@current_prompt.if)
+      next if @current_prompt.unless.present? && instance_exec(&@current_prompt.unless)
+
+      input = case current_prompt.text
+              when Proc
+                instance_exec(&current_prompt.text)
+              when String
+                current_prompt.text
+              when Symbol
+                send(current_prompt.text)
+              else
+                last_response.presence || prompt
+              end
+
+      if current_prompt.call.present?
+        Rails.logger.debug "Calling #{current_prompt.call} with input: #{input}"
+        current_prompt.call.new(self).call(input).tap do |response|
+          if response.present?
+            transcript << { assistant: response }
+            @last_response = send(current_prompt.name, response)
+          end
+        end
+      else
+        __system_prompt = instance_exec(&current_prompt.system) if current_prompt.system.present? # rubocop:disable Lint/UnderscorePrefixedVariableName
         __system_prompt ||= system_prompt if respond_to?(:system_prompt)
         __system_prompt ||= self.class.system_prompt.presence
         transcript << { system: __system_prompt } if __system_prompt
-        transcript << { user: instance_exec(&@current_prompt.text) } # text is required
+        transcript << { user: instance_exec(&current_prompt.text) } # text is required
 
-        params = @current_prompt.params.merge(params)
+        params = current_prompt.params.merge(params)
 
         # set the stream if necessary
-        self.stream = instance_exec(&@current_prompt.stream) if @current_prompt.stream.present?
+        self.stream = instance_exec(&current_prompt.stream) if current_prompt.stream.present?
 
-        super(params:, raw:).then do |response|
-          transcript << { assistant: response }
-          @last_response = send(@current_prompt.name, response)
-        end
+        execute_ai_request(params:, raw:, openai:, transcript:, loop_count:)
       end
 
-      @last_response
+      next unless current_prompt.until.present? && !instance_exec(&current_prompt.until)
+
+      if loop_count >= MAX_LOOP_COUNT
+        Honeybadger.notify(
+          "Max loop count reached in chat_completion. Forcing return.",
+          context: {
+            current_prompts:,
+            prompt:,
+            usage_subject: usage_subject.inspect,
+            last_response: Current.or_response
+          }
+        )
+
+        return last_response
+      else
+        current_prompts.unshift(@current_prompt) # put it back at the front
+        loop_count += 1
+      end
     end
 
-    # Returns the model parameter of the current prompt or the default model.
-    #
-    # @return [Object] The model parameter of the current prompt or the default model.
-    def model
-      @current_prompt.params[:model] || super
-    end
+    last_response
+  end
 
-    # Returns the temperature parameter of the current prompt or the default temperature.
-    #
-    # @return [Float] The temperature parameter of the current prompt or the default temperature.
-    def temperature
-      @current_prompt.params[:temperature] || super
+  def execute_ai_request(params:, raw:, openai:, transcript:, loop_count:)
+    chat_completion_from_superclass(params:, raw:, openai:).then do |response|
+      transcript << { assistant: response }
+      @last_response = send(current_prompt.name, response)
+      self.stream = nil # clear it again so it's not used for the next prompt
    end
+  rescue Conversation::StreamError => e
+    # Bubbles the error up the stack if no loops remain
+    raise Faraday::ServerError.new(nil, { status: e.status, body: e.response }) if loop_count >= MAX_LOOP_COUNT
 
-    # Returns the max_tokens parameter of the current prompt or the default max_tokens.
-    #
-    # @return [Integer] The max_tokens parameter of the current prompt or the default max_tokens.
-    def max_tokens
-      @current_prompt.params[:max_tokens] || super
-    end
+    sleep 1.second # Wait before continuing
+  end
+
+  # Returns the model parameter of the current prompt or the default model.
+  #
+  # @return [Object] The model parameter of the current prompt or the default model.
+  def model
+    @current_prompt.params[:model] || super
+  end
+
+  # Returns the temperature parameter of the current prompt or the default temperature.
+  #
+  # @return [Float] The temperature parameter of the current prompt or the default temperature.
+  def temperature
+    @current_prompt.params[:temperature] || super
+  end
+
+  # Returns the max_tokens parameter of the current prompt or the default max_tokens.
+  #
+  # @return [Integer] The max_tokens parameter of the current prompt or the default max_tokens.
+  def max_tokens
+    @current_prompt.params[:max_tokens] || super
+  end
+
+  protected
+
+  # workaround for super.chat_completion, which is not available in ruby
+  def chat_completion_from_superclass(*args, **kargs)
+    method(:chat_completion).super_method.call(*args, **kargs)
  end
 end
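
A minimal sketch of the new declaration options working together; `pending_items?`, `processing_complete?`, and `log_status` are illustrative methods the including class would define:

```ruby
class BatchProcessor
  include Raix::ChatCompletion
  include Raix::PromptDeclarations

  # Skipped entirely when the if-lambda is false; otherwise re-queued
  # (up to MAX_LOOP_COUNT times) until the until-lambda returns true.
  prompt text: "Process the next pending item and report status",
         if: -> { pending_items? },
         until: -> { processing_complete? },
         success: ->(response) { log_status(response) }
end
```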
data/lib/raix/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Raix
-  VERSION = "0.5"
+  VERSION = "0.7"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: raix
 version: !ruby/object:Gem::Version
-  version: '0.5'
+  version: '0.7'
 platform: ruby
 authors:
 - Obie Fernandez
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2025-02-10 00:00:00.000000000 Z
+date: 2025-04-13 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activesupport
@@ -68,6 +68,7 @@ files:
 - Gemfile.lock
 - Guardfile
 - LICENSE.txt
+- README.llm
 - README.md
 - Rakefile
 - lib/raix.rb