raix 0.3.1 → 0.4.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: eaf67088dabee6158ede28a1a2279124257313326c9bea8ef3791efe3649e7ca
-  data.tar.gz: b35022cc748f4ab851044300ee57dcc5ee317d133bdbf68fb3aeb2a6cbaa4bcd
+  metadata.gz: 318057e8ece37b63c06884a61c37dc1ef15f38cee2c05d5305bcaf6d3697420e
+  data.tar.gz: 250229d71a808203689b87cea7c1ce8dd31b085c92b54dce392787299cd420d6
 SHA512:
-  metadata.gz: 8aa98c05854a5697de357f75bacb649e17f51be6a4fe183dfb227aedb803c01f8001ffed29948af2883f726950fd58706d93410abded42cf25e6139c5fde88a2
-  data.tar.gz: f7eda9fc1aaf073d874b9c9d3c3c243605d6799d7c0778f701ae53d8ea339e4ca11ed358a0b66894675d774cdd2561cbdf38471bcd577e3e2638f3db947db609
+  metadata.gz: dc51d8fab907f8ffa5e95df2ef308ee3cd5fc443f46803e8a6f184be80e12719be2d25aeb51349a7912b35df55866db66cdec532c0258bc66a97aeabbc4017d0
+  data.tar.gz: 7b393143a5da05ba75ac11a8a77e31b0e61c1e7e9cb564fc6c986a4f63e2f98a24d43056f613e8e4685baab85251c9e819bcf99fcdc22ee7428b50eb9895d4e7
data/.rubocop.yml CHANGED
@@ -11,7 +11,7 @@ Style/StringLiteralsInInterpolation:
   EnforcedStyle: double_quotes
 
 Layout/LineLength:
-  Max: 120
+  Max: 180
 
 Metrics/BlockLength:
   Enabled: false
data/.ruby-version ADDED
@@ -0,0 +1 @@
+3.2.2
data/CHANGELOG.md CHANGED
@@ -8,3 +8,9 @@
 - adds `ChatCompletion` module
 - adds `PromptDeclarations` module
 - adds `FunctionDispatch` module
+
+## [0.3.2] - 2024-06-29
+- adds support for streaming
+
+## [0.4.0] - 2024-10-18
+- adds support for Anthropic-style prompt caching
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    raix (0.1.0)
+    raix (0.3.2)
       activesupport (>= 6.0)
       open_router (~> 0.2)
 
@@ -41,6 +41,7 @@ GEM
     faraday-retry (2.2.1)
       faraday (~> 2.0)
     ffi (1.17.0-arm64-darwin)
+    ffi (1.17.0-x86_64-linux-gnu)
     formatador (1.1.0)
     guard (2.18.1)
       formatador (>= 0.2.4)
@@ -79,6 +80,8 @@ GEM
     netrc (0.11.0)
     nokogiri (1.16.6-arm64-darwin)
       racc (~> 1.4)
+    nokogiri (1.16.6-x86_64-linux)
+      racc (~> 1.4)
     notiffany (0.1.3)
       nenv (~> 0.1)
       shellany (~> 0.0)
@@ -164,6 +167,7 @@ GEM
       sorbet-static (= 0.5.11447)
     sorbet-runtime (0.5.11447)
     sorbet-static (0.5.11447-universal-darwin)
+    sorbet-static (0.5.11447-x86_64-linux)
     sorbet-static-and-runtime (0.5.11447)
       sorbet (= 0.5.11447)
      sorbet-runtime (= 0.5.11447)
@@ -194,6 +198,8 @@ GEM
 
 PLATFORMS
   arm64-darwin-21
+  arm64-darwin-22
+  x86_64-linux
 
 DEPENDENCIES
   activesupport (>= 6.0)
data/README.md CHANGED
@@ -42,6 +42,30 @@ transcript << { role: "user", content: "What is the meaning of life?" }
 
 One of the advantages of OpenRouter and the reason that it is used by default by this library is that it handles mapping message formats from the OpenAI standard to whatever other model you're wanting to use (Anthropic, Cohere, etc.)
 
+### Prompt Caching
+
+Raix supports [Anthropic-style prompt caching](https://openrouter.ai/docs/prompt-caching#anthropic-claude) when using Anthropic's Claude family of models. You can specify a `cache_at` parameter when doing a chat completion. If the character count of a particular message's content is longer than the `cache_at` parameter, the message will be sent to Anthropic as a multipart message with a cache control "breakpoint" set to "ephemeral".
+
+Note that there is a limit of four breakpoints, and the cache expires within five minutes. It is therefore recommended to reserve the cache breakpoints for large bodies of text, such as character cards, CSV data, RAG data, book chapters, etc. Raix does not enforce a limit on the number of breakpoints, which means that you might get an error if you try to cache too many messages.
+
+```ruby
+>> my_class.chat_completion(params: { cache_at: 1000 })
+=> {
+  "messages": [
+    {
+      "role": "system",
+      "content": [
+        {
+          "type": "text",
+          "text": "HUGE TEXT BODY LONGER THAN 1000 CHARACTERS",
+          "cache_control": {
+            "type": "ephemeral"
+          }
+        }
+      ]
+    },
+```
+
 ### Use of Tools/Functions
 
 The second (optional) module that you can add to your Ruby classes after `ChatCompletion` is `FunctionDispatch`. It lets you declare and implement functions to be called at the AI's discretion as part of a chat completion "loop" in a declarative, Rails-like "DSL" fashion.
@@ -216,6 +240,18 @@ If bundler is not being used to manage dependencies, install the gem by executing
 
     $ gem install raix
 
+If you are using the default OpenRouter API, Raix expects `Raix.configuration.openrouter_client` to be initialized with the OpenRouter API client instance.
+
+You can add an initializer to your application's `config/initializers` directory:
+
+```ruby
+# config/initializers/raix.rb
+Raix.configure do |config|
+  config.openrouter_client = OpenRouter::Client.new
+end
+```
+
+You will also need to configure the OpenRouter API access token as per the instructions here: https://github.com/OlympiaAI/open_router?tab=readme-ov-file#quickstart
 
 ## Development
 
@@ -235,4 +271,4 @@ The gem is available as open source under the terms of the [MIT License](https:/
 
 ## Code of Conduct
 
-Everyone interacting in the Raix::Rails project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/[OlympiaAI]/raix/blob/main/CODE_OF_CONDUCT.md).
+Everyone interacting in the Raix project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/OlympiaAI/raix/blob/main/CODE_OF_CONDUCT.md).
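The `cache_at` behavior described in the README section above can be sketched as a standalone transformation. This is a hedged illustration only — `apply_cache_breakpoint` is a hypothetical helper, not Raix's actual implementation:

```ruby
# Hypothetical sketch: wrap a message's content in Anthropic-style
# multipart format with an "ephemeral" cache-control breakpoint when
# the content length exceeds the cache_at character threshold.
def apply_cache_breakpoint(message, cache_at:)
  return message if cache_at.nil? || message[:content].to_s.length < cache_at

  message.merge(
    content: [
      {
        type: "text",
        text: message[:content],
        cache_control: { type: "ephemeral" }
      }
    ]
  )
end

# Short messages pass through untouched; long ones get the breakpoint.
short = apply_cache_breakpoint({ role: "user", content: "hi" }, cache_at: 1000)
long  = apply_cache_breakpoint({ role: "system", content: "x" * 2000 }, cache_at: 1000)
```

Keeping the threshold check character-based (rather than token-based) matches the README's description and avoids a tokenizer dependency.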
data/lib/raix/chat_completion.rb ADDED
@@ -0,0 +1,183 @@
+# frozen_string_literal: true
+
+require "active_support/concern"
+require "active_support/core_ext/object/blank"
+require "raix/message_adapters/base"
+require "open_router"
+require "openai"
+
+module Raix
+  # The `ChatCompletion` module is a Rails concern that provides a way to interact
+  # with the OpenRouter Chat Completion API via its client. The module includes a few
+  # methods that allow you to build a transcript of messages and then send them to
+  # the API for completion. The API will return a response that you can use however
+  # you see fit. If the response includes a function call, the module will dispatch
+  # the function call and return the result, which implies that function calls need
+  # to be defined on the class that includes this module. (Note: You should probably
+  # use the `FunctionDispatch` module to define functions instead of doing it manually.)
+  module ChatCompletion
+    extend ActiveSupport::Concern
+
+    attr_accessor :cache_at, :frequency_penalty, :logit_bias, :logprobs, :loop, :min_p, :model, :presence_penalty,
+                  :repetition_penalty, :response_format, :stream, :temperature, :max_completion_tokens,
+                  :max_tokens, :seed, :stop, :top_a, :top_k, :top_logprobs, :top_p, :tools, :tool_choice, :provider
+
+    # This method performs chat completion based on the provided transcript and parameters.
+    #
+    # @param params [Hash] The parameters for chat completion.
+    # @option params [Boolean] :loop (false) Whether to loop the chat completion after function calls.
+    # @option params [Boolean] :json (false) Whether to parse the response as a JSON object.
+    # @option params [Boolean] :openai (false) Whether to use OpenAI's API instead of OpenRouter's.
+    # @option params [Boolean] :raw (false) Whether to return the raw response or dig the text content.
+    # @return [String|Hash] The completed chat response.
+    def chat_completion(params: {}, loop: false, json: false, raw: false, openai: false)
+      # set params to default values if not provided
+      params[:cache_at] ||= cache_at.presence
+      params[:frequency_penalty] ||= frequency_penalty.presence
+      params[:logit_bias] ||= logit_bias.presence
+      params[:logprobs] ||= logprobs.presence
+      params[:max_completion_tokens] ||= max_completion_tokens.presence || Raix.configuration.max_completion_tokens
+      params[:max_tokens] ||= max_tokens.presence || Raix.configuration.max_tokens
+      params[:min_p] ||= min_p.presence
+      params[:presence_penalty] ||= presence_penalty.presence
+      params[:provider] ||= provider.presence
+      params[:repetition_penalty] ||= repetition_penalty.presence
+      params[:response_format] ||= response_format.presence
+      params[:seed] ||= seed.presence
+      params[:stop] ||= stop.presence
+      params[:temperature] ||= temperature.presence || Raix.configuration.temperature
+      params[:tool_choice] ||= tool_choice.presence
+      params[:tools] ||= tools.presence
+      params[:top_a] ||= top_a.presence
+      params[:top_k] ||= top_k.presence
+      params[:top_logprobs] ||= top_logprobs.presence
+      params[:top_p] ||= top_p.presence
+
+      if json
+        unless openai
+          params[:provider] ||= {}
+          params[:provider][:require_parameters] = true
+        end
+        params[:response_format] ||= {}
+        params[:response_format][:type] = "json_object"
+      end
+
+      # used by FunctionDispatch
+      self.loop = loop
+
+      # set the model to the default if not provided
+      self.model ||= Raix.configuration.model
+
+      adapter = MessageAdapters::Base.new(self)
+      messages = transcript.flatten.compact.map { |msg| adapter.transform(msg) }
+      raise "Can't complete an empty transcript" if messages.blank?
+
+      begin
+        response = if openai
+                     openai_request(params:, model: openai, messages:)
+                   else
+                     openrouter_request(params:, model:, messages:)
+                   end
+        retry_count = 0
+        content = nil
+
+        # no need for additional processing if streaming
+        return if stream && response.blank?
+
+        # tuck the full response into a thread local in case needed
+        Thread.current[:chat_completion_response] = response.with_indifferent_access
+
+        # TODO: add a standardized callback hook for usage events
+        # broadcast(:usage_event, usage_subject, self.class.name.to_s, response, premium?)
+
+        # TODO: handle parallel tool calls
+        if (function = response.dig("choices", 0, "message", "tool_calls", 0, "function"))
+          @current_function = function["name"]
+          # dispatch the called function
+          arguments = JSON.parse(function["arguments"].presence || "{}")
+          arguments[:bot_message] = bot_message if respond_to?(:bot_message)
+          return send(function["name"], arguments.with_indifferent_access)
+        end
+
+        response.tap do |res|
+          content = res.dig("choices", 0, "message", "content")
+          if json
+            content = content.squish
+            return JSON.parse(content)
+          end
+
+          return content unless raw
+        end
+      rescue JSON::ParserError => e
+        if e.message.include?("not a valid") # blank JSON
+          puts "Retrying blank JSON response... (#{retry_count} attempts) #{e.message}"
+          retry_count += 1
+          sleep 1 * retry_count # backoff
+          retry if retry_count < 3
+
+          raise e # just fail if we can't get content after 3 attempts
+        end
+
+        puts "Bad JSON received!!!!!!: #{content}"
+        raise e
+      rescue Faraday::BadRequestError => e
+        # make sure we see the actual error message on console or Honeybadger
+        puts "Chat completion failed!!!!!!!!!!!!!!!!: #{e.response[:body]}"
+        raise e
+      end
+    end
+
+    # This method returns the transcript array.
+    # Manually add your messages to it in the following abbreviated format
+    # before calling `chat_completion`.
+    #
+    #   { system: "You are a pumpkin" },
+    #   { user: "Hey what time is it?" },
+    #   { assistant: "Sorry, pumpkins do not wear watches" }
+    #
+    # To add a function call, use the following format:
+    #   { function: { name: 'fancy_pants_function', arguments: { param: 'value' } } }
+    #
+    # To add a function result, use the following format:
+    #   { function: result, name: 'fancy_pants_function' }
+    #
+    # @return [Array] The transcript array.
+    def transcript
+      @transcript ||= []
+    end
+
+    private
+
+    def openai_request(params:, model:, messages:)
+      # deprecated in favor of max_completion_tokens
+      params.delete(:max_tokens)
+
+      params[:stream] ||= stream.presence
+      params[:stream_options] = { include_usage: true } if params[:stream]
+
+      params.delete(:temperature) if model == "o1-preview"
+
+      Raix.configuration.openai_client.chat(parameters: params.compact.merge(model:, messages:))
+    end
+
+    def openrouter_request(params:, model:, messages:)
+      # max_completion_tokens is not supported by OpenRouter
+      params.delete(:max_completion_tokens)
+
+      retry_count = 0
+
+      begin
+        Raix.configuration.openrouter_client.complete(messages, model:, extras: params.compact, stream:)
+      rescue OpenRouter::ServerError => e
+        if e.message.include?("retry")
+          puts "Retrying OpenRouter request... (#{retry_count} attempts) #{e.message}"
+          retry_count += 1
+          sleep 1 * retry_count # backoff
+          retry if retry_count < 5
+        end
+
+        raise e
+      end
+    end
+  end
+end
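The abbreviated transcript format documented on `#transcript` above (shorthand `{ system: ... }` / `{ user: ... }` / `{ assistant: ... }` hashes) can be illustrated with a small standalone transformation. This is an assumption-laden sketch — `expand_message` is hypothetical; in Raix the real conversion is handled by `MessageAdapters::Base#transform`:

```ruby
# Hypothetical sketch: expand the abbreviated { role_key: content } shorthand
# into the standard { role:, content: } message shape expected by chat APIs.
def expand_message(msg)
  role = (msg.keys & %i[system user assistant]).first
  raise ArgumentError, "unknown message shape: #{msg.inspect}" unless role

  { role: role.to_s, content: msg[role] }
end

transcript = [
  { system: "You are a pumpkin" },
  { user: "Hey what time is it?" },
  { assistant: "Sorry, pumpkins do not wear watches" }
]
messages = transcript.map { |m| expand_message(m) }
```

The shorthand keeps hand-written transcripts terse while the expanded form is what actually goes over the wire.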
data/lib/raix/function_dispatch.rb ADDED
@@ -0,0 +1,111 @@
+# frozen_string_literal: true
+
+require "securerandom"
+module Raix
+  # Provides declarative function definition for ChatCompletion classes.
+  #
+  # Example:
+  #
+  #   class MeaningOfLife
+  #     include Raix::ChatCompletion
+  #     include Raix::FunctionDispatch
+  #
+  #     function :ask_deep_thought do
+  #       wait 236_682_000_000_000
+  #       "The meaning of life is 42"
+  #     end
+  #
+  #     def initialize
+  #       transcript << { user: "What is the meaning of life?" }
+  #       chat_completion
+  #     end
+  #   end
+  module FunctionDispatch
+    extend ActiveSupport::Concern
+
+    class_methods do
+      attr_reader :functions
+
+      # Defines a function that can be dispatched by the ChatCompletion module while
+      # processing the response from an AI model.
+      #
+      # Declaring a function here will automatically add it (in JSON Schema format) to
+      # the list of tools provided to the OpenRouter Chat Completion API. The function
+      # will be dispatched by name, so make sure the name is unique. The function's block
+      # argument will be executed in the instance context of the class that includes this module.
+      #
+      # Example:
+      #   function :google_search, "Search Google for something", query: { type: "string" } do |arguments|
+      #     GoogleSearch.new(arguments[:query]).search
+      #   end
+      #
+      # @param name [Symbol] The name of the function.
+      # @param description [String] An optional description of the function.
+      # @param parameters [Hash] The parameters that the function accepts.
+      # @param block [Proc] The block of code to execute when the function is called.
+      def function(name, description = nil, **parameters, &block)
+        @functions ||= []
+        @functions << begin
+          { name:, parameters: { type: "object", properties: {} } }.tap do |definition|
+            definition[:description] = description if description.present?
+            parameters.map do |key, value|
+              definition[:parameters][:properties][key] = value
+            end
+          end
+        end
+
+        define_method(name) do |arguments|
+          id = SecureRandom.uuid[0, 23]
+          transcript << {
+            role: "assistant",
+            content: nil,
+            tool_calls: [
+              {
+                id:,
+                type: "function",
+                function: {
+                  name:,
+                  arguments: arguments.to_json
+                }
+              }
+            ]
+          }
+          instance_exec(arguments, &block).tap do |content|
+            transcript << {
+              role: "tool",
+              tool_call_id: id,
+              name:,
+              content: content.to_s
+            }
+            # TODO: add on_error handler as optional parameter to function
+          end
+
+          chat_completion(**chat_completion_args) if loop
+        end
+      end
+    end
+
+    included do
+      attr_accessor :chat_completion_args
+    end
+
+    def chat_completion(**chat_completion_args)
+      raise "No functions defined" if self.class.functions.blank?
+
+      self.chat_completion_args = chat_completion_args
+
+      super
+    end
+
+    # Stops the looping of chat completion after function calls.
+    # Useful for manually halting processing in workflow components
+    # that do not require a final text response to an end user.
+    def stop_looping!
+      self.loop = false
+    end
+
+    def tools
+      self.class.functions.map { |function| { type: "function", function: } }
+    end
+  end
+end
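The tool definition that the `function` class method accumulates above can be sketched in isolation. This is a hedged illustration mirroring the definition hash built in that method — `build_function_definition` is a hypothetical standalone helper, not part of the gem's public API:

```ruby
# Sketch: build the JSON Schema-style definition that `function`
# stores for each declared function, then wrap it the way `tools`
# does for the Chat Completion API.
def build_function_definition(name, description = nil, **parameters)
  { name: name, parameters: { type: "object", properties: {} } }.tap do |definition|
    definition[:description] = description if description
    parameters.each do |key, value|
      definition[:parameters][:properties][key] = value
    end
  end
end

definition = build_function_definition(
  :google_search, "Search Google for something", query: { type: "string" }
)
tool = { type: "function", function: definition }
```

Dispatching by unique name, as the comments above require, works because the definition's `name:` doubles as the Ruby method name that `define_method` creates.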
data/lib/raix/prompt_declarations.rb ADDED
@@ -0,0 +1,118 @@
+# frozen_string_literal: true
+
+require "ostruct"
+
+module Raix
+  # The PromptDeclarations module provides a way to chain prompts and handle
+  # user responses in a serialized manner (in the order they were defined),
+  # with support for functions if the FunctionDispatch module is also included.
+  module PromptDeclarations
+    extend ActiveSupport::Concern
+    extend ChatCompletion
+
+    module ClassMethods # rubocop:disable Style/Documentation
+      # Adds a prompt to the list of prompts.
+      #
+      # @param system [Proc] A lambda that generates the system message.
+      # @param text [Proc] A lambda that generates the prompt text. (Required)
+      # @param success [Proc] The block of code to execute when the prompt is answered.
+      # @param params [Hash] Additional parameters for the completion API call.
+      # @param stream [Boolean] Whether to stream the response.
+      def prompt(text:, system: nil, success: nil, params: {}, stream: false)
+        name = Digest::SHA256.hexdigest(text.inspect)[0..7]
+        prompts << begin
+          OpenStruct.new({ name:, system:, text:, success:, params:, stream: })
+        end
+
+        define_method(name) do |response|
+          if Rails.env.local?
+            puts "_" * 80
+            puts "PromptDeclarations#response:"
+            puts "#{text.source_location} (#{name})"
+            puts response
+            puts "_" * 80
+          end
+
+          return response if success.nil?
+          return send(success, response) if success.is_a?(Symbol)
+
+          instance_exec(response, &success)
+        end
+      end
+
+      # the list of prompts declared at class level
+      def prompts
+        @prompts ||= []
+      end
+
+      # getter/setter for system prompt declared at class level
+      def system_prompt(prompt = nil)
+        prompt ? @system_prompt = prompt.squish : @system_prompt
+      end
+    end
+
+    # Executes the chat completion process based on the class-level declared prompts.
+    # The response to each prompt is added to the transcript automatically and returned.
+    #
+    # Prompts require at least a `text` lambda parameter.
+    #
+    # @param params [Hash] Parameters for the chat completion that override those defined in the current prompt.
+    # @option params [Boolean] :raw (false) Whether to return the raw response or dig the text content.
+    #
+    # Uses the system prompt in the following order of priority:
+    #   - system lambda specified in the prompt declaration
+    #   - system_prompt instance method if defined
+    #   - system_prompt class-level declaration if defined
+    #
+    # TODO: shortcut syntax that passes just a string prompt if no other options are needed.
+    #
+    # @raise [RuntimeError] If no prompts are defined.
+    #
+    def chat_completion(params: {}, raw: false)
+      raise "No prompts defined" unless self.class.prompts.present?
+
+      current_prompts = self.class.prompts.clone
+
+      while (@current_prompt = current_prompts.shift)
+        __system_prompt = instance_exec(&@current_prompt.system) if @current_prompt.system.present? # rubocop:disable Lint/UnderscorePrefixedVariableName
+        __system_prompt ||= system_prompt if respond_to?(:system_prompt)
+        __system_prompt ||= self.class.system_prompt.presence
+        transcript << { system: __system_prompt } if __system_prompt
+        transcript << { user: instance_exec(&@current_prompt.text) } # text is required
+
+        params = @current_prompt.params.merge(params)
+
+        # set the stream if necessary
+        self.stream = instance_exec(&@current_prompt.stream) if @current_prompt.stream.present?
+
+        super(params:, raw:).then do |response|
+          transcript << { assistant: response }
+          @last_response = send(@current_prompt.name, response)
+        end
+      end
+
+      @last_response
+    end
+
+    # Returns the model parameter of the current prompt or the default model.
+    #
+    # @return [Object] The model parameter of the current prompt or the default model.
+    def model
+      @current_prompt.params[:model] || super
+    end
+
+    # Returns the temperature parameter of the current prompt or the default temperature.
+    #
+    # @return [Float] The temperature parameter of the current prompt or the default temperature.
+    def temperature
+      @current_prompt.params[:temperature] || super
+    end
+
+    # Returns the max_tokens parameter of the current prompt or the default max_tokens.
+    #
+    # @return [Integer] The max_tokens parameter of the current prompt or the default max_tokens.
+    def max_tokens
+      @current_prompt.params[:max_tokens] || super
+    end
+  end
+end
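The system-prompt priority order documented on `chat_completion` above (prompt-level lambda, then instance-level method, then class-level declaration) can be sketched as a tiny resolver. This is a hypothetical illustration of the lookup order only — `resolve_system_prompt` is not part of Raix:

```ruby
# Sketch: resolve the effective system prompt using the documented
# priority: prompt-declaration lambda first, then the instance-level
# value, then the class-level default. Returns nil if none is set.
def resolve_system_prompt(prompt_lambda: nil, instance_prompt: nil, class_prompt: nil)
  return prompt_lambda.call if prompt_lambda

  instance_prompt || class_prompt
end
```

In the real module the same fallback chain is expressed with `||=` assignments to `__system_prompt` inside the prompt loop.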
data/lib/raix/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Raix
-  VERSION = "0.3.1"
+  VERSION = "0.4.0"
 end
data/lib/raix.rb CHANGED
@@ -16,6 +16,9 @@ module Raix
     # The max_tokens option determines the maximum number of tokens to generate.
     attr_accessor :max_tokens
 
+    # The max_completion_tokens option determines the maximum number of completion tokens to generate.
+    attr_accessor :max_completion_tokens
+
     # The model option determines the model to use for text generation. This option
     # is normally set in each class that includes the ChatCompletion module.
     attr_accessor :model
@@ -27,12 +30,14 @@ module Raix
     attr_accessor :openai_client
 
     DEFAULT_MAX_TOKENS = 1000
+    DEFAULT_MAX_COMPLETION_TOKENS = 16_384
     DEFAULT_MODEL = "meta-llama/llama-3-8b-instruct:free"
     DEFAULT_TEMPERATURE = 0.0
 
     # Initializes a new instance of the Configuration class with default values.
     def initialize
       self.temperature = DEFAULT_TEMPERATURE
+      self.max_completion_tokens = DEFAULT_MAX_COMPLETION_TOKENS
       self.max_tokens = DEFAULT_MAX_TOKENS
       self.model = DEFAULT_MODEL
     end
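The defaults added to `lib/raix.rb` above follow the common Ruby `configure`-block idiom. A simplified, self-contained stand-in (hedged: `TinyConfig` is a hypothetical module illustrating the pattern, not the gem's actual class) shows how the defaults and overrides interact:

```ruby
# Hypothetical stand-in for Raix's configuration pattern: a Configuration
# class with defaults, exposed through a memoized module-level accessor
# and a configure block that yields it for overrides.
module TinyConfig
  class Configuration
    attr_accessor :temperature, :max_tokens, :max_completion_tokens, :model

    DEFAULT_MAX_TOKENS = 1000
    DEFAULT_MAX_COMPLETION_TOKENS = 16_384
    DEFAULT_MODEL = "meta-llama/llama-3-8b-instruct:free"
    DEFAULT_TEMPERATURE = 0.0

    def initialize
      self.temperature = DEFAULT_TEMPERATURE
      self.max_completion_tokens = DEFAULT_MAX_COMPLETION_TOKENS
      self.max_tokens = DEFAULT_MAX_TOKENS
      self.model = DEFAULT_MODEL
    end
  end

  class << self
    def configuration
      @configuration ||= Configuration.new
    end

    def configure
      yield configuration
    end
  end
end

# Overriding one value leaves the other defaults intact.
TinyConfig.configure { |c| c.max_completion_tokens = 2048 }
```

This is why `chat_completion` can fall back to `Raix.configuration.max_completion_tokens` when the including class sets nothing.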
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: raix
 version: !ruby/object:Gem::Version
-  version: 0.3.1
+  version: 0.4.0
 platform: ruby
 authors:
 - Obie Fernandez
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2024-06-26 00:00:00.000000000 Z
+date: 2024-10-19 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activesupport
@@ -47,6 +47,7 @@ extra_rdoc_files: []
 files:
 - ".rspec"
 - ".rubocop.yml"
+- ".ruby-version"
 - CHANGELOG.md
 - CODE_OF_CONDUCT.md
 - Gemfile
@@ -55,6 +56,9 @@ files:
 - README.md
 - Rakefile
 - lib/raix.rb
+- lib/raix/chat_completion.rb
+- lib/raix/function_dispatch.rb
+- lib/raix/prompt_declarations.rb
 - lib/raix/version.rb
 - raix.gemspec
 - sig/raix.rbs
@@ -80,7 +84,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
   version: '0'
 requirements: []
-rubygems_version: 3.4.10
+rubygems_version: 3.5.21
 signing_key:
 specification_version: 4
 summary: Ruby AI eXtensions