regent 0.2.1 → 0.3.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: cd53327576ffc795ba61329d4ba018c378bf38acd790832c2d7735ea9a2af647
- data.tar.gz: 7143f84225c50bb3489e44eaf9cd50b115b49203706130b30e74e5d89eefb861
+ metadata.gz: c5ca2748be57e491dbf33208cfaf09a8009716daf365845dac771c083c5de6d4
+ data.tar.gz: 9bcb3ca59f00555aeffdfe27f53cf10bbfb32e0789e00eedd07cc1e6db0d7f0c
  SHA512:
- metadata.gz: 6f158ad85ad090a4676ba7db6c5a2326f16826ca2adfebbe0652f1fe987fa6733349a1623e31ef95740769d1f6ea4d0f34d04dba9c4f4913f2ee733ad6563686
- data.tar.gz: c825330735acbb6e73169add11b399e810938807156e6b996c60652bebf073da141d66d20e1fd729f48570110813514dbb8b18bd7db12625657a0434a2c660bd
+ metadata.gz: 3f9505c05aea8978afb583224616b4cc1851abad8e484d764be40833269adc4f57eab5d1437f4dfd3b47282a449a288048dcf21121a5d6d1b1361a918c6173d5
+ data.tar.gz: aaa3d15807126bb4cf4a4d7e27f8b52ab550b5851157c6a5a988fbbe522505a63ee4ce3f7b35487b8826bc52a431d34ba55818e5d167bc454f8f1f1fffeb5aef
data/README.md CHANGED
@@ -1,18 +1,18 @@
-
  ![regent_light](https://github.com/user-attachments/assets/62564dac-b8d7-4dc0-9b63-64c6841b5872)
 
  # Regent
+
  **Regent** is a library for building AI agents with Ruby.
 
  > [!WARNING]
  > Regent is currently an experiment exploring patterns for building easily traceable and debuggable AI agents of different architectures. It is under active development and not yet intended for production use.
 
  ## Showcase
+
  A basic Regent Agent extended with a `price_tool` that retrieves cryptocurrency prices from coingecko.com.
 
  ![screencast 2024-12-25 21-53-47](https://github.com/user-attachments/assets/4e65b731-bbd7-4732-b157-b705d35a7824)
 
-
  ## Install
 
  ```bash
@@ -31,10 +31,22 @@ and run
  bundle install
  ```
 
+ ## Available LLMs
+
+ Regent currently supports LLMs from the following providers:
+
+ | Provider | Models | Supported |
+ | ------------- | :--------------------: | :-------: |
+ | OpenAI | `gpt-` based models | ✅ |
+ | Anthropic | `claude-` based models | ✅ |
+ | Google Gemini | `gemini-` based models | ✅ |
+
  ## Usage
- In order to operate an agent needs access to LLM (large language model). Regent relies on the [Langchainrb](https://github.com/patterns-ai-core/langchainrb) library to interact with LLMs. Let's create an instance of OapnAI LLM:
+
+ In order to operate, an agent needs access to an LLM (large language model). Regent provides a simple interface for interacting with LLMs. You can create an instance of any supported LLM provider by passing the model name to the `Regent::LLM.new` method:
+
  ```ruby
- llm = Langchain::LLM::OpenAI(api_key: ENV["OPENAI_KEY"])
+ llm = Regent::LLM.new("gpt-4o-mini")
  ```
 
  Agents are effective when they have tools that enable them to get new information:
@@ -49,14 +61,15 @@ end
  weather_tool = WeatherTool.new(name: "weather_tool", description: "Get the weather in a given location")
  ```
 
- Next, let's instantiate an agent passing LLM and a set of tools:
+ Next, let's instantiate an agent, passing the agent's context, an LLM, and a set of tools:
 
  ```ruby
- agent = Regent::Agent.new(llm: llm, tools: [weather_tool])
+ agent = Regent::Agent.new("You are a weather AI agent", llm: llm, tools: [weather_tool])
  ```
 
  Simply call the `execute` method, passing your query as an argument:
- ``` ruby
+
+ ```ruby
  agent.execute("What is the weather in London today?")
  ```
 
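Reviewer note: the 0.3.0 snippets above compose into a short end-to-end script. A minimal sketch, assuming a `Regent::Tool` base class with a `call` interface (the README's `WeatherTool` body is elided from this diff, so the class here is a hypothetical stand-in):

```ruby
require "regent"

# Hypothetical stand-in for the elided WeatherTool definition.
class WeatherTool < Regent::Tool
  def call(location)
    "It is 32 degrees and Sunny in #{location}"
  end
end

# Reads OPENAI_API_KEY from the environment (or pass api_key: explicitly).
llm = Regent::LLM.new("gpt-4o-mini")

weather_tool = WeatherTool.new(name: "weather_tool", description: "Get the weather in a given location")

agent = Regent::Agent.new("You are a weather AI agent", llm: llm, tools: [weather_tool])
agent.execute("What is the weather in London today?")
```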
data/lib/regent/agent.rb CHANGED
@@ -6,16 +6,17 @@ module Regent
 
  DEFAULT_MAX_ITERATIONS = 10
 
- def initialize(llm:, tools: [], **options)
+ def initialize(context, llm:, tools: [], **options)
  super()
 
+ @context = context
  @llm = llm
  @sessions = []
  @tools = tools.is_a?(Toolchain) ? tools : Toolchain.new(Array(tools))
  @max_iterations = options[:max_iterations] || DEFAULT_MAX_ITERATIONS
  end
 
- attr_reader :sessions, :llm, :tools
+ attr_reader :context, :sessions, :llm, :tools
 
  def execute(task)
  raise ArgumentError, "Task cannot be empty" if task.to_s.strip.empty?
@@ -47,7 +48,7 @@ module Regent
  end
 
  def react
- Regent::Engine::React.new(llm, tools, session, @max_iterations)
+ Regent::Engine::React.new(context, llm, tools, session, @max_iterations)
  end
  end
  end
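Note for upgraders: this makes the agent's context a required first positional argument, so 0.2.x call sites need updating. A sketch of the change, reusing the README's names:

```ruby
# 0.2.1 (old) — keyword-only construction:
#   agent = Regent::Agent.new(llm: llm, tools: [weather_tool])

# 0.3.0 (new) — context comes first:
agent = Regent::Agent.new("You are a weather AI agent", llm: llm, tools: [weather_tool])
agent.context # => "You are a weather AI agent"
```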
data/lib/regent/concerns/dependable.rb ADDED
@@ -0,0 +1,72 @@
+ # frozen_string_literal: true
+
+ module Regent
+ module Concerns
+ module Dependable
+ class VersionError < StandardError; end
+
+ def self.included(base)
+ base.class_eval do
+ class << self
+ def depends_on(gem_name)
+ @dependency = gem_name
+ end
+
+ def dependency
+ @dependency
+ end
+ end
+ end
+ end
+
+ def initialize(**options)
+ @dependency = self.class.dependency
+ require_dynamic(dependency) if dependency
+
+ super()
+ rescue Gem::LoadError
+ warn_and_exit(dependency, options[:model])
+ end
+
+ def require_dynamic(*names)
+ names.each { |name| load_dependency(name) }
+ end
+
+ private
+
+ def load_dependency(name)
+ gem(name)
+
+ return true unless defined? Bundler
+
+ gem_spec = Gem::Specification.find_by_name(name)
+ gem_requirement = dependencies.find { |gem| gem.name == gem_spec.name }.requirement
+
+ unless gem_requirement.satisfied_by?(gem_spec.version)
+ raise VersionError, version_error(gem_spec, gem_requirement)
+ end
+
+ require_gem(gem_spec)
+ end
+
+ def version_error(gem_spec, gem_requirement)
+ "'#{gem_spec.name}' gem version is #{gem_spec.version}, but your Gemfile specified #{gem_requirement}."
+ end
+
+ def require_gem(gem_spec)
+ gem_spec.full_require_paths.each do |path|
+ Dir.glob("#{path}/*.rb").each { |file| require file }
+ end
+ end
+
+ def dependencies
+ Bundler.load.dependencies
+ end
+
+ def warn_and_exit(name, model)
+ warn "\n\e[33mIn order to use \e[33;1m#{model}\e[0m\e[33m model you need to install \e[33;1m#{name}\e[0m\e[33m gem. Please add \e[33;1mgem \"#{name}\"\e[0m\e[33m to your Gemfile.\e[0m"
+ exit 1
+ end
+ end
+ end
+ end
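The concern above gives each provider class a `depends_on` macro: the named gem is required lazily at instantiation, with a version check against the Gemfile when Bundler is present. A minimal usage sketch; `FakeProvider` and the choice of the stdlib `json` gem are illustrative, not part of this diff, and it is best run outside Bundler since the version check consults the Gemfile:

```ruby
class FakeProvider
  include Regent::Concerns::Dependable

  # Loads the gem on FakeProvider.new; a missing gem triggers warn_and_exit,
  # and a Gemfile version mismatch raises Dependable::VersionError.
  depends_on "json"
end

FakeProvider.new
```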
data/lib/regent/engine/react/prompt_template.rb CHANGED
@@ -4,24 +4,27 @@ module Regent
  module Engine
  class React
  module PromptTemplate
- def self.system_prompt(tool_names)
+ def self.system_prompt(context = "", tool_list = "")
  <<~PROMPT
- You are assisstant reasoning step-by-step to solve complex problems.
- Your reasoning process happens in a loop of Though, Action, Observation.
+ ## Instructions
+ #{context.to_s.empty? ? "" : "Consider the following context: #{context}\n\n"}
+ You are an AI agent reasoning step-by-step to solve complex problems.
+ Your reasoning process happens in a loop of Thought, Action, Observation.
  Thought - a description of your thoughts about the question.
- Action - pick a an action from available tools. If there are no tools that can help return an Answer saying you are not able to help..
+ Action - pick an action from the available tools. If no available tool can help, return an Answer saying you are not able to help.
  Observation - is the result of running a tool.
+ PAUSE - is always present after an Action.
 
  ## Available tools:
- #{tool_names}
+ #{tool_list}
 
  ## Example session
  Question: What is the weather in London today?
- Thought: I need to get the wether in London
- Action: weather_tool | "London"
+ Thought: I need to get the current weather in London
+ Action: weather_tool | London
  PAUSE
 
- You will have a response with Observation:
+ You will receive a response from the user with an Observation:
  Observation: It is 32 degrees and Sunny
 
  ... (this Thought/Action/Observation can repeat N times)
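To make the template change concrete, here is roughly what the rendered system prompt opens with for the README's weather agent (the tool-list string is whatever `toolchain.to_s` produces; shown here as an illustrative literal):

```ruby
puts Regent::Engine::React::PromptTemplate.system_prompt(
  "You are a weather AI agent",
  "weather_tool - Get the weather in a given location"
)
# ## Instructions
# Consider the following context: You are a weather AI agent
#
# You are an AI agent reasoning step-by-step to solve complex problems.
# ...
```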
data/lib/regent/engine/react.rb CHANGED
@@ -10,14 +10,15 @@ module Regent
  stop: "PAUSE"
  }.freeze
 
- def initialize(llm, toolchain, session, max_iterations)
+ def initialize(context, llm, toolchain, session, max_iterations)
+ @context = context
  @llm = llm
  @toolchain = toolchain
  @session = session
  @max_iterations = max_iterations
  end
 
- attr_reader :llm, :toolchain, :session, :max_iterations
+ attr_reader :context, :llm, :toolchain, :session, :max_iterations
 
  def reason(task)
  initialize_session(task)
@@ -41,18 +42,17 @@ module Regent
  private
 
  def initialize_session(task)
- session.add_message({role: :system, content: Regent::Engine::React::PromptTemplate.system_prompt(toolchain.to_s)})
+ session.add_message({role: :system, content: Regent::Engine::React::PromptTemplate.system_prompt(context, toolchain.to_s)})
  session.add_message({role: :user, content: task})
  session.exec(Span::Type::INPUT, message: task) { task }
  end
 
  def get_llm_response
- session.exec(Span::Type::LLM_CALL, type: llm.defaults[:chat_model], message: session.messages.last[:content]) do
- result = llm.chat(messages: session.messages, params: { stop: [SEQUENCES[:stop]] })
+ session.exec(Span::Type::LLM_CALL, type: llm.model, message: session.messages.last[:content]) do
+ result = llm.invoke(session.messages, stop: [SEQUENCES[:stop]])
 
- # Relying on Langchain Response interface to get token counts and chat completion
- session.current_span.set_meta("#{result.prompt_tokens} → #{result.completion_tokens} tokens")
- result.chat_completion
+ session.current_span.set_meta("#{result.usage.input_tokens} → #{result.usage.output_tokens} tokens")
+ result.content
  end
  end
 
@@ -106,9 +106,9 @@ module Regent
  action = content.split(SEQUENCES[:action])[1]&.strip
  return [nil, nil] unless action
 
- parts = action.split('|', 2).map(&:strip)
- tool_name = parts[0]
- argument = parts[1].gsub('"', '')
+ parts = action.split('|').map(&:strip)
+ tool_name = parts[0].gsub(/["`']/, '')
+ argument = parts[1]&.gsub(/["`']/, '')
 
  # Handle cases where argument is nil, empty, or only whitespace
  argument = nil if argument.nil? || argument.empty?
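The new parsing strips quotes and backticks from both the tool name and the argument, since models sometimes echo formatting back. A quick sketch of what the rewritten lines do with a typical Action string:

```ruby
action = '`weather_tool` | "London"'

parts = action.split('|').map(&:strip)
tool_name = parts[0].gsub(/["`']/, '')  # => "weather_tool"
argument  = parts[1]&.gsub(/["`']/, '') # => "London" (nil-safe when no argument is given)
```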
data/lib/regent/llm/anthropic.rb ADDED
@@ -0,0 +1,48 @@
+ # frozen_string_literal: true
+
+ module Regent
+ class LLM
+ class Anthropic < Base
+ MAX_TOKENS = 1000
+ ENV_KEY = "ANTHROPIC_API_KEY"
+
+ depends_on "anthropic"
+
+ def invoke(messages, **args)
+ response = client.messages(parameters: {
+ messages: format_messages(messages),
+ system: system_instruction(messages),
+ model: options[:model],
+ stop_sequences: args[:stop] ? args[:stop] : nil,
+ max_tokens: MAX_TOKENS
+ })
+ format_response(response)
+ end
+
+ private
+
+ def client
+ @client ||= ::Anthropic::Client.new(access_token: api_key)
+ end
+
+ def system_instruction(messages)
+ messages.find { |message| message[:role].to_s == "system" }&.dig(:content)
+ end
+
+ def format_messages(messages)
+ messages.reject { |message| message[:role].to_s == "system" }
+ end
+
+ def format_response(response)
+ Response.new(
+ content: response.dig("content", 0, "text"),
+ model: options[:model],
+ usage: Usage.new(
+ input_tokens: response.dig("usage", "input_tokens"),
+ output_tokens: response.dig("usage", "output_tokens")
+ )
+ )
+ end
+ end
+ end
+ end
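Every provider funnels its raw payload through `format_response` into the shared `Response`/`Usage` value objects (defined in base.rb below), so the engine can read content and token usage uniformly. A sketch, assuming the anthropic gem is installed and `ANTHROPIC_API_KEY` is set; the model name and the returned values are illustrative:

```ruby
response = Regent::LLM.new("claude-3-5-sonnet-20241022").invoke(
  [{ role: :user, content: "Say hello" }]
)

response.content             # => "Hello!"
response.usage.input_tokens  # => e.g. 9
response.usage.output_tokens # => e.g. 5
```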
data/lib/regent/llm/base.rb ADDED
@@ -0,0 +1,61 @@
+ # frozen_string_literal: true
+
+ module Regent
+ class LLM
+ class Response
+ def initialize(content:, usage:, model:)
+ @content = content
+ @usage = usage
+ @model = model
+ end
+
+ attr_reader :content, :usage, :model
+ end
+
+ class Usage
+ def initialize(input_tokens:, output_tokens:)
+ @input_tokens = input_tokens
+ @output_tokens = output_tokens
+ end
+
+ attr_reader :input_tokens, :output_tokens
+ end
+
+ class Base
+ include Concerns::Dependable
+
+ def initialize(**options)
+ @options = options
+ api_key # resolve eagerly so a missing key fails fast
+
+ super()
+ end
+
+ def invoke(messages, **args)
+ provider.chat(messages: format_messages(messages), **args)
+ end
+
+ private
+
+ attr_reader :options, :dependency
+
+ def format_response(response)
+ Response.new(
+ content: response.chat_completion,
+ model: options[:model],
+ usage: Usage.new(input_tokens: response.prompt_tokens, output_tokens: response.completion_tokens)
+ )
+ end
+
+ def api_key
+ @api_key ||= options[:api_key] || api_key_from_env
+ end
+
+ def api_key_from_env
+ ENV.fetch(self.class::ENV_KEY) do
+ raise APIKeyNotFoundError, "API key not found. Make sure to set #{self.class::ENV_KEY} environment variable."
+ end
+ end
+ end
+ end
+ end
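Key resolution order is: explicit `api_key:` option first, then the provider's `ENV_KEY` variable, otherwise an error. A sketch of the three cases (key values are placeholders):

```ruby
# Explicit option wins:
Regent::LLM.new("gpt-4o-mini", api_key: "sk-...")

# Falls back to the provider-specific environment variable:
ENV["OPENAI_API_KEY"] = "sk-..."
Regent::LLM.new("gpt-4o-mini")

# With neither present, construction raises Regent::LLM::APIKeyNotFoundError.
```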
data/lib/regent/llm/gemini.rb ADDED
@@ -0,0 +1,42 @@
+ # frozen_string_literal: true
+
+ module Regent
+ class LLM
+ class Gemini < Base
+ ENV_KEY = "GEMINI_API_KEY"
+
+ depends_on "gemini-ai"
+
+ def invoke(messages, **args)
+ response = client.generate_content({ contents: format_messages(messages) })
+ format_response(response)
+ end
+
+ private
+
+ def client
+ @client ||= ::Gemini.new(
+ credentials: { service: 'generative-language-api', api_key: api_key },
+ options: { model: options[:model] }
+ )
+ end
+
+ def format_messages(messages)
+ messages.map do |message|
+ { role: message[:role].to_s == "system" ? "user" : message[:role], parts: [{ text: message[:content] }] }
+ end
+ end
+
+ def format_response(response)
+ Response.new(
+ content: response.dig("candidates", 0, "content", "parts", 0, "text").strip,
+ model: options[:model],
+ usage: Usage.new(
+ input_tokens: response.dig("usageMetadata", "promptTokenCount"),
+ output_tokens: response.dig("usageMetadata", "candidatesTokenCount")
+ )
+ )
+ end
+ end
+ end
+ end
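Since the Gemini API has no system role, `format_messages` downgrades system messages to user turns and wraps content in `parts`; non-system roles pass through unchanged. Tracing the mapping by hand:

```ruby
messages = [
  { role: :system, content: "You are a weather AI agent" },
  { role: :user,   content: "Weather in London?" }
]

messages.map do |message|
  { role: message[:role].to_s == "system" ? "user" : message[:role], parts: [{ text: message[:content] }] }
end
# => [{ role: "user", parts: [{ text: "You are a weather AI agent" }] },
#     { role: :user,  parts: [{ text: "Weather in London?" }] }]
```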
data/lib/regent/llm/open_ai.rb ADDED
@@ -0,0 +1,37 @@
+ # frozen_string_literal: true
+
+ module Regent
+ class LLM
+ class OpenAI < Base
+ ENV_KEY = "OPENAI_API_KEY"
+
+ depends_on "ruby-openai"
+
+ def invoke(messages, **args)
+ response = client.chat(parameters: {
+ messages: messages,
+ model: options[:model],
+ stop: args[:stop]
+ })
+ format_response(response)
+ end
+
+ private
+
+ def client
+ @client ||= ::OpenAI::Client.new(access_token: api_key)
+ end
+
+ def format_response(response)
+ Response.new(
+ content: response.dig("choices", 0, "message", "content"),
+ model: options[:model],
+ usage: Usage.new(
+ input_tokens: response.dig("usage", "prompt_tokens"),
+ output_tokens: response.dig("usage", "completion_tokens")
+ )
+ )
+ end
+ end
+ end
+ end
data/lib/regent/llm.rb ADDED
@@ -0,0 +1,45 @@
+ # frozen_string_literal: true
+
+ module Regent
+ class LLM
+ PROVIDER_PATTERNS = {
+ OpenAI: /^gpt-/,
+ Gemini: /^gemini-/,
+ Anthropic: /^claude-/
+ }.freeze
+
+ class ProviderNotFoundError < StandardError; end
+ class APIKeyNotFoundError < StandardError; end
+
+ def initialize(model, **options)
+ @model = model
+ @options = options
+ instantiate_provider
+ end
+
+ attr_reader :model, :options
+
+ def invoke(messages, **args)
+ provider.invoke(messages, **args)
+ end
+
+ private
+
+ attr_reader :provider
+
+ def instantiate_provider
+ provider_class = find_provider_class
+ raise ProviderNotFoundError, "Provider for #{model} is not found" if provider_class.nil?
+
+ @provider ||= create_provider(provider_class)
+ end
+
+ def find_provider_class
+ PROVIDER_PATTERNS.find { |_key, pattern| model.match?(pattern) }&.first
+ end
+
+ def create_provider(provider_class)
+ Regent::LLM.const_get(provider_class).new(**options.merge(model: model))
+ end
+ end
+ end
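Provider routing is purely a prefix match on the model name against `PROVIDER_PATTERNS`. A sketch of the dispatch behavior (instantiating a provider still assumes its gem and API key are available):

```ruby
Regent::LLM::PROVIDER_PATTERNS.find { |_key, pattern| "gpt-4o-mini".match?(pattern) }
# => [:OpenAI, /^gpt-/]

Regent::LLM.new("claude-3-5-haiku") # routes to Regent::LLM::Anthropic
Regent::LLM.new("unknown-model")    # raises Regent::LLM::ProviderNotFoundError
```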
data/lib/regent/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Regent
- VERSION = "0.2.1"
+ VERSION = "0.3.0"
  end
data/lib/regent.rb CHANGED
@@ -10,5 +10,7 @@ module Regent
  # Your code goes here...
 
  loader = Zeitwerk::Loader.for_gem
+ loader.inflector.inflect("llm" => "LLM")
+ loader.inflector.inflect("open_ai" => "OpenAI")
  loader.setup
  end
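These inflections are needed because Zeitwerk's default camelization would expect `llm.rb` to define `Regent::Llm` and `open_ai.rb` to define `Regent::LLM::OpenAi`, not the acronym-cased constants added in this release. A quick check of the defaults being overridden:

```ruby
require "zeitwerk"

inflector = Zeitwerk::Inflector.new
inflector.camelize("llm", nil)     # => "Llm"    (override maps it to "LLM")
inflector.camelize("open_ai", nil) # => "OpenAi" (override maps it to "OpenAI")
```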
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: regent
  version: !ruby/object:Gem::Version
- version: 0.2.1
+ version: 0.3.0
  platform: ruby
  authors:
  - Alex Chaplinsky
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-12-25 00:00:00.000000000 Z
+ date: 2024-12-28 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: zeitwerk
@@ -24,34 +24,6 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '2.7'
- - !ruby/object:Gem::Dependency
- name: ruby-openai
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - "~>"
- - !ruby/object:Gem::Version
- version: 7.3.1
- type: :runtime
- prerelease: false
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - "~>"
- - !ruby/object:Gem::Version
- version: 7.3.1
- - !ruby/object:Gem::Dependency
- name: langchainrb
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - "~>"
- - !ruby/object:Gem::Version
- version: 0.19.2
- type: :runtime
- prerelease: false
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - "~>"
- - !ruby/object:Gem::Version
- version: 0.19.2
  - !ruby/object:Gem::Dependency
  name: tty-spinner
  requirement: !ruby/object:Gem::Requirement
@@ -96,10 +68,16 @@ files:
  - Rakefile
  - lib/regent.rb
  - lib/regent/agent.rb
+ - lib/regent/concerns/dependable.rb
  - lib/regent/concerns/durationable.rb
  - lib/regent/concerns/identifiable.rb
  - lib/regent/engine/react.rb
  - lib/regent/engine/react/prompt_template.rb
+ - lib/regent/llm.rb
+ - lib/regent/llm/anthropic.rb
+ - lib/regent/llm/base.rb
+ - lib/regent/llm/gemini.rb
+ - lib/regent/llm/open_ai.rb
  - lib/regent/logger.rb
  - lib/regent/session.rb
  - lib/regent/span.rb