regent 0.2.1 → 0.3.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: cd53327576ffc795ba61329d4ba018c378bf38acd790832c2d7735ea9a2af647
-  data.tar.gz: 7143f84225c50bb3489e44eaf9cd50b115b49203706130b30e74e5d89eefb861
+  metadata.gz: 9a728784fd3d7720bb7fd96bf2cd22a8d53e59eaed01c9b4240a2cb09023e261
+  data.tar.gz: 78ddbe87cf964ca94039f560ddfcb743a14757e551af0847363b0fa82228e4a7
 SHA512:
-  metadata.gz: 6f158ad85ad090a4676ba7db6c5a2326f16826ca2adfebbe0652f1fe987fa6733349a1623e31ef95740769d1f6ea4d0f34d04dba9c4f4913f2ee733ad6563686
-  data.tar.gz: c825330735acbb6e73169add11b399e810938807156e6b996c60652bebf073da141d66d20e1fd729f48570110813514dbb8b18bd7db12625657a0434a2c660bd
+  metadata.gz: c0306c99637469cff9e51a9b1b50424e646f38dd601dcb5b36d8eee417e2f280488b3e6c78747c242d78d8213fd6a9e004d3cd7065b692d7dc63a5900bf20e16
+  data.tar.gz: 68302a81b5062f54415a89e49513e2186afd0fb6d129dfa1111b33565ade932cbae9e657218d66b37cf2fd58b78103456da0dcc7c52c26948016134117cc4e42
data/README.md CHANGED
@@ -1,19 +1,37 @@
-
 ![regent_light](https://github.com/user-attachments/assets/62564dac-b8d7-4dc0-9b63-64c6841b5872)
 
+<div align="center">
+
 # Regent
-**Regent** is library for building AI agents with Ruby.
+[![Gem Version](https://badge.fury.io/rb/regent.svg)](https://badge.fury.io/rb/regent)
+[![Build](https://github.com/alchaplinsky/regent/actions/workflows/main.yml/badge.svg)](https://github.com/alchaplinsky/regent/actions/workflows/main.yml)
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+</div>
+
+**Regent** is a small and elegant Ruby framework for building AI agents that can think, reason, and take actions through tools. It provides a clean, intuitive interface for creating agents that can solve complex problems by breaking them down into logical steps.
 
 > [!WARNING]
 > Regent is currently an experiment intended to explore patterns for building easily traceable and debuggable AI agents of different architectures. It is not yet intended to be used in production and is currently in development.
 
+## Key Features
+
+- **ReAct Pattern Implementation**: Agents follow the Reasoning-Action pattern, making decisions through a clear thought process before taking actions
+- **Multi-LLM Support**: Seamlessly works with:
+  - OpenAI (GPT models)
+  - Anthropic (Claude models)
+  - Google (Gemini models)
+- **Extensible Tool System**: Create custom tools that agents can use to interact with external services, APIs, or perform specific tasks
+- **Built-in Tracing**: Every agent interaction is traced and can be replayed, making debugging and monitoring straightforward
+- **Clean Ruby Interface**: Designed to feel natural to Ruby developers while maintaining powerful capabilities
+
 ## Showcase
+
 A basic Regent Agent extended with a `price_tool` that retrieves cryptocurrency prices from coingecko.com.
 
 ![screencast 2024-12-25 21-53-47](https://github.com/user-attachments/assets/4e65b731-bbd7-4732-b157-b705d35a7824)
 
-
-## Install
+## Quick Start
 
 ```bash
 gem install regent
@@ -32,33 +50,40 @@ bundle install
 ```
 
 ## Usage
 
-In order to operate an agent needs access to LLM (large language model). Regent relies on the [Langchainrb](https://github.com/patterns-ai-core/langchainrb) library to interact with LLMs. Let's create an instance of OapnAI LLM:
-```ruby
-llm = Langchain::LLM::OpenAI(api_key: ENV["OPENAI_KEY"])
-```
-
-Agents are effective when they have tools that enable them to get new information:
+Create your first agent:
 
 ```ruby
+# Initialize the LLM
+llm = Regent::LLM.new("gpt-4o")
+
+# Create a custom tool
 class WeatherTool < Regent::Tool
   def call(location)
-    # implementation of a call to weather API
+    # Implement weather lookup logic
+    "Currently 72°F and sunny in #{location}"
   end
 end
 
-weather_tool = WeatherTool.new(name: "weather_tool", description: "Get the weather in a given location")
-```
-
-Next, let's instantiate an agent passing LLM and a set of tools:
-
-```ruby
-agent = Regent::Agent.new(llm: llm, tools: [weather_tool])
+# Create and configure the agent
+agent = Regent::Agent.new(
+  "You are a helpful weather assistant",
+  llm: llm,
+  tools: [WeatherTool.new(
+    name: "weather_tool",
+    description: "Get current weather for a location"
+  )]
+)
+
+# Execute a query
+result = agent.execute("What's the weather like in Tokyo?") # => "It is currently 72°F and sunny in Tokyo."
 ```
 
-Simply run an execute function, passing your query as an argument
-``` ruby
-agent.execute("What is the weather in London today?")
-```
+## Why Regent?
+- **Transparent Decision Making**: Watch your agent's thought process as it reasons through problems
+- **Flexible Architecture**: Easy to extend with custom tools and adapt to different use cases
+- **Production Ready**: Built with tracing, error handling, and clean abstractions
+- **Ruby-First Design**: Takes advantage of Ruby's elegant syntax and conventions
 
 ## Development
 
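A note on credentials, which the new Quick Start leaves implicit: judging from `Regent::LLM::Base` later in this diff, each provider accepts an `api_key:` option and otherwise falls back to its environment variable (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY`). A minimal sketch, with an illustrative model name:

```ruby
# Explicit key (assumption: any claude-* model name routes to the Anthropic provider)
llm = Regent::LLM.new("claude-3-5-sonnet-20241022", api_key: ENV["ANTHROPIC_API_KEY"])

# Or rely on the provider's ENV_KEY fallback; a missing key raises
# Regent::LLM::APIKeyNotFoundError when the provider is instantiated.
llm = Regent::LLM.new("gpt-4o")
```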
data/lib/regent/agent.rb CHANGED
@@ -6,16 +6,17 @@ module Regent
 
     DEFAULT_MAX_ITERATIONS = 10
 
-    def initialize(llm:, tools: [], **options)
+    def initialize(context, llm:, tools: [], **options)
       super()
 
+      @context = context
       @llm = llm
       @sessions = []
       @tools = tools.is_a?(Toolchain) ? tools : Toolchain.new(Array(tools))
       @max_iterations = options[:max_iterations] || DEFAULT_MAX_ITERATIONS
     end
 
-    attr_reader :sessions, :llm, :tools
+    attr_reader :context, :sessions, :llm, :tools
 
     def execute(task)
       raise ArgumentError, "Task cannot be empty" if task.to_s.strip.empty?
@@ -47,7 +48,7 @@ module Regent
     end
 
     def react
-      Regent::Engine::React.new(llm, tools, session, @max_iterations)
+      Regent::Engine::React.new(context, llm, tools, session, @max_iterations)
    end
  end
 end
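The new positional `context` argument above is a breaking change to the constructor. A before/after sketch (the call sites are illustrative; the signatures come from this diff):

```ruby
# 0.2.x — keyword-only construction, LLM supplied via Langchainrb
agent = Regent::Agent.new(llm: llm, tools: [weather_tool])

# 0.3.x — context (e.g. the agent's persona) is the first positional argument
agent = Regent::Agent.new("You are a helpful weather assistant", llm: llm, tools: [weather_tool])
```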
data/lib/regent/concerns/dependable.rb ADDED
@@ -0,0 +1,72 @@
+# frozen_string_literal: true
+
+module Regent
+  module Concerns
+    module Dependable
+      class VersionError < StandardError; end
+
+      def self.included(base)
+        base.class_eval do
+          class << self
+            def depends_on(gem_name)
+              @dependency = gem_name
+            end
+
+            def dependency
+              @dependency
+            end
+          end
+        end
+      end
+
+      def initialize(**options)
+        @dependency = self.class.dependency
+        require_dynamic(dependency) if dependency
+
+        super()
+      rescue Gem::LoadError
+        warn_and_exit(dependency, options[:model])
+      end
+
+      def require_dynamic(*names)
+        names.each { |name| load_dependency(name) }
+      end
+
+      private
+
+      def load_dependency(name)
+        gem(name)
+
+        return true unless defined? Bundler
+
+        gem_spec = Gem::Specification.find_by_name(name)
+        gem_requirement = dependencies.find { |gem| gem.name == gem_spec.name }.requirement
+
+        unless gem_requirement.satisfied_by?(gem_spec.version)
+          raise VersionError, version_error(gem_spec, gem_requirement)
+        end
+
+        require_gem(gem_spec)
+      end
+
+      def version_error(gem_spec, gem_requirement)
+        "'#{gem_spec.name}' gem version is #{gem_spec.version}, but your Gemfile specified #{gem_requirement}."
+      end
+
+      def require_gem(gem_spec)
+        gem_spec.full_require_paths.each do |path|
+          Dir.glob("#{path}/*.rb").each { |file| require file }
+        end
+      end
+
+      def dependencies
+        Bundler.load.dependencies
+      end
+
+      def warn_and_exit(name, model)
+        warn "\n\e[33mIn order to use \e[33;1m#{model}\e[0m\e[33m model you need to install \e[33;1m#{name}\e[0m\e[33m gem. Please add \e[33;1mgem \"#{name}\"\e[0m\e[33m to your Gemfile.\e[0m"
+        exit 1
+      end
+    end
+  end
+end
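`Dependable` is what lets each LLM provider declare its client gem without making it a runtime dependency of regent itself. A hypothetical provider illustrating the contract (the class and gem names are invented):

```ruby
# `depends_on` records the gem name at class level; Dependable#initialize then
# loads and version-checks that gem when the provider is instantiated, warning
# and exiting if it is absent from the host application's Gemfile.
class MyProvider < Regent::LLM::Base
  ENV_KEY = "MY_PROVIDER_API_KEY"

  depends_on "my_provider_sdk"
end
```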
data/lib/regent/engine/react/prompt_template.rb CHANGED
@@ -4,24 +4,27 @@ module Regent
   module Engine
     class React
       module PromptTemplate
-        def self.system_prompt(tool_names)
+        def self.system_prompt(context = "", tool_list = "")
           <<~PROMPT
-            You are assisstant reasoning step-by-step to solve complex problems.
-            Your reasoning process happens in a loop of Though, Action, Observation.
+            ## Instructions
+            #{context ? "Consider the following context: #{context}\n\n" : ""}
+            You are an AI agent reasoning step-by-step to solve complex problems.
+            Your reasoning process happens in a loop of Thought, Action, Observation.
             Thought - a description of your thoughts about the question.
-            Action - pick a an action from available tools. If there are no tools that can help return an Answer saying you are not able to help..
+            Action - pick an action from the available tools if required. If there are no tools that can help, return an Answer saying you are not able to help.
             Observation - is the result of running a tool.
+            PAUSE - is always present after an Action.
 
             ## Available tools:
-            #{tool_names}
+            #{tool_list}
 
             ## Example session
             Question: What is the weather in London today?
-            Thought: I need to get the wether in London
-            Action: weather_tool | "London"
+            Thought: I need to get current weather in London
+            Action: weather_tool | London
             PAUSE
 
-            You will have a response with Observation:
+            You will have a response from a user with Observation:
             Observation: It is 32 degrees and Sunny
 
             ... (this Thought/Action/Observation can repeat N times)
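For reference, a sketch (not part of the diff) of how the engine invokes this template; the tool list string is illustrative, and `toolchain.to_s` is assumed to render one "name - description" entry per tool:

```ruby
system_prompt = Regent::Engine::React::PromptTemplate.system_prompt(
  "You are a helpful weather assistant",              # agent context, new in 0.3.x
  "weather_tool - Get current weather for a location" # toolchain.to_s
)
# The result becomes the :system message when the ReAct session is initialized.
```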
data/lib/regent/engine/react.rb CHANGED
@@ -10,14 +10,15 @@ module Regent
         stop: "PAUSE"
       }.freeze
 
-      def initialize(llm, toolchain, session, max_iterations)
+      def initialize(context, llm, toolchain, session, max_iterations)
+        @context = context
         @llm = llm
         @toolchain = toolchain
         @session = session
         @max_iterations = max_iterations
       end
 
-      attr_reader :llm, :toolchain, :session, :max_iterations
+      attr_reader :context, :llm, :toolchain, :session, :max_iterations
 
       def reason(task)
         initialize_session(task)
@@ -41,18 +42,17 @@ module Regent
       private
 
       def initialize_session(task)
-        session.add_message({role: :system, content: Regent::Engine::React::PromptTemplate.system_prompt(toolchain.to_s)})
+        session.add_message({role: :system, content: Regent::Engine::React::PromptTemplate.system_prompt(context, toolchain.to_s)})
         session.add_message({role: :user, content: task})
-        session.exec(Span::Type::INPUT, message: task) { task }
+        session.exec(Span::Type::INPUT, top_level: true, message: task) { task }
       end
 
       def get_llm_response
-        session.exec(Span::Type::LLM_CALL, type: llm.defaults[:chat_model], message: session.messages.last[:content]) do
-          result = llm.chat(messages: session.messages, params: { stop: [SEQUENCES[:stop]] })
+        session.exec(Span::Type::LLM_CALL, type: llm.model, message: session.messages.last[:content]) do
+          result = llm.invoke(session.messages, stop: [SEQUENCES[:stop]])
 
-          # Relying on Langchain Response interface to get token counts and chat completion
-          session.current_span.set_meta("#{result.prompt_tokens} → #{result.completion_tokens} tokens")
-          result.chat_completion
+          session.current_span.set_meta("#{result.usage.input_tokens} → #{result.usage.output_tokens} tokens")
+          result.content
        end
      end
 
@@ -83,11 +83,11 @@ module Regent
       end
 
       def success_answer(content)
-        session.exec(Span::Type::ANSWER, type: :success, message: content, duration: session.duration.round(2)) { content }
+        session.exec(Span::Type::ANSWER, top_level: true, type: :success, message: content, duration: session.duration.round(2)) { content }
       end
 
       def error_answer(content)
-        session.exec(Span::Type::ANSWER, type: :failure, message: content, duration: session.duration.round(2)) { content }
+        session.exec(Span::Type::ANSWER, top_level: true, type: :failure, message: content, duration: session.duration.round(2)) { content }
       end
 
       def lookup_tool(content)
@@ -106,9 +106,9 @@ module Regent
         action = content.split(SEQUENCES[:action])[1]&.strip
         return [nil, nil] unless action
 
-        parts = action.split('|', 2).map(&:strip)
-        tool_name = parts[0]
-        argument = parts[1].gsub('"', '')
+        parts = action.split('|').map(&:strip)
+        tool_name = parts[0].gsub(/["`']/, '')
+        argument = parts[1].gsub(/["`']/, '')
 
         # Handle cases where argument is nil, empty, or only whitespace
         argument = nil if argument.nil? || argument.empty?
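A worked illustration (not from the diff) of what the relaxed `Action` parsing above accepts; the sample string is invented, the parsing lines mirror the new code:

```ruby
# 0.3.x strips quotes and backticks from both sides of the pipe, so
# `weather_tool` | "London" and weather_tool | London parse identically:
parts     = '`weather_tool` | "London"'.split('|').map(&:strip)
tool_name = parts[0].gsub(/["`']/, '') # => "weather_tool"
argument  = parts[1].gsub(/["`']/, '') # => "London"
```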
data/lib/regent/llm/anthropic.rb ADDED
@@ -0,0 +1,48 @@
+# frozen_string_literal: true
+
+module Regent
+  class LLM
+    class Anthropic < Base
+      MAX_TOKENS = 1000
+      ENV_KEY = "ANTHROPIC_API_KEY"
+
+      depends_on "anthropic"
+
+      def invoke(messages, **args)
+        response = client.messages(parameters: {
+          messages: format_messages(messages),
+          system: system_instruction(messages),
+          model: options[:model],
+          stop_sequences: args[:stop] ? args[:stop] : nil,
+          max_tokens: MAX_TOKENS
+        })
+        format_response(response)
+      end
+
+      private
+
+      def client
+        @client ||= ::Anthropic::Client.new(access_token: api_key)
+      end
+
+      def system_instruction(messages)
+        messages.find { |message| message[:role].to_s == "system" }&.dig(:content)
+      end
+
+      def format_messages(messages)
+        messages.reject { |message| message[:role].to_s == "system" }
+      end
+
+      def format_response(response)
+        Response.new(
+          content: response.dig("content", 0, "text"),
+          model: options[:model],
+          usage: Usage.new(
+            input_tokens: response.dig("usage", "input_tokens"),
+            output_tokens: response.dig("usage", "output_tokens")
+          )
+        )
+      end
+    end
+  end
+end
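A note on the message handling above: Anthropic's Messages API takes the system prompt as a separate `system` parameter rather than as a chat message, which is why the adapter splits it out. An illustration using the private helpers (the messages are invented):

```ruby
messages = [
  { role: :system, content: "You are an AI agent..." },
  { role: :user,   content: "What is the weather in London today?" }
]
system_instruction(messages) # => "You are an AI agent..."
format_messages(messages)    # => [{ role: :user, content: "What is the weather in London today?" }]
```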
data/lib/regent/llm/base.rb ADDED
@@ -0,0 +1,61 @@
+# frozen_string_literal: true
+
+module Regent
+  class LLM
+    class Response
+      def initialize(content:, usage:, model:)
+        @content = content
+        @usage = usage
+        @model = model
+      end
+
+      attr_reader :content, :usage, :model
+    end
+
+    class Usage
+      def initialize(input_tokens:, output_tokens:)
+        @input_tokens = input_tokens
+        @output_tokens = output_tokens
+      end
+
+      attr_reader :input_tokens, :output_tokens
+    end
+
+    class Base
+      include Concerns::Dependable
+
+      def initialize(**options)
+        @options = options
+        api_key.nil?
+
+        super()
+      end
+
+      def invoke(messages, **args)
+        provider.chat(messages: format_messages(messages), **args)
+      end
+
+      private
+
+      attr_reader :options, :dependency
+
+      def format_response(response)
+        Response.new(
+          content: response.chat_completion,
+          model: options[:model],
+          usage: Usage.new(input_tokens: response.prompt_tokens, output_tokens: response.completion_tokens)
+        )
+      end
+
+      def api_key
+        @api_key ||= options[:api_key] || api_key_from_env
+      end
+
+      def api_key_from_env
+        ENV.fetch(self.class::ENV_KEY) do
+          raise APIKeyNotFoundError, "API key not found. Make sure to set #{self.class::ENV_KEY} environment variable."
+        end
+      end
+    end
+  end
+end
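`Response` and `Usage` are the seam that keeps the ReAct engine provider-agnostic: every adapter's `format_response` returns the same two value objects, and the engine only ever reads the content and token counts. A sketch of that contract (the values are invented):

```ruby
response = llm.invoke(session.messages, stop: ["PAUSE"])
response.content             # => "Thought: I need to get current weather in London..."
response.usage.input_tokens  # => 421
response.usage.output_tokens # => 38
```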
data/lib/regent/llm/gemini.rb ADDED
@@ -0,0 +1,42 @@
+# frozen_string_literal: true
+
+module Regent
+  class LLM
+    class Gemini < Base
+      ENV_KEY = "GEMINI_API_KEY"
+
+      depends_on "gemini-ai"
+
+      def invoke(messages, **args)
+        response = client.generate_content({ contents: format_messages(messages) })
+        format_response(response)
+      end
+
+      private
+
+      def client
+        @client ||= ::Gemini.new(
+          credentials: { service: 'generative-language-api', api_key: api_key },
+          options: { model: options[:model] }
+        )
+      end
+
+      def format_messages(messages)
+        messages.map do |message|
+          { role: message[:role].to_s == "system" ? "user" : message[:role], parts: [{ text: message[:content] }] }
+        end
+      end
+
+      def format_response(response)
+        Response.new(
+          content: response.dig("candidates", 0, "content", "parts", 0, "text").strip,
+          model: options[:model],
+          usage: Usage.new(
+            input_tokens: response.dig("usageMetadata", "promptTokenCount"),
+            output_tokens: response.dig("usageMetadata", "candidatesTokenCount")
+          )
+        )
+      end
+    end
+  end
+end
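Note the role mapping above: the payload sent to `generate_content` carries no system role, so the adapter folds the system message in as a user turn. An illustration of `format_messages` (the input is invented):

```ruby
format_messages([{ role: :system, content: "Be brief." }])
# => [{ role: "user", parts: [{ text: "Be brief." }] }]
```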
data/lib/regent/llm/open_ai.rb ADDED
@@ -0,0 +1,37 @@
+# frozen_string_literal: true
+
+module Regent
+  class LLM
+    class OpenAI < Base
+      ENV_KEY = "OPENAI_API_KEY"
+
+      depends_on "ruby-openai"
+
+      def invoke(messages, **args)
+        response = client.chat(parameters: {
+          messages: messages,
+          model: options[:model],
+          stop: args[:stop]
+        })
+        format_response(response)
+      end
+
+      private
+
+      def client
+        @client ||= ::OpenAI::Client.new(access_token: api_key)
+      end
+
+      def format_response(response)
+        Response.new(
+          content: response.dig("choices", 0, "message", "content"),
+          model: options[:model],
+          usage: Usage.new(
+            input_tokens: response.dig("usage", "prompt_tokens"),
+            output_tokens: response.dig("usage", "completion_tokens")
+          )
+        )
+      end
+    end
+  end
+end
data/lib/regent/llm.rb ADDED
@@ -0,0 +1,45 @@
+# frozen_string_literal: true
+
+module Regent
+  class LLM
+    PROVIDER_PATTERNS = {
+      OpenAI: /^gpt-/,
+      Gemini: /^gemini-/,
+      Anthropic: /^claude-/
+    }.freeze
+
+    class ProviderNotFoundError < StandardError; end
+    class APIKeyNotFoundError < StandardError; end
+
+    def initialize(model, **options)
+      @model = model
+      @options = options
+      instantiate_provider
+    end
+
+    attr_reader :model, :options
+
+    def invoke(messages, **args)
+      provider.invoke(messages, **args)
+    end
+
+    private
+
+    attr_reader :provider
+
+    def instantiate_provider
+      provider_class = find_provider_class
+      raise ProviderNotFoundError, "Provider for #{model} is not found" if provider_class.nil?
+
+      @provider ||= create_provider(provider_class)
+    end
+
+    def find_provider_class
+      PROVIDER_PATTERNS.find { |key, pattern| model.match?(pattern) }&.first
+    end
+
+    def create_provider(provider_class)
+      Regent::LLM.const_get(provider_class).new(**options.merge(model: model))
+    end
+  end
+end
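The prefix patterns above are the whole provider-selection mechanism. A sketch of the resolution behavior (the model names are illustrative):

```ruby
Regent::LLM.new("gpt-4o")           # wraps Regent::LLM::OpenAI
Regent::LLM.new("claude-3-5-haiku") # wraps Regent::LLM::Anthropic
Regent::LLM.new("gemini-1.5-flash") # wraps Regent::LLM::Gemini
Regent::LLM.new("llama-3-70b")      # raises ProviderNotFoundError
```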
data/lib/regent/logger.rb CHANGED
@@ -4,10 +4,10 @@ module Regent
   class Logger
     COLORS = %i[dim green yellow red blue cyan clear].freeze
 
-    def initialize
+    def initialize(output: $stdout)
       @pastel = Pastel.new
-      @spinner = build_spinner(spinner_symbol)
-      @nested_spinner = build_spinner("#{dim(" ├──")}#{spinner_symbol}")
+      @spinner = build_spinner(spinner_symbol, output)
+      @nested_spinner = build_spinner("#{dim(" ├──")}#{spinner_symbol}", output)
     end
 
     attr_reader :spinner, :nested_spinner
@@ -50,8 +50,8 @@ module Regent
       "#{dim("[")}#{green(":spinner")}#{dim("]")}"
     end
 
-    def build_spinner(spinner_format)
-      TTY::Spinner.new("#{spinner_format} :title", format: :dots)
+    def build_spinner(spinner_format, output)
+      TTY::Spinner.new("#{spinner_format} :title", format: :dots, output: output)
     end
 
     COLORS.each do |color|
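The `output:` parameter threaded through above makes the spinner destination injectable, which is mainly useful for capturing log output in tests. A minimal sketch, assuming TTY::Spinner's standard `output:` option:

```ruby
require "stringio"

buffer = StringIO.new
logger = Regent::Logger.new(output: buffer) # spinners write to buffer instead of $stdout
```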
data/lib/regent/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Regent
-  VERSION = "0.2.1"
+  VERSION = "0.3.1"
 end
data/lib/regent.rb CHANGED
@@ -10,5 +10,7 @@ module Regent
   # Your code goes here...
 
   loader = Zeitwerk::Loader.for_gem
+  loader.inflector.inflect("llm" => "LLM")
+  loader.inflector.inflect("open_ai" => "OpenAI")
   loader.setup
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: regent
 version: !ruby/object:Gem::Version
-  version: 0.2.1
+  version: 0.3.1
 platform: ruby
 authors:
 - Alex Chaplinsky
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2024-12-25 00:00:00.000000000 Z
+date: 2024-12-29 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: zeitwerk
@@ -24,34 +24,6 @@ dependencies:
   - - "~>"
     - !ruby/object:Gem::Version
       version: '2.7'
-- !ruby/object:Gem::Dependency
-  name: ruby-openai
-  requirement: !ruby/object:Gem::Requirement
-    requirements:
-    - - "~>"
-      - !ruby/object:Gem::Version
-        version: 7.3.1
-  type: :runtime
-  prerelease: false
-  version_requirements: !ruby/object:Gem::Requirement
-    requirements:
-    - - "~>"
-      - !ruby/object:Gem::Version
-        version: 7.3.1
-- !ruby/object:Gem::Dependency
-  name: langchainrb
-  requirement: !ruby/object:Gem::Requirement
-    requirements:
-    - - "~>"
-      - !ruby/object:Gem::Version
-        version: 0.19.2
-  type: :runtime
-  prerelease: false
-  version_requirements: !ruby/object:Gem::Requirement
-    requirements:
-    - - "~>"
-      - !ruby/object:Gem::Version
-        version: 0.19.2
 - !ruby/object:Gem::Dependency
   name: tty-spinner
   requirement: !ruby/object:Gem::Requirement
@@ -96,10 +68,16 @@ files:
 - Rakefile
 - lib/regent.rb
 - lib/regent/agent.rb
+- lib/regent/concerns/dependable.rb
 - lib/regent/concerns/durationable.rb
 - lib/regent/concerns/identifiable.rb
 - lib/regent/engine/react.rb
 - lib/regent/engine/react/prompt_template.rb
+- lib/regent/llm.rb
+- lib/regent/llm/anthropic.rb
+- lib/regent/llm/base.rb
+- lib/regent/llm/gemini.rb
+- lib/regent/llm/open_ai.rb
 - lib/regent/logger.rb
 - lib/regent/session.rb
 - lib/regent/span.rb
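Taken together with the `depends_on` mechanism above, dropping `ruby-openai` and `langchainrb` as runtime dependencies means provider gems are now opt-in for the host application. A sketch of the resulting Gemfile (version pins omitted; only the gems for the providers you actually use are needed):

```ruby
# Gemfile
gem "regent"

gem "ruby-openai" # only for gpt-* models
gem "anthropic"   # only for claude-* models
gem "gemini-ai"   # only for gemini-* models
```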