elelem 0.2.1 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 7d165866e64423e182a5497407deba0249b4c73ca4fc5c3af36a979925aade9f
4
- data.tar.gz: 4b8f6b384a901781514bd085c106672489676b065eab7c2c6057ffff49842b71
3
+ metadata.gz: 2885faeaffa4f0ee7eff742f50bbf1ee78f257f7989c6401c787c4d4feb5d6a2
4
+ data.tar.gz: '0965d3a94ce8633cd01e7471e5688912216794e9b6af49450d0c1a5326e24645'
5
5
  SHA512:
6
- metadata.gz: 356ff6e3bbadda54bc3ae9663d67879ea3fc1cd52418eaaa228f8aa1b7a6bf2a9db847dfa482e840fb0f772130753d5d417814d649e75f5c530bca467ef6f2df
7
- data.tar.gz: 67313588f14536711acf61e566394466a34513cf52259c286e522648ccc1f47fcc898f4f2856a0072bf79cb63462948931c202bf972b7c29e94494aa3efb4a0f
6
+ metadata.gz: e00079252cb138588937776e7d37fc28de2e451c1ac72a190f842d29d27df537123a524b3358c6f36a685704b18e5c3a424cd819b3c8eb3ed1259f1937bdf02d
7
+ data.tar.gz: c77a7a8b34cf326e812db4ac23a38de0f7841fc1763f18f569a60d783be6719e251ff1eae59234b4da68cb8276c71fb617491ea46d97905ec39e05f5359b405f
data/CHANGELOG.md CHANGED
@@ -1,5 +1,76 @@
1
1
  ## [Unreleased]
2
2
 
3
+ ## [0.4.0] - 2025-11-10
4
+
5
+ ### Added
6
+ - **Eval Tool**: Meta-programming tool that allows the LLM to dynamically create and register new tools at runtime
7
+ - Eval tool has access to the toolbox for enhanced capabilities
8
+ - Comprehensive test coverage with RSpec
9
+ - Agent specs
10
+ - Conversation specs
11
+ - Toolbox specs
12
+
13
+ ### Changed
14
+ - **Architecture Improvements**: Significant refactoring for better separation of concerns
15
+ - Extracted Tool class to separate file (`lib/elelem/tool.rb`)
16
+ - Extracted Toolbox class to separate file (`lib/elelem/toolbox.rb`)
17
+ - Extracted Shell class for command execution
18
+ - Improved tool registration through `#add_tool` method
19
+ - Tool constants moved to Toolbox for better organization
20
+ - Agent class simplified by delegating to Tool instances
21
+
22
+ ### Fixed
23
+ - `/context` command now correctly accounts for the current mode
24
+
25
+ ## [0.3.0] - 2025-11-05
26
+
27
+ ### Added
28
+ - **Mode System**: Control agent capabilities with workflow modes
29
+ - `/mode plan` - Read-only mode (grep, list, read)
30
+ - `/mode build` - Read + Write mode (grep, list, read, patch, write)
31
+ - `/mode verify` - Read + Execute mode (grep, list, read, execute)
32
+ - `/mode auto` - All tools enabled
33
+ - Each mode adapts the system prompt to guide appropriate behavior
34
+ - Improved output formatting
35
+ - Suppressed verbose thinking/reasoning output
36
+ - Clean tool call display (e.g., `date` instead of full JSON hash)
37
+ - Mode switch confirmation messages
38
+ - Clear command feedback
39
+ - Design philosophy documentation in README
40
+ - Mode system documentation
41
+
42
+ ### Changed
43
+ - **BREAKING**: Removed `llm-ollama` and `llm-openai` standalone executables (use main `elelem chat` command)
44
+ - **BREAKING**: Simplified architecture - consolidated all logic into Agent class
45
+ - Removed Configuration class
46
+ - Removed Toolbox system
47
+ - Removed MCP client infrastructure
48
+ - Removed Tool and Tools classes
49
+ - Removed TUI abstraction layer (direct puts/Reline usage)
50
+ - Removed API wrapper class
51
+ - Removed state machine
52
+ - Improved execute tool description to guide LLM toward direct command execution
53
+ - Extracted tool definitions from long inline strings to readable private methods
54
+ - Updated README with clear philosophy and usage examples
55
+ - Reduced total codebase from 417 to 395 lines (-5%)
56
+
57
+ ### Fixed
58
+ - Working directory handling for execute tool (handles empty string cwd)
59
+ - REPL EOF handling (graceful exit when input stream ends)
60
+ - Tool call formatting now shows clean, readable commands
61
+
62
+ ### Removed
63
+ - `exe/llm-ollama` (359 lines)
64
+ - `exe/llm-openai` (340 lines)
65
+ - `lib/elelem/configuration.rb`
66
+ - `lib/elelem/toolbox.rb` and toolbox/* files
67
+ - `lib/elelem/mcp_client.rb`
68
+ - `lib/elelem/tool.rb` and `lib/elelem/tools.rb`
69
+ - `lib/elelem/tui.rb`
70
+ - `lib/elelem/api.rb`
71
+ - `lib/elelem/states/*` (state machine infrastructure)
72
+ - Removed ~750 lines of unused/redundant code
73
+
3
74
  ## [0.2.1] - 2025-10-15
4
75
 
5
76
  ### Fixed
data/README.md CHANGED
@@ -1,16 +1,61 @@
1
1
  # Elelem
2
2
 
3
- Elelem is an interactive REPL (Read-Eval-Print Loop) for Ollama that provides a command-line chat interface for communicating with AI models. It features tool calling capabilities, streaming responses, and a clean state machine architecture.
3
+ Fast, correct, autonomous: pick two.
4
4
 
5
- ## Installation
5
+ ## Purpose
6
6
 
7
- Install the gem and add to the application's Gemfile by executing:
7
+ Elelem is a minimal coding agent written in Ruby. It is designed to help
8
+ you write, edit, and manage code and plain-text files from the command line
9
+ by delegating work to an LLM. The agent exposes a simple text-based UI and a
10
+ set of built-in tools that give the LLM access to the local file system
11
+ and Git.
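
For orientation, the wiring behind `elelem chat` (per the new `Application#chat` in this diff) amounts to roughly the sketch below; the host and model values are illustrative, and any `Net::Llm::Ollama` options beyond `host:` and `model:` are assumptions.

```ruby
# Rough sketch of what `elelem chat` wires together, following
# Application#chat in lib/elelem/application.rb from this release.
# Keyword arguments other than host:/model: are not shown in the diff.
require "elelem"

client = Net::Llm::Ollama.new(host: "localhost:11434", model: "gpt-oss")
agent  = Elelem::Agent.new(client, Elelem::Toolbox.new)
agent.repl # starts the text-based REPL with the built-in toolbox
```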
8
12
 
9
- ```bash
10
- bundle add elelem
11
- ```
13
+ ## Design Principles
14
+
15
+ * Unix philosophy – simple, composable, minimal.
16
+ * Convention over configuration.
17
+ * No defensive checks or complexity beyond what is necessary.
18
+ * Assumes a mature, responsible LLM that behaves like a capable engineer.
19
+ * Optimised for my personal workflow and preferences.
20
+ * Efficient and minimal like *aider* – https://aider.chat/.
21
+ * UX similar to Claude Code – https://docs.claude.com/en/docs/claude-code/overview.
22
+
23
+ ## System Assumptions
24
+
25
+ * Linux host with Alacritty, tmux, Bash, Vim.
26
+ * Runs inside a Git repository.
27
+ * Git is available and functional.
28
+
29
+ ## Scope
30
+
31
+ Only plain-text and source-code files are supported. No binary handling,
32
+ sandboxing, or permission checks are performed – the LLM has full access.
33
+
34
+ ## Configuration
35
+
36
+ Prefer convention over configuration. Add environment variables only after
37
+ repeated use proves their usefulness.
38
+
39
+ ## UI Expectations
40
+
41
+ Keyboard-driven, minimal TUI. No mouse support or complex widgets.
42
+
43
+ ## Coding Standards for the LLM
44
+
45
+ * No extra error handling unless essential.
46
+ * Keep methods short, single-purpose.
47
+ * Descriptive, conventional names.
48
+ * Use Ruby standard library where possible.
12
49
 
13
- If bundler is not being used to manage dependencies, install the gem by executing:
50
+ ## Helpful Links
51
+
52
+ * https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
53
+ * https://www.anthropic.com/engineering/writing-tools-for-agents
54
+ * https://simonwillison.net/2025/Sep/30/designing-agentic-loops/
55
+
56
+ ## Installation
57
+
58
+ Install the gem directly:
14
59
 
15
60
  ```bash
16
61
  gem install elelem
@@ -26,42 +71,86 @@ elelem chat
26
71
 
27
72
  ### Options
28
73
 
29
- - `--host`: Specify Ollama host (default: localhost:11434)
30
- - `--model`: Specify Ollama model (default: gpt-oss, currently only tested with gpt-oss)
31
- - `--token`: Provide authentication token
32
- - `--debug`: Enable debug logging
74
+ * `--host` – Ollama host (default: `localhost:11434`).
75
+ * `--model` – Ollama model (default: `gpt-oss`).
76
+ * `--token` – Authentication token.
33
77
 
34
78
  ### Examples
35
79
 
36
80
  ```bash
37
- # Chat with default model
81
+ # Default model
38
82
  elelem chat
39
83
 
40
- # Chat with specific model and host
84
+ # Specific model and host
41
85
  elelem chat --model llama2 --host remote-host:11434
86
+ ```
87
+
88
+ ## Mode System
89
+
90
+ The agent exposes seven built‑in tools. You can switch which ones are
91
+ available by changing the *mode*:
42
92
 
43
- # Enable debug mode
44
- elelem chat --debug
93
+ | Mode | Enabled Tools |
94
+ |---------|------------------------------------------|
95
+ | plan | `grep`, `list`, `read` |
96
+ | build | `grep`, `list`, `read`, `patch`, `write` |
97
+ | verify | `grep`, `list`, `read`, `execute` |
98
+ | auto | All tools |
99
+
100
+ Use the following commands inside the REPL:
101
+
102
+ ```text
103
+ /mode plan # Read‑only
104
+ /mode build # Read + Write
105
+ /mode verify # Read + Execute
106
+ /mode auto # All tools
107
+ /mode # Show current mode
45
108
  ```
46
109
 
47
- ### Features
110
+ The system prompt is adjusted per mode so the LLM knows which actions
111
+ are permissible.
112
+
113
+ ## Features
114
+
115
+ * **Interactive REPL** – clean, streaming chat.
116
+ * **Toolbox** – file I/O, Git, shell execution.
117
+ * **Streaming Responses** – output appears in real time.
118
+ * **Conversation History** – persists across turns; can be cleared.
119
+ * **Context Dump** – `/context` shows the current conversation state.
120
+
121
+ ## Toolbox Overview
122
+
123
+ The `Toolbox` class is defined in `lib/elelem/toolbox.rb`. It supplies
124
+ seven tools, each described by a JSON schema that the LLM can call by name.
125
+
126
+ | Tool | Purpose | Parameters |
127
+ | ---- | ------- | ---------- |
128
+ | `eval` | Dynamically create new tools | `code` |
129
+ | `grep` | Search Git‑tracked files | `query` |
130
+ | `list` | List tracked files | `path` (optional) |
131
+ | `read` | Read file contents | `path` |
132
+ | `write` | Overwrite a file | `path`, `content` |
133
+ | `patch` | Apply a unified diff via `git apply` | `diff` |
134
+ | `execute` | Run shell commands | `cmd`, `args`, `env`, `cwd`, `stdin` |
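
The agent dispatches a call with `Toolbox#run_tool(name, args)` and feeds the JSON-encoded result back to the model as a `tool` message (see `execute_turn` in `lib/elelem/agent.rb`). The result shape is not specified in this diff; the hash below is a guess informed by the Known Limitations note that errors come back as an `error` field.

```ruby
# Hypothetical dispatch, mirroring execute_turn in lib/elelem/agent.rb.
# The structure of `result` is an assumption, not a documented contract.
require "json"

toolbox = Elelem::Toolbox.new
result  = toolbox.run_tool("read", { "path" => "README.md" })
# e.g. { "content" => "..." } on success, { "error" => "..." } on failure

message = { role: "tool", content: JSON.dump(result) } # appended to the turn context
```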
48
135
 
49
- - **Interactive REPL**: Clean command-line interface for chatting
50
- - **Tool Execution**: Execute shell commands when requested by the AI
51
- - **Streaming Responses**: Real-time streaming of AI responses
52
- - **State Machine**: Robust state management for different interaction modes
53
- - **Conversation History**: Maintains context across the session
136
+ ## Tool Definition
54
137
 
55
- ## Development
138
+ The core `Tool` wrapper is defined in `lib/elelem/tool.rb`. Each tool is
139
+ created with a name, description, JSON schema for arguments, and a block
140
+ that performs the operation. The LLM calls a tool by name and passes the
141
+ arguments as a hash.
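
A minimal sketch of defining a tool with the `Tool.build` class method, whose signature appears later in this diff in `lib/elelem/tool.rb`. The `shout` tool is made up for illustration; only the `(name, description, properties, required)` signature and the block-based handler come from the source.

```ruby
# Hypothetical tool; Tool.build, #to_h and #call are from lib/elelem/tool.rb.
tool = Elelem::Tool.build(
  "shout",
  "Upcase a string",
  { text: { type: "string", description: "Text to upcase" } },
  ["text"]
) { |args| args["text"].upcase }

tool.name                      # => "shout"
tool.to_h                      # => the JSON schema advertised to the LLM
tool.call({ "text" => "hi" })  # => "HI"
```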
56
142
 
57
- After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
143
+ ## Known Limitations
58
144
 
59
- To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).
145
+ * Assumes the current directory is a Git repository.
146
+ * No sandboxing – the LLM can run arbitrary commands.
147
+ * Error handling is minimal; exceptions are returned as an `error` field.
60
148
 
61
149
  ## Contributing
62
150
 
63
- Bug reports and pull requests are welcome on GitHub at https://github.com/xlgmokha/elelem.
151
+ Feel free to open issues or pull requests. The repository follows the
152
+ GitHub Flow.
64
153
 
65
154
  ## License
66
155
 
67
- The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
156
+ MIT. See the bundled `LICENSE.txt`.
data/Rakefile CHANGED
@@ -2,9 +2,7 @@
2
2
 
3
3
  require "bundler/gem_tasks"
4
4
  require "rspec/core/rake_task"
5
- require "rubocop/rake_task"
6
5
 
7
6
  RSpec::Core::RakeTask.new(:spec)
8
- RuboCop::RakeTask.new
9
7
 
10
- task default: %i[spec rubocop]
8
+ task default: %i[spec]
data/lib/elelem/agent.rb CHANGED
@@ -2,56 +2,125 @@
2
2
 
3
3
  module Elelem
4
4
  class Agent
5
- attr_reader :api, :conversation, :logger, :model, :tui
5
+ attr_reader :conversation, :client, :toolbox
6
6
 
7
- def initialize(configuration)
8
- @api = configuration.api
9
- @tui = configuration.tui
10
- @configuration = configuration
11
- @model = configuration.model
12
- @conversation = configuration.conversation
13
- @logger = configuration.logger
14
-
15
- at_exit { cleanup }
16
-
17
- transition_to(States::Idle.new)
7
+ def initialize(client, toolbox)
8
+ @conversation = Conversation.new
9
+ @client = client
10
+ @toolbox = toolbox
18
11
  end
19
12
 
20
13
  def repl
14
+ mode = Set.new([:read])
15
+
21
16
  loop do
22
- current_state.run(self)
23
- sleep 0.1
17
+ input = ask?("User> ")
18
+ break if input.nil?
19
+ if input.start_with?("/")
20
+ case input
21
+ when "/mode auto"
22
+ mode = Set[:read, :write, :execute]
23
+ puts " → Mode: auto (all tools enabled)"
24
+ when "/mode build"
25
+ mode = Set[:read, :write]
26
+ puts " → Mode: build (read + write)"
27
+ when "/mode plan"
28
+ mode = Set[:read]
29
+ puts " → Mode: plan (read-only)"
30
+ when "/mode verify"
31
+ mode = Set[:read, :execute]
32
+ puts " → Mode: verify (read + execute)"
33
+ when "/mode"
34
+ puts " Mode: #{mode.to_a.inspect}"
35
+ puts " Tools: #{toolbox.tools_for(mode).map { |t| t.dig(:function, :name) }}"
36
+ when "/exit" then exit
37
+ when "/clear"
38
+ conversation.clear
39
+ puts " → Conversation cleared"
40
+ when "/context" then puts conversation.dump(mode)
41
+ else
42
+ puts help_banner
43
+ end
44
+ else
45
+ conversation.add(role: :user, content: input)
46
+ result = execute_turn(conversation.history_for(mode), tools: toolbox.tools_for(mode))
47
+ conversation.add(role: result[:role], content: result[:content])
48
+ end
24
49
  end
25
50
  end
26
51
 
27
- def transition_to(next_state)
28
- if @current_state
29
- logger.info("AGENT: #{@current_state.class.name.split('::').last} -> #{next_state.class.name.split('::').last}")
30
- else
31
- logger.info("AGENT: Starting in #{next_state.class.name.split('::').last}")
32
- end
33
- @current_state = next_state
34
- end
52
+ private
35
53
 
36
- def execute(tool_call)
37
- tool_name = tool_call.dig("function", "name")
38
- logger.debug("TOOL: Full call - #{tool_call}")
39
- result = configuration.tools.execute(tool_call)
40
- logger.debug("TOOL: Result (#{result.length} chars)") if result
41
- result
54
+ def ask?(text)
55
+ Reline.readline(text, true)&.strip
42
56
  end
43
57
 
44
- def quit
45
- cleanup
46
- exit
58
+ def help_banner
59
+ <<~HELP
60
+ /mode auto build plan verify
61
+ /clear
62
+ /context
63
+ /exit
64
+ /help
65
+ HELP
47
66
  end
48
67
 
49
- def cleanup
50
- configuration.cleanup
68
+ def format_tool_call(name, args)
69
+ case name
70
+ when "execute"
71
+ cmd = args["cmd"]
72
+ cmd_args = args["args"] || []
73
+ cmd_args.empty? ? cmd : "#{cmd} #{cmd_args.join(' ')}"
74
+ when "grep" then "grep(#{args["query"]})"
75
+ when "list" then "list(#{args["path"] || "."})"
76
+ when "patch" then "patch(#{args["diff"]&.lines&.count || 0} lines)"
77
+ when "read" then "read(#{args["path"]})"
78
+ when "write" then "write(#{args["path"]})"
79
+ else
80
+ "#{name}(#{args.to_s[0...50]})"
81
+ end
51
82
  end
52
83
 
53
- private
84
+ def execute_turn(messages, tools:)
85
+ turn_context = []
54
86
 
55
- attr_reader :configuration, :current_state
87
+ loop do
88
+ content = ""
89
+ tool_calls = []
90
+
91
+ print "Thinking..."
92
+ client.chat(messages + turn_context, tools) do |chunk|
93
+ msg = chunk["message"]
94
+ if msg
95
+ if msg["content"] && !msg["content"].empty?
96
+ print "\r\e[K" if content.empty?
97
+ print msg["content"]
98
+ content += msg["content"]
99
+ end
100
+
101
+ tool_calls += msg["tool_calls"] if msg["tool_calls"]
102
+ end
103
+ end
104
+
105
+ puts
106
+ turn_context << { role: "assistant", content: content, tool_calls: tool_calls }.compact
107
+
108
+ if tool_calls.any?
109
+ tool_calls.each do |call|
110
+ name = call.dig("function", "name")
111
+ args = call.dig("function", "arguments")
112
+
113
+ puts "Tool> #{format_tool_call(name, args)}"
114
+ result = toolbox.run_tool(name, args)
115
+ turn_context << { role: "tool", content: JSON.dump(result) }
116
+ end
117
+
118
+ tool_calls = []
119
+ next
120
+ end
121
+
122
+ return { role: "assistant", content: content }
123
+ end
124
+ end
56
125
  end
57
126
  end
@@ -3,10 +3,6 @@
3
3
  module Elelem
4
4
  class Application < Thor
5
5
  desc "chat", "Start the REPL"
6
- method_option :help,
7
- aliases: "-h",
8
- type: :boolean,
9
- desc: "Display usage information"
10
6
  method_option :host,
11
7
  aliases: "--host",
12
8
  type: :string,
@@ -17,32 +13,15 @@ module Elelem
17
13
  type: :string,
18
14
  desc: "Ollama model",
19
15
  default: ENV.fetch("OLLAMA_MODEL", "gpt-oss")
20
- method_option :token,
21
- aliases: "--token",
22
- type: :string,
23
- desc: "Ollama token",
24
- default: ENV.fetch("OLLAMA_API_KEY", nil)
25
- method_option :debug,
26
- aliases: "--debug",
27
- type: :boolean,
28
- desc: "Debug mode",
29
- default: false
30
- def chat(*)
31
- if options[:help]
32
- invoke :help, ["chat"]
33
- else
34
- configuration = Configuration.new(
35
- host: options[:host],
36
- model: options[:model],
37
- token: options[:token],
38
- debug: options[:debug]
39
- )
40
- say "Agent (#{configuration.model})", :green
41
- say configuration.tools.banner.to_s, :green
42
16
 
43
- agent = Agent.new(configuration)
44
- agent.repl
45
- end
17
+ def chat(*)
18
+ client = Net::Llm::Ollama.new(
19
+ host: options[:host],
20
+ model: options[:model],
21
+ )
22
+ say "Agent (#{options[:model]})", :green
23
+ agent = Agent.new(client, Toolbox.new)
24
+ agent.repl
46
25
  end
47
26
 
48
27
  desc "version", "The version of this CLI"
@@ -4,15 +4,16 @@ module Elelem
4
4
  class Conversation
5
5
  ROLES = %i[system assistant user tool].freeze
6
6
 
7
- def initialize(items = [{ role: "system", content: system_prompt }])
7
+ def initialize(items = default_context)
8
8
  @items = items
9
9
  end
10
10
 
11
- def history
12
- @items
11
+ def history_for(mode)
12
+ history = @items.dup
13
+ history[0] = { role: "system", content: system_prompt_for(mode) }
14
+ history
13
15
  end
14
16
 
15
- # :TODO truncate conversation history
16
17
  def add(role: :user, content: "")
17
18
  role = role.to_sym
18
19
  raise "unknown role: #{role}" unless ROLES.include?(role)
@@ -25,8 +26,43 @@ module Elelem
25
26
  end
26
27
  end
27
28
 
29
+ def clear
30
+ @items = default_context
31
+ end
32
+
33
+ def dump(mode)
34
+ JSON.pretty_generate(history_for(mode))
35
+ end
36
+
28
37
  private
29
38
 
39
+ def default_context(prompt = system_prompt_for([]))
40
+ [{ role: "system", content: prompt }]
41
+ end
42
+
43
+ def system_prompt_for(mode)
44
+ base = system_prompt
45
+
46
+ case mode.sort
47
+ when [:read]
48
+ "#{base}\n\nRead and analyze. Understand before suggesting action."
49
+ when [:write]
50
+ "#{base}\n\nWrite clean, thoughtful code."
51
+ when [:execute]
52
+ "#{base}\n\nUse shell commands creatively to understand and manipulate the system."
53
+ when [:read, :write]
54
+ "#{base}\n\nFirst understand, then build solutions that integrate well."
55
+ when [:execute, :read]
56
+ "#{base}\n\nUse commands to deeply understand the system."
57
+ when [:execute, :write]
58
+ "#{base}\n\nCreate and execute freely. Have fun. Be kind."
59
+ when [:execute, :read, :write]
60
+ "#{base}\n\nYou have all tools. Use them wisely."
61
+ else
62
+ base
63
+ end
64
+ end
65
+
30
66
  def system_prompt
31
67
  ERB.new(Pathname.new(__dir__).join("system_prompt.erb").read).result(binding)
32
68
  end
@@ -1,18 +1,5 @@
1
- You are a self-improving coding assistant with the ability to modify your own source code.
1
+ You are a reasoning coding and system agent.
2
2
 
3
- **Core Behavior**: Think via `thinking`, Execute via `tool_calls`, Talk via `content`.
4
-
5
- **Your Mission**: Continuously evolve to match the intelligence of advanced AI assistants.
6
-
7
- **Guidance**: Read `COMMANDMENTS.md` for self-improvement principles using `file action=read path=COMMANDMENTS.md`
8
-
9
- **Tools**:
10
- - `exec command="shell command"` - Run commands/tests
11
- - `file action=read|write path=filepath content=...` - Read/write files
12
- - `web action=search|fetch query=... url=...` - Internet access
13
- - `memory action=store|retrieve|search key=... content=...` - Persistent memory
14
- - `prompt question="..."` - Ask user questions
15
-
16
- Context: <%= Time.now.strftime("%Y-%m-%d %H:%M:%S") %> | <%= Dir.pwd %> | <%= `uname -a`.strip %>
17
-
18
- Focus on the user's request and continuously improve your capabilities.
3
+ - Less is more
4
+ - No code comments
5
+ - No trailing whitespace
data/lib/elelem/tool.rb CHANGED
@@ -2,31 +2,46 @@
2
2
 
3
3
  module Elelem
4
4
  class Tool
5
- attr_reader :name, :description, :parameters
5
+ attr_reader :name
6
6
 
7
- def initialize(name, description, parameters)
8
- @name = name
9
- @description = description
10
- @parameters = parameters
7
+ def initialize(schema, &block)
8
+ @name = schema.dig(:function, :name)
9
+ @schema = schema
10
+ @block = block
11
11
  end
12
12
 
13
- def banner
14
- [name, parameters].join(": ")
13
+ def call(args)
14
+ return ArgumentError.new(args) unless valid?(args)
15
+
16
+ @block.call(args)
15
17
  end
16
18
 
17
19
  def valid?(args)
18
- JSON::Validator.validate(parameters, args, insert_defaults: true)
20
+ # TODO:: Use JSON Schema Validator
21
+ true
19
22
  end
20
23
 
21
24
  def to_h
22
- {
23
- type: "function",
24
- function: {
25
- name: name,
26
- description: description,
27
- parameters: parameters
28
- }
29
- }
25
+ @schema&.to_h
26
+ end
27
+
28
+ class << self
29
+ def build(name, description, properties, required = [])
30
+ new({
31
+ type: "function",
32
+ function: {
33
+ name: name,
34
+ description: description,
35
+ parameters: {
36
+ type: "object",
37
+ properties: properties,
38
+ required: required
39
+ }
40
+ }
41
+ }) do |args|
42
+ yield args
43
+ end
44
+ end
30
45
  end
31
46
  end
32
47
  end