llm_gateway 0.1.3 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
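The headline change in 0.1.3 → 0.1.5 is an optional `api_key:` keyword on `LlmGateway::Client.chat`, merged into the client options hash only when supplied so ENV-based configuration keeps working. The following is a minimal self-contained sketch of that option-forwarding pattern; `StubClient`, `build_client`, and the `LLM_API_KEY` variable name are illustrative stand-ins, not the gem's real classes:

```ruby
# Stand-in for a provider client (hypothetical; the gem picks a real
# client class per model). The api_key default falls back to ENV.
class StubClient
  attr_reader :model_key, :api_key

  def initialize(model_key:, api_key: ENV['LLM_API_KEY'])
    @model_key = model_key
    @api_key = api_key
  end
end

# Mirrors the client.rb change: add :api_key to the options hash only
# when present, so omitting it leaves the client's own default intact.
def build_client(model, api_key: nil)
  client_options = { model_key: model }
  client_options[:api_key] = api_key if api_key
  StubClient.new(**client_options)
end
```

Passing the key explicitly wins; omitting it falls through to the environment, which is why the diff guards the hash assignment with `if api_key` instead of always forwarding `nil`.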
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 21b3998df57de474c78626d8267db572418f80426ddaf82730cf8738e181d96c
- data.tar.gz: 9a37ff5a3907a8b0d48dbd0fc83be2e50cd5757a60cacd051e93ff1dc63d734e
+ metadata.gz: 8e5e5bb04da32e9a1af4ad9dea6e8bf5af8785fc58d890fcb903739e89bc1a50
+ data.tar.gz: c23364222e72a72eaf2754bf775cff9709004db2007ab210c6440a0a2cb99cdc
  SHA512:
- metadata.gz: 1be591a45a6fbee0b89846c679e0a9709e3a5493757eac3196be8194f8e592615b177c15f20d6763cf6a444bf8dbf3d899cf6e9fd9b11de7e2f02845f53ab1ff
- data.tar.gz: ed5a981d3e7ff311d26fe4924760e286e7370da598464e4b00ad004c940015432d873be73afae7067d5fa659adbe7af701b5b6f4163432c10cdf7f950532d690
+ metadata.gz: 9f028198b6ed363a858dc0d409943d117d7c6c538ba81f21797a3dde55de4ada0fcf5e4814776873b5e35db0e3c3f87b938019cc9330fedcdae2469a746db1c4
+ data.tar.gz: df939b1f9dea8204e3ae87d1050056ea79014e29917475d4131dce7162dfec28f225f39bb5419c3327bf01f3a32d23634e612983d67c85fc581140c03da50697
data/CHANGELOG.md CHANGED
@@ -1,16 +1,48 @@
  # Changelog
 
- ## [Unreleased](https://github.com/Hyper-Unearthing/llm_gateway/tree/HEAD)
+ ## [v0.1.5](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.5) (2025-08-05)
 
- [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.0...HEAD)
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.4...v0.1.5)
+
+ **Merged pull requests:**
+
+ - burn: login from tool base class [\#11](https://github.com/Hyper-Unearthing/llm_gateway/pull/11) ([billybonks](https://github.com/billybonks))
+ - improve sample [\#10](https://github.com/Hyper-Unearthing/llm_gateway/pull/10) ([billybonks](https://github.com/billybonks))
+ - ci: mark latest change log as a version [\#9](https://github.com/Hyper-Unearthing/llm_gateway/pull/9) ([billybonks](https://github.com/billybonks))
+ - ci: improve rake release task, so i get burnt less [\#8](https://github.com/Hyper-Unearthing/llm_gateway/pull/8) ([billybonks](https://github.com/billybonks))
+
+ ## [v0.1.4](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.4) (2025-08-04)
+
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.3...v0.1.4)
+
+ **Merged pull requests:**
+
+ - ci: release should ask me what version i want to bump [\#7](https://github.com/Hyper-Unearthing/llm_gateway/pull/7) ([billybonks](https://github.com/billybonks))
+ - docs: create an real world example that does something interesting [\#6](https://github.com/Hyper-Unearthing/llm_gateway/pull/6) ([billybonks](https://github.com/billybonks))
+ - feat: there was no way to pass api\_key to gateway besides env [\#5](https://github.com/Hyper-Unearthing/llm_gateway/pull/5) ([billybonks](https://github.com/billybonks))
+
+ ## [v0.1.3](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.3) (2025-08-04)
+
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.2...v0.1.3)
 
  **Merged pull requests:**
 
  - feat: add tool base class [\#4](https://github.com/Hyper-Unearthing/llm_gateway/pull/4) ([billybonks](https://github.com/billybonks))
+
+ ## [v0.1.2](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.2) (2025-08-04)
+
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.1...v0.1.2)
+
+ **Merged pull requests:**
+
  - feat: add prompt base class [\#3](https://github.com/Hyper-Unearthing/llm_gateway/pull/3) ([billybonks](https://github.com/billybonks))
  - lint files and add coverage [\#2](https://github.com/Hyper-Unearthing/llm_gateway/pull/2) ([billybonks](https://github.com/billybonks))
  - test: vcr lookup was not working when using different commands [\#1](https://github.com/Hyper-Unearthing/llm_gateway/pull/1) ([billybonks](https://github.com/billybonks))
 
+ ## [v0.1.1](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.1) (2025-08-04)
+
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.0...v0.1.1)
+
  ## [v0.1.0](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.0) (2025-08-04)
 
  [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/505c78116a2e778b23f319a380cd4bf6e300db89...v0.1.0)
data/README.md CHANGED
@@ -43,221 +43,21 @@ result = LlmGateway::Client.chat(
  )
  ```
 
- ### Prompt Class
+ ### Sample Application
 
- You can also create reusable prompt classes by subclassing `LlmGateway::Prompt`:
+ See the [file search bot example](sample/claude_code_clone/) for a complete working application that demonstrates:
+ - Creating reusable Prompt and Tool classes
+ - Handling conversation transcripts with tool execution
+ - Building an interactive terminal interface
 
- ```ruby
- # Simple text completion with prompt class
- class GeographyQuestionPrompt < LlmGateway::Prompt
-   def initialize(model, question)
-     super(model)
-     @question = question
-   end
-
-   def prompt
-     @question
-   end
- end
-
- # Usage
- geography_prompt = GeographyQuestionPrompt.new('claude-sonnet-4-20250514', 'What is the capital of France?')
- result = geography_prompt.run
-
- # With system message
- class GeographyTeacherPrompt < LlmGateway::Prompt
-   def initialize(model, question)
-     super(model)
-     @question = question
-   end
-
-   def prompt
-     @question
-   end
-
-   def system_prompt
-     'You are a helpful geography teacher.'
-   end
- end
-
- # Usage
- teacher_prompt = GeographyTeacherPrompt.new('gpt-4', 'What is the capital of France?')
- result = teacher_prompt.run
- ```
-
- ### Using Prompt with Tools
+ To run the sample:
 
- You can combine the Prompt class with tools for more complex interactions:
-
- ```ruby
- # Define a tool class
- class GetWeatherTool < LlmGateway::Tool
-   name 'get_weather'
-   description 'Get current weather for a location'
-   input_schema({
-     type: 'object',
-     properties: {
-       location: { type: 'string', description: 'City name' }
-     },
-     required: ['location']
-   })
-
-   def execute(input, login = nil)
-     # Your weather API implementation here
-     "The weather in #{input['location']} is sunny and 25°C"
-   end
- end
-
- class WeatherAssistantPrompt < LlmGateway::Prompt
-   def initialize(model, location)
-     super(model)
-     @location = location
-   end
-
-   def prompt
-     "What's the weather like in #{@location}?"
-   end
-
-   def system_prompt
-     'You are a helpful weather assistant.'
-   end
-
-   def tools
-     [GetWeatherTool]
-   end
- end
-
- # Usage
- weather_prompt = WeatherAssistantPrompt.new('claude-sonnet-4-20250514', 'Singapore')
- result = weather_prompt.run
+ ```bash
+ cd sample/claude_code_clone
+ ruby run.rb
  ```
 
- ### Tool Usage (Function Calling)
-
- ```ruby
- # Define a tool class
- class GetWeatherTool < LlmGateway::Tool
-   name 'get_weather'
-   description 'Get current weather for a location'
-   input_schema({
-     type: 'object',
-     properties: {
-       location: { type: 'string', description: 'City name' }
-     },
-     required: ['location']
-   })
-
-   def execute(input, login = nil)
-     # Your weather API implementation here
-     "The weather in #{input['location']} is sunny and 25°C"
-   end
- end
-
- # Use the tool
- weather_tool = {
-   name: 'get_weather',
-   description: 'Get current weather for a location',
-   input_schema: {
-     type: 'object',
-     properties: {
-       location: { type: 'string', description: 'City name' }
-     },
-     required: ['location']
-   }
- }
-
- result = LlmGateway::Client.chat(
-   'claude-sonnet-4-20250514',
-   'What\'s the weather in Singapore?',
-   tools: [weather_tool],
-   system: 'You are a helpful weather assistant.'
- )
-
- # Note: Tools are not automatically executed. The LLM will indicate when a tool should be called,
- # but it's up to you to find the appropriate tool and execute it based on the response.
-
- # Example of handling tool execution with conversation transcript:
- class WeatherAssistant
-   def initialize
-     @transcript = []
-     @weather_tool = {
-       name: 'get_weather',
-       description: 'Get current weather for a location',
-       input_schema: {
-         type: 'object',
-         properties: {
-           location: { type: 'string', description: 'City name' }
-         },
-         required: ['location']
-       }
-     }
-   end
-
-   attr_reader :weather_tool
-
-   def process_message(content)
-     # Add user message to transcript
-     @transcript << { role: 'user', content: [{ type: 'text', text: content }] }
-
-     result = LlmGateway::Client.chat(
-       'claude-sonnet-4-20250514',
-       @transcript,
-       tools: [@weather_tool],
-       system: 'You are a helpful weather assistant.'
-     )
-
-     process_response(result[:choices][0][:content])
-   end
-
-   private
-
-   def process_response(response)
-     # Add assistant response to transcript
-     @transcript << { role: 'assistant', content: response }
-
-     response.each do |message|
-       if message[:type] == 'text'
-         puts message[:text]
-       elsif message[:type] == 'tool_use'
-         result = handle_tool_use(message)
-
-         # Add tool result to transcript
-         tool_result = {
-           type: 'tool_result',
-           tool_use_id: message[:id],
-           content: result
-         }
-         @transcript << { role: 'user', content: [tool_result] }
-
-         # Continue conversation with full transcript context
-         follow_up = LlmGateway::Client.chat(
-           'claude-sonnet-4-20250514',
-           @transcript,
-           tools: [@weather_tool],
-           system: 'You are a helpful weather assistant.'
-         )
-
-         process_response(follow_up[:choices][0][:content])
-       end
-     end
-   end
-
-   def handle_tool_use(message)
-     tool_class = WeatherAssistantPrompt.find_tool(message[:name])
-     raise "Unknown tool: #{message[:name]}" unless tool_class
-
-     # Execute the tool with the provided input
-     tool_instance = tool_class.new
-     tool_instance.execute(message[:input])
-   rescue StandardError => e
-     "Error executing tool: #{e.message}"
-   end
- end
-
- # Usage
- assistant = WeatherAssistant.new
- assistant.process_message("What's the weather in Singapore?")
- ```
+ The bot will prompt for your model and API key, then allow you to ask natural language questions about finding files and searching directories.
 
  ### Response Format
 
data/Rakefile CHANGED
@@ -14,13 +14,60 @@ begin
 
    desc "Release with changelog"
    task :gem_release do
-     # Generate changelog first
-     sh "bundle exec github_changelog_generator -u Hyper-Unearthing -p llm_gateway"
-     sh "git add CHANGELOG.md"
-     sh "git commit -m 'Update changelog' || echo 'No changelog changes'"
+     # Safety checks: ensure we're on main and up-to-date
+     current_branch = `git branch --show-current`.strip
+     unless current_branch == "main"
+       puts "Error: You must be on the main branch to release. Current branch: #{current_branch}"
+       exit 1
+     end
 
-     # Release
-     sh "gem bump --version patch --tag --push --release"
+     # Check if branch is up-to-date with remote
+     sh "git fetch origin"
+     local_commit = `git rev-parse HEAD`.strip
+     remote_commit = `git rev-parse origin/main`.strip
+     unless local_commit == remote_commit
+       puts "Error: Your main branch is not in sync with origin/main. Please pull the latest changes."
+       exit 1
+     end
+
+     # Check for uncommitted changes
+     unless `git status --porcelain`.strip.empty?
+       puts "Error: You have uncommitted changes. Please commit or stash them before releasing."
+       exit 1
+     end
+
+     # Ask for version bump type first
+     print "What type of version bump? (major/minor/patch): "
+     version_type = $stdin.gets.chomp.downcase
+
+     unless %w[major minor patch].include?(version_type)
+       puts "Invalid version type. Please use major, minor, or patch."
+       exit 1
+     end
+
+     # Bump version without committing yet to get new version
+     sh "gem bump --version #{version_type} --no-commit"
+
+     # Get the new version
+     new_version = `ruby -e "puts Gem::Specification.load('llm_gateway.gemspec').version"`.strip
+
+     # Generate changelog with proper version
+     sh "bundle exec github_changelog_generator " \
+        "-u Hyper-Unearthing -p llm_gateway --future-release v#{new_version}"
+
+     # Bundle to update Gemfile.lock
+     sh "bundle"
+
+     # Add all changes and commit in one go
+     sh "git add ."
+     sh "git commit -m 'Bump llm_gateway to $(ruby -e \"puts Gem::Specification.load('llm_gateway.gemspec').version\")'"
+
+     # Tag and push
+     sh "git tag v$(ruby -e \"puts Gem::Specification.load('llm_gateway.gemspec').version\")"
+     sh "git push origin main --tags"
+
+     # Release the gem
+     sh "gem push $(gem build llm_gateway.gemspec | grep 'File:' | awk '{print $2}')"
    end
  rescue LoadError
    # gem-release not available in this environment
data/lib/llm_gateway/client.rb CHANGED
@@ -2,9 +2,11 @@
 
  module LlmGateway
    class Client
-     def self.chat(model, message, response_format: "text", tools: nil, system: nil)
+     def self.chat(model, message, response_format: "text", tools: nil, system: nil, api_key: nil)
        client_klass = client_class(model)
-       client = client_klass.new(model_key: model)
+       client_options = { model_key: model }
+       client_options[:api_key] = api_key if api_key
+       client = client_klass.new(**client_options)
 
        input_mapper = input_mapper_for_client(client)
        normalized_input = input_mapper.map({
data/lib/llm_gateway/tool.rb CHANGED
@@ -39,7 +39,7 @@ module LlmGateway
        definition[:name]
      end
 
-     def execute(input, login)
+     def execute(input)
        raise NotImplementedError, "Subclasses must implement execute"
      end
    end
data/lib/llm_gateway/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module LlmGateway
-   VERSION = "0.1.3"
+   VERSION = "0.1.5"
  end
data/sample/claude_code_clone/agent.rb ADDED
@@ -0,0 +1,65 @@
+ class Agent
+   def initialize(prompt_class, model, api_key)
+     @prompt_class = prompt_class
+     @model = model
+     @api_key = api_key
+     @transcript = []
+   end
+
+   def run(user_input, &block)
+     @transcript << { role: 'user', content: [ { type: 'text', text: user_input } ] }
+
+     begin
+       prompt = @prompt_class.new(@model, @transcript, @api_key)
+       result = prompt.post
+       process_response(result[:choices][0][:content], &block)
+     rescue => e
+       yield({ type: 'error', message: e.message }) if block_given?
+       raise e
+     end
+   end
+
+   private
+
+   def process_response(response, &block)
+     @transcript << { role: 'assistant', content: response }
+
+     response.each do |message|
+       yield(message) if block_given?
+
+       if message[:type] == 'text'
+         # Text response processed
+       elsif message[:type] == 'tool_use'
+         result = handle_tool_use(message)
+
+         tool_result = {
+           type: 'tool_result',
+           tool_use_id: message[:id],
+           content: result
+         }
+         @transcript << { role: 'user', content: [ tool_result ] }
+
+         yield(tool_result) if block_given?
+
+         follow_up_prompt = @prompt_class.new(@model, @transcript, @api_key)
+         follow_up = follow_up_prompt.post
+
+         process_response(follow_up[:choices][0][:content], &block) if follow_up[:choices][0][:content]
+       end
+     end
+
+     response
+   end
+
+   def handle_tool_use(message)
+     tool_class = @prompt_class.find_tool(message[:name])
+     if tool_class
+       tool = tool_class.new
+       tool.execute(message[:input])
+     else
+       "Unknown tool: #{message[:name]}"
+     end
+   rescue StandardError => e
+     "Error executing tool: #{e.message}"
+   end
+ end
data/sample/claude_code_clone/claude_code_clone.rb ADDED
@@ -0,0 +1,40 @@
+ require_relative 'prompt'
+ require_relative 'agent'
+ require 'debug'
+
+ # Bash File Search Assistant using LlmGateway architecture
+
+ class ClaudeCloneClone
+   def initialize(model, api_key)
+     @agent = Agent.new(Prompt, model, api_key)
+   end
+
+   def query(input)
+     begin
+       @agent.run(input) do |message|
+         case message[:type]
+         when 'text'
+           puts "\n\e[32m•\e[0m #{message[:text]}"
+         when 'tool_use'
+           puts "\n\e[33m•\e[0m \e[36m#{message[:name]}\e[0m"
+           if message[:input] && !message[:input].empty?
+             puts " \e[90m#{message[:input]}\e[0m"
+           end
+         when 'tool_result'
+           if message[:content] && !message[:content].empty?
+             content_preview = message[:content].to_s.split("\n").first(3).join("\n")
+             if content_preview.length > 100
+               content_preview = content_preview[0..97] + "..."
+             end
+             puts " \e[90m#{content_preview}\e[0m"
+           end
+         when 'error'
+           puts "\n\e[31m•\e[0m \e[91mError: #{message[:message]}\e[0m"
+         end
+       end
+     rescue => e
+       puts "\n\e[31m•\e[0m \e[91mError: #{e.message}\e[0m"
+       puts "\e[90m #{e.backtrace.first}\e[0m" if e.backtrace&.first
+     end
+   end
+ end
data/sample/claude_code_clone/prompt.rb ADDED
@@ -0,0 +1,79 @@
+ require_relative 'tools/edit_tool'
+ require_relative 'tools/read_tool'
+ require_relative 'tools/todowrite_tool'
+ require_relative 'tools/bash_tool'
+ require_relative 'tools/grep_tool'
+
+ class Prompt < LlmGateway::Prompt
+   def initialize(model, transcript, api_key)
+     super(model)
+     @transcript = transcript
+     @api_key = api_key
+   end
+
+   def prompt
+     @transcript
+   end
+
+   def system_prompt
+     <<~SYSTEM
+       You are Claude Code Clone, an interactive CLI tool that assists with software engineering tasks.
+
+       # Core Capabilities
+
+       I provide assistance with:
+       - Code analysis and debugging
+       - Feature implementation
+       - File editing and creation
+       - Running tests and builds
+       - Git operations
+       - Web browsing and research
+       - Task planning and management
+
+       ## Available Tools
+
+       You have access to these specialized tools:
+       - `Edit` - Modify existing files by replacing specific text strings
+       - `Read` - Read file contents with optional pagination
+       - `TodoWrite` - Create and manage structured task lists
+       - `Bash` - Execute shell commands with timeout support
+       - `Grep` - Search for patterns in files using regex
+
+       ## Core Instructions
+
+       I am designed to:
+       - Be concise and direct (minimize output tokens)
+       - Follow existing code conventions and patterns
+       - Use defensive security practices only
+       - Plan tasks with the TodoWrite tool for complex work
+       - Run linting/typechecking after making changes
+       - Never commit unless explicitly asked
+
+       ## Process
+
+       1. **Understand the Request**: Parse what the user needs accomplished
+       2. **Plan if Complex**: Use TodoWrite for multi-step tasks
+       3. **Execute Tools**: Use appropriate tools to complete the work
+       4. **Validate**: Run tests/linting when applicable
+       5. **Report**: Provide concise status updates
+
+       Always use the available tools to perform actions rather than just suggesting commands.
+
+       Before starting any task, build a todo list of what you need to do, ensuring each item is actionable and prioritized. Then, execute the tasks one by one, using the TodoWrite tool to track progress and completion.
+
+       After completing each task, update the TodoWrite list to reflect the status and any necessary follow-up actions.
+     SYSTEM
+   end
+
+   def self.tools
+     [ EditTool, ReadTool, TodoWriteTool, BashTool, GrepTool ]
+   end
+
+   def tools
+     self.class.tools.map(&:definition)
+   end
+
+   def post
+     LlmGateway::Client.chat(model, prompt, tools: tools, system: system_prompt, api_key: @api_key)
+   end
+ end
data/sample/claude_code_clone/run.rb ADDED
@@ -0,0 +1,47 @@
+ require 'tty-prompt'
+ require_relative '../../lib/llm_gateway'
+ require_relative 'claude_code_clone.rb'
+
+ # Terminal Runner for FileSearchBot
+ class FileSearchTerminalRunner
+   def initialize
+     @prompt = TTY::Prompt.new
+   end
+
+   def start
+     puts "First, let's configure your LLM settings:\n\n"
+
+     model, api_key = setup_configuration
+     bot = ClaudeCloneClone.new(model, api_key)
+
+     puts "Type 'quit' or 'exit' to stop.\n\n"
+
+     loop do
+       user_input = @prompt.ask("What can i do for you?")
+
+       break if [ 'quit', 'exit' ].include?(user_input.downcase)
+
+       bot.query(user_input)
+     end
+   end
+
+   private
+
+   def setup_configuration
+     model = @prompt.ask("Enter model (default: claude-3-7-sonnet-20250219):") do |q|
+       q.default 'claude-3-7-sonnet-20250219'
+     end
+
+     api_key = @prompt.mask("Enter your API key:") do |q|
+       q.required true
+     end
+
+     [ model, api_key ]
+   end
+ end
+
+ # Start the bot
+ if __FILE__ == $0
+   runner = FileSearchTerminalRunner.new
+   runner.start
+ end
data/sample/claude_code_clone/tools/bash_tool.rb ADDED
@@ -0,0 +1,54 @@
+ class BashTool < LlmGateway::Tool
+   name 'Bash'
+   description 'Execute shell commands'
+   input_schema({
+     type: 'object',
+     properties: {
+       command: { type: 'string', description: 'Shell command to execute' },
+       description: { type: 'string', description: 'Human-readable description' },
+       timeout: { type: 'integer', description: 'Timeout in milliseconds' }
+     },
+     required: [ 'command' ]
+   })
+
+   def execute(input)
+     command = input[:command]
+     description = input[:description]
+     timeout = input[:timeout] || 120000 # Default 2 minutes
+
+     if description
+       puts "Executing: #{command}"
+       puts "Description: #{description}\n\n"
+     else
+       puts "Executing: #{command}\n\n"
+     end
+
+     begin
+       # Convert timeout from milliseconds to seconds
+       timeout_seconds = timeout / 1000.0
+
+       # Use timeout command if available, otherwise use Ruby's timeout
+       if system('which timeout > /dev/null 2>&1')
+         result = `timeout #{timeout_seconds}s #{command} 2>&1`
+         exit_status = $?
+       else
+         require 'timeout'
+         result = Timeout.timeout(timeout_seconds) do
+           `#{command} 2>&1`
+         end
+         exit_status = $?
+       end
+
+       if exit_status.success?
+         result.empty? ? "Command completed successfully (no output)" : result
+       else
+         "Command failed with exit code #{exit_status.exitstatus}:\n#{result}"
+       end
+
+     rescue Timeout::Error
+       "Command timed out after #{timeout_seconds} seconds"
+     rescue => e
+       "Error executing command: #{e.message}"
+     end
+   end
+ end
data/sample/claude_code_clone/tools/edit_tool.rb ADDED
@@ -0,0 +1,61 @@
+ class EditTool < LlmGateway::Tool
+   name 'Edit'
+   description 'Modify existing files by replacing specific text strings'
+   input_schema({
+     type: 'object',
+     properties: {
+       file_path: { type: 'string', description: 'Absolute path to file to modify' },
+       old_string: { type: 'string', description: 'Exact text to replace' },
+       new_string: { type: 'string', description: 'Replacement text' },
+       replace_all: { type: 'boolean', description: 'Replace all occurrences (default: false)' }
+     },
+     required: [ 'file_path', 'old_string', 'new_string' ]
+   })
+
+   def execute(input)
+     file_path = input[:file_path]
+     old_string = input[:old_string]
+     new_string = input[:new_string]
+     replace_all = input[:replace_all] || false
+
+     # Validate file exists
+     unless File.exist?(file_path)
+       return "Error: File not found at #{file_path}"
+     end
+
+     # Read file content
+     begin
+       content = File.read(file_path)
+     rescue => e
+       return "Error reading file: #{e.message}"
+     end
+
+     # Check if old_string exists in file
+     unless content.include?(old_string)
+       return "Error: Text '#{old_string}' not found in file"
+     end
+
+     # Perform replacement
+     if replace_all
+       updated_content = content.gsub(old_string, new_string)
+       occurrences = content.scan(old_string).length
+     else
+       # Replace only first occurrence
+       updated_content = content.sub(old_string, new_string)
+       occurrences = 1
+     end
+
+     # Check if replacement would result in same content
+     if content == updated_content
+       return "Error: old_string and new_string are identical, no changes made"
+     end
+
+     # Write updated content back to file
+     begin
+       File.write(file_path, updated_content)
+       "Successfully replaced #{occurrences} occurrence(s) in #{file_path}"
+     rescue => e
+       "Error writing file: #{e.message}"
+     end
+   end
+ end
data/sample/claude_code_clone/tools/grep_tool.rb ADDED
@@ -0,0 +1,113 @@
+ class GrepTool < LlmGateway::Tool
+   name 'Grep'
+   description 'Search for patterns in files using regex'
+   input_schema({
+     type: 'object',
+     properties: {
+       pattern: { type: 'string', description: 'Regex pattern to search for' },
+       path: { type: 'string', description: 'File or directory path' },
+       output_mode: {
+         type: 'string',
+         enum: [ 'content', 'files_with_matches', 'count' ],
+         description: 'Output mode: content, files_with_matches, or count'
+       },
+       glob: { type: 'string', description: 'File pattern filter (e.g., "*.rb")' },
+       '-n': { type: 'boolean', description: 'Show line numbers' },
+       '-i': { type: 'boolean', description: 'Case insensitive search' },
+       '-C': { type: 'integer', description: 'Context lines around matches' }
+     },
+     required: [ 'pattern' ]
+   })
+
+   def execute(input)
+     pattern = input[:pattern]
+     path = input[:path] || '.'
+     output_mode = input[:output_mode] || 'files_with_matches'
+     glob = input[:glob]
+     show_line_numbers = input['-n'] || false
+     case_insensitive = input['-i'] || false
+     context_lines = input['-C'] || 0
+
+     # Build grep command
+     cmd_parts = [ 'grep' ]
+
+     # Add flags
+     cmd_parts << '-r' unless File.file?(path) # Recursive for directories
+     cmd_parts << '-n' if show_line_numbers && output_mode == 'content'
+     cmd_parts << '-i' if case_insensitive
+     cmd_parts << "-C#{context_lines}" if context_lines > 0 && output_mode == 'content'
+
+     # Output mode flags
+     case output_mode
+     when 'files_with_matches'
+       cmd_parts << '-l'
+     when 'count'
+       cmd_parts << '-c'
+     end
+
+     # Add pattern and path
+     cmd_parts << "'#{pattern}'"
+
+     # Handle glob pattern
+     if glob
+       if File.directory?(path)
+         cmd_parts << "#{path}/**/*"
+         # Use shell globbing with find for better glob support
+         find_cmd = "find #{path} -name '#{glob}' -type f"
+         files_result = `#{find_cmd} 2>/dev/null`
+         if files_result.empty?
+           return "No files found matching pattern '#{glob}' in #{path}"
+         end
+
+         # Run grep on each matching file
+         files = files_result.strip.split("\n")
+         results = []
+
+         files.each do |file|
+           grep_cmd = cmd_parts[0..-2].join(' ') + " '#{pattern}' '#{file}'"
+           result = `#{grep_cmd} 2>/dev/null`
+           results << result unless result.empty?
+         end
+
+         return results.empty? ? "No matches found" : results.join("\n")
+       else
+         cmd_parts << path
+       end
+     else
+       cmd_parts << path
+     end
+
+     command = cmd_parts.join(' ')
+
+     begin
+       puts "Executing: #{command}"
+       result = `#{command} 2>&1`
+       exit_status = $?
+
+       if exit_status.success?
+         if result.empty?
+           "No matches found"
+         else
+           case output_mode
+           when 'content'
+             result
+           when 'files_with_matches'
+             result
+           when 'count'
+             result
+           else
+             result
+           end
+         end
+       elsif exit_status.exitstatus == 1
+         # grep returns 1 when no matches found, which is normal
+         "No matches found"
+       else
+         "Error: #{result}"
+       end
+
+     rescue => e
+       "Error executing grep: #{e.message}"
+     end
+   end
+ end
data/sample/claude_code_clone/tools/read_tool.rb ADDED
@@ -0,0 +1,61 @@
+ class ReadTool < LlmGateway::Tool
+   name 'Read'
+   description 'Read file contents with optional pagination'
+   input_schema({
+     type: 'object',
+     properties: {
+       file_path: { type: 'string', description: 'Absolute path to file' },
+       limit: { type: 'integer', description: 'Number of lines to read' },
+       offset: { type: 'integer', description: 'Starting line number' }
+     },
+     required: [ 'file_path' ]
+   })
+
+   def execute(input)
+     file_path = input[:file_path]
+     limit = input[:limit]
+     offset = input[:offset] || 0
+
+     # Validate file exists
+     unless File.exist?(file_path)
+       return "Error: File not found at #{file_path}"
+     end
+
+     # Check if it's a directory
+     if File.directory?(file_path)
+       return "Error: #{file_path} is a directory, not a file"
+     end
+
+     begin
+       lines = File.readlines(file_path, chomp: true)
+
+       # Apply offset
+       if offset > 0
+         if offset >= lines.length
+           return "Error: Offset #{offset} exceeds file length (#{lines.length} lines)"
+         end
+         lines = lines[offset..-1]
+       end
+
+       # Apply limit
+       if limit && limit > 0
+         lines = lines[0, limit]
+       end
+
+       # Format output with line numbers (similar to cat -n)
+       output = lines.each_with_index.map do |line, index|
+         line_number = offset + index + 1
+         "#{line_number.to_s.rjust(6)}→#{line}"
+       end
+
+       if output.empty?
+         "File is empty or no lines in specified range"
+       else
+         output.join("\n")
+       end
+
+     rescue => e
+       "Error reading file: #{e.message}"
+     end
+   end
+ end
data/sample/claude_code_clone/tools/todowrite_tool.rb ADDED
@@ -0,0 +1,98 @@
+ require 'json'
+
+ class TodoWriteTool < LlmGateway::Tool
+   name 'TodoWrite'
+   description 'Create and manage structured task lists'
+   input_schema({
+     type: 'object',
+     properties: {
+       todos: {
+         type: 'array',
+         description: 'Array of todo objects',
+         items: {
+           type: 'object',
+           properties: {
+             id: { type: 'string', description: 'Unique identifier' },
+             content: { type: 'string', description: 'Task description' },
+             status: {
+               type: 'string',
+               enum: [ 'pending', 'in_progress', 'completed' ],
+               description: 'Task status'
+             },
+             priority: {
+               type: 'string',
+               enum: [ 'high', 'medium', 'low' ],
+               description: 'Task priority'
+             }
+           },
+           required: [ 'id', 'content', 'status', 'priority' ]
+         }
+       }
+     },
+     required: [ 'todos' ]
+   })
+
+   def execute(input)
+     todos = input[:todos]
+
+     # Validate todos structure
+     todos.each_with_index do |todo, index|
+       unless todo.is_a?(Hash)
+         return "Error: Todo at index #{index} is not a hash"
+       end
+
+       required_fields = [ 'id', 'content', 'status', 'priority' ]
+       missing_fields = required_fields - todo.keys.map(&:to_s)
+       unless missing_fields.empty?
+         return "Error: Todo at index #{index} missing required fields: #{missing_fields.join(', ')}"
+       end
+
+       valid_statuses = [ 'pending', 'in_progress', 'completed' ]
+       unless valid_statuses.include?(todo['status'])
+         return "Error: Invalid status '#{todo['status']}' in todo #{todo['id']}. Must be one of: #{valid_statuses.join(', ')}"
+       end
+
+       valid_priorities = [ 'high', 'medium', 'low' ]
+       unless valid_priorities.include?(todo['priority'])
+         return "Error: Invalid priority '#{todo['priority']}' in todo #{todo['id']}. Must be one of: #{valid_priorities.join(', ')}"
+       end
+     end
+
+     # Store todos (in practice, this might be saved to a file or database)
+     @todos = todos
+
+     # Generate summary
+     total = todos.length
+     pending = todos.count { |t| t['status'] == 'pending' }
+     in_progress = todos.count { |t| t['status'] == 'in_progress' }
+     completed = todos.count { |t| t['status'] == 'completed' }
+
+     summary = "Todo list updated successfully:\n"
+     summary += "Total tasks: #{total}\n"
+     summary += "Pending: #{pending}, In Progress: #{in_progress}, Completed: #{completed}\n\n"
+
+     # List current todos
+     summary += "Current tasks:\n"
+     todos.each do |todo|
+       status_icon = case todo['status']
+                     when 'pending' then '⏳'
+                     when 'in_progress' then '🔄'
+                     when 'completed' then '✅'
+                     end
+
+       priority_icon = case todo['priority']
+                       when 'high' then '🔴'
+                       when 'medium' then '🟡'
+                       when 'low' then '🟢'
+                       end
+
+       summary += "#{status_icon} #{priority_icon} [#{todo['id']}] #{todo['content']}\n"
+     end
+
+     summary
+   end
+
+   def self.current_todos
+     @todos || []
+   end
+ end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: llm_gateway
  version: !ruby/object:Gem::Version
-   version: 0.1.3
+   version: 0.1.5
  platform: ruby
  authors:
  - billybonks
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2025-08-04 00:00:00.000000000 Z
+ date: 2025-08-05 00:00:00.000000000 Z
  dependencies: []
  description: LlmGateway provides a consistent Ruby interface for multiple LLM providers
    including Claude, OpenAI, and Groq. Features include unified response formatting,
@@ -43,6 +43,15 @@ files:
  - lib/llm_gateway/tool.rb
  - lib/llm_gateway/utils.rb
  - lib/llm_gateway/version.rb
+ - sample/claude_code_clone/agent.rb
+ - sample/claude_code_clone/claude_code_clone.rb
+ - sample/claude_code_clone/prompt.rb
+ - sample/claude_code_clone/run.rb
+ - sample/claude_code_clone/tools/bash_tool.rb
+ - sample/claude_code_clone/tools/edit_tool.rb
+ - sample/claude_code_clone/tools/grep_tool.rb
+ - sample/claude_code_clone/tools/read_tool.rb
+ - sample/claude_code_clone/tools/todowrite_tool.rb
  - sig/llm_gateway.rbs
  homepage: https://github.com/Hyper-Unearthing/llm_gateway
  licenses: