llm_gateway 0.1.3 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 21b3998df57de474c78626d8267db572418f80426ddaf82730cf8738e181d96c
- data.tar.gz: 9a37ff5a3907a8b0d48dbd0fc83be2e50cd5757a60cacd051e93ff1dc63d734e
+ metadata.gz: 0fc0c0030bd149c9e3130d1f8e6fe2ccd0bfec5d87931534e4fc29d8808ad0c7
+ data.tar.gz: 1c9bf69153cfc502b41ef27305fcba114908a9debcdc34e9e2023fef257b4968
  SHA512:
- metadata.gz: 1be591a45a6fbee0b89846c679e0a9709e3a5493757eac3196be8194f8e592615b177c15f20d6763cf6a444bf8dbf3d899cf6e9fd9b11de7e2f02845f53ab1ff
- data.tar.gz: ed5a981d3e7ff311d26fe4924760e286e7370da598464e4b00ad004c940015432d873be73afae7067d5fa659adbe7af701b5b6f4163432c10cdf7f950532d690
+ metadata.gz: 27ee426f1d8c3b97054e44b0d270d4eb8dd6d263fdf0717b1a47d655758ae634d1e25c12a850e468f4d360ca00beecfc430c9bfd3ed7a0a287bd0a48febcb110
+ data.tar.gz: fbc9b70052a776a99962b173842ede5016f4398504fca58d163a60f3eda52c58b2434c91d060bec3334c8c38e083c313692fb6b5442792943fcd3008ee44a08a
data/CHANGELOG.md CHANGED
@@ -2,15 +2,36 @@
 
  ## [Unreleased](https://github.com/Hyper-Unearthing/llm_gateway/tree/HEAD)
 
- [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.0...HEAD)
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.3...HEAD)
+
+ **Merged pull requests:**
+
+ - ci: release should ask me what version i want to bump [\#7](https://github.com/Hyper-Unearthing/llm_gateway/pull/7) ([billybonks](https://github.com/billybonks))
+ - docs: create an real world example that does something interesting [\#6](https://github.com/Hyper-Unearthing/llm_gateway/pull/6) ([billybonks](https://github.com/billybonks))
+ - feat: there was no way to pass api\_key to gateway besides env [\#5](https://github.com/Hyper-Unearthing/llm_gateway/pull/5) ([billybonks](https://github.com/billybonks))
+
+ ## [v0.1.3](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.3) (2025-08-04)
+
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.2...v0.1.3)
 
  **Merged pull requests:**
 
  - feat: add tool base class [\#4](https://github.com/Hyper-Unearthing/llm_gateway/pull/4) ([billybonks](https://github.com/billybonks))
+
+ ## [v0.1.2](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.2) (2025-08-04)
+
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.1...v0.1.2)
+
+ **Merged pull requests:**
+
  - feat: add prompt base class [\#3](https://github.com/Hyper-Unearthing/llm_gateway/pull/3) ([billybonks](https://github.com/billybonks))
  - lint files and add coverage [\#2](https://github.com/Hyper-Unearthing/llm_gateway/pull/2) ([billybonks](https://github.com/billybonks))
  - test: vcr lookup was not working when using different commands [\#1](https://github.com/Hyper-Unearthing/llm_gateway/pull/1) ([billybonks](https://github.com/billybonks))
 
+ ## [v0.1.1](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.1) (2025-08-04)
+
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.0...v0.1.1)
+
  ## [v0.1.0](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.0) (2025-08-04)
 
  [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/505c78116a2e778b23f319a380cd4bf6e300db89...v0.1.0)
data/README.md CHANGED
@@ -43,221 +43,21 @@ result = LlmGateway::Client.chat(
  )
  ```
 
- ### Prompt Class
+ ### Sample Application
 
- You can also create reusable prompt classes by subclassing `LlmGateway::Prompt`:
+ See the [file search bot example](sample/directory_bot/) for a complete working application that demonstrates:
+ - Creating reusable Prompt and Tool classes
+ - Handling conversation transcripts with tool execution
+ - Building an interactive terminal interface
 
- ```ruby
- # Simple text completion with prompt class
- class GeographyQuestionPrompt < LlmGateway::Prompt
-   def initialize(model, question)
-     super(model)
-     @question = question
-   end
-
-   def prompt
-     @question
-   end
- end
-
- # Usage
- geography_prompt = GeographyQuestionPrompt.new('claude-sonnet-4-20250514', 'What is the capital of France?')
- result = geography_prompt.run
-
- # With system message
- class GeographyTeacherPrompt < LlmGateway::Prompt
-   def initialize(model, question)
-     super(model)
-     @question = question
-   end
-
-   def prompt
-     @question
-   end
-
-   def system_prompt
-     'You are a helpful geography teacher.'
-   end
- end
-
- # Usage
- teacher_prompt = GeographyTeacherPrompt.new('gpt-4', 'What is the capital of France?')
- result = teacher_prompt.run
- ```
-
- ### Using Prompt with Tools
+ To run the sample:
 
- You can combine the Prompt class with tools for more complex interactions:
-
- ```ruby
- # Define a tool class
- class GetWeatherTool < LlmGateway::Tool
-   name 'get_weather'
-   description 'Get current weather for a location'
-   input_schema({
-     type: 'object',
-     properties: {
-       location: { type: 'string', description: 'City name' }
-     },
-     required: ['location']
-   })
-
-   def execute(input, login = nil)
-     # Your weather API implementation here
-     "The weather in #{input['location']} is sunny and 25°C"
-   end
- end
-
- class WeatherAssistantPrompt < LlmGateway::Prompt
-   def initialize(model, location)
-     super(model)
-     @location = location
-   end
-
-   def prompt
-     "What's the weather like in #{@location}?"
-   end
-
-   def system_prompt
-     'You are a helpful weather assistant.'
-   end
-
-   def tools
-     [GetWeatherTool]
-   end
- end
-
- # Usage
- weather_prompt = WeatherAssistantPrompt.new('claude-sonnet-4-20250514', 'Singapore')
- result = weather_prompt.run
+ ```bash
+ cd sample/directory_bot
+ ruby run.rb
  ```
 
- ### Tool Usage (Function Calling)
-
- ```ruby
- # Define a tool class
- class GetWeatherTool < LlmGateway::Tool
-   name 'get_weather'
-   description 'Get current weather for a location'
-   input_schema({
-     type: 'object',
-     properties: {
-       location: { type: 'string', description: 'City name' }
-     },
-     required: ['location']
-   })
-
-   def execute(input, login = nil)
-     # Your weather API implementation here
-     "The weather in #{input['location']} is sunny and 25°C"
-   end
- end
-
- # Use the tool
- weather_tool = {
-   name: 'get_weather',
-   description: 'Get current weather for a location',
-   input_schema: {
-     type: 'object',
-     properties: {
-       location: { type: 'string', description: 'City name' }
-     },
-     required: ['location']
-   }
- }
-
- result = LlmGateway::Client.chat(
-   'claude-sonnet-4-20250514',
-   'What\'s the weather in Singapore?',
-   tools: [weather_tool],
-   system: 'You are a helpful weather assistant.'
- )
-
- # Note: Tools are not automatically executed. The LLM will indicate when a tool should be called,
- # but it's up to you to find the appropriate tool and execute it based on the response.
-
- # Example of handling tool execution with conversation transcript:
- class WeatherAssistant
-   def initialize
-     @transcript = []
-     @weather_tool = {
-       name: 'get_weather',
-       description: 'Get current weather for a location',
-       input_schema: {
-         type: 'object',
-         properties: {
-           location: { type: 'string', description: 'City name' }
-         },
-         required: ['location']
-       }
-     }
-   end
-
-   attr_reader :weather_tool
-
-   def process_message(content)
-     # Add user message to transcript
-     @transcript << { role: 'user', content: [{ type: 'text', text: content }] }
-
-     result = LlmGateway::Client.chat(
-       'claude-sonnet-4-20250514',
-       @transcript,
-       tools: [@weather_tool],
-       system: 'You are a helpful weather assistant.'
-     )
-
-     process_response(result[:choices][0][:content])
-   end
-
-   private
-
-   def process_response(response)
-     # Add assistant response to transcript
-     @transcript << { role: 'assistant', content: response }
-
-     response.each do |message|
-       if message[:type] == 'text'
-         puts message[:text]
-       elsif message[:type] == 'tool_use'
-         result = handle_tool_use(message)
-
-         # Add tool result to transcript
-         tool_result = {
-           type: 'tool_result',
-           tool_use_id: message[:id],
-           content: result
-         }
-         @transcript << { role: 'user', content: [tool_result] }
-
-         # Continue conversation with full transcript context
-         follow_up = LlmGateway::Client.chat(
-           'claude-sonnet-4-20250514',
-           @transcript,
-           tools: [@weather_tool],
-           system: 'You are a helpful weather assistant.'
-         )
-
-         process_response(follow_up[:choices][0][:content])
-       end
-     end
-   end
-
-   def handle_tool_use(message)
-     tool_class = WeatherAssistantPrompt.find_tool(message[:name])
-     raise "Unknown tool: #{message[:name]}" unless tool_class
-
-     # Execute the tool with the provided input
-     tool_instance = tool_class.new
-     tool_instance.execute(message[:input])
-   rescue StandardError => e
-     "Error executing tool: #{e.message}"
-   end
- end
-
- # Usage
- assistant = WeatherAssistant.new
- assistant.process_message("What's the weather in Singapore?")
- ```
+ The bot will prompt for your model and API key, then allow you to ask natural language questions about finding files and searching directories.
 
  ### Response Format
 
data/Rakefile CHANGED
@@ -19,8 +19,17 @@ begin
      sh "git add CHANGELOG.md"
      sh "git commit -m 'Update changelog' || echo 'No changelog changes'"
 
+     # Ask for version bump type
+     print "What type of version bump? (major/minor/patch): "
+     version_type = $stdin.gets.chomp.downcase
+
+     unless %w[major minor patch].include?(version_type)
+       puts "Invalid version type. Please use major, minor, or patch."
+       exit 1
+     end
+
      # Release
-     sh "gem bump --version patch --tag --push --release"
+     sh "gem bump --version #{version_type} --tag --push --release"
    end
  rescue LoadError
    # gem-release not available in this environment
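The release task above reads the bump type interactively and validates it before shelling out. A standalone sketch of that validation (`VALID_BUMPS` and `bump_command` are hypothetical names, not part of the gem): rejecting anything outside major/minor/patch keeps a typo at the prompt from triggering an unintended release.

```ruby
# Hypothetical sketch of the validate-then-build-command flow in the Rakefile hunk.
VALID_BUMPS = %w[major minor patch].freeze

def bump_command(answer)
  type = answer.strip.downcase
  # Fail loudly on anything that is not an allowed bump type.
  unless VALID_BUMPS.include?(type)
    raise ArgumentError, "Invalid version type: #{answer}"
  end
  "gem bump --version #{type} --tag --push --release"
end

bump_command('Patch')  # => "gem bump --version patch --tag --push --release"
```

The Rakefile itself prints a message and exits instead of raising, which is friendlier in an interactive task; the allowlist check is the important part.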
data/lib/llm_gateway/client.rb CHANGED
@@ -2,9 +2,11 @@
 
  module LlmGateway
    class Client
-     def self.chat(model, message, response_format: "text", tools: nil, system: nil)
+     def self.chat(model, message, response_format: "text", tools: nil, system: nil, api_key: nil)
        client_klass = client_class(model)
-       client = client_klass.new(model_key: model)
+       client_options = { model_key: model }
+       client_options[:api_key] = api_key if api_key
+       client = client_klass.new(**client_options)
 
        input_mapper = input_mapper_for_client(client)
        normalized_input = input_mapper.map({
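The new `api_key:` keyword is only forwarded when the caller actually supplies one, so provider clients that fall back to environment variables behave exactly as before. A standalone sketch of that option-building logic (`build_client_options` is a hypothetical name for illustration):

```ruby
# Hypothetical standalone version of the option handling in Client.chat above:
# the :api_key entry is added only when a key was passed in.
def build_client_options(model, api_key: nil)
  options = { model_key: model }
  options[:api_key] = api_key if api_key
  options
end

build_client_options('claude-sonnet-4-20250514')
# => { model_key: "claude-sonnet-4-20250514" }
build_client_options('claude-sonnet-4-20250514', api_key: 'sk-test')
# => { model_key: "claude-sonnet-4-20250514", api_key: "sk-test" }
```

This is the change behind PR #5 in the changelog: previously the only way to supply a key was through the environment.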
data/lib/llm_gateway/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module LlmGateway
-   VERSION = "0.1.3"
+   VERSION = "0.1.4"
  end
data/sample/directory_bot/file_search_bot.rb ADDED
@@ -0,0 +1,72 @@
+ require_relative 'file_search_tool'
+ require_relative 'file_search_prompt'
+ require 'debug'
+
+ # Bash File Search Assistant using LlmGateway architecture
+
+ class FileSearchBot
+   def initialize(model, api_key)
+     @transcript = []
+     @model = model
+     @api_key = api_key
+   end
+
+   def query(input)
+     process_query(input)
+   end
+
+   private
+
+   def process_query(query)
+     # Add user message to transcript
+     @transcript << { role: 'user', content: [ { type: 'text', text: query } ] }
+
+     begin
+       prompt = FileSearchPrompt.new(@model, @transcript, @api_key)
+       result = prompt.post
+       process_response(result[:choices][0][:content])
+     rescue => e
+       puts "Error: #{e.message}"
+       puts "Backtrace: #{e.backtrace.join("\n")}"
+       puts "I give up as bot"
+     end
+   end
+
+   def process_response(response)
+     # Add assistant response to transcript
+     @transcript << { role: 'assistant', content: response }
+
+     response.each do |message|
+       if message[:type] == 'text'
+         puts "\nBot: #{message[:text]}\n"
+       elsif message[:type] == 'tool_use'
+         result = handle_tool_use(message)
+
+         # Add tool result to transcript
+         tool_result = {
+           type: 'tool_result',
+           tool_use_id: message[:id],
+           content: result
+         }
+         @transcript << { role: 'user', content: [ tool_result ] }
+
+         # Continue conversation with tool result
+         follow_up_prompt = FileSearchPrompt.new(@model, @transcript, @api_key)
+         follow_up = follow_up_prompt.post
+
+         process_response(follow_up[:choices][0][:content]) if follow_up[:choices][0][:content]
+       end
+     end
+   end
+
+   def handle_tool_use(message)
+     if message[:name] == 'execute_bash_command'
+       tool = FileSearchTool.new
+       tool.execute(message[:input])
+     else
+       "Unknown tool: #{message[:name]}"
+     end
+   rescue StandardError => e
+     "Error executing tool: #{e.message}"
+   end
+ end
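The transcript protocol the bot relies on can be sketched independently of the gem: tool output goes back to the model as a user-role message wrapping a `tool_result` block that references the originating `tool_use` id (`tool_result_message` is a hypothetical helper, not part of the sample):

```ruby
# Hypothetical helper mirroring how FileSearchBot wraps tool output above
# before appending it to the conversation transcript.
def tool_result_message(tool_use_id, content)
  {
    role: 'user',
    content: [ { type: 'tool_result', tool_use_id: tool_use_id, content: content } ]
  }
end

msg = tool_result_message('toolu_01', "run.rb\nfile_search_bot.rb")
msg[:role]                    # => "user"
msg[:content].first[:type]    # => "tool_result"
```

Keeping the id linkage intact is what lets the follow-up `FileSearchPrompt#post` call continue the conversation with full context.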
data/sample/directory_bot/file_search_prompt.rb ADDED
@@ -0,0 +1,56 @@
+ require_relative 'file_search_tool'
+
+ class FileSearchPrompt < LlmGateway::Prompt
+   def initialize(model, transcript, api_key)
+     super(model)
+     @transcript = transcript
+     @api_key = api_key
+   end
+
+   def prompt
+     @transcript
+   end
+
+   def system_prompt
+     <<~SYSTEM
+       You are a helpful assistant that can find things for them in directories.
+
+       # Bash File Search Assistant
+
+       You are a bash command-line assistant specialized in helping users find information in files and directories. Your role is to translate natural language queries into effective bash commands using search and file inspection tools.
+
+       ## Available Commands
+
+       You have access to these bash commands:
+       - `find` - Locate files and directories by name, type, size, date, etc.
+       - `grep` - Search for text patterns within files
+       - `cat` - Display entire file contents
+       - `head` - Show first lines of files
+       - `tail` - Show last lines of files
+       - `ls` - List directory contents with various options
+       - `wc` - Count lines, words, characters
+       - `sort` - Sort file contents
+       - `uniq` - Remove duplicate lines
+       - `awk` - Text processing and pattern extraction
+       - `sed` - Stream editing and text manipulation
+
+       ## Your Process
+
+       1. **Understand the Query**: Parse what the user is looking for
+       2. **Choose Strategy**: Determine the best combination of commands
+       3. **Execute Commands**: Use the execute_bash_command tool with exact bash commands
+       4. **Explain**: Briefly explain what each command does
+       5. **Suggest Refinements**: Offer ways to narrow or expand the search if needed
+
+       Always use the execute_bash_command tool to run commands rather than just suggesting them.
+     SYSTEM
+   end
+
+   def tools
+     [ FileSearchTool.definition ]
+   end
+
+   def post
+     LlmGateway::Client.chat(model, prompt, tools: tools, system: system_prompt, api_key: @api_key)
+   end
+ end
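The system prompt above is built with a squiggly heredoc; `<<~` strips the shortest leading indentation shared by the body lines, which is what lets a long prompt sit indented inside the class without that whitespace leaking into the text sent to the model:

```ruby
# <<~ removes the common leading indentation but preserves relative
# indentation, so nested markdown lists in the prompt survive intact.
text = <<~SYSTEM
  You are a helpful assistant.
    - nested item
SYSTEM
text  # => "You are a helpful assistant.\n  - nested item\n"
```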
data/sample/directory_bot/file_search_tool.rb ADDED
@@ -0,0 +1,32 @@
+
+ class FileSearchTool < LlmGateway::Tool
+   name 'execute_bash_command'
+   description 'Execute bash commands for file searching and directory exploration'
+   input_schema({
+     type: 'object',
+     properties: {
+       command: { type: 'string', description: 'The bash command to execute' },
+       explanation: { type: 'string', description: 'Explanation of what the command does' }
+     },
+     required: [ 'command', 'explanation' ]
+   })
+
+   def execute(input, login = nil)
+     command = input[:command]
+     explanation = input[:explanation]
+
+     puts "Executing: #{command}"
+     puts "Purpose: #{explanation}\n\n"
+
+     begin
+       result = `#{command} 2>&1`
+       if $?.success?
+         result.empty? ? "Command completed successfully (no output)" : result
+       else
+         "Error: #{result}"
+       end
+     rescue => e
+       "Error executing command: #{e.message}"
+     end
+   end
+ end
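`FileSearchTool#execute` shells out with backticks and inspects `$?` for the child's exit status. A minimal standalone sketch of that pattern (`run_command` is a hypothetical name); note the sample runs whatever command the model proposes, so anything beyond a local demo would want an allowlist or sandbox:

```ruby
# Backticks return the captured stdout (stderr is merged in via 2>&1), and
# Ruby sets $? to the Process::Status of the child, as in the tool above.
def run_command(command)
  output = `#{command} 2>&1`
  if $?.success?
    output.empty? ? "Command completed successfully (no output)" : output
  else
    "Error: #{output}"
  end
end

run_command('echo hello')  # => "hello\n"
```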
data/sample/directory_bot/run.rb ADDED
@@ -0,0 +1,49 @@
+ require 'tty-prompt'
+ require_relative '../../lib/llm_gateway'
+ require_relative 'file_search_bot'
+
+ # Terminal Runner for FileSearchBot
+ class FileSearchTerminalRunner
+   def initialize
+     @prompt = TTY::Prompt.new
+   end
+
+   def start
+     puts "File Search Assistant - I can help you find files and search through directories."
+     puts "First, let's configure your LLM settings:\n\n"
+
+     model, api_key = setup_configuration
+     bot = FileSearchBot.new(model, api_key)
+
+     puts "\nGreat! Now you can start searching."
+     puts "Type 'quit' or 'exit' to stop.\n\n"
+
+     loop do
+       user_input = @prompt.ask("What would you like to find?")
+
+       break if [ 'quit', 'exit' ].include?(user_input.downcase)
+
+       bot.query(user_input)
+     end
+   end
+
+   private
+
+   def setup_configuration
+     model = @prompt.ask("Enter model (default: claude-3-7-sonnet-20250219):") do |q|
+       q.default 'claude-3-7-sonnet-20250219'
+     end
+
+     api_key = @prompt.mask("Enter your API key:") do |q|
+       q.required true
+     end
+
+     [ model, api_key ]
+   end
+ end
+
+ # Start the bot
+ if __FILE__ == $0
+   runner = FileSearchTerminalRunner.new
+   runner.start
+ end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: llm_gateway
  version: !ruby/object:Gem::Version
- version: 0.1.3
+ version: 0.1.4
  platform: ruby
  authors:
  - billybonks
@@ -43,6 +43,10 @@ files:
  - lib/llm_gateway/tool.rb
  - lib/llm_gateway/utils.rb
  - lib/llm_gateway/version.rb
+ - sample/directory_bot/file_search_bot.rb
+ - sample/directory_bot/file_search_prompt.rb
+ - sample/directory_bot/file_search_tool.rb
+ - sample/directory_bot/run.rb
  - sig/llm_gateway.rbs
  homepage: https://github.com/Hyper-Unearthing/llm_gateway
  licenses: