llm_gateway 0.1.2 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 98a42a5943a3b9797fa7ea4d782785e4932c65f7183eab6a452a86b2ca57e585
- data.tar.gz: cbe9e03dc5d76ebfdc8ba911b6c077b803113922f3c470c67fdadcac1b779ff8
+ metadata.gz: 0fc0c0030bd149c9e3130d1f8e6fe2ccd0bfec5d87931534e4fc29d8808ad0c7
+ data.tar.gz: 1c9bf69153cfc502b41ef27305fcba114908a9debcdc34e9e2023fef257b4968
  SHA512:
- metadata.gz: f58cb7936e3dcc9d643c1747f2d7404b2c1ca7474ad444f31178ce403351fba5ba1c981f91fabd1971097e7cb639b17709c9b62be9f784444d47d516801404b4
- data.tar.gz: 45de5da177dbdd30d1490d1f5f2115a5b5d4e7adeb213d5a4a9c874bf99ed84d557e0e9988a77194bdcac2c491eb5903e4b66659d985284032bb6bfccba6cd1f
+ metadata.gz: 27ee426f1d8c3b97054e44b0d270d4eb8dd6d263fdf0717b1a47d655758ae634d1e25c12a850e468f4d360ca00beecfc430c9bfd3ed7a0a287bd0a48febcb110
+ data.tar.gz: fbc9b70052a776a99962b173842ede5016f4398504fca58d163a60f3eda52c58b2434c91d060bec3334c8c38e083c313692fb6b5442792943fcd3008ee44a08a
data/CHANGELOG.md CHANGED
@@ -2,7 +2,25 @@
 
  ## [Unreleased](https://github.com/Hyper-Unearthing/llm_gateway/tree/HEAD)
 
- [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.0...HEAD)
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.3...HEAD)
+
+ **Merged pull requests:**
+
+ - ci: release should ask me what version i want to bump [\#7](https://github.com/Hyper-Unearthing/llm_gateway/pull/7) ([billybonks](https://github.com/billybonks))
+ - docs: create an real world example that does something interesting [\#6](https://github.com/Hyper-Unearthing/llm_gateway/pull/6) ([billybonks](https://github.com/billybonks))
+ - feat: there was no way to pass api\_key to gateway besides env [\#5](https://github.com/Hyper-Unearthing/llm_gateway/pull/5) ([billybonks](https://github.com/billybonks))
+
+ ## [v0.1.3](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.3) (2025-08-04)
+
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.2...v0.1.3)
+
+ **Merged pull requests:**
+
+ - feat: add tool base class [\#4](https://github.com/Hyper-Unearthing/llm_gateway/pull/4) ([billybonks](https://github.com/billybonks))
+
+ ## [v0.1.2](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.2) (2025-08-04)
+
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.1...v0.1.2)
 
  **Merged pull requests:**
 
@@ -10,6 +28,10 @@
  - lint files and add coverage [\#2](https://github.com/Hyper-Unearthing/llm_gateway/pull/2) ([billybonks](https://github.com/billybonks))
  - test: vcr lookup was not working when using different commands [\#1](https://github.com/Hyper-Unearthing/llm_gateway/pull/1) ([billybonks](https://github.com/billybonks))
 
+ ## [v0.1.1](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.1) (2025-08-04)
+
+ [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/v0.1.0...v0.1.1)
+
  ## [v0.1.0](https://github.com/Hyper-Unearthing/llm_gateway/tree/v0.1.0) (2025-08-04)
 
  [Full Changelog](https://github.com/Hyper-Unearthing/llm_gateway/compare/505c78116a2e778b23f319a380cd4bf6e300db89...v0.1.0)
data/README.md CHANGED
@@ -43,183 +43,21 @@ result = LlmGateway::Client.chat(
  )
  ```
 
- ### Prompt Class
+ ### Sample Application
 
- You can also create reusable prompt classes by subclassing `LlmGateway::Prompt`:
+ See the [file search bot example](sample/directory_bot/) for a complete working application that demonstrates:
+ - Creating reusable Prompt and Tool classes
+ - Handling conversation transcripts with tool execution
+ - Building an interactive terminal interface
 
- ```ruby
- # Simple text completion with prompt class
- class GeographyQuestionPrompt < LlmGateway::Prompt
-   def initialize(model, question)
-     super(model)
-     @question = question
-   end
-
-   def prompt
-     @question
-   end
- end
-
- # Usage
- geography_prompt = GeographyQuestionPrompt.new('claude-sonnet-4-20250514', 'What is the capital of France?')
- result = geography_prompt.run
-
- # With system message
- class GeographyTeacherPrompt < LlmGateway::Prompt
-   def initialize(model, question)
-     super(model)
-     @question = question
-   end
-
-   def prompt
-     @question
-   end
-
-   def system_prompt
-     'You are a helpful geography teacher.'
-   end
- end
-
- # Usage
- teacher_prompt = GeographyTeacherPrompt.new('gpt-4', 'What is the capital of France?')
- result = teacher_prompt.run
- ```
-
- ### Using Prompt with Tools
-
- You can combine the Prompt class with tools for more complex interactions:
-
- ```ruby
- class WeatherAssistantPrompt < LlmGateway::Prompt
-   def initialize(model, location)
-     super(model)
-     @location = location
-   end
-
-   def prompt
-     "What's the weather like in #{@location}?"
-   end
-
-   def system_prompt
-     'You are a helpful weather assistant.'
-   end
-
-   def tools
-     [{
-       name: 'get_weather',
-       description: 'Get current weather for a location',
-       input_schema: {
-         type: 'object',
-         properties: {
-           location: { type: 'string', description: 'City name' }
-         },
-         required: ['location']
-       }
-     }]
-   end
- end
+ To run the sample:
 
- # Usage
- weather_prompt = WeatherAssistantPrompt.new('claude-sonnet-4-20250514', 'Singapore')
- result = weather_prompt.run
+ ```bash
+ cd sample/directory_bot
+ ruby run.rb
  ```
 
- ### Tool Usage (Function Calling)
-
- ```ruby
- # Define a tool
- weather_tool = {
-   name: 'get_weather',
-   description: 'Get current weather for a location',
-   input_schema: {
-     type: 'object',
-     properties: {
-       location: { type: 'string', description: 'City name' }
-     },
-     required: ['location']
-   }
- }
-
- # Use the tool
- result = LlmGateway::Client.chat(
-   'claude-sonnet-4-20250514',
-   'What\'s the weather in Singapore?',
-   tools: [weather_tool],
-   system: 'You are a helpful weather assistant.'
- )
-
- # Note: Tools are not automatically executed. The LLM will indicate when a tool should be called,
- # but it's up to you to find the appropriate tool and execute it based on the response.
-
- # Example of handling tool execution with conversation transcript:
- class WeatherAssistant
-   def initialize
-     @transcript = []
-   end
-
-   def process_message(content)
-     # Add user message to transcript
-     @transcript << { role: 'user', content: [{ type: 'text', text: content }] }
-
-     result = LlmGateway::Client.chat(
-       'claude-sonnet-4-20250514',
-       @transcript,
-       tools: [weather_tool],
-       system: 'You are a helpful weather assistant.'
-     )
-
-     process_response(result[:choices][0][:content])
-   end
-
-   private
-
-   def process_response(response)
-     # Add assistant response to transcript
-     @transcript << { role: 'assistant', content: response }
-
-     response.each do |message|
-       if message[:type] == 'text'
-         puts message[:text]
-       elsif message[:type] == 'tool_use'
-         result = handle_tool_use(message)
-
-         # Add tool result to transcript
-         tool_result = {
-           type: 'tool_result',
-           tool_use_id: message[:id],
-           content: result
-         }
-         @transcript << { role: 'user', content: [tool_result] }
-
-         # Continue conversation with full transcript context
-         follow_up = LlmGateway::Client.chat(
-           'claude-sonnet-4-20250514',
-           @transcript,
-           tools: [weather_tool],
-           system: 'You are a helpful weather assistant.'
-         )
-
-         process_response(follow_up[:choices][0][:content])
-       end
-     end
-   end
-
-   def handle_tool_use(message)
-     tool_class = WeatherAssistantPrompt.find_tool(message[:name])
-     raise "Unknown tool: #{message[:name]}" unless tool_class
-
-     # Execute the tool with the provided input
-     tool_instance = tool_class.new
-     tool_instance.execute(message[:input])
-   rescue StandardError => e
-     "Error executing tool: #{e.message}"
-   end
- end
-
- # Usage
- assistant = WeatherAssistant.new
- assistant.process_message("What's the weather in Singapore?")
- ```
+ The bot will prompt for your model and API key, then allow you to ask natural language questions about finding files and searching directories.
 
  ### Response Format
 
data/Rakefile CHANGED
@@ -19,8 +19,17 @@ begin
    sh "git add CHANGELOG.md"
    sh "git commit -m 'Update changelog' || echo 'No changelog changes'"
 
+   # Ask for version bump type
+   print "What type of version bump? (major/minor/patch): "
+   version_type = $stdin.gets.chomp.downcase
+
+   unless %w[major minor patch].include?(version_type)
+     puts "Invalid version type. Please use major, minor, or patch."
+     exit 1
+   end
+
    # Release
-   sh "gem bump --version patch --tag --push --release"
+   sh "gem bump --version #{version_type} --tag --push --release"
  end
rescue LoadError
  # gem-release not available in this environment
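The bump-type check added to the Rakefile is easy to exercise outside Rake; the sketch below mirrors its validation step (the helper name `valid_bump_type?` is hypothetical, not part of the gem).

```ruby
# Hypothetical helper mirroring the Rakefile's bump-type validation:
# only major/minor/patch are accepted, after trimming and downcasing.
def valid_bump_type?(input)
  %w[major minor patch].include?(input.strip.downcase)
end

puts valid_bump_type?("Patch")   # accepted after normalization
puts valid_bump_type?("pancake") # rejected
```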
data/lib/llm_gateway/client.rb CHANGED
@@ -2,9 +2,11 @@
 
  module LlmGateway
    class Client
-     def self.chat(model, message, response_format: "text", tools: nil, system: nil)
+     def self.chat(model, message, response_format: "text", tools: nil, system: nil, api_key: nil)
        client_klass = client_class(model)
-       client = client_klass.new(model_key: model)
+       client_options = { model_key: model }
+       client_options[:api_key] = api_key if api_key
+       client = client_klass.new(**client_options)
 
        input_mapper = input_mapper_for_client(client)
        normalized_input = input_mapper.map({
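The hunk above only adds `:api_key` to the options hash when the caller supplies one, so clients that read the key from the environment keep working unchanged. A standalone sketch of that merge (the helper name is hypothetical):

```ruby
# Hypothetical standalone version of the option merge in Client.chat:
# :api_key is included only when the caller passes one.
def build_client_options(model, api_key: nil)
  client_options = { model_key: model }
  client_options[:api_key] = api_key if api_key
  client_options
end

p build_client_options('claude-sonnet-4-20250514')
p build_client_options('claude-sonnet-4-20250514', api_key: 'sk-test')
```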
data/lib/llm_gateway/tool.rb ADDED
@@ -0,0 +1,46 @@
+ # frozen_string_literal: true
+
+ module LlmGateway
+   class Tool
+     def initialize(*args)
+       # Empty constructor to allow subclasses to call super
+     end
+
+     def self.name(value = nil)
+       @name = value if value
+       @name
+     end
+
+     def self.description(value = nil)
+       @description = value if value
+       @description
+     end
+
+     def self.input_schema(value = nil)
+       @input_schema = value if value
+       @input_schema
+     end
+
+     def self.cache(value = nil)
+       @cache = value if value
+       @cache
+     end
+
+     def self.definition
+       {
+         name: @name,
+         description: @description,
+         input_schema: @input_schema,
+         cache_control: @cache ? { type: "ephemeral" } : nil
+       }.compact
+     end
+
+     def self.tool_name
+       definition[:name]
+     end
+
+     def execute(input, login)
+       raise NotImplementedError, "Subclasses must implement execute"
+     end
+   end
+ end
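To see how the new base class is meant to be used, here is a condensed copy of its class-level DSL together with a hypothetical subclass; `EchoTool`, its name, and its schema are illustrative, not part of the gem.

```ruby
# Condensed copy of the Tool DSL above, plus a hypothetical EchoTool subclass.
module LlmGateway
  class Tool
    # Each DSL method stores a value in a class-level ivar when given one,
    # and returns the stored value when called without arguments.
    def self.name(value = nil)
      @name = value if value
      @name
    end

    def self.description(value = nil)
      @description = value if value
      @description
    end

    def self.input_schema(value = nil)
      @input_schema = value if value
      @input_schema
    end

    def self.cache(value = nil)
      @cache = value if value
      @cache
    end

    def self.definition
      {
        name: @name,
        description: @description,
        input_schema: @input_schema,
        cache_control: @cache ? { type: "ephemeral" } : nil
      }.compact # drops cache_control when @cache was never set
    end

    def self.tool_name
      definition[:name]
    end
  end
end

class EchoTool < LlmGateway::Tool
  name 'echo'
  description 'Echo the input back'
  input_schema({ type: 'object', properties: { text: { type: 'string' } } })
end

p EchoTool.definition
```

Note the design choice: because class-level instance variables are not inherited, each subclass carries its own `@name`/`@description`/`@input_schema`, so two tool subclasses never clobber each other's definitions.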
data/lib/llm_gateway/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module LlmGateway
-   VERSION = "0.1.2"
+   VERSION = "0.1.4"
  end
data/lib/llm_gateway.rb CHANGED
@@ -7,6 +7,7 @@ require_relative "llm_gateway/fluent_mapper"
  require_relative "llm_gateway/base_client"
  require_relative "llm_gateway/client"
  require_relative "llm_gateway/prompt"
+ require_relative "llm_gateway/tool"
 
  # Load adapters - order matters for inheritance
  require_relative "llm_gateway/adapters/claude/client"
data/sample/directory_bot/file_search_bot.rb ADDED
@@ -0,0 +1,72 @@
+ require_relative 'file_search_tool'
+ require_relative 'file_search_prompt'
+ require 'debug'
+
+ # Bash File Search Assistant using LlmGateway architecture
+
+ class FileSearchBot
+   def initialize(model, api_key)
+     @transcript = []
+     @model = model
+     @api_key = api_key
+   end
+
+   def query(input)
+     process_query(input)
+   end
+
+   private
+
+   def process_query(query)
+     # Add user message to transcript
+     @transcript << { role: 'user', content: [ { type: 'text', text: query } ] }
+
+     begin
+       prompt = FileSearchPrompt.new(@model, @transcript, @api_key)
+       result = prompt.post
+       process_response(result[:choices][0][:content])
+     rescue => e
+       puts "Error: #{e.message}"
+       puts "Backtrace: #{e.backtrace.join("\n")}"
+       puts "I give up as bot"
+     end
+   end
+
+   def process_response(response)
+     # Add assistant response to transcript
+     @transcript << { role: 'assistant', content: response }
+
+     response.each do |message|
+       if message[:type] == 'text'
+         puts "\nBot: #{message[:text]}\n"
+       elsif message[:type] == 'tool_use'
+         result = handle_tool_use(message)
+
+         # Add tool result to transcript
+         tool_result = {
+           type: 'tool_result',
+           tool_use_id: message[:id],
+           content: result
+         }
+         @transcript << { role: 'user', content: [ tool_result ] }
+
+         # Continue conversation with tool result
+         follow_up_prompt = FileSearchPrompt.new(@model, @transcript, @api_key)
+         follow_up = follow_up_prompt.post
+
+         process_response(follow_up[:choices][0][:content]) if follow_up[:choices][0][:content]
+       end
+     end
+   end
+
+   def handle_tool_use(message)
+     if message[:name] == 'execute_bash_command'
+       tool = FileSearchTool.new
+       tool.execute(message[:input])
+     else
+       "Unknown tool: #{message[:name]}"
+     end
+   rescue StandardError => e
+     "Error executing tool: #{e.message}"
+   end
+ end
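The transcript the bot builds is a flat array of role/content hashes; the shapes below mirror the three message kinds appended above (the ids and strings are illustrative).

```ruby
# Illustrative message shapes, mirroring what FileSearchBot appends:
user_message = { role: 'user', content: [ { type: 'text', text: 'find all log files' } ] }

# tool_use is read via message[:id], message[:name], message[:input] above.
assistant_message = { role: 'assistant', content: [
  { type: 'tool_use', id: 'toolu_123', name: 'execute_bash_command',
    input: { command: "find . -name '*.log'", explanation: 'List log files' } }
] }

# tool_result echoes the tool_use id back so the model can pair them up.
tool_result_message = { role: 'user', content: [
  { type: 'tool_result', tool_use_id: 'toolu_123', content: './app.log' }
] }

transcript = [ user_message, assistant_message, tool_result_message ]
p transcript.map { |m| m[:role] }
```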
data/sample/directory_bot/file_search_prompt.rb ADDED
@@ -0,0 +1,56 @@
+ require_relative 'file_search_tool'
+
+ class FileSearchPrompt < LlmGateway::Prompt
+   def initialize(model, transcript, api_key)
+     super(model)
+     @transcript = transcript
+     @api_key = api_key
+   end
+
+   def prompt
+     @transcript
+   end
+
+   def system_prompt
+     <<~SYSTEM
+       You are a helpful assistant that can find things for them in directories.
+
+       # Bash File Search Assistant
+
+       You are a bash command-line assistant specialized in helping users find information in files and directories. Your role is to translate natural language queries into effective bash commands using search and file inspection tools.
+
+       ## Available Commands
+
+       You have access to these bash commands:
+       - `find` - Locate files and directories by name, type, size, date, etc.
+       - `grep` - Search for text patterns within files
+       - `cat` - Display entire file contents
+       - `head` - Show first lines of files
+       - `tail` - Show last lines of files
+       - `ls` - List directory contents with various options
+       - `wc` - Count lines, words, characters
+       - `sort` - Sort file contents
+       - `uniq` - Remove duplicate lines
+       - `awk` - Text processing and pattern extraction
+       - `sed` - Stream editing and text manipulation
+
+       ## Your Process
+
+       1. **Understand the Query**: Parse what the user is looking for
+       2. **Choose Strategy**: Determine the best combination of commands
+       3. **Execute Commands**: Use the execute_bash_command tool with exact bash commands
+       4. **Explain**: Briefly explain what each command does
+       5. **Suggest Refinements**: Offer ways to narrow or expand the search if needed
+
+       Always use the execute_bash_command tool to run commands rather than just suggesting them.
+     SYSTEM
+   end
+
+   def tools
+     [ FileSearchTool.definition ]
+   end
+
+   def post
+     LlmGateway::Client.chat(model, prompt, tools: tools, system: system_prompt, api_key: @api_key)
+   end
+ end
data/sample/directory_bot/file_search_tool.rb ADDED
@@ -0,0 +1,32 @@
+
+ class FileSearchTool < LlmGateway::Tool
+   name 'execute_bash_command'
+   description 'Execute bash commands for file searching and directory exploration'
+   input_schema({
+     type: 'object',
+     properties: {
+       command: { type: 'string', description: 'The bash command to execute' },
+       explanation: { type: 'string', description: 'Explanation of what the command does' }
+     },
+     required: [ 'command', 'explanation' ]
+   })
+
+   def execute(input, login = nil)
+     command = input[:command]
+     explanation = input[:explanation]
+
+     puts "Executing: #{command}"
+     puts "Purpose: #{explanation}\n\n"
+
+     begin
+       result = `#{command} 2>&1`
+       if $?.success?
+         result.empty? ? "Command completed successfully (no output)" : result
+       else
+         "Error: #{result}"
+       end
+     rescue => e
+       "Error executing command: #{e.message}"
+     end
+   end
+ end
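The backtick-plus-`$?` pattern in `execute` can be tried in isolation; this sketch (the helper name is hypothetical) reproduces its three branches: non-empty output on success, a placeholder for empty output, and an error string on a non-zero exit status.

```ruby
# Hypothetical standalone version of FileSearchTool's execution branch:
# capture stdout and stderr together, then branch on the exit status.
def run_shell(command)
  result = `#{command} 2>&1`
  if $?.success?
    result.empty? ? "Command completed successfully (no output)" : result
  else
    "Error: #{result}"
  end
rescue StandardError => e
  "Error executing command: #{e.message}"
end

puts run_shell('echo hello')
puts run_shell('true')
```

Worth noting as a design caveat: the sample executes model-supplied commands directly via the shell, which is fine for a local demo but should not be exposed to untrusted input.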
data/sample/directory_bot/run.rb ADDED
@@ -0,0 +1,49 @@
+ require 'tty-prompt'
+ require_relative '../../lib/llm_gateway'
+ require_relative 'file_search_bot'
+
+ # Terminal Runner for FileSearchBot
+ class FileSearchTerminalRunner
+   def initialize
+     @prompt = TTY::Prompt.new
+   end
+
+   def start
+     puts "File Search Assistant - I can help you find files and search through directories."
+     puts "First, let's configure your LLM settings:\n\n"
+
+     model, api_key = setup_configuration
+     bot = FileSearchBot.new(model, api_key)
+
+     puts "\nGreat! Now you can start searching."
+     puts "Type 'quit' or 'exit' to stop.\n\n"
+
+     loop do
+       user_input = @prompt.ask("What would you like to find?")
+
+       break if [ 'quit', 'exit' ].include?(user_input.downcase)
+
+       bot.query(user_input)
+     end
+   end
+
+   private
+
+   def setup_configuration
+     model = @prompt.ask("Enter model (default: claude-3-7-sonnet-20250219):") do |q|
+       q.default 'claude-3-7-sonnet-20250219'
+     end
+
+     api_key = @prompt.mask("Enter your API key:") do |q|
+       q.required true
+     end
+
+     [ model, api_key ]
+   end
+ end
+
+ # Start the bot
+ if __FILE__ == $0
+   runner = FileSearchTerminalRunner.new
+   runner.start
+ end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: llm_gateway
  version: !ruby/object:Gem::Version
- version: 0.1.2
+ version: 0.1.4
  platform: ruby
  authors:
  - billybonks
@@ -40,8 +40,13 @@ files:
  - lib/llm_gateway/errors.rb
  - lib/llm_gateway/fluent_mapper.rb
  - lib/llm_gateway/prompt.rb
+ - lib/llm_gateway/tool.rb
  - lib/llm_gateway/utils.rb
  - lib/llm_gateway/version.rb
+ - sample/directory_bot/file_search_bot.rb
+ - sample/directory_bot/file_search_prompt.rb
+ - sample/directory_bot/file_search_tool.rb
+ - sample/directory_bot/run.rb
  - sig/llm_gateway.rbs
  homepage: https://github.com/Hyper-Unearthing/llm_gateway
  licenses: