n2b 0.3.1 → 0.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +106 -17
- data/lib/n2b/base.rb +73 -27
- data/lib/n2b/cli.rb +45 -16
- data/lib/n2b/config/models.yml +47 -0
- data/lib/n2b/llm/claude.rb +14 -3
- data/lib/n2b/llm/gemini.rb +13 -5
- data/lib/n2b/llm/ollama.rb +129 -0
- data/lib/n2b/llm/open_ai.rb +13 -3
- data/lib/n2b/llm/open_router.rb +116 -0
- data/lib/n2b/model_config.rb +112 -0
- data/lib/n2b/version.rb +1 -1
- metadata +7 -3
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 58382cb632f67e6ad3ca031c71a2c53bd5cd477ea32963d89f0d253d8b43bbeb
+  data.tar.gz: facb990aad05d93cac662fdbf60075ad3606dd43083f408a8e0d60d5940df359
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 8dd53c35513bf7071c5aecb7f724ebc44f522a166ef1ca8dc4aecfda3f9cd801dd7d85bce6762a4fc8bd5e1a98aa87eaa9e8080db3302a17d5875f8010b90867
+  data.tar.gz: 866846f2c517fe7dcee27172640151b1d6e567161b473a7c5a39c3894df15ea25b43e2278b24fd783f93c5eac8f507e7d331a85ef5fc5084990f89c227c19485
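
For reference, a published checksum can be checked locally with Ruby's standard library. This is a minimal sketch, not part of the gem: it assumes you have already unpacked `metadata.gz` from the downloaded `n2b-0.4.0.gem` archive (a `.gem` file is a tar containing `metadata.gz` and `data.tar.gz`), and the local path is an assumption.

```ruby
require 'digest'

# SHA256 for metadata.gz as published above
expected = '58382cb632f67e6ad3ca031c71a2c53bd5cd477ea32963d89f0d253d8b43bbeb'
actual   = Digest::SHA256.file('metadata.gz').hexdigest  # path is an assumption
puts(actual == expected ? 'checksum OK' : 'checksum MISMATCH')
```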
data/README.md
CHANGED
@@ -16,6 +16,15 @@ N2B (Natural Language to Bash & Ruby) is a Ruby gem that leverages AI to convert
 - **📊 Errbit Integration**: Analyze Errbit errors and generate detailed reports
 - **🎫 Scrum Tickets**: Create formatted Scrum tickets from errors
 
+### 🆕 **New in v0.4.0: Flexible Model Configuration**
+
+- **🎯 Multiple LLM Providers**: Claude, OpenAI, Gemini, OpenRouter, Ollama
+- **🔧 Custom Models**: Use any model name - fine-tunes, beta models, custom deployments
+- **📋 Suggested Models**: Curated lists of latest models with easy selection
+- **🚀 Latest Models**: OpenAI O3/O4 series, Gemini 2.5, updated OpenRouter models
+- **🔄 Backward Compatible**: Existing configurations continue working seamlessly
+- **⚡ No Restrictions**: Direct API model names accepted without validation
+
 ## Installation
 
 ```bash
@@ -61,16 +70,74 @@ n2rscrum "Create a user authentication system"
 
 ## Configuration
 
-
+N2B now features a **flexible model configuration system** that supports both suggested models and custom model names across all providers.
+
+### Quick Setup
 
-
-
-
-
-
-
-
-
+Run the configuration command to get started:
+
+```bash
+n2b -c
+```
+
+This will guide you through:
+- Selecting your preferred LLM provider (Claude, OpenAI, Gemini, OpenRouter, Ollama)
+- Choosing from suggested models or entering custom model names
+- Setting up API keys
+- Configuring privacy settings
+
+### Supported Models
+
+#### 🤖 **Claude** (Default: sonnet)
+- `haiku` → claude-3-haiku-20240307
+- `sonnet` → claude-3-sonnet-20240229
+- `sonnet35` → claude-3-5-sonnet-20240620
+- `sonnet37` → claude-3-7-sonnet-20250219
+- `sonnet40` → claude-sonnet-4-20250514
+- **Custom models**: Any Claude model name
+
+#### 🧠 **OpenAI** (Default: gpt-4o-mini)
+- `gpt-4o` → gpt-4o
+- `gpt-4o-mini` → gpt-4o-mini
+- `o3` → o3 (Latest reasoning model)
+- `o3-mini` → o3-mini
+- `o3-mini-high` → o3-mini-high
+- `o4` → o4
+- `o4-mini` → o4-mini
+- `o4-mini-high` → o4-mini-high
+- **Custom models**: Any OpenAI model name
+
+#### 🔮 **Gemini** (Default: gemini-2.5-flash)
+- `gemini-2.5-flash` → gemini-2.5-flash-preview-05-20
+- `gemini-2.5-pro` → gemini-2.5-pro-preview-05-06
+- **Custom models**: Any Gemini model name
+
+#### 🌐 **OpenRouter** (Default: deepseek-v3)
+- `deepseek-v3` → deepseek-v3-0324
+- `deepseek-r1-llama-8b` → deepseek-r1-distill-llama-8b
+- `llama-3.3-70b` → llama-3.3-70b-instruct
+- `llama-3.3-8b` → llama-3.3-8b-instruct
+- `wayfinder-large` → wayfinder-large-70b-llama-3.3
+- **Custom models**: Any OpenRouter model name
+
+#### 🦙 **Ollama** (Default: llama3)
+- `llama3` → llama3
+- `mistral` → mistral
+- `codellama` → codellama
+- `qwen` → qwen2.5
+- **Custom models**: Any locally available Ollama model
+
+### Custom Configuration File
+
+You can use a custom config file by setting the `N2B_CONFIG_FILE` environment variable:
+
+```bash
+export N2B_CONFIG_FILE=/path/to/your/config.yml
+```
+
+You can also set the history file location using the `N2B_HISTORY_FILE` environment variable:
+```bash
+export N2B_HISTORY_FILE=/path/to/your/history
 ```
 
 ## Quick Example N2B
@@ -119,16 +186,38 @@ input_string.to_s.scan(/[\/\w.-]+\.rb(?=\s|:|$)/)
 Install the gem by running:
     gem install n2b
 
-##
+## Enhanced Model Selection
 
-
-
+N2B v0.4.0 introduces a **flexible model configuration system**:
+
+### Interactive Model Selection
+
+When you run `n2b -c`, you'll see an enhanced interface:
+
+```
+Choose a model for openai:
+  1. gpt-4o (gpt-4o)
+  2. gpt-4o-mini (gpt-4o-mini) [default]
+  3. o3 (o3)
+  4. o3-mini (o3-mini)
+  5. o3-mini-high (o3-mini-high)
+  6. o4 (o4)
+  7. o4-mini (o4-mini)
+  8. o4-mini-high (o4-mini-high)
+  9. custom (enter your own model name)
+
+Enter choice (1-9) or model name [gpt-4o-mini]: 9
+Enter custom model name: gpt-5-preview
+✓ Using custom model: gpt-5-preview
+```
+
+### Key Features
 
-
--
--
--
--
+- **🎯 Suggested Models**: Curated list of latest models for each provider
+- **🔧 Custom Models**: Enter any model name for fine-tunes, beta models, etc.
+- **🔄 Backward Compatible**: Existing configurations continue working
+- **🚀 Latest Models**: Access to newest OpenAI O-series, Gemini 2.5, etc.
+- **⚡ No Validation**: Direct API model names accepted - let the API handle errors
 
 Configuration is stored in `~/.n2b/config.yml`.
 
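For reference, the `base.rb` diff below shows which keys `n2b -c` writes. A minimal sketch of the hash it assembles and persists to `~/.n2b/config.yml` — an OpenRouter setup where every value is a placeholder:

```ruby
# Sketch only: key names are taken from the base.rb diff below; all values
# are placeholders, not real credentials.
config = {
  'llm'                  => 'openrouter',
  'model'                => 'deepseek-v3-0324',     # already resolved to the API model name
  'access_key'           => 'sk-or-...',            # placeholder API key
  'openrouter_site_url'  => 'https://example.com',  # optional, sent as HTTP-Referer
  'openrouter_site_name' => 'My App',               # optional, sent as X-Title
  'privacy' => {
    'send_shell_history'     => false,
    'send_llm_history'       => true,
    'send_current_directory' => true
  },
  'append_to_shell_history' => false
}
```
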
data/lib/n2b/base.rb
CHANGED
@@ -1,3 +1,5 @@
+require_relative 'model_config'
+
 module N2B
   class Base
 
@@ -14,42 +16,86 @@ module N2B
 
     def get_config( reconfigure: false)
       config = load_config
-      api_key = ENV['CLAUDE_API_KEY'] || config['access_key']
-      model = config['model']
+      api_key = ENV['CLAUDE_API_KEY'] || config['access_key'] # This will be used as default or for existing configs
+      model = config['model'] # Model will be handled per LLM
 
-      if api_key.nil? || api_key == '' ||
-        print "choose a language model to use (1:claude, 2:openai, 3:gemini) #{ config['llm'] }: "
+      if api_key.nil? || api_key == '' || config['llm'].nil? || reconfigure # Added config['llm'].nil? to force config if llm type isn't set
+        print "choose a language model to use (1:claude, 2:openai, 3:gemini, 4:openrouter, 5:ollama) #{ config['llm'] }: "
         llm = $stdin.gets.chomp
-        llm = config['llm'] if llm.empty?
-
-
+        llm = config['llm'] if llm.empty? && !config['llm'].nil? # Keep current if input is empty and current exists
+
+        unless ['claude', 'openai', 'gemini', 'openrouter', 'ollama', '1', '2', '3', '4', '5'].include?(llm)
+          puts "Invalid language model. Choose from: claude, openai, gemini, openrouter, ollama, or 1-5"
           exit 1
         end
         llm = 'claude' if llm == '1'
         llm = 'openai' if llm == '2'
         llm = 'gemini' if llm == '3'
-
-
-
-
-
-
-
-
-
-        print "Enter your #{llm} API key: #{ api_key.nil? || api_key.empty? ? '' : '(leave blank to keep the current key '+api_key[0..10]+'...)' }"
-        api_key = $stdin.gets.chomp
-        api_key = config['access_key'] if api_key.empty?
-        print "Choose a model (#{ llm_class::MODELS.keys }, #{ llm_class::MODELS.keys.first } default): "
-        model = $stdin.gets.chomp
-        model = llm_class::MODELS.keys.first if model.empty?
+        llm = 'openrouter' if llm == '4'
+        llm = 'ollama' if llm == '5'
+
+        # Set default LLM if needed
+        if llm.nil? || llm.empty?
+          llm = 'claude'
+        end
+
         config['llm'] = llm
-
-
-
-        puts "
-
+
+        if llm == 'ollama'
+          # Ollama specific configuration
+          puts "Ollama typically runs locally and doesn't require an API key."
+
+          # Use new model configuration system
+          current_model = config['model']
+          model = N2B::ModelConfig.get_model_choice(llm, current_model)
+
+          # Configure Ollama API URL
+          current_ollama_api_url = config['ollama_api_url'] || N2M::Llm::Ollama::DEFAULT_OLLAMA_API_URI
+          print "Enter Ollama API base URL (default: #{current_ollama_api_url}): "
+          ollama_api_url_input = $stdin.gets.chomp
+          config['ollama_api_url'] = ollama_api_url_input.empty? ? current_ollama_api_url : ollama_api_url_input
+          config.delete('access_key') # Remove access_key if switching to ollama
+        else
+          # Configuration for API key based LLMs (Claude, OpenAI, Gemini, OpenRouter)
+          current_api_key = config['access_key'] # Use existing key from config as default
+          print "Enter your #{llm} API key: #{ current_api_key.nil? || current_api_key.empty? ? '' : '(leave blank to keep the current key)' }"
+          api_key_input = $stdin.gets.chomp
+          config['access_key'] = api_key_input if !api_key_input.empty?
+          config['access_key'] = current_api_key if api_key_input.empty? && !current_api_key.nil? && !current_api_key.empty?
+
+          # Ensure API key is present if not Ollama
+          if config['access_key'].nil? || config['access_key'].empty?
+            puts "API key is required for #{llm}."
+            exit 1
+          end
+
+          # Use new model configuration system
+          current_model = config['model']
+          model = N2B::ModelConfig.get_model_choice(llm, current_model)
+
+          if llm == 'openrouter'
+            current_site_url = config['openrouter_site_url'] || ""
+            print "Enter your OpenRouter Site URL (optional, for HTTP-Referer header, current: #{current_site_url}): "
+            openrouter_site_url_input = $stdin.gets.chomp
+            config['openrouter_site_url'] = openrouter_site_url_input.empty? ? current_site_url : openrouter_site_url_input
+
+            current_site_name = config['openrouter_site_name'] || ""
+            print "Enter your OpenRouter Site Name (optional, for X-Title header, current: #{current_site_name}): "
+            openrouter_site_name_input = $stdin.gets.chomp
+            config['openrouter_site_name'] = openrouter_site_name_input.empty? ? current_site_name : openrouter_site_name_input
+          end
         end
+
+        config['model'] = model # Store selected model for all types
+
+        # Ensure privacy settings are initialized if not present
+        config['privacy'] ||= {}
+        # Set defaults for any privacy settings that are nil
+        config['privacy']['send_shell_history'] = false if config['privacy']['send_shell_history'].nil?
+        config['privacy']['send_llm_history'] = true if config['privacy']['send_llm_history'].nil?
+        config['privacy']['send_current_directory'] = true if config['privacy']['send_current_directory'].nil?
+        config['append_to_shell_history'] = false if config['append_to_shell_history'].nil?
+
       puts "configure privacy settings directly in the config file #{CONFIG_FILE}"
       config['privacy'] ||= {}
      config['privacy']['send_shell_history'] = false
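One detail worth noting in the privacy defaults above: the diff uses explicit `.nil?` checks rather than `||=`, because `||=` would overwrite a legitimately stored `false`. A standalone sketch of the difference:

```ruby
privacy = { 'send_llm_history' => false }  # user has opted out

# The idiom used in base.rb: only fill a default when the key is truly unset.
privacy['send_llm_history'] = true if privacy['send_llm_history'].nil?
privacy['send_llm_history']   # => false, the opt-out is preserved

# A naive ||= would clobber the stored false:
privacy['send_llm_history'] ||= true
privacy['send_llm_history']   # => true, the opt-out is lost
```
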
data/lib/n2b/cli.rb
CHANGED
@@ -457,6 +457,10 @@ JSON_INSTRUCTION
       N2M::Llm::Claude.new(config)
     when 'gemini'
       N2M::Llm::Gemini.new(config)
+    when 'openrouter'
+      N2M::Llm::OpenRouter.new(config)
+    when 'ollama'
+      N2M::Llm::Ollama.new(config)
     else
       # Should not happen if config is validated, but as a safeguard:
       raise N2B::Error, "Unsupported LLM service: #{llm_service_name}"
@@ -466,6 +470,13 @@ JSON_INSTRUCTION
       response_json_str
     rescue N2B::LlmApiError => e # This catches errors from analyze_code_diff
       puts "Error communicating with the LLM: #{e.message}"
+
+      # Check if it might be a model-related error
+      if e.message.include?('model') || e.message.include?('Model') || e.message.include?('invalid') || e.message.include?('not found')
+        puts "\nThis might be due to an invalid or unsupported model configuration."
+        puts "Run 'n2b -c' to reconfigure your model settings."
+      end
+
       return '{"summary": "Error: Could not analyze diff due to LLM API error.", "errors": [], "improvements": []}'
     end
   end
@@ -485,7 +496,23 @@ JSON_INSTRUCTION
 
     def call_llm(prompt, config)
       begin # Added begin for LlmApiError rescue
-
+        llm_service_name = config['llm']
+        llm = case llm_service_name
+              when 'openai'
+                N2M::Llm::OpenAi.new(config)
+              when 'claude'
+                N2M::Llm::Claude.new(config)
+              when 'gemini'
+                N2M::Llm::Gemini.new(config)
+              when 'openrouter'
+                N2M::Llm::OpenRouter.new(config)
+              when 'ollama'
+                N2M::Llm::Ollama.new(config)
+              else
+                # Fallback or error, though config validation should prevent this
+                puts "Warning: Unsupported LLM service '#{llm_service_name}' configured. Falling back to Claude."
+                N2M::Llm::Claude.new(config)
+              end
 
         # This content is specific to bash command generation
         content = <<-EOF
@@ -508,24 +535,26 @@ JSON_INSTRUCTION
         # which implies it expects a Hash. Let's ensure call_llm returns a Hash.
         # This internal JSON parsing is for the *content* of a successful LLM response.
         # The LlmApiError for network/auth issues should be caught before this.
-
-        # Check if response_json_str is already a Hash (parsed JSON)
-        if response_json_str.is_a?(Hash)
-          response_json_str
-        else
+        begin
           parsed_response = JSON.parse(response_json_str)
           parsed_response
+        rescue JSON::ParserError => e
+          puts "Error parsing LLM response JSON for command generation: #{e.message}"
+          # This is a fallback for when the LLM response *content* is not valid JSON.
+          { "commands" => ["echo 'Error: LLM returned invalid JSON content.'"], "explanation" => "The response from the language model was not valid JSON." }
         end
-      rescue
-        puts "Error
-
-
-
-
-
-
-
-
+      rescue N2B::LlmApiError => e
+        puts "Error communicating with the LLM: #{e.message}"
+
+        # Check if it might be a model-related error
+        if e.message.include?('model') || e.message.include?('Model') || e.message.include?('invalid') || e.message.include?('not found')
+          puts "\nThis might be due to an invalid or unsupported model configuration."
+          puts "Run 'n2b -c' to reconfigure your model settings."
+        end
+
+        # This is the fallback for LlmApiError (network, auth, etc.)
+        { "commands" => ["echo 'LLM API error occurred. Please check your configuration and network.'"], "explanation" => "Failed to connect to the LLM." }
+      end
     end
 
     def get_user_shell
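The net effect of the changes above is that `call_llm` always hands a Hash back to its caller: parsed JSON on success, and the same `{ "commands", "explanation" }` shape on either `JSON::ParserError` or `N2B::LlmApiError`. A minimal sketch of that parse-or-wrap contract (the `normalize` helper is hypothetical, for illustration only):

```ruby
require 'json'

# Hypothetical helper illustrating the parse-or-wrap contract from the diff above.
def normalize(response_json_str)
  JSON.parse(response_json_str)
rescue JSON::ParserError
  { 'commands'    => ["echo 'Error: LLM returned invalid JSON content.'"],
    'explanation' => 'The response from the language model was not valid JSON.' }
end

normalize('{"commands": ["ls -la"], "explanation": "list files"}')
# => {"commands"=>["ls -la"], "explanation"=>"list files"}
normalize('not json at all')
# => fallback hash with the same shape
```
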
data/lib/n2b/config/models.yml
ADDED
@@ -0,0 +1,47 @@
+# N2B Model Configuration
+# Format: display_name -> api_model_name
+# Users can select from suggested models or enter custom model names
+
+claude:
+  suggested:
+    haiku: "claude-3-haiku-20240307"
+    sonnet: "claude-3-sonnet-20240229"
+    sonnet35: "claude-3-5-sonnet-20240620"
+    sonnet37: "claude-3-7-sonnet-20250219"
+    sonnet40: "claude-sonnet-4-20250514"
+  default: "sonnet"
+
+openai:
+  suggested:
+    gpt-4o: "gpt-4o"
+    gpt-4o-mini: "gpt-4o-mini"
+    o3: "o3"
+    o3-mini: "o3-mini"
+    o3-mini-high: "o3-mini-high"
+    o4: "o4"
+    o4-mini: "o4-mini"
+    o4-mini-high: "o4-mini-high"
+  default: "gpt-4o-mini"
+
+gemini:
+  suggested:
+    gemini-2.5-flash: "gemini-2.5-flash-preview-05-20"
+    gemini-2.5-pro: "gemini-2.5-pro-preview-05-06"
+  default: "gemini-2.5-flash"
+
+openrouter:
+  suggested:
+    deepseek-v3: "deepseek-v3-0324"
+    deepseek-r1-llama-8b: "deepseek-r1-distill-llama-8b"
+    llama-3.3-70b: "llama-3.3-70b-instruct"
+    llama-3.3-8b: "llama-3.3-8b-instruct"
+    wayfinder-large: "wayfinder-large-70b-llama-3.3"
+  default: "deepseek-v3"
+
+ollama:
+  suggested:
+    llama3: "llama3"
+    mistral: "mistral"
+    codellama: "codellama"
+    qwen: "qwen2.5"
+  default: "llama3"
data/lib/n2b/llm/claude.rb
CHANGED
@@ -1,13 +1,24 @@
+require_relative '../model_config'
+
 module N2M
   module Llm
     class Claude
       API_URI = URI.parse('https://api.anthropic.com/v1/messages')
-      MODELS = { 'haiku' => 'claude-3-haiku-20240307', 'sonnet' => 'claude-3-sonnet-20240229', 'sonnet35' => 'claude-3-5-sonnet-20240620', "sonnet37" => "claude-3-7-sonnet-20250219" }
 
       def initialize(config)
         @config = config
       end
 
+      def get_model_name
+        # Resolve model name using the centralized configuration
+        model_name = N2B::ModelConfig.resolve_model('claude', @config['model'])
+        if model_name.nil? || model_name.empty?
+          # Fallback to default if no model specified
+          model_name = N2B::ModelConfig.resolve_model('claude', N2B::ModelConfig.default_model('claude'))
+        end
+        model_name
+      end
+
       def make_request( content)
         uri = URI.parse('https://api.anthropic.com/v1/messages')
         request = Net::HTTP::Post.new(uri)
@@ -16,7 +27,7 @@ module N2M
         request['anthropic-version'] = '2023-06-01'
 
         request.body = JSON.dump({
-          "model" =>
+          "model" => get_model_name,
           "max_tokens" => 1024,
           "messages" => [
             {
@@ -71,7 +82,7 @@ module N2M
         request['anthropic-version'] = '2023-06-01'
 
         request.body = JSON.dump({
-          "model" =>
+          "model" => get_model_name,
           "max_tokens" => @config['max_tokens'] || 1024, # Allow overriding max_tokens from config
           "messages" => [
             {
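The `get_model_name` pattern added here is repeated in the Gemini, OpenAi, OpenRouter and Ollama classes below: suggested keys resolve to API names via `N2B::ModelConfig`, and anything else passes through unchanged. A sketch, assuming the mappings from `config/models.yml` above (the custom name is a placeholder):

```ruby
client = N2M::Llm::Claude.new({ 'model' => 'sonnet37' })
client.get_model_name  # => "claude-3-7-sonnet-20250219" (suggested key resolved)

client = N2M::Llm::Claude.new({ 'model' => 'my-custom-claude' })  # placeholder name
client.get_model_name  # => "my-custom-claude" (custom names pass through as-is)
```
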
data/lib/n2b/llm/gemini.rb
CHANGED
@@ -1,21 +1,29 @@
 require 'net/http'
 require 'json'
 require 'uri'
+require_relative '../model_config'
 
 module N2M
   module Llm
     class Gemini
       API_URI = URI.parse('https://generativelanguage.googleapis.com/v1beta/models')
-      MODELS = {
-        'gemini-flash' => 'gemini-2.0-flash'
-      }
 
       def initialize(config)
         @config = config
       end
 
+      def get_model_name
+        # Resolve model name using the centralized configuration
+        model_name = N2B::ModelConfig.resolve_model('gemini', @config['model'])
+        if model_name.nil? || model_name.empty?
+          # Fallback to default if no model specified
+          model_name = N2B::ModelConfig.resolve_model('gemini', N2B::ModelConfig.default_model('gemini'))
+        end
+        model_name
+      end
+
       def make_request(content)
-        model =
+        model = get_model_name
         uri = URI.parse("#{API_URI}/#{model}:generateContent?key=#{@config['access_key']}")
 
         request = Net::HTTP::Post.new(uri)
@@ -65,7 +73,7 @@ module N2M
       def analyze_code_diff(prompt_content)
         # This method assumes prompt_content is the full, ready-to-send prompt
         # including all instructions for the LLM (system message, diff, user additions, JSON format).
-        model =
+        model = get_model_name
         uri = URI.parse("#{API_URI}/#{model}:generateContent?key=#{@config['access_key']}")
 
         request = Net::HTTP::Post.new(uri)
data/lib/n2b/llm/ollama.rb
ADDED
@@ -0,0 +1,129 @@
+require 'net/http'
+require 'json'
+require 'uri'
+require_relative '../model_config'
+
+module N2M
+  module Llm
+    class Ollama
+      # Default API URI for Ollama. This might need to be configurable later.
+      DEFAULT_OLLAMA_API_URI = 'http://localhost:11434/api/chat'
+
+      def initialize(config)
+        @config = config
+        # Allow overriding the Ollama API URI from config if needed
+        @api_uri = URI.parse(@config['ollama_api_url'] || DEFAULT_OLLAMA_API_URI)
+      end
+
+      def get_model_name
+        # Resolve model name using the centralized configuration
+        model_name = N2B::ModelConfig.resolve_model('ollama', @config['model'])
+        if model_name.nil? || model_name.empty?
+          # Fallback to default if no model specified
+          model_name = N2B::ModelConfig.resolve_model('ollama', N2B::ModelConfig.default_model('ollama'))
+        end
+        model_name
+      end
+
+      def make_request(prompt_content)
+        request = Net::HTTP::Post.new(@api_uri)
+        request.content_type = 'application/json'
+
+        # Ollama expects the model name directly in the request body.
+        # It also expects the full message history.
+        request.body = JSON.dump({
+          "model" => get_model_name,
+          "messages" => [
+            {
+              "role" => "user",
+              "content" => prompt_content
+            }
+          ],
+          "stream" => false # Ensure we get the full response, not a stream
+          # "format" => "json" # For some Ollama versions/models to enforce JSON output
+        })
+
+        begin
+          response = Net::HTTP.start(@api_uri.hostname, @api_uri.port, use_ssl: @api_uri.scheme == 'https') do |http|
+            # Set timeouts: open_timeout for connection, read_timeout for waiting for response
+            http.open_timeout = 5 # seconds
+            http.read_timeout = 120 # seconds
+            http.request(request)
+          end
+        rescue Net::OpenTimeout, Net::ReadTimeout => e
+          raise N2B::LlmApiError.new("Ollama API Error: Timeout connecting or reading from Ollama at #{@api_uri}: #{e.message}")
+        rescue Errno::ECONNREFUSED => e
+          raise N2B::LlmApiError.new("Ollama API Error: Connection refused at #{@api_uri}. Is Ollama running? #{e.message}")
+        end
+
+        if response.code != '200'
+          raise N2B::LlmApiError.new("Ollama API Error: #{response.code} #{response.message} - #{response.body}")
+        end
+
+        # Ollama's chat response structure is slightly different. The message is in `message.content`.
+        raw_response_body = JSON.parse(response.body)
+        answer_content = raw_response_body['message']['content']
+
+        begin
+          # Attempt to parse the answer_content as JSON
+          # This is for n2b's expectation of JSON with 'commands' and 'explanation'
+          parsed_answer = JSON.parse(answer_content)
+          if parsed_answer.is_a?(Hash) && parsed_answer.key?('commands')
+            parsed_answer
+          else
+            # If the content itself is valid JSON but not the expected structure, wrap it.
+            { 'commands' => [answer_content], 'explanation' => 'Response from LLM (JSON content).' }
+          end
+        rescue JSON::ParserError
+          # If answer_content is not JSON, wrap it in the n2b expected structure
+          { 'commands' => [answer_content], 'explanation' => answer_content }
+        end
+      end
+
+      def analyze_code_diff(prompt_content)
+        request = Net::HTTP::Post.new(@api_uri)
+        request.content_type = 'application/json'
+
+        # The prompt_content for diff analysis should instruct the LLM to return JSON.
+        # For Ollama, you can also try adding "format": "json" to the request if the model supports it.
+        request_body = {
+          "model" => @config['model'] || MODELS.keys.first,
+          "messages" => [
+            {
+              "role" => "user",
+              "content" => prompt_content # This prompt must ask for JSON output
+            }
+          ],
+          "stream" => false
+        }
+        # Some Ollama models/versions might respect a "format": "json" parameter
+        # request_body["format"] = "json" # Uncomment if you want to try this
+
+        request.body = JSON.dump(request_body)
+
+        begin
+          response = Net::HTTP.start(@api_uri.hostname, @api_uri.port, use_ssl: @api_uri.scheme == 'https') do |http|
+            http.open_timeout = 5
+            http.read_timeout = 180 # Potentially longer for analysis
+            http.request(request)
+          end
+        rescue Net::OpenTimeout, Net::ReadTimeout => e
+          raise N2B::LlmApiError.new("Ollama API Error (analyze_code_diff): Timeout for #{@api_uri}: #{e.message}")
+        rescue Errno::ECONNREFUSED => e
+          raise N2B::LlmApiError.new("Ollama API Error (analyze_code_diff): Connection refused at #{@api_uri}. Is Ollama running? #{e.message}")
+        end
+
+        if response.code != '200'
+          raise N2B::LlmApiError.new("Ollama API Error (analyze_code_diff): #{response.code} #{response.message} - #{response.body}")
+        end
+
+        # Return the raw JSON string from the LLM's response content.
+        # The calling method (call_llm_for_diff_analysis in cli.rb) will parse this.
+        raw_response_body = JSON.parse(response.body)
+        raw_response_body['message']['content']
+      end
+    end
+  end
+end
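A usage sketch for the new client: pointing it at a non-default Ollama host via the `ollama_api_url` key introduced in base.rb (host and prompt are placeholders):

```ruby
config = {
  'llm'            => 'ollama',
  'model'          => 'qwen',  # resolves to "qwen2.5" via models.yml
  'ollama_api_url' => 'http://192.168.1.50:11434/api/chat'  # placeholder host
}
client = N2M::Llm::Ollama.new(config)
client.make_request('list all ruby files modified this week')
# => { 'commands' => [...], 'explanation' => ... }, or raises N2B::LlmApiError
#    if Ollama is unreachable or returns a non-200 response.
```
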
data/lib/n2b/llm/open_ai.rb
CHANGED
@@ -1,24 +1,34 @@
 require 'net/http'
 require 'json'
 require 'uri'
+require_relative '../model_config'
 
 module N2M
   module Llm
     class OpenAi
       API_URI = URI.parse('https://api.openai.com/v1/chat/completions')
-      MODELS = { 'gpt-4o' => 'gpt-4o','gpt-4o-mini'=>'gpt-4o-mini', 'gpt-35' => 'gpt-3.5-turbo-1106' }
 
       def initialize(config)
         @config = config
       end
 
+      def get_model_name
+        # Resolve model name using the centralized configuration
+        model_name = N2B::ModelConfig.resolve_model('openai', @config['model'])
+        if model_name.nil? || model_name.empty?
+          # Fallback to default if no model specified
+          model_name = N2B::ModelConfig.resolve_model('openai', N2B::ModelConfig.default_model('openai'))
+        end
+        model_name
+      end
+
       def make_request(content)
         request = Net::HTTP::Post.new(API_URI)
         request.content_type = 'application/json'
         request['Authorization'] = "Bearer #{@config['access_key']}"
 
         request.body = JSON.dump({
-          "model" =>
+          "model" => get_model_name,
           response_format: { type: 'json_object' },
           "messages" => [
             {
@@ -54,7 +64,7 @@ module N2M
         request['Authorization'] = "Bearer #{@config['access_key']}"
 
         request.body = JSON.dump({
-          "model" =>
+          "model" => get_model_name,
           "response_format" => { "type" => "json_object" }, # Crucial for OpenAI to return JSON
           "messages" => [
             {
data/lib/n2b/llm/open_router.rb
ADDED
@@ -0,0 +1,116 @@
+require 'net/http'
+require 'json'
+require 'uri'
+require_relative '../model_config'
+
+module N2M
+  module Llm
+    class OpenRouter
+      API_URI = URI.parse('https://openrouter.ai/api/v1/chat/completions')
+
+      def initialize(config)
+        @config = config
+        @api_key = @config['access_key']
+        @site_url = @config['openrouter_site_url'] || '' # Optional: Read from config
+        @site_name = @config['openrouter_site_name'] || '' # Optional: Read from config
+      end
+
+      def get_model_name
+        # Resolve model name using the centralized configuration
+        model_name = N2B::ModelConfig.resolve_model('openrouter', @config['model'])
+        if model_name.nil? || model_name.empty?
+          # Fallback to default if no model specified
+          model_name = N2B::ModelConfig.resolve_model('openrouter', N2B::ModelConfig.default_model('openrouter'))
+        end
+        model_name
+      end
+
+      def make_request(prompt_content)
+        request = Net::HTTP::Post.new(API_URI)
+        request.content_type = 'application/json'
+        request['Authorization'] = "Bearer #{@api_key}"
+
+        # Add OpenRouter specific headers
+        request['HTTP-Referer'] = @site_url unless @site_url.empty?
+        request['X-Title'] = @site_name unless @site_name.empty?
+
+        request.body = JSON.dump({
+          "model" => get_model_name,
+          "messages" => [
+            {
+              "role" => "user",
+              "content" => prompt_content
+            }
+          ]
+          # TODO: Consider adding max_tokens, temperature, etc. from @config if needed
+        })
+
+        response = Net::HTTP.start(API_URI.hostname, API_URI.port, use_ssl: true) do |http|
+          http.request(request)
+        end
+
+        if response.code != '200'
+          raise N2B::LlmApiError.new("LLM API Error: #{response.code} #{response.message} - #{response.body}")
+        end
+
+        # Assuming OpenRouter returns a similar structure to OpenAI for chat completions
+        answer_content = JSON.parse(response.body)['choices'].first['message']['content']
+
+        begin
+          # Attempt to parse the answer as JSON, as expected by the calling CLI's process_natural_language_command
+          parsed_answer = JSON.parse(answer_content)
+          # Ensure it has the 'commands' and 'explanation' structure if it's for n2b's command generation
+          # This might need adjustment based on how `make_request` is used.
+          # If it's just for generic requests, this parsing might be too specific.
+          # For now, mirroring the OpenAI class's attempt to parse JSON from the content.
+          if parsed_answer.is_a?(Hash) && parsed_answer.key?('commands')
+            parsed_answer
+          else
+            # If the content itself isn't the JSON structure n2b expects,
+            # but is valid JSON, return it. Otherwise, wrap it.
+            # This part needs to be robust based on actual OpenRouter responses.
+            { 'commands' => [answer_content], 'explanation' => 'Response from LLM.' } # Fallback
+          end
+        rescue JSON::ParserError
+          # If the content isn't JSON, wrap it in the expected structure for n2b
+          { 'commands' => [answer_content], 'explanation' => answer_content }
+        end
+      end
+
+      def analyze_code_diff(prompt_content)
+        request = Net::HTTP::Post.new(API_URI) # Chat completions endpoint
+        request.content_type = 'application/json'
+        request['Authorization'] = "Bearer #{@api_key}"
+
+        # Add OpenRouter specific headers
+        request['HTTP-Referer'] = @site_url unless @site_url.empty?
+        request['X-Title'] = @site_name unless @site_name.empty?
+
+        # The prompt_content for diff analysis should already instruct the LLM to return JSON.
+        request.body = JSON.dump({
+          "model" => get_model_name,
+          # "response_format" => { "type" => "json_object" }, # Some models on OpenRouter might support this
+          "messages" => [
+            {
+              "role" => "user",
+              "content" => prompt_content # This prompt should ask for JSON output
+            }
+          ],
+          "max_tokens" => @config['max_tokens'] || 2048 # Ensure enough tokens for JSON
+        })
+
+        response = Net::HTTP.start(API_URI.hostname, API_URI.port, use_ssl: true) do |http|
+          http.request(request)
+        end
+
+        if response.code != '200'
+          raise N2B::LlmApiError.new("LLM API Error: #{response.code} #{response.message} - #{response.body}")
+        end
+
+        # Return the raw JSON string from the LLM's response content.
+        # The calling method (call_llm_for_diff_analysis in cli.rb) will parse this.
+        JSON.parse(response.body)['choices'].first['message']['content']
+      end
+    end
+  end
+end
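A usage sketch for the OpenRouter client; the two site headers are only sent when the corresponding config keys are non-empty (all values below are placeholders):

```ruby
config = {
  'llm'                  => 'openrouter',
  'model'                => 'llama-3.3-70b',           # resolves to "llama-3.3-70b-instruct"
  'access_key'           => ENV['OPENROUTER_API_KEY'], # placeholder source for the key
  'openrouter_site_url'  => 'https://example.com',     # sent as HTTP-Referer if set
  'openrouter_site_name' => 'My App'                   # sent as X-Title if set
}
client = N2M::Llm::OpenRouter.new(config)
client.make_request('show disk usage by directory')
```
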
data/lib/n2b/model_config.rb
ADDED
@@ -0,0 +1,112 @@
+require 'yaml'
+
+module N2B
+  class ModelConfig
+    CONFIG_PATH = File.expand_path('config/models.yml', __dir__)
+
+    def self.load_models
+      @models ||= YAML.load_file(CONFIG_PATH)
+    rescue => e
+      puts "Warning: Could not load models configuration: #{e.message}"
+      puts "Using fallback model configuration."
+      fallback_models
+    end
+
+    def self.fallback_models
+      {
+        'claude' => { 'suggested' => { 'sonnet' => 'claude-3-sonnet-20240229' }, 'default' => 'sonnet' },
+        'openai' => { 'suggested' => { 'gpt-4o-mini' => 'gpt-4o-mini' }, 'default' => 'gpt-4o-mini' },
+        'gemini' => { 'suggested' => { 'gemini-flash' => 'gemini-2.0-flash' }, 'default' => 'gemini-flash' },
+        'openrouter' => { 'suggested' => { 'gpt-4o' => 'openai/gpt-4o' }, 'default' => 'gpt-4o' },
+        'ollama' => { 'suggested' => { 'llama3' => 'llama3' }, 'default' => 'llama3' }
+      }
+    end
+
+    def self.suggested_models(provider)
+      load_models.dig(provider, 'suggested') || {}
+    end
+
+    def self.default_model(provider)
+      load_models.dig(provider, 'default')
+    end
+
+    def self.resolve_model(provider, user_input)
+      return nil if user_input.nil? || user_input.empty?
+
+      suggested = suggested_models(provider)
+
+      # If user input matches a suggested model key, return the API name
+      if suggested.key?(user_input)
+        suggested[user_input]
+      else
+        # Otherwise, treat as custom model (return as-is)
+        user_input
+      end
+    end
+
+    def self.display_model_options(provider)
+      suggested = suggested_models(provider)
+      default = default_model(provider)
+
+      options = []
+      suggested.each_with_index do |(key, api_name), index|
+        default_marker = key == default ? " [default]" : ""
+        options << "#{index + 1}. #{key} (#{api_name})#{default_marker}"
+      end
+      options << "#{suggested.size + 1}. custom (enter your own model name)"
+
+      options
+    end
+
+    def self.get_model_choice(provider, current_model = nil)
+      options = display_model_options(provider)
+      suggested = suggested_models(provider)
+      default = default_model(provider)
+
+      puts "\nChoose a model for #{provider}:"
+      options.each { |option| puts "  #{option}" }
+
+      current_display = current_model || default
+      print "\nEnter choice (1-#{options.size}) or model name [#{current_display}]: "
+
+      input = $stdin.gets.chomp
+
+      # If empty input, use current or default
+      if input.empty?
+        return current_model || resolve_model(provider, default)
+      end
+
+      # If numeric input, handle menu selection
+      if input.match?(/^\d+$/)
+        choice_num = input.to_i
+        if choice_num >= 1 && choice_num <= suggested.size
+          # Selected a suggested model
+          selected_key = suggested.keys[choice_num - 1]
+          return resolve_model(provider, selected_key)
+        elsif choice_num == suggested.size + 1
+          # Selected custom option
+          print "Enter custom model name: "
+          custom_model = $stdin.gets.chomp
+          if custom_model.empty?
+            puts "Custom model name cannot be empty. Using default."
+            return resolve_model(provider, default)
+          end
+          puts "✓ Using custom model: #{custom_model}"
+          return custom_model
+        else
+          puts "Invalid choice. Using default."
+          return resolve_model(provider, default)
+        end
+      else
+        # Direct model name input
+        resolved = resolve_model(provider, input)
+        if suggested.key?(input)
+          puts "✓ Using suggested model: #{input} (#{resolved})"
+        else
+          puts "✓ Using custom model: #{resolved}"
+        end
+        return resolved
+      end
+    end
+  end
+end
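Taken together with `config/models.yml`, the class behaves as follows (a sketch; the return values follow from the registry above):

```ruby
N2B::ModelConfig.default_model('openai')
# => "gpt-4o-mini"
N2B::ModelConfig.suggested_models('ollama').keys
# => ["llama3", "mistral", "codellama", "qwen"]

# Suggested keys resolve to API names; anything else passes through untouched.
N2B::ModelConfig.resolve_model('openrouter', 'deepseek-v3')  # => "deepseek-v3-0324"
N2B::ModelConfig.resolve_model('openai', 'gpt-5-preview')    # => "gpt-5-preview"
N2B::ModelConfig.resolve_model('claude', nil)                # => nil
```
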
data/lib/n2b/version.rb
CHANGED
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: n2b
 version: !ruby/object:Gem::Version
-  version: 0.3.1
+  version: 0.4.0
 platform: ruby
 authors:
 - Stefan Nothegger
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2025-06-
+date: 2025-06-05 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: json
@@ -67,11 +67,15 @@ files:
 - lib/n2b.rb
 - lib/n2b/base.rb
 - lib/n2b/cli.rb
+- lib/n2b/config/models.yml
 - lib/n2b/errors.rb
 - lib/n2b/irb.rb
 - lib/n2b/llm/claude.rb
 - lib/n2b/llm/gemini.rb
+- lib/n2b/llm/ollama.rb
 - lib/n2b/llm/open_ai.rb
+- lib/n2b/llm/open_router.rb
+- lib/n2b/model_config.rb
 - lib/n2b/version.rb
 homepage: https://github.com/stefan-kp/n2b
 licenses:
@@ -95,7 +99,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.5.
+rubygems_version: 3.5.3
 signing_key:
 specification_version: 4
 summary: Convert natural language to bash commands or ruby code and help with debugging.