n2b 0.3.0 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 698bca4814bd42e8587efb9d278ce1cb090b3ea23afd645c9c082b1c277ea22d
-   data.tar.gz: c7ec9715a8b5807b1ea5eb90e448fd70767fdc3e6ace987e1b123e29ee37dd33
+   metadata.gz: 58382cb632f67e6ad3ca031c71a2c53bd5cd477ea32963d89f0d253d8b43bbeb
+   data.tar.gz: facb990aad05d93cac662fdbf60075ad3606dd43083f408a8e0d60d5940df359
  SHA512:
-   metadata.gz: 7d00b68b8e2925b63d4bf6b2e4842a18034e2dc318686d55261480c2612f47f8574379df85ae166670a721aa6bd41406d08323cddb0ebd831e3777dc1773ca1a
-   data.tar.gz: e7b3621fcf9051d433ca0fba961b12d348b03f9d73981d641b252b4a4223b8edfec0f8ee438f6c603382ca62c1cf4679e9960f6c40f16b84c644d6939290dbe5
+   metadata.gz: 8dd53c35513bf7071c5aecb7f724ebc44f522a166ef1ca8dc4aecfda3f9cd801dd7d85bce6762a4fc8bd5e1a98aa87eaa9e8080db3302a17d5875f8010b90867
+   data.tar.gz: 866846f2c517fe7dcee27172640151b1d6e567161b473a7c5a39c3894df15ea25b43e2278b24fd783f93c5eac8f507e7d331a85ef5fc5084990f89c227c19485
data/README.md CHANGED
@@ -6,10 +6,24 @@ N2B (Natural Language to Bash & Ruby) is a Ruby gem that leverages AI to convert
 
  ## Features
 
- - Convert natural language to bash commands
- - Generate Ruby code from natural language instructions
- - Analyze Errbit errors and generate detailed reports
- - Create formatted Scrum tickets from errors
+ - **🤖 Natural Language to Commands**: Convert natural language to bash commands
+ - **💎 Ruby Code Generation**: Generate Ruby code from natural language instructions
+ - **🔍 AI-Powered Diff Analysis**: Analyze git/hg diffs with comprehensive code review
+ - **📋 Requirements Compliance**: Check if code changes meet specified requirements
+ - **🧪 Test Coverage Assessment**: Evaluate test coverage for code changes
+ - **🌿 Branch Comparison**: Compare changes against any branch (main/master/default)
+ - **🛠️ VCS Support**: Full support for both Git and Mercurial repositories
+ - **📊 Errbit Integration**: Analyze Errbit errors and generate detailed reports
+ - **🎫 Scrum Tickets**: Create formatted Scrum tickets from errors
+
+ ### 🆕 **New in v0.4.0: Flexible Model Configuration**
+
+ - **🎯 Multiple LLM Providers**: Claude, OpenAI, Gemini, OpenRouter, Ollama
+ - **🔧 Custom Models**: Use any model name - fine-tunes, beta models, custom deployments
+ - **📋 Suggested Models**: Curated lists of latest models with easy selection
+ - **🚀 Latest Models**: OpenAI O3/O4 series, Gemini 2.5, updated OpenRouter models
+ - **🔄 Backward Compatible**: Existing configurations continue working seamlessly
+ - **⚡ No Restrictions**: Direct API model names accepted without validation
 
  ## Installation
 
@@ -56,16 +70,74 @@ n2rscrum "Create a user authentication system"
 
  ## Configuration
 
- Create a config file at `~/.n2b/config.yml` with your API keys:
+ N2B now features a **flexible model configuration system** that supports both suggested models and custom model names across all providers.
 
- ```yaml
- llm: claude # or openai
- claude:
-   key: your-anthropic-api-key
-   model: claude-3-opus-20240229 # or opus, haiku, sonnet
- openai:
-   key: your-openai-api-key
-   model: gpt-4 # or gpt-3.5-turbo
+ ### Quick Setup
+
+ Run the configuration command to get started:
+
+ ```bash
+ n2b -c
+ ```
+
+ This will guide you through:
+ - Selecting your preferred LLM provider (Claude, OpenAI, Gemini, OpenRouter, Ollama)
+ - Choosing from suggested models or entering custom model names
+ - Setting up API keys
+ - Configuring privacy settings
+
+ ### Supported Models
+
+ #### 🤖 **Claude** (Default: sonnet)
+ - `haiku` → claude-3-haiku-20240307
+ - `sonnet` → claude-3-sonnet-20240229
+ - `sonnet35` → claude-3-5-sonnet-20240620
+ - `sonnet37` → claude-3-7-sonnet-20250219
+ - `sonnet40` → claude-sonnet-4-20250514
+ - **Custom models**: Any Claude model name
+
+ #### 🧠 **OpenAI** (Default: gpt-4o-mini)
+ - `gpt-4o` → gpt-4o
+ - `gpt-4o-mini` → gpt-4o-mini
+ - `o3` → o3 (Latest reasoning model)
+ - `o3-mini` → o3-mini
+ - `o3-mini-high` → o3-mini-high
+ - `o4` → o4
+ - `o4-mini` → o4-mini
+ - `o4-mini-high` → o4-mini-high
+ - **Custom models**: Any OpenAI model name
+
+ #### 🔮 **Gemini** (Default: gemini-2.5-flash)
+ - `gemini-2.5-flash` → gemini-2.5-flash-preview-05-20
+ - `gemini-2.5-pro` → gemini-2.5-pro-preview-05-06
+ - **Custom models**: Any Gemini model name
+
+ #### 🌐 **OpenRouter** (Default: deepseek-v3)
+ - `deepseek-v3` → deepseek-v3-0324
+ - `deepseek-r1-llama-8b` → deepseek-r1-distill-llama-8b
+ - `llama-3.3-70b` → llama-3.3-70b-instruct
+ - `llama-3.3-8b` → llama-3.3-8b-instruct
+ - `wayfinder-large` → wayfinder-large-70b-llama-3.3
+ - **Custom models**: Any OpenRouter model name
+
+ #### 🦙 **Ollama** (Default: llama3)
+ - `llama3` → llama3
+ - `mistral` → mistral
+ - `codellama` → codellama
+ - `qwen` → qwen2.5
+ - **Custom models**: Any locally available Ollama model
+
+ ### Custom Configuration File
+
+ You can use a custom config file by setting the `N2B_CONFIG_FILE` environment variable:
+
+ ```bash
+ export N2B_CONFIG_FILE=/path/to/your/config.yml
+ ```
+
+ You can also set the history file location using the `N2B_HISTORY_FILE` environment variable:
+ ```bash
+ export N2B_HISTORY_FILE=/path/to/your/history
  ```
 
  ## Quick Example N2B
@@ -114,16 +186,38 @@ input_string.to_s.scan(/[\/\w.-]+\.rb(?=\s|:|$)/)
  Install the gem by running:
  gem install n2b
 
- ## Configuration
+ ## Enhanced Model Selection
 
- Before using n2b, you need to configure it with your Claude API key and preferences. Run:
- n2b -c
+ N2B v0.4.0 introduces a **flexible model configuration system**:
+
+ ### Interactive Model Selection
+
+ When you run `n2b -c`, you'll see an enhanced interface:
+
+ ```
+ Choose a model for openai:
+   1. gpt-4o (gpt-4o)
+   2. gpt-4o-mini (gpt-4o-mini) [default]
+   3. o3 (o3)
+   4. o3-mini (o3-mini)
+   5. o3-mini-high (o3-mini-high)
+   6. o4 (o4)
+   7. o4-mini (o4-mini)
+   8. o4-mini-high (o4-mini-high)
+   9. custom (enter your own model name)
+
+ Enter choice (1-9) or model name [gpt-4o-mini]: 9
+ Enter custom model name: gpt-5-preview
+ ✓ Using custom model: gpt-5-preview
+ ```
+
+ ### Key Features
 
- This will prompt you to enter:
- - Your Claude API or OpenAI key
- - Preferred model (e.g. haiku, sonnet, or sonnet35)
- - Privacy settings (whether to send shell history, past requests, current directory)
- - Whether to append generated commands to your shell history
+ - **🎯 Suggested Models**: Curated list of latest models for each provider
+ - **🔧 Custom Models**: Enter any model name for fine-tunes, beta models, etc.
+ - **🔄 Backward Compatible**: Existing configurations continue working
+ - **🚀 Latest Models**: Access to newest OpenAI O-series, Gemini 2.5, etc.
+ - **⚡ No Validation**: Direct API model names accepted - let the API handle errors
 
  Configuration is stored in `~/.n2b/config.yml`.
 
@@ -135,6 +229,9 @@ n2b [options] your natural language instruction
 
  Options:
  - `-x` or `--execute`: Execute the generated commands after confirmation
+ - `-d` or `--diff`: Analyze git/hg diff with AI-powered code review
+ - `-b` or `--branch [BRANCH]`: Compare against specific branch (auto-detects main/master/default)
+ - `-r` or `--requirements FILE`: Requirements file for compliance checking
  - `-c` or `--config`: Reconfigure the tool
  - `-h` or `--help`: Display help information
 
@@ -152,6 +249,82 @@ Examples:
 
  ```n2b -c ```
 
+ ## 🔍 AI-Powered Diff Analysis
+
+ N2B provides comprehensive AI-powered code review for your git and mercurial repositories.
+
+ ### Basic Diff Analysis
+
+ ```bash
+ # Analyze uncommitted changes
+ n2b --diff
+
+ # Analyze changes against specific branch
+ n2b --diff --branch main
+ n2b --diff --branch feature/auth
+
+ # Auto-detect default branch (main/master/default)
+ n2b --diff --branch
+
+ # Short form
+ n2b -d -b main
+ ```
+
+ ### Requirements Compliance Checking
+
+ ```bash
+ # Check if changes meet requirements
+ n2b --diff --requirements requirements.md
+ n2b -d -r req.md
+
+ # Combine with branch comparison
+ n2b --diff --branch main --requirements requirements.md
+ ```
+
+ ### What You Get
+
+ The AI analysis provides:
+
+ - **📝 Summary**: Clear overview of what changed
+ - **🚨 Potential Errors**: Bugs, security issues, logic problems with exact file/line references
+ - **💡 Suggested Improvements**: Code quality, performance, style recommendations
+ - **🧪 Test Coverage Assessment**: Evaluation of test completeness and quality
+ - **📋 Requirements Evaluation**: Compliance check with clear status indicators:
+   - ✅ **IMPLEMENTED**: Requirement fully satisfied
+   - ⚠️ **PARTIALLY IMPLEMENTED**: Needs more work
+   - ❌ **NOT IMPLEMENTED**: Not addressed
+   - 🔍 **UNCLEAR**: Cannot determine from diff
+
+ ### Example Output
+
+ ```
+ Code Diff Analysis:
+ -------------------
+ Summary:
+ Added user authentication with JWT tokens and password validation.
+
+ Potential Errors:
+ - lib/auth.rb line 42: Password validation allows weak passwords
+ - controllers/auth_controller.rb lines 15-20: Missing rate limiting for login attempts
+
+ Suggested Improvements:
+ - lib/auth.rb line 30: Consider using bcrypt for password hashing
+ - spec/auth_spec.rb: Add tests for edge cases and security scenarios
+
+ Test Coverage Assessment:
+ Good: Basic authentication flow is tested. Missing: No tests for password validation edge cases, JWT expiration handling, or security attack scenarios.
+
+ Requirements Evaluation:
+ ✅ IMPLEMENTED: User login/logout functionality fully working
+ ⚠️ PARTIALLY IMPLEMENTED: Password strength requirements present but not comprehensive
+ ❌ NOT IMPLEMENTED: Two-factor authentication not addressed in this diff
+ -------------------
+ ```
+
+ ### Supported Version Control Systems
+
+ - **Git**: Full support with auto-detection of main/master branches
+ - **Mercurial (hg)**: Full support with auto-detection of default branch
 
  n2r in ruby or rails console
  n2r "your question", files:['file1.rb', 'file2.rb'], exception: AnError
@@ -240,4 +413,4 @@ The generated tickets include:
  - Acceptance criteria
  - Story point estimate
  - Priority level
- - Reference to the original Errbit URL
+ - Reference to the original Errbit URL# Test change
data/lib/n2b/base.rb CHANGED
@@ -1,3 +1,5 @@
+ require_relative 'model_config'
+
  module N2B
    class Base
 
@@ -14,42 +16,86 @@ module N2B
 
    def get_config( reconfigure: false)
      config = load_config
-     api_key = ENV['CLAUDE_API_KEY'] || config['access_key']
-     model = config['model'] || 'sonnet35'
+     api_key = ENV['CLAUDE_API_KEY'] || config['access_key'] # This will be used as default or for existing configs
+     model = config['model'] # Model will be handled per LLM
 
-     if api_key.nil? || api_key == '' || reconfigure
-       print "choose a language model to use (1:claude, 2:openai, 3:gemini) #{ config['llm'] }: "
+     if api_key.nil? || api_key == '' || config['llm'].nil? || reconfigure # Added config['llm'].nil? to force config if llm type isn't set
+       print "choose a language model to use (1:claude, 2:openai, 3:gemini, 4:openrouter, 5:ollama) #{ config['llm'] }: "
        llm = $stdin.gets.chomp
-       llm = config['llm'] if llm.empty?
-       unless ['claude', 'openai', 'gemini', '1', '2', '3'].include?(llm)
-         puts "Invalid language model. Choose from: claude, openai, gemini"
+       llm = config['llm'] if llm.empty? && !config['llm'].nil? # Keep current if input is empty and current exists
+
+       unless ['claude', 'openai', 'gemini', 'openrouter', 'ollama', '1', '2', '3', '4', '5'].include?(llm)
+         puts "Invalid language model. Choose from: claude, openai, gemini, openrouter, ollama, or 1-5"
          exit 1
        end
        llm = 'claude' if llm == '1'
        llm = 'openai' if llm == '2'
        llm = 'gemini' if llm == '3'
-       llm_class = case llm
-                   when 'openai'
-                     N2M::Llm::OpenAi
-                   when 'gemini'
-                     N2M::Llm::Gemini
-                   else
-                     N2M::Llm::Claude
-                   end
-
-       print "Enter your #{llm} API key: #{ api_key.nil? || api_key.empty? ? '' : '(leave blank to keep the current key '+api_key[0..10]+'...)' }"
-       api_key = $stdin.gets.chomp
-       api_key = config['access_key'] if api_key.empty?
-       print "Choose a model (#{ llm_class::MODELS.keys }, #{ llm_class::MODELS.keys.first } default): "
-       model = $stdin.gets.chomp
-       model = llm_class::MODELS.keys.first if model.empty?
+       llm = 'openrouter' if llm == '4'
+       llm = 'ollama' if llm == '5'
+
+       # Set default LLM if needed
+       if llm.nil? || llm.empty?
+         llm = 'claude'
+       end
+
        config['llm'] = llm
-       config['access_key'] = api_key
-       config['model'] = model
-       unless llm_class::MODELS.keys.include?(model)
-         puts "Invalid model. Choose from: #{llm_class::MODELS.keys.join(', ')}"
-         exit 1
+
+       if llm == 'ollama'
+         # Ollama specific configuration
+         puts "Ollama typically runs locally and doesn't require an API key."
+
+         # Use new model configuration system
+         current_model = config['model']
+         model = N2B::ModelConfig.get_model_choice(llm, current_model)
+
+         # Configure Ollama API URL
+         current_ollama_api_url = config['ollama_api_url'] || N2M::Llm::Ollama::DEFAULT_OLLAMA_API_URI
+         print "Enter Ollama API base URL (default: #{current_ollama_api_url}): "
+         ollama_api_url_input = $stdin.gets.chomp
+         config['ollama_api_url'] = ollama_api_url_input.empty? ? current_ollama_api_url : ollama_api_url_input
+         config.delete('access_key') # Remove access_key if switching to ollama
+       else
+         # Configuration for API key based LLMs (Claude, OpenAI, Gemini, OpenRouter)
+         current_api_key = config['access_key'] # Use existing key from config as default
+         print "Enter your #{llm} API key: #{ current_api_key.nil? || current_api_key.empty? ? '' : '(leave blank to keep the current key)' }"
+         api_key_input = $stdin.gets.chomp
+         config['access_key'] = api_key_input if !api_key_input.empty?
+         config['access_key'] = current_api_key if api_key_input.empty? && !current_api_key.nil? && !current_api_key.empty?
+
+         # Ensure API key is present if not Ollama
+         if config['access_key'].nil? || config['access_key'].empty?
+           puts "API key is required for #{llm}."
+           exit 1
+         end
+
+         # Use new model configuration system
+         current_model = config['model']
+         model = N2B::ModelConfig.get_model_choice(llm, current_model)
+
+         if llm == 'openrouter'
+           current_site_url = config['openrouter_site_url'] || ""
+           print "Enter your OpenRouter Site URL (optional, for HTTP-Referer header, current: #{current_site_url}): "
+           openrouter_site_url_input = $stdin.gets.chomp
+           config['openrouter_site_url'] = openrouter_site_url_input.empty? ? current_site_url : openrouter_site_url_input
+
+           current_site_name = config['openrouter_site_name'] || ""
+           print "Enter your OpenRouter Site Name (optional, for X-Title header, current: #{current_site_name}): "
+           openrouter_site_name_input = $stdin.gets.chomp
+           config['openrouter_site_name'] = openrouter_site_name_input.empty? ? current_site_name : openrouter_site_name_input
+         end
        end
+
+       config['model'] = model # Store selected model for all types
+
+       # Ensure privacy settings are initialized if not present
+       config['privacy'] ||= {}
+       # Set defaults for any privacy settings that are nil
+       config['privacy']['send_shell_history'] = false if config['privacy']['send_shell_history'].nil?
+       config['privacy']['send_llm_history'] = true if config['privacy']['send_llm_history'].nil?
+       config['privacy']['send_current_directory'] = true if config['privacy']['send_current_directory'].nil?
+       config['append_to_shell_history'] = false if config['append_to_shell_history'].nil?
+
        puts "configure privacy settings directly in the config file #{CONFIG_FILE}"
        config['privacy'] ||= {}
        config['privacy']['send_shell_history'] = false
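For orientation, a minimal sketch of the config hash that `get_config` ends up assembling for an OpenRouter setup. The keys are taken from the code above; every value is a placeholder, not output captured from the gem:

```ruby
# Hypothetical result of running `n2b -c` and picking OpenRouter (values made up).
config = {
  'llm'                  => 'openrouter',
  'access_key'           => 'sk-or-...',            # prompted unless llm == 'ollama'
  'model'                => 'deepseek-v3-0324',     # resolved via N2B::ModelConfig
  'openrouter_site_url'  => 'https://example.com',  # optional HTTP-Referer header
  'openrouter_site_name' => 'Example',              # optional X-Title header
  'privacy' => {
    'send_shell_history'     => false,
    'send_llm_history'       => true,
    'send_current_directory' => true
  },
  'append_to_shell_history' => false
}
```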
data/lib/n2b/cli.rb CHANGED
@@ -46,7 +46,7 @@ module N2B
        requirements_content = File.read(requirements_filepath)
      end
 
-     diff_output = execute_vcs_diff(vcs_type)
+     diff_output = execute_vcs_diff(vcs_type, @options[:branch])
      analyze_diff(diff_output, config, user_prompt_addition, requirements_content)
    end
 
@@ -60,17 +60,139 @@ module N2B
      end
    end
 
-   def execute_vcs_diff(vcs_type)
+   def execute_vcs_diff(vcs_type, branch_option = nil)
      case vcs_type
      when :git
-       `git diff HEAD`
+       if branch_option
+         target_branch = branch_option == 'auto' ? detect_git_default_branch : branch_option
+         if target_branch
+           # Validate that the target branch exists
+           unless validate_git_branch_exists(target_branch)
+             puts "Error: Branch '#{target_branch}' does not exist."
+             puts "Available branches:"
+             puts `git branch -a`.lines.map(&:strip).reject(&:empty?)
+             exit 1
+           end
+
+           puts "Comparing current branch against '#{target_branch}'..."
+           `git diff #{target_branch}...HEAD`
+         else
+           puts "Could not detect default branch, falling back to HEAD diff..."
+           `git diff HEAD`
+         end
+       else
+         `git diff HEAD`
+       end
      when :hg
-       `hg diff`
+       if branch_option
+         target_branch = branch_option == 'auto' ? detect_hg_default_branch : branch_option
+         if target_branch
+           # Validate that the target branch exists
+           unless validate_hg_branch_exists(target_branch)
+             puts "Error: Branch '#{target_branch}' does not exist."
+             puts "Available branches:"
+             puts `hg branches`.lines.map(&:strip).reject(&:empty?)
+             exit 1
+           end
+
+           puts "Comparing current branch against '#{target_branch}'..."
+           `hg diff -r #{target_branch}`
+         else
+           puts "Could not detect default branch, falling back to standard diff..."
+           `hg diff`
+         end
+       else
+         `hg diff`
+       end
      else
        "" # Should not happen if get_vcs_type logic is correct and checked before calling
      end
    end
 
+   def detect_git_default_branch
+     # Try multiple methods to detect the default branch
+
+     # Method 1: Check origin/HEAD symbolic ref
+     result = `git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null`.strip
+     if $?.success? && !result.empty?
+       return result.split('/').last
+     end
+
+     # Method 2: Check remote show origin
+     result = `git remote show origin 2>/dev/null | grep "HEAD branch"`.strip
+     if $?.success? && !result.empty?
+       match = result.match(/HEAD branch:\s*(\w+)/)
+       return match[1] if match
+     end
+
+     # Method 3: Check if common default branches exist
+     ['main', 'master'].each do |branch|
+       result = `git rev-parse --verify origin/#{branch} 2>/dev/null`
+       if $?.success?
+         return branch
+       end
+     end
+
+     # Method 4: Fallback - check local branches
+     ['main', 'master'].each do |branch|
+       result = `git rev-parse --verify #{branch} 2>/dev/null`
+       if $?.success?
+         return branch
+       end
+     end
+
+     # If all else fails, return nil
+     nil
+   end
+
+   def detect_hg_default_branch
+     # Method 1: Check current branch (if it's 'default', that's the main branch)
+     result = `hg branch 2>/dev/null`.strip
+     if $?.success? && result == 'default'
+       return 'default'
+     end
+
+     # Method 2: Look for 'default' branch in branch list
+     result = `hg branches 2>/dev/null`
+     if $?.success? && result.include?('default')
+       return 'default'
+     end
+
+     # Method 3: Check if there are any branches at all
+     result = `hg branches 2>/dev/null`.strip
+     if $?.success? && !result.empty?
+       # Get the first branch (usually the main one)
+       first_branch = result.lines.first&.split&.first
+       return first_branch if first_branch
+     end
+
+     # Fallback to 'default' (standard hg main branch name)
+     'default'
+   end
+
+   def validate_git_branch_exists(branch)
+     # Check if branch exists locally
+     result = `git rev-parse --verify #{branch} 2>/dev/null`
+     return true if $?.success?
+
+     # Check if branch exists on remote
+     result = `git rev-parse --verify origin/#{branch} 2>/dev/null`
+     return true if $?.success?
+
+     false
+   end
+
+   def validate_hg_branch_exists(branch)
+     # Check if branch exists in hg branches
+     result = `hg branches 2>/dev/null`
+     if $?.success?
+       return result.lines.any? { |line| line.strip.start_with?(branch) }
+     end
+
+     # If we can't list branches, assume it exists (hg is more permissive)
+     true
+   end
+
    private
 
    def process_natural_language_command(input_text, config)
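A standalone sketch of the same git fallback chain used by `detect_git_default_branch` above (the commands are transcribed from the diff; the repository it runs against is hypothetical):

```ruby
# Sketch only: mirrors methods 1 and 3 of the detection chain, not the gem's code.
def guess_git_default_branch
  # First preference: the symbolic ref for origin/HEAD.
  head = `git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null`.strip
  return head.split('/').last if $?.success? && !head.empty?

  # Otherwise probe the common default branch names on the remote.
  %w[main master].each do |branch|
    `git rev-parse --verify origin/#{branch} 2>/dev/null`
    return branch if $?.success?
  end
  nil # caller falls back to a plain `git diff HEAD`
end

puts guess_git_default_branch || 'no default branch detected'
```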
@@ -335,6 +457,10 @@ JSON_INSTRUCTION
        N2M::Llm::Claude.new(config)
      when 'gemini'
        N2M::Llm::Gemini.new(config)
+     when 'openrouter'
+       N2M::Llm::OpenRouter.new(config)
+     when 'ollama'
+       N2M::Llm::Ollama.new(config)
      else
        # Should not happen if config is validated, but as a safeguard:
        raise N2B::Error, "Unsupported LLM service: #{llm_service_name}"
@@ -344,6 +470,13 @@ JSON_INSTRUCTION
      response_json_str
    rescue N2B::LlmApiError => e # This catches errors from analyze_code_diff
      puts "Error communicating with the LLM: #{e.message}"
+
+     # Check if it might be a model-related error
+     if e.message.include?('model') || e.message.include?('Model') || e.message.include?('invalid') || e.message.include?('not found')
+       puts "\nThis might be due to an invalid or unsupported model configuration."
+       puts "Run 'n2b -c' to reconfigure your model settings."
+     end
+
      return '{"summary": "Error: Could not analyze diff due to LLM API error.", "errors": [], "improvements": []}'
    end
  end
@@ -363,7 +496,23 @@ JSON_INSTRUCTION
 
    def call_llm(prompt, config)
      begin # Added begin for LlmApiError rescue
-       llm = config['llm'] == 'openai' ? N2M::Llm::OpenAi.new(config) : N2M::Llm::Claude.new(config)
+       llm_service_name = config['llm']
+       llm = case llm_service_name
+             when 'openai'
+               N2M::Llm::OpenAi.new(config)
+             when 'claude'
+               N2M::Llm::Claude.new(config)
+             when 'gemini'
+               N2M::Llm::Gemini.new(config)
+             when 'openrouter'
+               N2M::Llm::OpenRouter.new(config)
+             when 'ollama'
+               N2M::Llm::Ollama.new(config)
+             else
+               # Fallback or error, though config validation should prevent this
+               puts "Warning: Unsupported LLM service '#{llm_service_name}' configured. Falling back to Claude."
+               N2M::Llm::Claude.new(config)
+             end
 
        # This content is specific to bash command generation
        content = <<-EOF
@@ -386,19 +535,26 @@ JSON_INSTRUCTION
      # which implies it expects a Hash. Let's ensure call_llm returns a Hash.
      # This internal JSON parsing is for the *content* of a successful LLM response.
      # The LlmApiError for network/auth issues should be caught before this.
-     begin
-       parsed_response = JSON.parse(response_json_str)
-       parsed_response
-     rescue JSON::ParserError => e
-       puts "Error parsing LLM response JSON for command generation: #{e.message}"
-       # This is a fallback for when the LLM response *content* is not valid JSON.
-       { "commands" => ["echo 'Error: LLM returned invalid JSON content.'"], "explanation" => "The response from the language model was not valid JSON." }
-     end
-   rescue N2B::LlmApiError => e
-     puts "Error communicating with the LLM: #{e.message}"
-     # This is the fallback for LlmApiError (network, auth, etc.)
-     { "commands" => ["echo 'LLM API error occurred. Please check your configuration and network.'"], "explanation" => "Failed to connect to the LLM." }
-   end
+     begin
+       parsed_response = JSON.parse(response_json_str)
+       parsed_response
+     rescue JSON::ParserError => e
+       puts "Error parsing LLM response JSON for command generation: #{e.message}"
+       # This is a fallback for when the LLM response *content* is not valid JSON.
+       { "commands" => ["echo 'Error: LLM returned invalid JSON content.'"], "explanation" => "The response from the language model was not valid JSON." }
+     end
+   rescue N2B::LlmApiError => e
+     puts "Error communicating with the LLM: #{e.message}"
+
+     # Check if it might be a model-related error
+     if e.message.include?('model') || e.message.include?('Model') || e.message.include?('invalid') || e.message.include?('not found')
+       puts "\nThis might be due to an invalid or unsupported model configuration."
+       puts "Run 'n2b -c' to reconfigure your model settings."
+     end
+
+     # This is the fallback for LlmApiError (network, auth, etc.)
+     { "commands" => ["echo 'LLM API error occurred. Please check your configuration and network.'"], "explanation" => "Failed to connect to the LLM." }
+   end
  end
 
  def get_user_shell
@@ -490,7 +646,7 @@ JSON_INSTRUCTION
 
 
    def parse_options
-     options = { execute: false, config: nil, diff: false, requirements: nil }
+     options = { execute: false, config: nil, diff: false, requirements: nil, branch: nil }
 
      parser = OptionParser.new do |opts|
        opts.banner = "Usage: n2b [options] [natural language command]"
@@ -503,6 +659,10 @@ JSON_INSTRUCTION
        options[:diff] = true
      end
 
+     opts.on('-b', '--branch [BRANCH]', 'Compare against branch (default: auto-detect main/master)') do |branch|
+       options[:branch] = branch || 'auto'
+     end
+
      opts.on('-r', '--requirements FILE', 'Requirements file for diff analysis') do |file|
        options[:requirements] = file
      end
@@ -526,6 +686,14 @@ JSON_INSTRUCTION
        exit 1
      end
 
+     # Validate option combinations
+     if options[:branch] && !options[:diff]
+       puts "Error: --branch option can only be used with --diff"
+       puts ""
+       puts parser.help
+       exit 1
+     end
+
      options
    end
  end
data/lib/n2b/config/models.yml ADDED
@@ -0,0 +1,47 @@
+ # N2B Model Configuration
+ # Format: display_name -> api_model_name
+ # Users can select from suggested models or enter custom model names
+
+ claude:
+   suggested:
+     haiku: "claude-3-haiku-20240307"
+     sonnet: "claude-3-sonnet-20240229"
+     sonnet35: "claude-3-5-sonnet-20240620"
+     sonnet37: "claude-3-7-sonnet-20250219"
+     sonnet40: "claude-sonnet-4-20250514"
+   default: "sonnet"
+
+ openai:
+   suggested:
+     gpt-4o: "gpt-4o"
+     gpt-4o-mini: "gpt-4o-mini"
+     o3: "o3"
+     o3-mini: "o3-mini"
+     o3-mini-high: "o3-mini-high"
+     o4: "o4"
+     o4-mini: "o4-mini"
+     o4-mini-high: "o4-mini-high"
+   default: "gpt-4o-mini"
+
+ gemini:
+   suggested:
+     gemini-2.5-flash: "gemini-2.5-flash-preview-05-20"
+     gemini-2.5-pro: "gemini-2.5-pro-preview-05-06"
+   default: "gemini-2.5-flash"
+
+ openrouter:
+   suggested:
+     deepseek-v3: "deepseek-v3-0324"
+     deepseek-r1-llama-8b: "deepseek-r1-distill-llama-8b"
+     llama-3.3-70b: "llama-3.3-70b-instruct"
+     llama-3.3-8b: "llama-3.3-8b-instruct"
+     wayfinder-large: "wayfinder-large-70b-llama-3.3"
+   default: "deepseek-v3"
+
+ ollama:
+   suggested:
+     llama3: "llama3"
+     mistral: "mistral"
+     codellama: "codellama"
+     qwen: "qwen2.5"
+   default: "llama3"
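For reference, a minimal sketch of how these entries are resolved at runtime by `N2B::ModelConfig` (defined later in this diff); the `require` path is an assumption based on the gem's file list:

```ruby
require 'n2b/model_config'  # path assumed from the gem's file layout

# A suggested key maps to its API model name from models.yml above:
N2B::ModelConfig.resolve_model('claude', 'sonnet37')
# => "claude-3-7-sonnet-20250219"

# Anything else passes through unchanged - custom models are not validated:
N2B::ModelConfig.resolve_model('openai', 'gpt-5-preview')
# => "gpt-5-preview"
```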
data/lib/n2b/llm/claude.rb CHANGED
@@ -1,13 +1,24 @@
+ require_relative '../model_config'
+
  module N2M
    module Llm
      class Claude
        API_URI = URI.parse('https://api.anthropic.com/v1/messages')
-       MODELS = { 'haiku' => 'claude-3-haiku-20240307', 'sonnet' => 'claude-3-sonnet-20240229', 'sonnet35' => 'claude-3-5-sonnet-20240620', "sonnet37" => "claude-3-7-sonnet-20250219" }
 
        def initialize(config)
          @config = config
        end
 
+       def get_model_name
+         # Resolve model name using the centralized configuration
+         model_name = N2B::ModelConfig.resolve_model('claude', @config['model'])
+         if model_name.nil? || model_name.empty?
+           # Fallback to default if no model specified
+           model_name = N2B::ModelConfig.resolve_model('claude', N2B::ModelConfig.default_model('claude'))
+         end
+         model_name
+       end
+
        def make_request( content)
          uri = URI.parse('https://api.anthropic.com/v1/messages')
          request = Net::HTTP::Post.new(uri)
@@ -16,7 +27,7 @@ module N2M
          request['anthropic-version'] = '2023-06-01'
 
          request.body = JSON.dump({
-           "model" => MODELS[@config['model']],
+           "model" => get_model_name,
            "max_tokens" => 1024,
            "messages" => [
              {
@@ -71,7 +82,7 @@ module N2M
          request['anthropic-version'] = '2023-06-01'
 
          request.body = JSON.dump({
-           "model" => MODELS[@config['model']],
+           "model" => get_model_name,
            "max_tokens" => @config['max_tokens'] || 1024, # Allow overriding max_tokens from config
            "messages" => [
              {
data/lib/n2b/llm/gemini.rb CHANGED
@@ -1,21 +1,29 @@
  require 'net/http'
  require 'json'
  require 'uri'
+ require_relative '../model_config'
 
  module N2M
    module Llm
      class Gemini
        API_URI = URI.parse('https://generativelanguage.googleapis.com/v1beta/models')
-       MODELS = {
-         'gemini-flash' => 'gemini-2.0-flash'
-       }
 
        def initialize(config)
          @config = config
        end
 
+       def get_model_name
+         # Resolve model name using the centralized configuration
+         model_name = N2B::ModelConfig.resolve_model('gemini', @config['model'])
+         if model_name.nil? || model_name.empty?
+           # Fallback to default if no model specified
+           model_name = N2B::ModelConfig.resolve_model('gemini', N2B::ModelConfig.default_model('gemini'))
+         end
+         model_name
+       end
+
        def make_request(content)
-         model = MODELS[@config['model']] || 'gemini-flash'
+         model = get_model_name
          uri = URI.parse("#{API_URI}/#{model}:generateContent?key=#{@config['access_key']}")
 
          request = Net::HTTP::Post.new(uri)
@@ -65,7 +73,7 @@ module N2M
        def analyze_code_diff(prompt_content)
          # This method assumes prompt_content is the full, ready-to-send prompt
          # including all instructions for the LLM (system message, diff, user additions, JSON format).
-         model = MODELS[@config['model']] || 'gemini-flash' # Or a specific model for analysis if different
+         model = get_model_name
          uri = URI.parse("#{API_URI}/#{model}:generateContent?key=#{@config['access_key']}")
 
          request = Net::HTTP::Post.new(uri)
data/lib/n2b/llm/ollama.rb ADDED
@@ -0,0 +1,129 @@
+ require 'net/http'
+ require 'json'
+ require 'uri'
+ require_relative '../model_config'
+
+ module N2M
+   module Llm
+     class Ollama
+       # Default API URI for Ollama. This might need to be configurable later.
+       DEFAULT_OLLAMA_API_URI = 'http://localhost:11434/api/chat'
+
+       def initialize(config)
+         @config = config
+         # Allow overriding the Ollama API URI from config if needed
+         @api_uri = URI.parse(@config['ollama_api_url'] || DEFAULT_OLLAMA_API_URI)
+       end
+
+       def get_model_name
+         # Resolve model name using the centralized configuration
+         model_name = N2B::ModelConfig.resolve_model('ollama', @config['model'])
+         if model_name.nil? || model_name.empty?
+           # Fallback to default if no model specified
+           model_name = N2B::ModelConfig.resolve_model('ollama', N2B::ModelConfig.default_model('ollama'))
+         end
+         model_name
+       end
+
+       def make_request(prompt_content)
+         request = Net::HTTP::Post.new(@api_uri)
+         request.content_type = 'application/json'
+
+         # Ollama expects the model name directly in the request body.
+         # It also expects the full message history.
+         request.body = JSON.dump({
+           "model" => get_model_name,
+           "messages" => [
+             {
+               "role" => "user",
+               "content" => prompt_content
+             }
+           ],
+           "stream" => false # Ensure we get the full response, not a stream
+           # "format" => "json" # For some Ollama versions/models to enforce JSON output
+         })
+
+         begin
+           response = Net::HTTP.start(@api_uri.hostname, @api_uri.port, use_ssl: @api_uri.scheme == 'https') do |http|
+             # Set timeouts: open_timeout for connection, read_timeout for waiting for response
+             http.open_timeout = 5 # seconds
+             http.read_timeout = 120 # seconds
+             http.request(request)
+           end
+         rescue Net::OpenTimeout, Net::ReadTimeout => e
+           raise N2B::LlmApiError.new("Ollama API Error: Timeout connecting or reading from Ollama at #{@api_uri}: #{e.message}")
+         rescue Errno::ECONNREFUSED => e
+           raise N2B::LlmApiError.new("Ollama API Error: Connection refused at #{@api_uri}. Is Ollama running? #{e.message}")
+         end
+
+         if response.code != '200'
+           raise N2B::LlmApiError.new("Ollama API Error: #{response.code} #{response.message} - #{response.body}")
+         end
+
+         # Ollama's chat response structure is slightly different. The message is in `message.content`.
+         raw_response_body = JSON.parse(response.body)
+         answer_content = raw_response_body['message']['content']
+
+         begin
+           # Attempt to parse the answer_content as JSON
+           # This is for n2b's expectation of JSON with 'commands' and 'explanation'
+           parsed_answer = JSON.parse(answer_content)
+           if parsed_answer.is_a?(Hash) && parsed_answer.key?('commands')
+             parsed_answer
+           else
+             # If the content itself is valid JSON but not the expected structure, wrap it.
+             { 'commands' => [answer_content], 'explanation' => 'Response from LLM (JSON content).' }
+           end
+         rescue JSON::ParserError
+           # If answer_content is not JSON, wrap it in the n2b expected structure
+           { 'commands' => [answer_content], 'explanation' => answer_content }
+         end
+       end
+
+       def analyze_code_diff(prompt_content)
+         request = Net::HTTP::Post.new(@api_uri)
+         request.content_type = 'application/json'
+
+         # The prompt_content for diff analysis should instruct the LLM to return JSON.
+         # For Ollama, you can also try adding "format": "json" to the request if the model supports it.
+         request_body = {
+           "model" => get_model_name, # resolve via ModelConfig (this class defines no MODELS constant)
+           "messages" => [
+             {
+               "role" => "user",
+               "content" => prompt_content # This prompt must ask for JSON output
+             }
+           ],
+           "stream" => false
+         }
+         # Some Ollama models/versions might respect a "format": "json" parameter
+         # request_body["format"] = "json" # Uncomment if you want to try this
+
+         request.body = JSON.dump(request_body)
+
+         begin
+           response = Net::HTTP.start(@api_uri.hostname, @api_uri.port, use_ssl: @api_uri.scheme == 'https') do |http|
+             http.open_timeout = 5
+             http.read_timeout = 180 # Potentially longer for analysis
+             http.request(request)
+           end
+         rescue Net::OpenTimeout, Net::ReadTimeout => e
+           raise N2B::LlmApiError.new("Ollama API Error (analyze_code_diff): Timeout for #{@api_uri}: #{e.message}")
+         rescue Errno::ECONNREFUSED => e
+           raise N2B::LlmApiError.new("Ollama API Error (analyze_code_diff): Connection refused at #{@api_uri}. Is Ollama running? #{e.message}")
+         end
+
+         if response.code != '200'
+           raise N2B::LlmApiError.new("Ollama API Error (analyze_code_diff): #{response.code} #{response.message} - #{response.body}")
+         end
+
+         # Return the raw JSON string from the LLM's response content.
+         # The calling method (call_llm_for_diff_analysis in cli.rb) will parse this.
+         raw_response_body = JSON.parse(response.body)
+         raw_response_body['message']['content']
+       end
+     end
+   end
+ end
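A minimal usage sketch for the Ollama client above, assuming a local Ollama server on the default port; the config values are illustrative, not taken from the gem:

```ruby
# Sketch only: a hand-built config hash shaped like the one n2b saves.
config = {
  'model'          => 'llama3',                          # resolved via models.yml
  'ollama_api_url' => 'http://localhost:11434/api/chat'  # DEFAULT_OLLAMA_API_URI
}

client = N2M::Llm::Ollama.new(config)
result = client.make_request('list all *.rb files changed this week')
puts result['commands']     # model's reply, parsed as JSON or wrapped as a fallback
puts result['explanation']
```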
data/lib/n2b/llm/open_ai.rb CHANGED
@@ -1,24 +1,34 @@
  require 'net/http'
  require 'json'
  require 'uri'
+ require_relative '../model_config'
 
  module N2M
    module Llm
      class OpenAi
        API_URI = URI.parse('https://api.openai.com/v1/chat/completions')
-       MODELS = { 'gpt-4o' => 'gpt-4o','gpt-4o-mini'=>'gpt-4o-mini', 'gpt-35' => 'gpt-3.5-turbo-1106' }
 
        def initialize(config)
          @config = config
        end
 
+       def get_model_name
+         # Resolve model name using the centralized configuration
+         model_name = N2B::ModelConfig.resolve_model('openai', @config['model'])
+         if model_name.nil? || model_name.empty?
+           # Fallback to default if no model specified
+           model_name = N2B::ModelConfig.resolve_model('openai', N2B::ModelConfig.default_model('openai'))
+         end
+         model_name
+       end
+
        def make_request(content)
          request = Net::HTTP::Post.new(API_URI)
          request.content_type = 'application/json'
          request['Authorization'] = "Bearer #{@config['access_key']}"
 
          request.body = JSON.dump({
-           "model" => MODELS[@config['model']],
+           "model" => get_model_name,
            response_format: { type: 'json_object' },
            "messages" => [
              {
@@ -54,7 +64,7 @@ module N2M
          request['Authorization'] = "Bearer #{@config['access_key']}"
 
          request.body = JSON.dump({
-           "model" => MODELS[@config['model']],
+           "model" => get_model_name,
            "response_format" => { "type" => "json_object" }, # Crucial for OpenAI to return JSON
            "messages" => [
              {
data/lib/n2b/llm/open_router.rb ADDED
@@ -0,0 +1,116 @@
+ require 'net/http'
+ require 'json'
+ require 'uri'
+ require_relative '../model_config'
+
+ module N2M
+   module Llm
+     class OpenRouter
+       API_URI = URI.parse('https://openrouter.ai/api/v1/chat/completions')
+
+       def initialize(config)
+         @config = config
+         @api_key = @config['access_key']
+         @site_url = @config['openrouter_site_url'] || '' # Optional: Read from config
+         @site_name = @config['openrouter_site_name'] || '' # Optional: Read from config
+       end
+
+       def get_model_name
+         # Resolve model name using the centralized configuration
+         model_name = N2B::ModelConfig.resolve_model('openrouter', @config['model'])
+         if model_name.nil? || model_name.empty?
+           # Fallback to default if no model specified
+           model_name = N2B::ModelConfig.resolve_model('openrouter', N2B::ModelConfig.default_model('openrouter'))
+         end
+         model_name
+       end
+
+       def make_request(prompt_content)
+         request = Net::HTTP::Post.new(API_URI)
+         request.content_type = 'application/json'
+         request['Authorization'] = "Bearer #{@api_key}"
+
+         # Add OpenRouter specific headers
+         request['HTTP-Referer'] = @site_url unless @site_url.empty?
+         request['X-Title'] = @site_name unless @site_name.empty?
+
+         request.body = JSON.dump({
+           "model" => get_model_name,
+           "messages" => [
+             {
+               "role" => "user",
+               "content" => prompt_content
+             }
+           ]
+           # TODO: Consider adding max_tokens, temperature, etc. from @config if needed
+         })
+
+         response = Net::HTTP.start(API_URI.hostname, API_URI.port, use_ssl: true) do |http|
+           http.request(request)
+         end
+
+         if response.code != '200'
+           raise N2B::LlmApiError.new("LLM API Error: #{response.code} #{response.message} - #{response.body}")
+         end
+
+         # Assuming OpenRouter returns a similar structure to OpenAI for chat completions
+         answer_content = JSON.parse(response.body)['choices'].first['message']['content']
+
+         begin
+           # Attempt to parse the answer as JSON, as expected by the calling CLI's process_natural_language_command
+           parsed_answer = JSON.parse(answer_content)
+           # Ensure it has the 'commands' and 'explanation' structure if it's for n2b's command generation
+           # This might need adjustment based on how `make_request` is used.
+           # If it's just for generic requests, this parsing might be too specific.
+           # For now, mirroring the OpenAI class's attempt to parse JSON from the content.
+           if parsed_answer.is_a?(Hash) && parsed_answer.key?('commands')
+             parsed_answer
+           else
+             # If the content itself isn't the JSON structure n2b expects,
+             # but is valid JSON, return it. Otherwise, wrap it.
+             # This part needs to be robust based on actual OpenRouter responses.
+             { 'commands' => [answer_content], 'explanation' => 'Response from LLM.' } # Fallback
+           end
+         rescue JSON::ParserError
+           # If the content isn't JSON, wrap it in the expected structure for n2b
+           { 'commands' => [answer_content], 'explanation' => answer_content }
+         end
+       end
+
+       def analyze_code_diff(prompt_content)
+         request = Net::HTTP::Post.new(API_URI) # Chat completions endpoint
+         request.content_type = 'application/json'
+         request['Authorization'] = "Bearer #{@api_key}"
+
+         # Add OpenRouter specific headers
+         request['HTTP-Referer'] = @site_url unless @site_url.empty?
+         request['X-Title'] = @site_name unless @site_name.empty?
+
+         # The prompt_content for diff analysis should already instruct the LLM to return JSON.
+         request.body = JSON.dump({
+           "model" => get_model_name,
+           # "response_format" => { "type" => "json_object" }, # Some models on OpenRouter might support this
+           "messages" => [
+             {
+               "role" => "user",
+               "content" => prompt_content # This prompt should ask for JSON output
+             }
+           ],
+           "max_tokens" => @config['max_tokens'] || 2048 # Ensure enough tokens for JSON
+         })
+
+         response = Net::HTTP.start(API_URI.hostname, API_URI.port, use_ssl: true) do |http|
+           http.request(request)
+         end
+
+         if response.code != '200'
+           raise N2B::LlmApiError.new("LLM API Error: #{response.code} #{response.message} - #{response.body}")
+         end
+
+         # Return the raw JSON string from the LLM's response content.
+         # The calling method (call_llm_for_diff_analysis in cli.rb) will parse this.
+         JSON.parse(response.body)['choices'].first['message']['content']
+       end
+     end
+   end
+ end
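A minimal usage sketch for the OpenRouter client above; the API key and site fields are placeholders:

```ruby
# Sketch only: config keys match what n2b's configuration step writes;
# every value below is a placeholder.
config = {
  'access_key'           => 'sk-or-...',
  'model'                => 'deepseek-v3',         # suggested key, resolved to deepseek-v3-0324
  'openrouter_site_url'  => 'https://example.com', # sent as HTTP-Referer if non-empty
  'openrouter_site_name' => 'Example'              # sent as X-Title if non-empty
}

client = N2M::Llm::OpenRouter.new(config)
client.make_request('show disk usage per top-level directory')
```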
data/lib/n2b/model_config.rb ADDED
@@ -0,0 +1,112 @@
+ require 'yaml'
+
+ module N2B
+   class ModelConfig
+     CONFIG_PATH = File.expand_path('config/models.yml', __dir__)
+
+     def self.load_models
+       @models ||= YAML.load_file(CONFIG_PATH)
+     rescue => e
+       puts "Warning: Could not load models configuration: #{e.message}"
+       puts "Using fallback model configuration."
+       fallback_models
+     end
+
+     def self.fallback_models
+       {
+         'claude' => { 'suggested' => { 'sonnet' => 'claude-3-sonnet-20240229' }, 'default' => 'sonnet' },
+         'openai' => { 'suggested' => { 'gpt-4o-mini' => 'gpt-4o-mini' }, 'default' => 'gpt-4o-mini' },
+         'gemini' => { 'suggested' => { 'gemini-flash' => 'gemini-2.0-flash' }, 'default' => 'gemini-flash' },
+         'openrouter' => { 'suggested' => { 'gpt-4o' => 'openai/gpt-4o' }, 'default' => 'gpt-4o' },
+         'ollama' => { 'suggested' => { 'llama3' => 'llama3' }, 'default' => 'llama3' }
+       }
+     end
+
+     def self.suggested_models(provider)
+       load_models.dig(provider, 'suggested') || {}
+     end
+
+     def self.default_model(provider)
+       load_models.dig(provider, 'default')
+     end
+
+     def self.resolve_model(provider, user_input)
+       return nil if user_input.nil? || user_input.empty?
+
+       suggested = suggested_models(provider)
+
+       # If user input matches a suggested model key, return the API name
+       if suggested.key?(user_input)
+         suggested[user_input]
+       else
+         # Otherwise, treat as custom model (return as-is)
+         user_input
+       end
+     end
+
+     def self.display_model_options(provider)
+       suggested = suggested_models(provider)
+       default = default_model(provider)
+
+       options = []
+       suggested.each_with_index do |(key, api_name), index|
+         default_marker = key == default ? " [default]" : ""
+         options << "#{index + 1}. #{key} (#{api_name})#{default_marker}"
+       end
+       options << "#{suggested.size + 1}. custom (enter your own model name)"
+
+       options
+     end
+
+     def self.get_model_choice(provider, current_model = nil)
+       options = display_model_options(provider)
+       suggested = suggested_models(provider)
+       default = default_model(provider)
+
+       puts "\nChoose a model for #{provider}:"
+       options.each { |option| puts "  #{option}" }
+
+       current_display = current_model || default
+       print "\nEnter choice (1-#{options.size}) or model name [#{current_display}]: "
+
+       input = $stdin.gets.chomp
+
+       # If empty input, use current or default
+       if input.empty?
+         return current_model || resolve_model(provider, default)
+       end
+
+       # If numeric input, handle menu selection
+       if input.match?(/^\d+$/)
+         choice_num = input.to_i
+         if choice_num >= 1 && choice_num <= suggested.size
+           # Selected a suggested model
+           selected_key = suggested.keys[choice_num - 1]
+           return resolve_model(provider, selected_key)
+         elsif choice_num == suggested.size + 1
+           # Selected custom option
+           print "Enter custom model name: "
+           custom_model = $stdin.gets.chomp
+           if custom_model.empty?
+             puts "Custom model name cannot be empty. Using default."
+             return resolve_model(provider, default)
+           end
+           puts "✓ Using custom model: #{custom_model}"
+           return custom_model
+         else
+           puts "Invalid choice. Using default."
+           return resolve_model(provider, default)
+         end
+       else
+         # Direct model name input
+         resolved = resolve_model(provider, input)
+         if suggested.key?(input)
+           puts "✓ Using suggested model: #{input} (#{resolved})"
+         else
+           puts "✓ Using custom model: #{resolved}"
+         end
+         return resolved
+       end
+     end
+   end
+ end
data/lib/n2b/version.rb CHANGED
@@ -1,4 +1,4 @@
  # lib/n2b/version.rb
  module N2B
-   VERSION = "0.3.0"
+   VERSION = "0.4.0"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: n2b
  version: !ruby/object:Gem::Version
-   version: 0.3.0
+   version: 0.4.0
  platform: ruby
  authors:
  - Stefan Nothegger
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2025-06-04 00:00:00.000000000 Z
+ date: 2025-06-05 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: json
@@ -67,11 +67,15 @@ files:
  - lib/n2b.rb
  - lib/n2b/base.rb
  - lib/n2b/cli.rb
+ - lib/n2b/config/models.yml
  - lib/n2b/errors.rb
  - lib/n2b/irb.rb
  - lib/n2b/llm/claude.rb
  - lib/n2b/llm/gemini.rb
+ - lib/n2b/llm/ollama.rb
  - lib/n2b/llm/open_ai.rb
+ - lib/n2b/llm/open_router.rb
+ - lib/n2b/model_config.rb
  - lib/n2b/version.rb
  homepage: https://github.com/stefan-kp/n2b
  licenses:
@@ -95,7 +99,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
  requirements: []
- rubygems_version: 3.5.22
+ rubygems_version: 3.5.3
  signing_key:
  specification_version: 4
  summary: Convert natural language to bash commands or ruby code and help with debugging.