terminal-sherpa 0.1.0 (py3-none-any.whl)

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
terminal_sherpa-0.1.0.dist-info/METADATA ADDED
@@ -0,0 +1,242 @@
1
+ Metadata-Version: 2.4
2
+ Name: terminal-sherpa
3
+ Version: 0.1.0
4
+ Summary: AI-powered bash command generator
5
+ Requires-Python: >=3.13
6
+ Description-Content-Type: text/markdown
7
+ Requires-Dist: anthropic>=0.7.0
8
+ Requires-Dist: black>=25.1.0
9
+ Requires-Dist: loguru>=0.7.0
10
+ Requires-Dist: openai>=1.0.0
11
+ Requires-Dist: pytest>=8.0.0
12
+ Requires-Dist: pytest-cov>=4.0.0
13
+ Requires-Dist: pytest-mock>=3.12.0
14
+ Requires-Dist: ruff>=0.12.3
15
+ Requires-Dist: toml>=0.10.0
16
+
17
+ # ask
18
+
19
+ A lightweight AI-powered bash command generator for fellow terminal dwellers.
20
+
21
+ Turn natural language into bash commands instantly.
22
+ Stop googling syntax and start asking.
23
+
24
+ [![codecov](https://codecov.io/github/lcford2/ask/graph/badge.svg?token=2MXHNL3RHE)](https://codecov.io/github/lcford2/ask)
25
+
26
+ ## 🚀 Getting Started
27
+
28
+ Get up and running:
29
+
30
+ ```bash
31
+ # Install ask
32
+ uv tool install terminal-sherpa
33
+
34
+ # Set your API key
35
+ export ANTHROPIC_API_KEY="your-key-here"
36
+
37
+ # Try it out
38
+ ask "find all .py files modified in the last week"
39
+ ```
40
+
41
+ **Example output:**
42
+ ```bash
43
+ find . -name "*.py" -mtime -7
44
+ ```
45
+
46
+ ## ✨ Features
47
+
48
+ - **Natural language to bash conversion** - Describe what you want, get the command
49
+ - **Multiple AI provider support** - Choose between Anthropic (Claude) and OpenAI (GPT) models
50
+ - **Flexible configuration system** - Set defaults, customize models, and manage API keys
51
+ - **XDG-compliant config files** - Follows standard configuration file locations
52
+ - **Verbose logging support** - Debug and understand what's happening under the hood
53
+
54
+ ## 📦 Installation
55
+
56
+ ### Requirements
57
+ - Python 3.13+
58
+ - API key for Anthropic or OpenAI
59
+
60
+ ### Install Methods
61
+
62
+ **Recommended (uv):**
63
+ ```bash
64
+ uv tool install terminal-sherpa
65
+ ```
66
+
67
+ **Using pip:**
68
+ ```bash
69
+ pip install terminal-sherpa
70
+ ```
71
+
72
+ **From source:**
73
+ ```bash
74
+ git clone https://github.com/lcford2/ask.git
75
+ cd ask
76
+ uv sync
77
+ uv run ask "your prompt here"
78
+ ```
79
+
80
+ **Verify installation:**
81
+ ```bash
82
+ ask --help
83
+ ```
84
+
85
+ ## 💡 Usage
86
+
87
+ ### Basic Syntax
88
+ ```bash
89
+ ask "your natural language prompt"
90
+ ```
91
+
92
+ ### Command Options
93
+
94
+ | Option | Description | Example |
95
+ |--------|-------------|---------|
96
+ | `--model provider:model` | Specify provider and model | `ask --model anthropic:claude-3-haiku "list files"` |
97
+ | `--verbose` | Enable verbose logging | `ask --verbose "compress this folder"` |
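+
+ The options can be combined; assuming they compose like ordinary CLI flags, an invocation might look like:
+
+ ```bash
+ # Sketch: pick a specific model and enable debug logging in one call
+ ask --verbose --model anthropic:claude-3-haiku "tar and gzip the logs directory"
+ ```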
98
+
99
+ ### Practical Examples
100
+
101
+ **File Operations:**
102
+ ```bash
103
+ ask "find all files larger than 100MB"
104
+ # Example output: find . -size +100M
105
+
106
+ ask "create a backup of config.txt with timestamp"
107
+ # Example output: cp config.txt config.txt.$(date +%Y%m%d_%H%M%S)
108
+ ```
109
+
110
+ **Git Commands:**
111
+ ```bash
112
+ ask "show git log for last 5 commits with one line each"
113
+ # Example output: git log --oneline -5
114
+
115
+ ask "delete all local branches that have been merged"
116
+ # Example output: git branch --merged | grep -v "\*\|main\|master" | xargs -n 1 git branch -d
117
+ ```
118
+
119
+ **System Administration:**
120
+ ```bash
121
+ ask "check disk usage of current directory sorted by size"
122
+ # Example output: du -sh * | sort -hr
123
+
124
+ ask "find processes using port 8080"
125
+ # Example output: lsof -i :8080
126
+ ```
127
+
128
+ **Text Processing:**
129
+ ```bash
130
+ ask "count lines in all Python files"
131
+ # Example output: find . -name "*.py" -exec wc -l {} + | tail -1
132
+
133
+ ask "replace all tabs with spaces in file.txt"
134
+ # Example output: sed -i 's/\t/ /g' file.txt
135
+ ```
136
+
137
+ **Network Operations:**
138
+ ```bash
139
+ ask "download file from URL and save to downloads folder"
140
+ # Example output: curl -o ~/Downloads/filename "https://example.com/file"
141
+
142
+ ask "check if port 443 is open on example.com"
143
+ # Example output: nc -zv example.com 443
144
+ ```
145
+
146
+ ## ⚙️ Configuration
147
+
148
+ ### Configuration File Locations
149
+ Ask follows the XDG Base Directory Specification and checks these locations in order (a setup snippet follows the list):
150
+
151
+ 1. `$XDG_CONFIG_HOME/ask/config.toml`
152
+ 2. `~/.config/ask/config.toml` (if XDG_CONFIG_HOME not set)
153
+ 3. `~/.ask/config.toml` (fallback)
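+
+ For example, to pre-create the highest-precedence location while respecting `XDG_CONFIG_HOME` (a convenience sketch, not something the tool requires):
+
+ ```bash
+ # Create the config directory, falling back to ~/.config when XDG_CONFIG_HOME is unset
+ mkdir -p "${XDG_CONFIG_HOME:-$HOME/.config}/ask"
+ touch "${XDG_CONFIG_HOME:-$HOME/.config}/ask/config.toml"
+ ```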
154
+
155
+ ### Environment Variables
156
+ ```bash
157
+ export ANTHROPIC_API_KEY="your-anthropic-key"
158
+ export OPENAI_API_KEY="your-openai-key"
159
+ ```
160
+
161
+ ### Example Configuration File
162
+ Create `~/.config/ask/config.toml`:
163
+
164
+ ```toml
165
+ [ask]
166
+ default_model = "anthropic"
167
+
168
+ [anthropic]
169
+ api_key = "your-anthropic-key"
170
+ model = "claude-3-haiku-20240307"
171
+ max_tokens = 512
172
+
173
+ [anthropic.sonnet]
174
+ model = "claude-3-5-sonnet-20241022"
175
+ max_tokens = 1024
176
+
177
+ [openai]
178
+ api_key = "your-openai-key"
179
+ model = "gpt-4o"
180
+ max_tokens = 1024
181
+ ```
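+
+ Named sub-sections such as `[anthropic.sonnet]` appear to map onto the `provider:model` syntax shown above (the test suite later in this diff exercises `anthropic:sonnet`), so with this file in place a call would presumably look like:
+
+ ```bash
+ # Select the [anthropic.sonnet] settings; plain "anthropic" uses the top-level [anthropic] defaults
+ ask --model anthropic:sonnet "summarize disk usage by directory"
+ ```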
182
+
183
+ ## 🤖 Supported Providers
184
+
185
+ - Anthropic (Claude)
186
+ - OpenAI (GPT)
187
+
188
+ > **Note:** Get API keys from the [Anthropic Console](https://console.anthropic.com/) or the [OpenAI Platform](https://platform.openai.com/).
189
+
190
+ ## 🛣️ Roadmap
191
+
192
+ ### Near-term
193
+ - [ ] Shell integration and auto-completion
194
+ - [ ] Command history and favorites
195
+ - [ ] Safety features (command preview/confirmation)
196
+ - [ ] Output formatting options
197
+
198
+ ### Medium-term
199
+ - [ ] Additional providers (Google, Cohere, Mistral)
200
+ - [ ] Interactive mode for complex tasks
201
+ - [ ] Plugin system for custom providers
202
+ - [ ] Command validation and testing
203
+
204
+ ### Long-term
205
+ - [ ] Local model support (Ollama, llama.cpp)
206
+ - [ ] Learning from user preferences
207
+ - [ ] Advanced safety and sandboxing
208
+ - [ ] GUI and web interface options
209
+
210
+ ## 🔧 Development
211
+
212
+ ### Setup
213
+ ```bash
214
+ git clone https://github.com/lcford2/ask.git
215
+ cd ask
216
+ uv sync
217
+ uv run pre-commit install
218
+ ```
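+
+ black and ruff are declared as dependencies, so formatting and lint checks can presumably also be run directly (pre-commit runs equivalent hooks):
+
+ ```bash
+ # Format the code and run the linter
+ uv run black .
+ uv run ruff check .
+ ```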
219
+
220
+ ### Testing
221
+ ```bash
222
+ uv run python -m pytest
223
+ ```
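+
+ `pytest-cov` is also a declared dependency, so a coverage run is likely available, e.g.:
+
+ ```bash
+ # Run the tests with a line-coverage report for the ask package
+ uv run python -m pytest --cov=ask --cov-report=term-missing
+ ```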
224
+
225
+ ### Contributing
226
+ 1. Fork the repository
227
+ 2. Create a feature branch
228
+ 3. Make your changes
229
+ 4. Run pre-commit checks: `uv run pre-commit run --all-files`
230
+ 5. Submit a pull request
231
+
232
+ ## License
233
+
234
+ This project is licensed under the MIT License - see the LICENSE file for details.
235
+
236
+ ## Contributing
237
+
238
+ Contributions are welcome! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details.
239
+
240
+ ## Issues
241
+
242
+ Found a bug or have a feature request? Please open an issue on [GitHub Issues](https://github.com/lcford2/ask/issues).
terminal_sherpa-0.1.0.dist-info/RECORD ADDED
@@ -0,0 +1,20 @@
1
+ ask/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
2
+ ask/config.py,sha256=iHIiMKePia80Sno_XlARqa7pEyW3eZm_Bf5SlUMidiQ,2880
3
+ ask/exceptions.py,sha256=0RLMSbw6j49BEhJN7C8MYaKpuhVeitsBhTGjZmaiHis,434
4
+ ask/main.py,sha256=SJ084NXNvd-pO2hY8oFO__NFR1F07YxeOacBuv65crc,3867
5
+ ask/providers/__init__.py,sha256=IINO0hNGarFpf62lCuVtIqMeAOC1CtR_wDWUL6_iWzI,1105
6
+ ask/providers/anthropic.py,sha256=ZFlnQZaxPGHLYjAacapE-63LoSu43xYnZ30Ajmrmgw0,2703
7
+ ask/providers/base.py,sha256=91ZbVORYWckSHNwNPiTmgfqQN0FLO9AgV6mptuAkIU0,769
8
+ ask/providers/openai.py,sha256=R-UgVArtlpn8F4qkliQ7unNk11ekTPL0hFZCfubGYpg,3661
9
+ test/conftest.py,sha256=V9ebLC-soz0hHocZLZAibzsVdvzZh4-elcTtKmZ2FyA,1363
10
+ test/test_anthropic.py,sha256=S5OQ67qIZ4VO38eJwAAwJa4JBylJhKCtmcGjCWA8WLY,5687
11
+ test/test_config.py,sha256=FrJ6bsZ6mK46e-8fQfkFGx9GgwHrNfnoI8211R0V9K8,5565
12
+ test/test_exceptions.py,sha256=tw-spMitAdYj9uW_8TjnlyVKKXFC06FR3610WGR-494,1754
13
+ test/test_main.py,sha256=3gZ83nVHMSEmgHSF2UJoELfK028a4vgxLpIk2P1cH1Y,7745
14
+ test/test_openai.py,sha256=3dDwlxnKGwl5aJcKHyNgrwrciJBZ96a51fHuxr-V8FA,8633
15
+ test/test_providers.py,sha256=SejQvCZSEQ5RAfVTCtPZ-39fXnfV17n4gaSxjiHA5UM,2140
16
+ terminal_sherpa-0.1.0.dist-info/METADATA,sha256=hh4NTZNv3yisUjLUcsJxlOBHUiK70l0u82CQIxJrjJI,5610
17
+ terminal_sherpa-0.1.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
18
+ terminal_sherpa-0.1.0.dist-info/entry_points.txt,sha256=LxG9-J__nMGmeEIi47WVGYC1LLJo1GaADH21hfxEK70,38
19
+ terminal_sherpa-0.1.0.dist-info/top_level.txt,sha256=Y7k5b2NSCkKiA_XPU-4fT_GYangD6JVDug5xwfXvmuQ,9
20
+ terminal_sherpa-0.1.0.dist-info/RECORD,,
terminal_sherpa-0.1.0.dist-info/WHEEL ADDED
@@ -0,0 +1,5 @@
1
+ Wheel-Version: 1.0
2
+ Generator: setuptools (80.9.0)
3
+ Root-Is-Purelib: true
4
+ Tag: py3-none-any
5
+
terminal_sherpa-0.1.0.dist-info/entry_points.txt ADDED
@@ -0,0 +1,2 @@
1
+ [console_scripts]
2
+ ask = ask.main:main
terminal_sherpa-0.1.0.dist-info/top_level.txt ADDED
@@ -0,0 +1,2 @@
1
+ ask
2
+ test
test/conftest.py ADDED
@@ -0,0 +1,58 @@
1
+ """Pytest configuration and fixtures."""
2
+
3
+ import os
4
+ import tempfile
5
+ from collections.abc import Generator
6
+ from pathlib import Path
7
+ from unittest.mock import patch
8
+
9
+ import pytest
10
+
11
+
12
+ @pytest.fixture
13
+ def temp_config_dir() -> Generator[Path, None, None]:
14
+ """Create a temporary directory for config files."""
15
+ with tempfile.TemporaryDirectory() as temp_dir:
16
+ yield Path(temp_dir)
17
+
18
+
19
+ @pytest.fixture
20
+ def test_resources_dir() -> Path:
21
+ """Return path to test resources directory."""
22
+ return Path(__file__).parent / "test_resources"
23
+
24
+
25
+ @pytest.fixture
26
+ def mock_env_vars():
27
+ """Mock environment variables for testing."""
28
+ with patch.dict(os.environ, {}, clear=True):
29
+ yield
30
+
31
+
32
+ @pytest.fixture
33
+ def mock_anthropic_key():
34
+ """Mock Anthropic API key in environment."""
35
+ with patch.dict(
36
+ os.environ, {"ANTHROPIC_API_KEY": "test-anthropic-key"}, clear=True
37
+ ):
38
+ yield
39
+
40
+
41
+ @pytest.fixture
42
+ def mock_openai_key():
43
+ """Mock OpenAI API key in environment."""
44
+ with patch.dict(os.environ, {"OPENAI_API_KEY": "test-openai-key"}, clear=True):
45
+ yield
46
+
47
+
48
+ @pytest.fixture
49
+ def mock_both_keys():
50
+ """Mock both API keys in environment."""
51
+ with patch.dict(
52
+ os.environ,
53
+ {
54
+ "ANTHROPIC_API_KEY": "test-anthropic-key",
55
+ "OPENAI_API_KEY": "test-openai-key",
56
+ },
57
+ ):
58
+ yield
test/test_anthropic.py ADDED
@@ -0,0 +1,173 @@
1
+ """Tests for Anthropic provider."""
2
+
3
+ import os
4
+ from unittest.mock import MagicMock, patch
5
+
6
+ import pytest
7
+
8
+ from ask.config import SYSTEM_PROMPT
9
+ from ask.exceptions import APIError, AuthenticationError, RateLimitError
10
+ from ask.providers.anthropic import AnthropicProvider
11
+
12
+
13
+ def test_anthropic_provider_init():
14
+ """Test provider initialization."""
15
+ config = {"model_name": "claude-3-haiku-20240307"}
16
+ provider = AnthropicProvider(config)
17
+
18
+ assert provider.config == config
19
+ assert provider.client is None
20
+
21
+
22
+ def test_validate_config_success(mock_anthropic_key):
23
+ """Test successful config validation."""
24
+ config = {"api_key_env": "ANTHROPIC_API_KEY"}
25
+ provider = AnthropicProvider(config)
26
+
27
+ with patch("anthropic.Anthropic") as mock_anthropic:
28
+ mock_client = MagicMock()
29
+ mock_anthropic.return_value = mock_client
30
+
31
+ provider.validate_config()
32
+
33
+ assert provider.client == mock_client
34
+ mock_anthropic.assert_called_once_with(api_key="test-anthropic-key")
35
+
36
+
37
+ def test_validate_config_missing_key(mock_env_vars):
38
+ """Test missing API key error."""
39
+ config = {"api_key_env": "ANTHROPIC_API_KEY"}
40
+ provider = AnthropicProvider(config)
41
+
42
+ with pytest.raises(
43
+ AuthenticationError, match="ANTHROPIC_API_KEY environment variable is required"
44
+ ):
45
+ provider.validate_config()
46
+
47
+
48
+ def test_validate_config_custom_env():
49
+ """Test custom environment variable."""
50
+ config = {"api_key_env": "CUSTOM_ANTHROPIC_KEY"}
51
+ provider = AnthropicProvider(config)
52
+
53
+ with patch.dict(os.environ, {"CUSTOM_ANTHROPIC_KEY": "custom-key"}):
54
+ with patch("anthropic.Anthropic") as mock_anthropic:
55
+ mock_client = MagicMock()
56
+ mock_anthropic.return_value = mock_client
57
+
58
+ provider.validate_config()
59
+
60
+ mock_anthropic.assert_called_once_with(api_key="custom-key")
61
+
62
+
63
+ def test_get_default_config():
64
+ """Test default configuration values."""
65
+ default_config = AnthropicProvider.get_default_config()
66
+
67
+ assert default_config["model_name"] == "claude-3-haiku-20240307"
68
+ assert default_config["max_tokens"] == 150
69
+ assert default_config["api_key_env"] == "ANTHROPIC_API_KEY"
70
+ assert default_config["temperature"] == 0.0
71
+ assert default_config["system_prompt"] == SYSTEM_PROMPT
72
+
73
+
74
+ def test_get_bash_command_success(mock_anthropic_key):
75
+ """Test successful command generation."""
76
+ config = {"model_name": "claude-3-haiku-20240307", "max_tokens": 150}
77
+ provider = AnthropicProvider(config)
78
+
79
+ mock_response = MagicMock()
80
+ mock_response.content = [MagicMock(text="ls -la")]
81
+
82
+ with patch("anthropic.Anthropic") as mock_anthropic:
83
+ mock_client = MagicMock()
84
+ mock_client.messages.create.return_value = mock_response
85
+ mock_anthropic.return_value = mock_client
86
+
87
+ result = provider.get_bash_command("list files")
88
+
89
+ assert result == "ls -la"
90
+ mock_client.messages.create.assert_called_once_with(
91
+ model="claude-3-haiku-20240307",
92
+ max_tokens=150,
93
+ temperature=0.0,
94
+ system=SYSTEM_PROMPT,
95
+ messages=[{"role": "user", "content": "list files"}],
96
+ )
97
+
98
+
99
+ def test_get_bash_command_auto_validate(mock_anthropic_key):
100
+ """Test auto-validation behavior."""
101
+ config = {}
102
+ provider = AnthropicProvider(config)
103
+
104
+ mock_response = MagicMock()
105
+ mock_response.content = [MagicMock(text="ls -la")]
106
+
107
+ with patch("anthropic.Anthropic") as mock_anthropic:
108
+ mock_client = MagicMock()
109
+ mock_client.messages.create.return_value = mock_response
110
+ mock_anthropic.return_value = mock_client
111
+
112
+ # Client should be None initially
113
+ assert provider.client is None
114
+
115
+ result = provider.get_bash_command("list files")
116
+
117
+ # Client should be set after auto-validation
118
+ assert provider.client is not None
119
+ assert result == "ls -la"
120
+
121
+
122
+ def test_handle_api_error_auth():
123
+ """Test authentication error mapping."""
124
+ provider = AnthropicProvider({})
125
+
126
+ with pytest.raises(AuthenticationError, match="Invalid API key"):
127
+ provider._handle_api_error(Exception("authentication failed"))
128
+
129
+
130
+ def test_handle_api_error_rate_limit():
131
+ """Test rate limit error mapping."""
132
+ provider = AnthropicProvider({})
133
+
134
+ with pytest.raises(RateLimitError, match="API rate limit exceeded"):
135
+ provider._handle_api_error(Exception("rate limit exceeded"))
136
+
137
+
138
+ def test_handle_api_error_generic():
139
+ """Test generic API error mapping."""
140
+ provider = AnthropicProvider({})
141
+
142
+ with pytest.raises(APIError, match="API request failed"):
143
+ provider._handle_api_error(Exception("unexpected error"))
144
+
145
+
146
+ def test_config_parameter_usage(mock_anthropic_key):
147
+ """Test configuration parameter usage."""
148
+ config = {
149
+ "model_name": "claude-3-5-sonnet-20241022",
150
+ "max_tokens": 1024,
151
+ "temperature": 0.5,
152
+ "system_prompt": "Custom system prompt",
153
+ }
154
+ provider = AnthropicProvider(config)
155
+
156
+ mock_response = MagicMock()
157
+ mock_response.content = [MagicMock(text="custom response")]
158
+
159
+ with patch("anthropic.Anthropic") as mock_anthropic:
160
+ mock_client = MagicMock()
161
+ mock_client.messages.create.return_value = mock_response
162
+ mock_anthropic.return_value = mock_client
163
+
164
+ result = provider.get_bash_command("test prompt")
165
+
166
+ assert result == "custom response"
167
+ mock_client.messages.create.assert_called_once_with(
168
+ model="claude-3-5-sonnet-20241022",
169
+ max_tokens=1024,
170
+ temperature=0.5,
171
+ system="Custom system prompt",
172
+ messages=[{"role": "user", "content": "test prompt"}],
173
+ )
test/test_config.py ADDED
@@ -0,0 +1,164 @@
1
+ """Tests for the configuration system."""
2
+
3
+ import os
4
+ from unittest.mock import patch
5
+
6
+ import pytest
7
+
8
+ from ask.config import (
9
+ get_config_path,
10
+ get_default_model,
11
+ get_default_provider,
12
+ get_provider_config,
13
+ load_config,
14
+ )
15
+ from ask.exceptions import ConfigurationError
16
+
17
+
18
+ def test_get_config_path_xdg_config_home(temp_config_dir):
19
+ """Test XDG_CONFIG_HOME path resolution."""
20
+ config_file = temp_config_dir / "ask" / "config.toml"
21
+ config_file.parent.mkdir(parents=True)
22
+ config_file.touch()
23
+
24
+ with patch.dict(os.environ, {"XDG_CONFIG_HOME": str(temp_config_dir)}):
25
+ assert get_config_path() == config_file
26
+
27
+
28
+ def test_get_config_path_default_xdg(temp_config_dir):
29
+ """Test default ~/.config path."""
30
+ config_file = temp_config_dir / ".config" / "ask" / "config.toml"
31
+ config_file.parent.mkdir(parents=True)
32
+ config_file.touch()
33
+
34
+ with patch.dict(os.environ, {}, clear=True):
35
+ with patch("pathlib.Path.home", return_value=temp_config_dir):
36
+ assert get_config_path() == config_file
37
+
38
+
39
+ def test_get_config_path_fallback(temp_config_dir):
40
+ """Test ~/.ask fallback path."""
41
+ config_file = temp_config_dir / ".ask" / "config.toml"
42
+ config_file.parent.mkdir(parents=True)
43
+ config_file.touch()
44
+
45
+ with patch.dict(os.environ, {}, clear=True):
46
+ with patch("pathlib.Path.home", return_value=temp_config_dir):
47
+ assert get_config_path() == config_file
48
+
49
+
50
+ def test_get_config_path_none(temp_config_dir):
51
+ """Test when no config file exists."""
52
+ with patch.dict(os.environ, {}, clear=True):
53
+ with patch("pathlib.Path.home", return_value=temp_config_dir):
54
+ assert get_config_path() is None
55
+
56
+
57
+ def test_load_config_valid(test_resources_dir):
58
+ """Test loading valid TOML config."""
59
+ config_file = test_resources_dir / "valid_config.toml"
60
+
61
+ with patch("ask.config.get_config_path", return_value=config_file):
62
+ config = load_config()
63
+ assert config["ask"]["default_model"] == "anthropic"
64
+ assert config["anthropic"]["model_name"] == "claude-3-haiku-20240307"
65
+
66
+
67
+ def test_load_config_invalid(test_resources_dir):
68
+ """Test loading invalid TOML syntax."""
69
+ config_file = test_resources_dir / "invalid_config.toml"
70
+
71
+ with patch("ask.config.get_config_path", return_value=config_file):
72
+ with pytest.raises(ConfigurationError, match="Failed to load config file"):
73
+ load_config()
74
+
75
+
76
+ def test_load_config_not_found():
77
+ """Test when config file doesn't exist."""
78
+ with patch("ask.config.get_config_path", return_value=None):
79
+ config = load_config()
80
+ assert config == {}
81
+
82
+
83
+ def test_load_config_permission_error(temp_config_dir):
84
+ """Test permission errors."""
85
+ config_file = temp_config_dir / "config.toml"
86
+ config_file.touch()
87
+
88
+ with patch("ask.config.get_config_path", return_value=config_file):
89
+ with patch("builtins.open", side_effect=PermissionError("Access denied")):
90
+ with pytest.raises(ConfigurationError, match="Failed to load config file"):
91
+ load_config()
92
+
93
+
94
+ def test_get_provider_config_simple():
95
+ """Test simple provider name parsing."""
96
+ config = {"anthropic": {"model_name": "claude-3-haiku-20240307"}}
97
+
98
+ provider_name, provider_config = get_provider_config(config, "anthropic")
99
+ assert provider_name == "anthropic"
100
+ assert provider_config["model_name"] == "claude-3-haiku-20240307"
101
+
102
+
103
+ def test_get_provider_config_with_model():
104
+ """Test provider:model syntax."""
105
+ config = {
106
+ "anthropic": {
107
+ "model_name": "claude-3-haiku-20240307",
108
+ "sonnet": {"model_name": "claude-3-5-sonnet-20241022"},
109
+ }
110
+ }
111
+
112
+ provider_name, provider_config = get_provider_config(config, "anthropic:sonnet")
113
+ assert provider_name == "anthropic"
114
+ assert provider_config["model_name"] == "claude-3-5-sonnet-20241022"
115
+
116
+
117
+ def test_get_provider_config_nested():
118
+ """Test nested provider configuration."""
119
+ config = {
120
+ "anthropic": {
121
+ "base_setting": "base_value",
122
+ "sonnet": {"model_name": "claude-3-5-sonnet-20241022", "max_tokens": 1024},
123
+ }
124
+ }
125
+
126
+ provider_name, provider_config = get_provider_config(config, "anthropic:sonnet")
127
+ assert provider_name == "anthropic"
128
+ assert provider_config["model_name"] == "claude-3-5-sonnet-20241022"
129
+ assert provider_config["max_tokens"] == 1024
130
+
131
+
132
+ def test_get_provider_config_global_merge():
133
+ """Test global config merging."""
134
+ config = {
135
+ "ask": {"global_setting": "global_value", "temperature": 0.1},
136
+ "anthropic": {"model_name": "claude-3-haiku-20240307", "temperature": 0.0},
137
+ }
138
+
139
+ provider_name, provider_config = get_provider_config(config, "anthropic")
140
+ assert provider_name == "anthropic"
141
+ assert provider_config["global_setting"] == "global_value"
142
+ assert provider_config["temperature"] == 0.0 # Provider-specific overrides global
143
+
144
+
145
+ def test_get_default_model():
146
+ """Test default model retrieval."""
147
+ config = {"ask": {"default_model": "anthropic:sonnet"}}
148
+
149
+ assert get_default_model(config) == "anthropic:sonnet"
150
+
151
+
152
+ def test_get_default_provider_anthropic(mock_anthropic_key):
153
+ """Test Anthropic as default provider."""
154
+ assert get_default_provider() == "anthropic"
155
+
156
+
157
+ def test_get_default_provider_openai(mock_openai_key):
158
+ """Test OpenAI as default provider."""
159
+ assert get_default_provider() == "openai"
160
+
161
+
162
+ def test_get_default_provider_none(mock_env_vars):
163
+ """Test when no API keys available."""
164
+ assert get_default_provider() is None