model-forge-llm 0.2.0__py3-none-any.whl → 0.2.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,314 @@
+ Metadata-Version: 2.4
+ Name: model-forge-llm
+ Version: 0.2.1
+ Summary: A reusable library for managing LLM providers, authentication, and model selection.
+ Author: ModelForge Contributors
+ License: MIT License
+
+ Copyright (c) 2025 Shuhai Miao
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+ Project-URL: Homepage, https://github.com/your-org/model-forge
+ Project-URL: Repository, https://github.com/your-org/model-forge
+ Project-URL: Issues, https://github.com/your-org/model-forge/issues
+ Project-URL: Documentation, https://model-forge.readthedocs.io
+ Keywords: llm,ai,langchain,openai,ollama,providers,authentication
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Requires-Python: <4.0,>=3.11
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: click>=8.1.7
+ Requires-Dist: requests>=2.32.3
+ Requires-Dist: langchain-core>=0.3.0
+ Requires-Dist: langchain-openai>=0.3.0
+ Requires-Dist: langchain-community>=0.3.0
+ Requires-Dist: langchain-google-genai>=2.1.5
+ Requires-Dist: langchain-github-copilot>=0.4.0
+ Provides-Extra: dev
+ Requires-Dist: pytest>=8.2.2; extra == "dev"
+ Requires-Dist: pytest-mock>=3.14.0; extra == "dev"
+ Requires-Dist: ruff>=0.7.0; extra == "dev"
+ Requires-Dist: mypy>=1.11.0; extra == "dev"
+ Requires-Dist: pre-commit>=3.8.0; extra == "dev"
+ Requires-Dist: pytest-cov>=5.0.0; extra == "dev"
+ Requires-Dist: types-requests>=2.32.0; extra == "dev"
+ Requires-Dist: twine>=6.1.0; extra == "dev"
+ Requires-Dist: requests-mock>=1.12.1; extra == "dev"
+ Dynamic: license-file
+
+ # ModelForge
+
+ A Python library for managing LLM providers, authentication, and model selection with seamless LangChain integration.
+
+ ## Installation
+
+ ### Recommended: Virtual Environment
+ ```bash
+ # Create and activate virtual environment
+ python -m venv model-forge-env
+ source model-forge-env/bin/activate  # On Windows: model-forge-env\Scripts\activate
+
+ # Install package
+ pip install model-forge-llm
+
+ # Verify installation
+ modelforge --help
+ ```
+
+ ### Quick Install (System-wide)
+ ```bash
+ pip install model-forge-llm
+ ```
+
+ ## Quick Start
+
+ ### Option 1: GitHub Copilot via Device Authentication Flow
+ ```bash
+ # Discover GitHub Copilot models
+ modelforge models list --provider github_copilot
+
+ # Set up GitHub Copilot with device authentication
+ modelforge auth login --provider github_copilot
+
+ # Select Claude 3.7 Sonnet via GitHub Copilot
+ modelforge config use --provider github_copilot --model claude-3.7-sonnet
+
+ # Test your setup
+ modelforge test --prompt "Write a Python function to reverse a string"
+ ```
+
+ ### Option 2: OpenAI (API Key Required)
+ ```bash
+ # Add OpenAI with your API key
+ modelforge auth login --provider openai --api-key YOUR_API_KEY
+
+ # Select GPT-4o-mini
+ modelforge config use --provider openai --model gpt-4o-mini
+
+ # Test your setup
+ modelforge test --prompt "Hello, world!"
+ ```
+
+ ### Option 3: Local Ollama (No API Key Needed)
+ ```bash
+ # Make sure Ollama is running locally
+ # Then add a local model
+ modelforge config add --provider ollama --model qwen3:1.7b
+
+ # Select the local model
+ modelforge config use --provider ollama --model qwen3:1.7b
+
+ # Test your setup
+ modelforge test --prompt "What is machine learning?"
+ ```
+
+ ### Common Commands - Complete Lifecycle
+ ```bash
+ # Installation & Setup
+ modelforge --help                    # Verify installation
+ modelforge config show               # View current config
+
+ # Model Discovery & Selection
+ modelforge models list               # List all available models
+ modelforge models search "claude"    # Search models by name
+ modelforge models info --provider openai --model gpt-4o  # Get model details
+
+ # Authentication Management
+ modelforge auth login --provider openai --api-key KEY    # API key auth
+ modelforge auth login --provider github_copilot          # Device flow auth
+ modelforge auth status                                   # Check auth status
+ modelforge auth logout --provider openai                 # Remove credentials
+
+ # Configuration Management
+ modelforge config add --provider openai --model gpt-4o-mini --api-key KEY
+ modelforge config add --provider ollama --model qwen3:1.7b --local
+ modelforge config use --provider openai --model gpt-4o-mini
+ modelforge config remove --provider openai --model gpt-4o-mini
+
+ # Testing & Usage
+ modelforge test --prompt "Hello, how are you?"                  # Test current model
+ modelforge test --prompt "Explain quantum computing" --verbose  # Debug mode
+
+ # Cache & Maintenance
+ modelforge models list --refresh     # Force refresh from models.dev
+ ```
+
+ ## Python API
+
+ ### Basic Usage
+
+ ```python
+ from modelforge.registry import ModelForgeRegistry
+
+ # Initialize registry
+ registry = ModelForgeRegistry()
+
+ # Get currently configured model
+ llm = registry.get_llm()
+
+ # Use directly with LangChain
+ from langchain_core.prompts import ChatPromptTemplate
+
+ prompt = ChatPromptTemplate.from_messages([("human", "{input}")])
+ chain = prompt | llm
+ response = chain.invoke({"input": "Tell me a joke"})
+ print(response)
+ ```
+
+ ### Advanced Usage
+
+ ```python
+ from modelforge.registry import ModelForgeRegistry
+
+ # Initialize with debug logging
+ registry = ModelForgeRegistry(verbose=True)
+
+ # Get specific model by provider and name
+ llm = registry.get_llm(provider_name="openai", model_alias="gpt-4o-mini")
+
+ # Use with full LangChain features
+ from langchain_core.prompts import ChatPromptTemplate
+ from langchain_core.output_parsers import StrOutputParser
+
+ # Create complex chains
+ prompt = ChatPromptTemplate.from_template("Explain {topic} in simple terms")
+ chain = prompt | llm | StrOutputParser()
+
+ # Use with streaming
+ for chunk in chain.stream({"topic": "quantum computing"}):
+     print(chunk, end="", flush=True)
+
+ # Batch processing
+ questions = [
+     "What is machine learning?",
+     "Explain neural networks",
+     "How does backpropagation work?"
+ ]
+ responses = chain.batch([{"topic": q} for q in questions])
+ ```
+
+ ### Configuration Management
+
+ ```python
+ from modelforge import config
+
+ # Get current model selection
+ current = config.get_current_model()
+ print(f"Current: {current.get('provider')}/{current.get('model')}")
+
+ # Check if models are configured
+ if not current:
+     print("No model selected. Configure with:")
+     print("modelforge config add --provider openai --model gpt-4o-mini")
+ ```
+
+ ### Error Handling
+
+ ```python
+ from modelforge.registry import ModelForgeRegistry
+ from modelforge.exceptions import ConfigurationError, ProviderError
+
+ try:
+     registry = ModelForgeRegistry()
+     llm = registry.get_llm()
+     response = llm.invoke("Hello world")
+ except ConfigurationError as e:
+     print(f"Configuration issue: {e}")
+     print("Run: modelforge config add --provider PROVIDER --model MODEL")
+ except ProviderError as e:
+     print(f"Provider error: {e}")
+     print("Check: modelforge auth status")
+ ```
+
+ ## Supported Providers
+
+ - **OpenAI**: GPT-4, GPT-4o, GPT-3.5-turbo
+ - **Google**: Gemini Pro, Gemini Flash
+ - **Ollama**: Local models (Llama, Qwen, Mistral)
+ - **GitHub Copilot**: Claude, GPT models via GitHub
+
+ ## Authentication
+
+ ModelForge supports multiple authentication methods:
+
+ - **API Keys**: Store securely in configuration
+ - **Device Flow**: Browser-based OAuth for GitHub Copilot
+ - **No Auth**: For local models like Ollama
+
+ ```bash
+ # API Key authentication
+ modelforge auth login --provider openai --api-key YOUR_KEY
+
+ # Device flow (GitHub Copilot)
+ modelforge auth login --provider github_copilot
+
+ # Check auth status
+ modelforge auth status
+ ```
+
+ ## Configuration
+
+ ModelForge uses a two-tier configuration system:
+
+ - **Global**: `~/.config/model-forge/config.json` (user-wide)
+ - **Local**: `./.model-forge/config.json` (project-specific)
+
+ Local config takes precedence over global when both exist.
+
+ ## Model Discovery
+
+ ```bash
+ # List all available models
+ modelforge models list
+
+ # Search models by name or capability
+ modelforge models search "gpt"
+
+ # Get detailed model info
+ modelforge models info --provider openai --model gpt-4o
+ ```
+
+ ## Development Setup
+
+ For contributors and developers:
+
+ ```bash
+ git clone https://github.com/smiao-icims/model-forge.git
+ cd model-forge
+ poetry install
+ poetry run pytest
+ ```
+
+ ## Documentation
+
+ - [Models.dev](https://models.dev) - Comprehensive model reference
+ - [GitHub Issues](https://github.com/smiao-icims/model-forge/issues) - Support and bug reports
+
+ ## License
+
+ MIT License - see LICENSE file for details.
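The two-tier configuration lookup described in the README above (local first, global fallback, auto-creation otherwise) can be sketched as follows. This is an illustrative sketch of the documented precedence rules, not ModelForge's actual implementation; only the two path locations come from the README.

```python
import json
from pathlib import Path

def resolve_config() -> dict:
    """Return the active config: local first, then global, else empty.

    Paths follow the documented locations:
    - local:  ./.model-forge/config.json
    - global: ~/.config/model-forge/config.json
    """
    local = Path(".model-forge") / "config.json"
    global_path = Path.home() / ".config" / "model-forge" / "config.json"
    for candidate in (local, global_path):  # local wins when both exist
        if candidate.is_file():
            return json.loads(candidate.read_text())
    # Neither exists; ModelForge would create a fresh global config here.
    return {}
```

With neither file present, the function simply returns an empty dict, mirroring the "auto-creation" starting state.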
@@ -1,5 +1,5 @@
- model_forge_llm-0.2.0.dist-info/licenses/LICENSE,sha256=g3TKKxfqGlKmBRdCKelUPNrTUBdvDzn0yopE8WeJNDw,1068
- modelforge/__init__.py,sha256=mTsSBmyZPmvMm92g9N0S_47QWM5raWuMLd2f9i81rNg,183
+ model_forge_llm-0.2.1.dist-info/licenses/LICENSE,sha256=g3TKKxfqGlKmBRdCKelUPNrTUBdvDzn0yopE8WeJNDw,1068
+ modelforge/__init__.py,sha256=3V-vaAfiBmDTC6eCaJYVfgtwEnK3R6RrC1-CQerneVw,183
  modelforge/auth.py,sha256=ju0ARgMCEb1cnEY_nJTbGR1o3MaEd-_H4YxFTpbE9Vg,18680
  modelforge/cli.py,sha256=QTgHhUq3gtOb74XLzRAhsX8wF0cdwXPgSwGee2BvIeU,27189
  modelforge/config.py,sha256=77dF1q2P2fo7hBF7YR8OBdcn3yVSSqkQXvJ3NoJvQh8,6814
@@ -7,8 +7,8 @@ modelforge/exceptions.py,sha256=Ld3RG1DuUiekAa1JM4-ivOqfHleN5o2rve1QJ-yZ5No,720
  modelforge/logging_config.py,sha256=SQlPMQkAIRIoItvcfM_P5ZEvfnsvc_O8tniuHN8keNM,1754
  modelforge/modelsdev.py,sha256=9hKPBlE4E2-FY_OTRfxyWkchpmHMwf4rzNh8om4UW5Q,14083
  modelforge/registry.py,sha256=MW3mxYrFdKw5URc4Tn0g9IJIo0DcC8h0YHAXKxjsL5E,10136
- model_forge_llm-0.2.0.dist-info/METADATA,sha256=fkH-IavtAegD3D4Uapef3eEfVdu372HPTjqavM6-43E,12515
- model_forge_llm-0.2.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- model_forge_llm-0.2.0.dist-info/entry_points.txt,sha256=y5CZ8kHJCfMVzoj4KX6dMnlMHb4DAvgRy6jUlM0a8wE,50
- model_forge_llm-0.2.0.dist-info/top_level.txt,sha256=ZyKkWfjMHbQ1mlQyLT2xZlbeT0M1jOjB4nRWVR_YiI8,11
- model_forge_llm-0.2.0.dist-info/RECORD,,
+ model_forge_llm-0.2.1.dist-info/METADATA,sha256=VJNMIdGWg3Y71Oua_mmb0KiOWUjdPxFY4N6NNHHLavo,9872
+ model_forge_llm-0.2.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ model_forge_llm-0.2.1.dist-info/entry_points.txt,sha256=y5CZ8kHJCfMVzoj4KX6dMnlMHb4DAvgRy6jUlM0a8wE,50
+ model_forge_llm-0.2.1.dist-info/top_level.txt,sha256=ZyKkWfjMHbQ1mlQyLT2xZlbeT0M1jOjB4nRWVR_YiI8,11
+ model_forge_llm-0.2.1.dist-info/RECORD,,
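Each RECORD line above pairs a file path with a digest and a byte size. Per the wheel RECORD format (PEP 376 and the binary-distribution spec), the digest is the unpadded urlsafe-base64 SHA-256 of the file's bytes, which is why the `__init__.py` entry changes between versions while its size stays 183. A minimal sketch of how such an entry is produced (the helper name and sample bytes are illustrative, not from the packaging tools):

```python
import base64
import hashlib

def record_entry(path: str, data: bytes) -> str:
    """Build a wheel RECORD line: <path>,sha256=<urlsafe-b64 digest>,<size>."""
    digest = base64.urlsafe_b64encode(hashlib.sha256(data).digest()).rstrip(b"=")
    return f"{path},sha256={digest.decode()},{len(data)}"

print(record_entry("modelforge/__init__.py", b'__version__ = "0.2.1"\n'))
```

Because the digest covers the exact file contents, even the one-character version bump below forces a new RECORD hash for `__init__.py`.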
modelforge/__init__.py CHANGED
@@ -1,6 +1,6 @@
  """ModelForge: A reusable library for managing LLM providers and authentication."""

- __version__ = "0.2.0"
+ __version__ = "0.2.1"

  from .registry import ModelForgeRegistry

@@ -1,327 +0,0 @@
- Metadata-Version: 2.4
- Name: model-forge-llm
- Version: 0.2.0
- Summary: A reusable library for managing LLM providers, authentication, and model selection.
- Author: ModelForge Contributors
- License: MIT License
-
- Copyright (c) 2025 Shuhai Miao
-
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in all
- copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.
-
- Project-URL: Homepage, https://github.com/your-org/model-forge
- Project-URL: Repository, https://github.com/your-org/model-forge
- Project-URL: Issues, https://github.com/your-org/model-forge/issues
- Project-URL: Documentation, https://model-forge.readthedocs.io
- Keywords: llm,ai,langchain,openai,ollama,providers,authentication
- Classifier: Development Status :: 4 - Beta
- Classifier: Intended Audience :: Developers
- Classifier: License :: OSI Approved :: MIT License
- Classifier: Operating System :: OS Independent
- Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.11
- Classifier: Programming Language :: Python :: 3.12
- Classifier: Topic :: Software Development :: Libraries :: Python Modules
- Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
- Requires-Python: <4.0,>=3.11
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: click>=8.1.7
- Requires-Dist: requests>=2.32.3
- Requires-Dist: langchain-core>=0.3.0
- Requires-Dist: langchain-openai>=0.3.0
- Requires-Dist: langchain-community>=0.3.0
- Requires-Dist: langchain-google-genai>=2.1.5
- Requires-Dist: langchain-github-copilot>=0.4.0
- Provides-Extra: dev
- Requires-Dist: pytest>=8.2.2; extra == "dev"
- Requires-Dist: pytest-mock>=3.14.0; extra == "dev"
- Requires-Dist: ruff>=0.7.0; extra == "dev"
- Requires-Dist: mypy>=1.11.0; extra == "dev"
- Requires-Dist: pre-commit>=3.8.0; extra == "dev"
- Requires-Dist: pytest-cov>=5.0.0; extra == "dev"
- Requires-Dist: types-requests>=2.32.0; extra == "dev"
- Requires-Dist: twine>=6.1.0; extra == "dev"
- Requires-Dist: requests-mock>=1.12.1; extra == "dev"
- Dynamic: license-file
-
- # Model Forge Library
-
- A reusable library for managing LLM providers, authentication, and model selection.
-
- This library is intended to be used by various Python-based AI projects to provide a consistent way to handle LLM interactions.
-
- ## High-Level Design
-
- The library is composed of three core modules:
-
- **`config`**: Manages configuration files with a two-tier system - global (`~/.config/model-forge/config.json`) and local (`./.model-forge/config.json`) - where all provider and model settings are stored.
- **`auth`**: Provides a suite of authentication strategies (API Key, OAuth 2.0 Device Flow, and a No-Op for local models) and handles secure credential storage in configuration files.
- **`registry`**: Acts as the main entry point and factory. It reads the configuration, invokes the appropriate authentication strategy, and instantiates ready-to-use, LangChain-compatible LLM objects.
-
- ## 🛠️ **Quick Start**
-
- ## **Option 1: Traditional Development Setup (Recommended)**
- Best for developers who will use ModelForge frequently:
-
- ```bash
- # 1. Run setup script
- ./setup.sh
-
- # 2. Use Poetry directly (faster for repeated use)
- poetry run modelforge config show
- poetry run modelforge config add --provider openai --model gpt-4
- ```
-
- ## **Option 2: Wrapper Script (Quick Usage)**
- Best for occasional use, CI/CD, or Docker environments:
-
- ```bash
- # Single command that handles setup + execution
- ./modelforge.sh config show
- ./modelforge.sh config add --provider openai --model gpt-4
- ```
-
- **Performance Comparison:**
- **Traditional**: ~0.9s per command
- **Wrapper**: ~1.6s per command (includes setup overhead)
-
- ## Local Development & Testing
-
- To test the library locally, you can use the built-in Command-Line Interface (CLI).
-
- **Option 1: Using the setup script (recommended)**
- ```bash
- ./setup.sh
- ```
-
- **Option 2: Manual setup**
- 1. **Set up a virtual environment:**
-    ```bash
-    python -m venv venv
-    source venv/bin/activate
-    ```
-
- 2. **Install the library in editable mode:**
-    This allows you to use the CLI and reflects any code changes immediately without reinstalling.
-    ```bash
-    pip install -e .
-    ```
-
- 3. **Use the CLI to manage your models:**
-    ```bash
-    # Show the current configuration
-    modelforge config show
-
-    # Add a local Ollama model
-    modelforge config add --provider ollama --model qwen3:1.7b
-
-    # Add OpenAI models with API key
-    modelforge config add --provider openai --model gpt-4o-mini --api-key "YOUR_API_KEY_HERE"
-    modelforge config add --provider openai --model gpt-4o --api-model-name "gpt-4o" --api-key "YOUR_API_KEY_HERE"
-
-    # Add a provider requiring an API key (Google Gemini)
-    modelforge config add --provider google --model gemini-pro --api-model-name "gemini-1.5-pro" --api-key "YOUR_API_KEY_HERE"
-
-    # Add GitHub Copilot and trigger the device authentication flow
-    modelforge config add --provider github_copilot --model claude-3.7-sonnet --dev-auth
-
-    # Set a model to be the default
-    modelforge config use --provider ollama --model qwen3:1.7b
-    ```
-
- ## Available Models and Providers
-
- **📚 Model Reference:**
- For a comprehensive list of available providers and models, visit **[models.dev](https://models.dev)** - your go-to resource for:
-
- **Provider Documentation**: Detailed information about each LLM provider
- **Model Specifications**: Complete model listings with capabilities and pricing
- **API References**: Authentication methods and integration guides
- **Model Comparisons**: Performance metrics and use case recommendations
-
- **Supported Providers:**
- **OpenAI**: GPT-4, GPT-4o, GPT-3.5-turbo, and more
- **Ollama**: Local models like Llama, Qwen, Mistral, and others
- **GitHub Copilot**: Claude, GPT-4, and other models via GitHub *(Enhanced Support)*
- **Google Gemini**: Gemini Pro, Gemini Flash, and other Google models
-
- ### 🚀 **Enhanced GitHub Copilot Support**
-
- ModelForge provides **two-tier GitHub Copilot integration** for optimal performance:
-
- #### **🎯 Tier 1: Dedicated ChatGitHubCopilot (Recommended)**
- When `langchain-github-copilot` is installed, ModelForge uses the specialized GitHub Copilot class:
-
- ```bash
- # Install the enhanced GitHub Copilot support
- poetry add langchain-github-copilot
-
- # Add GitHub Copilot with device authentication
- ./modelforge.sh config add --provider github_copilot --model claude-3.7-sonnet --dev-auth
- ```
-
- **Benefits:**
- ✅ **Optimized for 25-minute token lifecycle**
- ✅ **GitHub-specific rate limiting**
- ✅ **Enhanced error handling**
- ✅ **Built-in token refresh**
-
- #### **🔄 Tier 2: OpenAI-Compatible Fallback**
- If `langchain-github-copilot` is not available, ModelForge automatically falls back to OpenAI-compatible mode:
-
- ```bash
- # Works even without langchain-github-copilot installed
- ./modelforge.sh config add --provider github_copilot --model claude-3.7-sonnet --dev-auth
- ```
-
- **Characteristics:**
- ⚡ **Universal compatibility**
- 🛠️ **Manual token management**
- 📊 **Standard OpenAI interface**
-
- #### **🔍 Installation Options**
-
- ```bash
- # Option 1: Full installation with GitHub Copilot enhancement
- git clone <repo>
- cd model-forge
- ./setup.sh
- poetry add langchain-github-copilot
-
- # Option 2: Basic installation (fallback mode)
- git clone <repo>
- cd model-forge
- ./setup.sh
- # Uses OpenAI-compatible fallback automatically
- ```
-
- Use [models.dev](https://models.dev) to explore the full ecosystem and find the perfect model for your use case!
-
- ## Configuration System
-
- ModelForge uses a **two-tier configuration system** that provides flexibility for both personal and project-specific setups:
-
- ### 🌍 **Global Configuration** (`~/.config/model-forge/config.json`)
- **Location**: User's config directory (follows XDG Base Directory Standard)
- **Purpose**: System-wide model configurations shared across all projects
- **Use case**: Personal API keys, frequently used models, default settings
-
- ### 📁 **Local Configuration** (`./.model-forge/config.json`)
- **Location**: Current working directory (project-specific)
- **Purpose**: Project-specific model configurations
- **Use case**: Team projects, specific model requirements, environment-specific settings
-
- ### 🔄 **Precedence Rules**
- 1. **Local First**: If a local config exists, it takes precedence
- 2. **Global Fallback**: If no local config, the global config is used
- 3. **Auto-Creation**: If neither exists, a new global config is created
-
- ### 💡 **Managing Configurations**
- ```bash
- # View current configuration (shows which config is active)
- modelforge config show
-
- # Add to global configuration (default)
- modelforge config add --provider openai --model gpt-4o --api-key "YOUR_KEY"
-
- # Add to local configuration (project-specific)
- modelforge config add --provider openai --model gpt-4o --api-key "YOUR_KEY" --local
- ```
-
- Both configuration files use the same JSON structure and are fully compatible with all ModelForge features.
-
- ## Code Quality & Development
-
- ModelForge maintains high code quality standards with automated tooling:
-
- ### 🔧 **Quality Tools**
- **Ruff**: Fast linting and formatting
- **MyPy**: Type checking for reliability
- **Pre-commit**: Automated quality checks
- **GitHub Actions**: CI/CD pipeline
- **Pytest**: Comprehensive testing with coverage
-
- ### 📋 **Code Review Guidelines**
- We provide comprehensive code review guidelines for consistent quality:
- **[Detailed Guidelines](CODE_REVIEW_GUIDELINES.md)**: Complete review criteria and examples
- **[LLM Prompt](PROMPT_CODE_REVIEW.md)**: Quick prompt for AI-assisted code reviews
-
- ### 🚀 **Development Commands**
- ```bash
- # Format and check code
- poetry run ruff format .
- poetry run ruff check .
-
- # Type checking
- poetry run mypy src/modelforge
-
- # Run tests with coverage
- poetry run pytest --cov=src/modelforge
-
- # Run all quality checks
- poetry run pre-commit run --all-files
- ```
-
- ## Integration Guide
-
- To use this library in a host application (e.g., RAG-Forge):
-
- 1. **Install the library:**
-    ```bash
-    # Quick setup (recommended for development)
-    cd /path/to/model-forge && ./setup.sh
-
-    # Or install manually from a local path
-    pip install -e /path/to/model-forge
-
-    # In the future, you would install from a package registry like PyPI
-    # pip install model-forge
-    ```
-
- 2. **Use the `ModelForgeRegistry` in your application:**
-    ```python
-    from modelforge.registry import ModelForgeRegistry
-
-    # 1. Initialize the registry
-    registry = ModelForgeRegistry()
-
-    # 2. See which models the user has configured
-    available_models = registry.list_models()
-    print(f"Available models: {available_models}")
-    # Example output: ['ollama/qwen3:1.7b', 'github_copilot/claude-3.7-sonnet']
-
-    # 3. Get a fully authenticated model instance
-    if available_models:
-        model_id = available_models[0]
-        llm = registry.get_model_instance(model_id)
-
-        if llm:
-            # Now you have a LangChain-compatible LLM object to use
-            response = llm.invoke("Tell me a joke.")
-            print(response)
-    ```
-
- ## Features
-
- **Multi-Provider Support**: OpenAI, Ollama, GitHub Copilot, Google Gemini
- **Flexible Authentication**: API Key, OAuth 2.0 Device Flow, Local (no auth)
- **Secure Credential Storage**: Stores API keys and tokens in configuration files
- **LangChain Integration**: Provides ready-to-use LangChain-compatible model instances
- **Centralized Configuration**: Single configuration file managing all providers and models