code-puppy 0.0.148__py3-none-any.whl → 0.0.150__py3-none-any.whl
- {code_puppy-0.0.148.dist-info → code_puppy-0.0.150.dist-info}/METADATA +6 -76
- {code_puppy-0.0.148.dist-info → code_puppy-0.0.150.dist-info}/RECORD +6 -6
- {code_puppy-0.0.148.data → code_puppy-0.0.150.data}/data/code_puppy/models.json +0 -0
- {code_puppy-0.0.148.dist-info → code_puppy-0.0.150.dist-info}/WHEEL +0 -0
- {code_puppy-0.0.148.dist-info → code_puppy-0.0.150.dist-info}/entry_points.txt +0 -0
- {code_puppy-0.0.148.dist-info → code_puppy-0.0.150.dist-info}/licenses/LICENSE +0 -0
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: code-puppy
-Version: 0.0.148
+Version: 0.0.150
 Summary: Code generation agent
 Project-URL: repository, https://github.com/mpfaffenberger/code_puppy
 Project-URL: HomePage, https://github.com/mpfaffenberger/code_puppy
@@ -92,37 +92,6 @@ export AZURE_OPENAI_ENDPOINT=...
 
 code-puppy --interactive
 ```
-Running in a super weird corporate environment?
-
-Try this:
-```bash
-export MODEL_NAME=my-custom-model
-export YOLO_MODE=true
-export MODELS_JSON_PATH=/path/to/custom/models.json
-```
-
-```json
-{
-    "my-custom-model": {
-        "type": "custom_openai",
-        "name": "o4-mini-high",
-        "max_requests_per_minute": 100,
-        "max_retries": 3,
-        "retry_base_delay": 10,
-        "custom_endpoint": {
-            "url": "https://my.custom.endpoint:8080",
-            "headers": {
-                "X-Api-Key": "<Your_API_Key>",
-                "Some-Other-Header": "<Some_Value>"
-            },
-            "ca_certs_path": "/path/to/cert.pem"
-        }
-    }
-}
-```
-Note that the `OPENAI_API_KEY` or `CEREBRAS_API_KEY` env variable must be set when using `custom_openai` endpoints.
-
-Open an issue if your environment is somehow weirder than mine.
 
 Run specific tasks or engage in interactive mode:
 
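Taken together, the removed corporate-environment setup amounts to a short shell flow: export the provider key required by the note, point `MODELS_JSON_PATH` at the custom `models.json`, and select the model by name. A minimal sketch, reusing only the variable names and paths from the removed lines:

```bash
# Provider key; the removed note says OPENAI_API_KEY or CEREBRAS_API_KEY
# must be set when using custom_openai endpoints
export OPENAI_API_KEY=sk-...

# Point code-puppy at the custom model registry and pick the model by name
export MODELS_JSON_PATH=/path/to/custom/models.json
export MODEL_NAME=my-custom-model

code-puppy --interactive
```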
@@ -133,7 +102,7 @@ code-puppy "write me a C++ hello world program in /tmp/main.cpp then compile it
 
 ## Requirements
 
-- Python 3.
+- Python 3.11+
 - OpenAI API key (for GPT models)
 - Gemini API key (for Google's Gemini models)
 - Cerebras API key (for Cerebras models)
@@ -151,50 +120,19 @@ For examples and more information about agent rules, visit [https://agent.md](ht
 
 ## Using MCP Servers for External Tools
 
-
+Use the `/mcp` command to manage MCP (list, start, stop, status, etc.)
 
-
-An MCP server is a standalone process (can be local or remote) that offers specialized functionality (plugins, doc search, code analysis, etc.). Code Puppy can connect to one or more MCP servers at startup, unlocking these extra commands inside your coding agent.
+In the TUI you can click on MCP settings on the footer and interact with a mini-marketplace.
 
-
-Create a config file at `~/.code_puppy/mcp_servers.json`. Here’s an example that connects to a local Context7 MCP server:
+Watch this video for examples! https://www.youtube.com/watch?v=1t1zEetOqlo
 
-```json
-{
-    "mcp_servers": {
-        "context7": {
-            "url": "https://mcp.context7.com/sse"
-        }
-    }
-}
-```
-
-You can list multiple objects (one per server).
-
-### How to Use
-- Drop the config file in `~/.code_puppy/mcp_servers.json`.
-- Start your MCP (like context7, or anything compatible).
-- Run Code Puppy as usual. It’ll discover and use all configured MCP servers.
-
-#### Example usage
-```bash
-code-puppy --interactive
-# Then ask: Use context7 to look up FastAPI docs!
-```
-
-That’s it!
-If you need to run more exotic setups or connect to remote MCPs, just update your `mcp_servers.json` accordingly.
-
-**NOTE:** Want to add your own server or tool? Just follow the config pattern above—no code changes needed!
-
----
 
 ## Round Robin Model Distribution
 
 Code Puppy supports **Round Robin model distribution** to help you overcome rate limits and distribute load across multiple AI models. This feature automatically cycles through configured models with each request, maximizing your API usage while staying within rate limits.
 
 ### Configuration
-Add a round-robin model configuration to your
+Add a round-robin model configuration to your `~/.code_puppy/extra_models.json` file:
 
 ```bash
 export CEREBRAS_API_KEY1=csk-...
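The added lines in the hunk above replace the JSON-based MCP setup with the `/mcp` command. A sketch of how its subcommands might look in an interactive session; only the subcommand names (list, start, stop, status) appear in the diff, and the `context7` argument is a hypothetical carried over from the removed example:

```bash
code-puppy --interactive
# Then, at the prompt (argument forms are assumptions):
#   /mcp list              list configured MCP servers
#   /mcp start context7    start a server
#   /mcp status            check server status
#   /mcp stop context7     stop it again
```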
@@ -244,14 +182,6 @@ Then just use /model and tab to select your round-robin model!
 
 The `rotate_every` parameter controls how many requests are made to each model before rotating to the next one. In this example, the round-robin model will use each Qwen model for 5 consecutive requests before moving to the next model in the sequence.
 
-### Benefits
-- **Rate Limit Protection**: Automatically distribute requests across multiple models
-- **Load Balancing**: Share workload between different model providers
-- **Fallback Resilience**: Continue working even if one model has temporary issues
-- **Cost Optimization**: Use different models for different types of tasks
-
-**NOTE:** Unlike fallback models, round-robin models distribute load but don't automatically retry with another model on failure. If a request fails, it will raise the exception directly.
-
 ---
 
 ## Create your own Agent!!!
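The hunks above cut off before the actual `extra_models.json` round-robin entry, so the shape below is a hypothetical sketch: the `rotate_every` value of 5 and the idea of cycling Qwen models come from the context lines, while the top-level key, the `type` value, and the `models` field are assumptions not confirmed by the diff:

```json
{
    "my-round-robin": {
        "type": "round_robin",
        "rotate_every": 5,
        "models": ["qwen-model-1", "qwen-model-2", "qwen-model-3"]
    }
}
```

With `rotate_every: 5`, each listed model would serve five consecutive requests before rotating to the next, matching the description above.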
@@ -126,9 +126,9 @@ code_puppy/tui/tests/test_sidebar_history_navigation.py,sha256=JGiyua8A2B8dLfwiE
 code_puppy/tui/tests/test_status_bar.py,sha256=nYT_FZGdmqnnbn6o0ZuOkLtNUtJzLSmtX8P72liQ5Vo,1797
 code_puppy/tui/tests/test_timestamped_history.py,sha256=nVXt9hExZZ_8MFP-AZj4L4bB_1Eo_mc-ZhVICzTuw3I,1799
 code_puppy/tui/tests/test_tools.py,sha256=kgzzAkK4r0DPzQwHHD4cePpVNgrHor6cFr05Pg6DBWg,2687
-code_puppy-0.0.148.data/data/code_puppy/models.json,…
-code_puppy-0.0.148.dist-info/METADATA,…
-code_puppy-0.0.148.dist-info/WHEEL,…
-code_puppy-0.0.148.dist-info/entry_points.txt,…
-code_puppy-0.0.148.dist-info/licenses/LICENSE,…
-code_puppy-0.0.148.dist-info/RECORD,,
+code_puppy-0.0.150.data/data/code_puppy/models.json,sha256=dAfpMMI2EEeOMv0ynHSmMuJAYDLcZrs5gCLX3voC4-A,3252
+code_puppy-0.0.150.dist-info/METADATA,sha256=o6JLMG3E0Jcc2ZU0MM7oE6E0crV9fWf0CsHVF-bj4xQ,19485
+code_puppy-0.0.150.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+code_puppy-0.0.150.dist-info/entry_points.txt,sha256=d8YkBvIUxF-dHNJAj-x4fPEqizbY5d_TwvYpc01U5kw,58
+code_puppy-0.0.150.dist-info/licenses/LICENSE,sha256=31u8x0SPgdOq3izJX41kgFazWsM43zPEF9eskzqbJMY,1075
+code_puppy-0.0.150.dist-info/RECORD,,