weco 0.2.19__py3-none-any.whl → 0.2.22__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- weco/__init__.py +2 -8
- weco/api.py +245 -48
- weco/auth.py +164 -3
- weco/chatbot.py +797 -0
- weco/cli.py +129 -685
- weco/optimizer.py +479 -0
- weco/panels.py +59 -10
- weco/utils.py +31 -3
- {weco-0.2.19.dist-info → weco-0.2.22.dist-info}/METADATA +110 -32
- weco-0.2.22.dist-info/RECORD +14 -0
- {weco-0.2.19.dist-info → weco-0.2.22.dist-info}/WHEEL +1 -1
- weco-0.2.19.dist-info/RECORD +0 -12
- {weco-0.2.19.dist-info → weco-0.2.22.dist-info}/entry_points.txt +0 -0
- {weco-0.2.19.dist-info → weco-0.2.22.dist-info}/licenses/LICENSE +0 -0
- {weco-0.2.19.dist-info → weco-0.2.22.dist-info}/top_level.txt +0 -0
{weco-0.2.19.dist-info → weco-0.2.22.dist-info}/METADATA

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.2.19
+ Version: 0.2.22
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <contact@weco.ai>
  License: MIT
@@ -15,6 +15,7 @@ License-File: LICENSE
  Requires-Dist: requests
  Requires-Dist: rich
  Requires-Dist: packaging
+ Requires-Dist: gitingest
  Provides-Extra: dev
  Requires-Dist: ruff; extra == "dev"
  Requires-Dist: build; extra == "dev"
@@ -23,12 +24,18 @@ Dynamic: license-file

  <div align="center">

-
+ <div align="center">
+ <img src="assets/weco.svg" alt="Weco Logo" width="120" height="120" style="margin-bottom: 20px;">
+ <h1>Weco: The Platform for Self-Improving Code</h1>
+ </div>

  [](https://www.python.org)
  [](https://docs.weco.ai/)
  [](https://badge.fury.io/py/weco)
  [](https://arxiv.org/abs/2502.13138)
+ [](https://colab.research.google.com/github/WecoAI/weco-cli/blob/main/examples/hello-kernel-world/colab_notebook_walkthrough.ipynb)
+
+ `pip install weco`

  </div>

@@ -48,7 +55,7 @@ Example applications include:

  ## Overview

- The `weco` CLI leverages a tree search approach guided by
+ The `weco` CLI leverages a tree search approach guided by LLMs to iteratively explore and refine your code. It automatically applies changes, runs your evaluation script, parses the results, and proposes further improvements based on the specified goal.

  

@@ -64,28 +71,48 @@ The `weco` CLI leverages a tree search approach guided by Large Language Models

  2. **Set Up LLM API Keys (Required):**

- `weco` requires API keys for the
+ `weco` requires API keys for the LLMs it uses internally. You **must** provide these keys via environment variables:

- - **OpenAI:** `export OPENAI_API_KEY="your_key_here"`
- - **Anthropic:** `export ANTHROPIC_API_KEY="your_key_here"`
- - **Google
+ - **OpenAI:** `export OPENAI_API_KEY="your_key_here"` (Create your API key [here](https://platform.openai.com/api-keys))
+ - **Anthropic:** `export ANTHROPIC_API_KEY="your_key_here"` (Create your API key [here](https://console.anthropic.com/settings/keys))
+ - **Google:** `export GEMINI_API_KEY="your_key_here"` (Google AI Studio has a free API usage quota. Create your API key [here](https://aistudio.google.com/apikey) to use `weco` for free.)

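As an aside to the key-setup instructions added above, a minimal shell session might look like the following sketch. The key value and project path are placeholders; any one of the three supported keys is enough.

```bash
# Export one supported key for the current shell session (placeholder value),
# then launch weco from the project you want to optimize.
export GEMINI_API_KEY="your_key_here"   # or OPENAI_API_KEY / ANTHROPIC_API_KEY
cd /path/to/your/project
weco
```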
  ---

  ## Get Started

+ ### Quick Start (Recommended for New Users)
+
+ The easiest way to get started with Weco is to use the **interactive copilot**. Simply navigate to your project directory and run:
+
+ ```bash
+ weco
+ ```
+
+ Or specify a project path:
+
+ ```bash
+ weco /path/to/your/project
+ ```
+
+ This launches Weco's interactive copilot that will:
+
+ 1. **Analyze your codebase** using AI to understand your project structure and identify optimization opportunities
+ 2. **Suggest specific optimizations** tailored to your code (e.g., GPU kernel optimization, model improvements, prompt engineering)
+ 3. **Generate evaluation scripts** automatically or help you configure existing ones
+ 4. **Set up the complete optimization pipeline** with appropriate metrics and commands
+ 5. **Run the optimization** or provide you with the exact command to execute
+
  <div style="background-color: #fff3cd; border: 1px solid #ffeeba; padding: 15px; border-radius: 4px; margin-bottom: 15px;">
  <strong>⚠️ Warning: Code Modification</strong><br>
  <code>weco</code> directly modifies the file specified by <code>--source</code> during the optimization process. It is <strong>strongly recommended</strong> to use version control (like Git) to track changes and revert if needed. Alternatively, ensure you have a backup of your original file before running the command. Upon completion, the file will contain the best-performing version of the code found during the run.
  </div>

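Because `weco` edits the `--source` file in place (see the warning above), a quick version-control snapshot before a run is cheap insurance. A sketch with plain git; the commit message is a placeholder and `optimize.py` is the example file from the walkthrough below:

```bash
# Snapshot the working tree before letting weco modify the --source file.
git add -A
git commit -m "snapshot before weco run"

# ... run weco / weco run ...

# If the optimized version is not what you want, restore the original file:
git checkout -- optimize.py
```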
-
+ ### Manual Setup

- **
-
- This basic example shows how to optimize a simple PyTorch function for speedup.
+ **Configure optimization parameters yourself** - If you need precise control over the optimization parameters, you can use the direct `weco run` command:

-
+ **Example: Optimizing Simple PyTorch Operations**

  ```bash
  # Navigate to the example directory
@@ -94,7 +121,7 @@ cd examples/hello-kernel-world
  # Install dependencies
  pip install torch

- # Run Weco
+ # Run Weco with manual configuration
  weco run --source optimize.py \
  --eval-command "python evaluate.py --solution-path optimize.py --device cpu" \
  --metric speedup \
@@ -105,36 +132,87 @@ weco run --source optimize.py \

  **Note:** If you have an NVIDIA GPU, change the device in the `--eval-command` to `cuda`. If you are running this on Apple Silicon, set it to `mps`.

+ For more advanced examples, including [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md), [ML model optimization](/examples/spaceship-titanic/README.md), and [prompt engineering for math problems](examples/prompt/README.md), please see the `README.md` files within the corresponding subdirectories under the [`examples/`](examples/) folder.
+
  ---

  ### Arguments for `weco run`

  **Required:**

- | Argument | Description |
- | :------------------ | :------------------ |
- | `-s, --source` | Path to the source code file that will be optimized
- | `-c, --eval-command`| Command to run for evaluating the code in `--source`. This command should print the target `--metric` and its value to the terminal (stdout/stderr). See note below. |
- | `-m, --metric` | The name of the metric you want to optimize (e.g., 'accuracy', 'speedup', 'loss'). This metric name
- | `-g, --goal`
+ | Argument | Description | Example |
+ | :------------------ | :------------------ | :-------------------- |
+ | `-s, --source` | Path to the source code file that will be optimized. | `-s model.py` |
+ | `-c, --eval-command`| Command to run for evaluating the code in `--source`. This command should print the target `--metric` and its value to the terminal (stdout/stderr). See note below. | `-c "python eval.py"` |
+ | `-m, --metric` | The name of the metric you want to optimize (e.g., 'accuracy', 'speedup', 'loss'). This metric name does not need to match what's printed by your `--eval-command` exactly (e.g., its okay to use "speedup" instead of "Speedup:"). | `-m speedup` |
+ | `-g, --goal` | `maximize`/`max` to maximize the `--metric` or `minimize`/`min` to minimize it. | `-g maximize` |

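To make the `--eval-command` contract concrete: the command only has to emit the metric name and a numeric value to stdout/stderr (a later part of this README describes parsing the value 1.5 from such a line). A purely illustrative, hypothetical stand-in:

```bash
# Hypothetical stand-in for an evaluation command: weco looks for the metric
# name followed by a numeric value in the command's terminal output.
echo "speedup: 1.5"
```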
  <br>

  **Optional:**

- | Argument | Description | Default |
- | :----------------------------- | :----------------------------- | :------- |
- | `-n, --steps` | Number of optimization steps (LLM iterations) to run. | 100 |
- | `-M, --model` | Model identifier for the LLM to use (e.g., `
- | `-i, --additional-instructions`| Natural language description of specific instructions **or** path to a file containing detailed instructions to guide the LLM. | `None`
- | `-l, --log-dir` | Path to the directory to log intermediate steps and final optimization result. | `.runs/` |
+ | Argument | Description | Default | Example |
+ | :----------------------------- | :----------------------------- | :------- | :------------------ |
+ | `-n, --steps` | Number of optimization steps (LLM iterations) to run. | 100 | `-n 50` |
+ | `-M, --model` | Model identifier for the LLM to use (e.g., `o4-mini`, `claude-sonnet-4-0`). | `o4-mini` when `OPENAI_API_KEY` is set; `claude-sonnet-4-0` when `ANTHROPIC_API_KEY` is set; `gemini-2.5-pro` when `GEMINI_API_KEY` is set. | `-M o4-mini` |
+ | `-i, --additional-instructions`| Natural language description of specific instructions **or** path to a file containing detailed instructions to guide the LLM. | `None` | `-i instructions.md` or `-i "Optimize the model for faster inference"`|
+ | `-l, --log-dir` | Path to the directory to log intermediate steps and final optimization result. | `.runs/` | `-l ./logs/` |

  ---

- ###
-
+ ### Authentication & Dashboard
+
+ Weco offers both **anonymous** and **authenticated** usage:
+
+ #### Anonymous Usage
+ You can use Weco without creating an account by providing LLM API keys via environment variables. This is perfect for trying out Weco or for users who prefer not to create accounts.
+
+ #### Authenticated Usage (Recommended)
+ To save your optimization runs and view them on the Weco dashboard, you can log in using Weco's secure device authentication flow:
+
+ 1. **During onboarding**: When you run `weco` for the first time, you'll be prompted to log in or skip
+ 2. **Manual login**: Use `weco logout` to clear credentials, then run `weco` again to re-authenticate
+ 3. **Device flow**: Weco will open your browser automatically and guide you through a secure OAuth-style authentication
+
  

+ **Benefits of authenticated usage:**
+ - **Run history**: View all your optimization runs on the Weco dashboard
+ - **Progress tracking**: Monitor long-running optimizations remotely
+ - **Enhanced support**: Get better assistance with your optimization challenges
+
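The authentication flow described above can be exercised directly from the shell; a small sketch using only the documented commands:

```bash
# Clear any saved credentials, then run weco again to trigger the device login flow.
weco logout
weco
```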
184
|
+
---
|
|
185
|
+
|
|
186
|
+
## Command Reference
|
|
187
|
+
|
|
188
|
+
### Basic Usage Patterns
|
|
189
|
+
|
|
190
|
+
| Command | Description | When to Use |
|
|
191
|
+
|---------|-------------|-------------|
|
|
192
|
+
| `weco` | Launch interactive onboarding | **Recommended for beginners** - Analyzes your codebase and guides you through setup |
|
|
193
|
+
| `weco /path/to/project` | Launch onboarding for specific project | When working with a project in a different directory |
|
|
194
|
+
| `weco run [options]` | Direct optimization execution | **For advanced users** - When you know exactly what to optimize and how |
|
|
195
|
+
| `weco logout` | Clear authentication credentials | To switch accounts or troubleshoot authentication issues |
|
|
196
|
+
|
|
197
|
+
### Model Selection
|
|
198
|
+
|
|
199
|
+
You can specify which LLM model to use with the `-M` or `--model` flag:
|
|
200
|
+
|
|
201
|
+
```bash
|
|
202
|
+
# Use with onboarding
|
|
203
|
+
weco --model gpt-4o
|
|
204
|
+
|
|
205
|
+
# Use with direct execution
|
|
206
|
+
weco run --model claude-3.5-sonnet --source optimize.py [other options...]
|
|
207
|
+
```
|
|
208
|
+
|
|
209
|
+
**Available models:**
|
|
210
|
+
- `gpt-4o`, `o4-mini` (requires `OPENAI_API_KEY`)
|
|
211
|
+
- `claude-3.5-sonnet`, `claude-sonnet-4-20250514` (requires `ANTHROPIC_API_KEY`)
|
|
212
|
+
- `gemini-2.5-pro` (requires `GEMINI_API_KEY`)
|
|
213
|
+
|
|
214
|
+
If no model is specified, Weco automatically selects the best available model based on your API keys.
|
|
215
|
+
|
|
138
216
|
---
|
|
139
217
|
|
|
140
218
|
### Performance & Expectations
|
|
@@ -171,16 +249,16 @@ Weco will parse this output to extract the numerical value (1.5 in this case) as

  ## Contributing

- We welcome contributions! To get started:
+ We welcome your contributions! To get started:

- 1. **Fork
+ 1. **Fork & Clone the Repository:**

  ```bash
  git clone https://github.com/WecoAI/weco-cli.git
  cd weco-cli
  ```

- 2. **Install
+ 2. **Install Dependencies:**

  ```bash
  pip install -e ".[dev]"
@@ -192,8 +270,8 @@ We welcome contributions! To get started:
  git checkout -b feature/your-feature-name
  ```

- 4. **Make
+ 4. **Make Changes:** Ensure your code adheres to our style guidelines and includes relevant tests.

- 5. **Commit
+ 5. **Commit, Push & Open a PR**: Commit your changes, and open a pull request with a clear description of your enhancements.

  ---
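A sketch of steps 4 and 5 in shell form; the commit message is a placeholder and the branch name comes from the snippet above:

```bash
# Commit your work on the feature branch, push it, and then open a pull
# request against WecoAI/weco-cli with a clear description of the change.
git add -A
git commit -m "Describe your enhancement"
git push origin feature/your-feature-name
```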
weco-0.2.22.dist-info/RECORD (new file)

@@ -0,0 +1,14 @@
+ weco/__init__.py,sha256=ClO0uT6GKOA0iSptvP0xbtdycf0VpoPTq37jHtvlhtw,303
+ weco/api.py,sha256=X2b9iKhpXkEqmQWjecHcQruj0bRIiTpMOGGJMLpkCug,13827
+ weco/auth.py,sha256=6bDQv07sx7uxA9CrN3HqUdCHV6nqXO41PGCicquvB00,9919
+ weco/chatbot.py,sha256=H6d5yK9MB3pqpE7XVh_HAi1YAmxKy0v_xdozVSlKPCc,36959
+ weco/cli.py,sha256=Jy7kQEsNKdV7Wds9Z0DIWBeLpEVyssIiOBiQ4zCl3Lw,7862
+ weco/optimizer.py,sha256=z86-js_rvLMv3J8zCqvtc1xJC0EA0WqrN9_BlmX2RK4,23259
+ weco/panels.py,sha256=Cnro4Q65n7GGh0FBXuB_OGSxRVobd4k5lOuBViTQaaM,15591
+ weco/utils.py,sha256=5Pbhv_5wbTRv93Ws7aJfIOtcxeeqNrDRT3bV6YFOdgM,6032
+ weco-0.2.22.dist-info/licenses/LICENSE,sha256=p_GQqJBvuZgkLNboYKyH-5dhpTDlKs2wq2TVM55WrWE,1065
+ weco-0.2.22.dist-info/METADATA,sha256=EQ6vcDHjDyU5MbyNucVQU9LccJtz2i6wzghjMB25M6U,15251
+ weco-0.2.22.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ weco-0.2.22.dist-info/entry_points.txt,sha256=ixJ2uClALbCpBvnIR6BXMNck8SHAab8eVkM9pIUowcs,39
+ weco-0.2.22.dist-info/top_level.txt,sha256=F0N7v6e2zBSlsorFv-arAq2yDxQbzX3KVO8GxYhPUeE,5
+ weco-0.2.22.dist-info/RECORD,,
weco-0.2.19.dist-info/RECORD (deleted)

@@ -1,12 +0,0 @@
- weco/__init__.py,sha256=npWmRgLxfVK69GdyxIujnI87xqmPCBrZWxxAxL_QQOc,478
- weco/api.py,sha256=lJJ0j0-bABiQXDlRb43fCo7ky0N_HwfZgFdMktRKQ90,6635
- weco/auth.py,sha256=IPfiLthcNRkPyM8pWHTyDLvikw83sigacpY1PmeA03Y,2343
- weco/cli.py,sha256=eI468fxpMTfGPL-aX6EMYxh0NuaRxpaLVF_Jj2DiFhU,36383
- weco/panels.py,sha256=pM_YGnmcXM_1CBcxo_EAzOV3g_4NFdLS4MqDqx7THbA,13563
- weco/utils.py,sha256=LVTBo3dduJmhlbotcYoUW2nLx6IRtKs4eDFR52Qltcg,5244
- weco-0.2.19.dist-info/licenses/LICENSE,sha256=p_GQqJBvuZgkLNboYKyH-5dhpTDlKs2wq2TVM55WrWE,1065
- weco-0.2.19.dist-info/METADATA,sha256=3VBVsCqr7p332A10KsLr168GvOIKcCOWWfGDv8ViF7I,10729
- weco-0.2.19.dist-info/WHEEL,sha256=Nw36Djuh_5VDukK0H78QzOX-_FQEo6V37m3nkm96gtU,91
- weco-0.2.19.dist-info/entry_points.txt,sha256=ixJ2uClALbCpBvnIR6BXMNck8SHAab8eVkM9pIUowcs,39
- weco-0.2.19.dist-info/top_level.txt,sha256=F0N7v6e2zBSlsorFv-arAq2yDxQbzX3KVO8GxYhPUeE,5
- weco-0.2.19.dist-info/RECORD,,
File without changes
|
|
File without changes
|
|
File without changes
|