weco 0.2.20__py3-none-any.whl → 0.2.22__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.2.20
+ Version: 0.2.22
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <contact@weco.ai>
  License: MIT
@@ -15,6 +15,7 @@ License-File: LICENSE
  Requires-Dist: requests
  Requires-Dist: rich
  Requires-Dist: packaging
+ Requires-Dist: gitingest
  Provides-Extra: dev
  Requires-Dist: ruff; extra == "dev"
  Requires-Dist: build; extra == "dev"
@@ -23,7 +24,10 @@ Dynamic: license-file

  <div align="center">

- # Weco: The Platform for Self-Improving Code
+ <div align="center">
+ <img src="assets/weco.svg" alt="Weco Logo" width="120" height="120" style="margin-bottom: 20px;">
+ <h1>Weco: The Platform for Self-Improving Code</h1>
+ </div>

  [![Python](https://img.shields.io/badge/Python-3.8.0+-blue)](https://www.python.org)
  [![docs](https://img.shields.io/website?url=https://docs.weco.ai/&label=docs)](https://docs.weco.ai/)
@@ -51,7 +55,7 @@ Example applications include:

  ## Overview

- The `weco` CLI leverages a tree search approach guided by Large Language Models (LLMs) to iteratively explore and refine your code. It automatically applies changes, runs your evaluation script, parses the results, and proposes further improvements based on the specified goal.
+ The `weco` CLI leverages a tree search approach guided by LLMs to iteratively explore and refine your code. It automatically applies changes, runs your evaluation script, parses the results, and proposes further improvements based on the specified goal.

  ![image](https://github.com/user-attachments/assets/a6ed63fa-9c40-498e-aa98-a873e5786509)

@@ -67,28 +71,48 @@ The `weco` CLI leverages a tree search approach guided by Large Language Models

  2. **Set Up LLM API Keys (Required):**

- `weco` requires API keys for the Large Language Models (LLMs) it uses internally. You **must** provide these keys via environment variables:
+ `weco` requires API keys for the LLMs it uses internally. You **must** provide these keys via environment variables:

- - **OpenAI:** `export OPENAI_API_KEY="your_key_here"`
- - **Anthropic:** `export ANTHROPIC_API_KEY="your_key_here"`
- - **Google DeepMind:** `export GEMINI_API_KEY="your_key_here"` (Google AI Studio has a free API usage quota. Create a key [here](https://aistudio.google.com/apikey) to use `weco` for free.)
+ - **OpenAI:** `export OPENAI_API_KEY="your_key_here"` (Create your API key [here](https://platform.openai.com/api-keys))
+ - **Anthropic:** `export ANTHROPIC_API_KEY="your_key_here"` (Create your API key [here](https://console.anthropic.com/settings/keys))
+ - **Google:** `export GEMINI_API_KEY="your_key_here"` (Google AI Studio has a free API usage quota. Create your API key [here](https://aistudio.google.com/apikey) to use `weco` for free.)

  ---

  ## Get Started

+ ### Quick Start (Recommended for New Users)
+
+ The easiest way to get started with Weco is to use the **interactive copilot**. Simply navigate to your project directory and run:
+
+ ```bash
+ weco
+ ```
+
+ Or specify a project path:
+
+ ```bash
+ weco /path/to/your/project
+ ```
+
+ This launches Weco's interactive copilot that will:
+
+ 1. **Analyze your codebase** using AI to understand your project structure and identify optimization opportunities
+ 2. **Suggest specific optimizations** tailored to your code (e.g., GPU kernel optimization, model improvements, prompt engineering)
+ 3. **Generate evaluation scripts** automatically or help you configure existing ones
+ 4. **Set up the complete optimization pipeline** with appropriate metrics and commands
+ 5. **Run the optimization** or provide you with the exact command to execute
+
  <div style="background-color: #fff3cd; border: 1px solid #ffeeba; padding: 15px; border-radius: 4px; margin-bottom: 15px;">
  <strong>⚠️ Warning: Code Modification</strong><br>
  <code>weco</code> directly modifies the file specified by <code>--source</code> during the optimization process. It is <strong>strongly recommended</strong> to use version control (like Git) to track changes and revert if needed. Alternatively, ensure you have a backup of your original file before running the command. Upon completion, the file will contain the best-performing version of the code found during the run.
  </div>

- ---
+ ### Manual Setup

- **Example: Optimizing Simple PyTorch Operations**
+ **Configure optimization parameters yourself** - If you need precise control over the optimization parameters, you can use the direct `weco run` command:

- This basic example shows how to optimize a simple PyTorch function for speedup.
-
- For more advanced examples, including [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md), [ML model optimization](/examples/spaceship-titanic/README.md), and [prompt engineering for math problems](https://github.com/WecoAI/weco-cli/tree/main/examples/prompt), please see the `README.md` files within the corresponding subdirectories under the [`examples/`](./examples/) folder.
+ **Example: Optimizing Simple PyTorch Operations**

  ```bash
  # Navigate to the example directory
@@ -97,7 +121,7 @@ cd examples/hello-kernel-world
  # Install dependencies
  pip install torch

- # Run Weco
+ # Run Weco with manual configuration
  weco run --source optimize.py \
    --eval-command "python evaluate.py --solution-path optimize.py --device cpu" \
    --metric speedup \
@@ -108,36 +132,87 @@ weco run --source optimize.py \

  **Note:** If you have an NVIDIA GPU, change the device in the `--eval-command` to `cuda`. If you are running this on Apple Silicon, set it to `mps`.

+ For more advanced examples, including [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md), [ML model optimization](/examples/spaceship-titanic/README.md), and [prompt engineering for math problems](examples/prompt/README.md), please see the `README.md` files within the corresponding subdirectories under the [`examples/`](examples/) folder.
+
  ---

  ### Arguments for `weco run`

  **Required:**

- | Argument | Description |
- | :--- | :--- |
- | `-s, --source` | Path to the source code file that will be optimized (e.g., `optimize.py`). |
- | `-c, --eval-command`| Command to run for evaluating the code in `--source`. This command should print the target `--metric` and its value to the terminal (stdout/stderr). See note below. |
- | `-m, --metric` | The name of the metric you want to optimize (e.g., 'accuracy', 'speedup', 'loss'). This metric name should match what's printed by your `--eval-command`. |
- | `-g, --goal` | `maximize`/`max` to maximize the `--metric` or `minimize`/`min` to minimize it. |
+ | Argument | Description | Example |
+ | :--- | :--- | :--- |
+ | `-s, --source` | Path to the source code file that will be optimized. | `-s model.py` |
+ | `-c, --eval-command`| Command to run for evaluating the code in `--source`. This command should print the target `--metric` and its value to the terminal (stdout/stderr). See note below. | `-c "python eval.py"` |
+ | `-m, --metric` | The name of the metric you want to optimize (e.g., 'accuracy', 'speedup', 'loss'). This metric name does not need to match what's printed by your `--eval-command` exactly (e.g., it's okay to use "speedup" instead of "Speedup:"). | `-m speedup` |
+ | `-g, --goal` | `maximize`/`max` to maximize the `--metric` or `minimize`/`min` to minimize it. | `-g maximize` |

  <br>

  **Optional:**

- | Argument | Description | Default |
- | :--- | :--- | :--- |
- | `-n, --steps` | Number of optimization steps (LLM iterations) to run. | 100 |
- | `-M, --model` | Model identifier for the LLM to use (e.g., `gpt-4o`, `claude-3.5-sonnet`). | `o4-mini` when `OPENAI_API_KEY` is set; `claude-3-7-sonnet-20250219` when `ANTHROPIC_API_KEY` is set; `gemini-2.5-pro-exp-03-25` when `GEMINI_API_KEY` is set (priority: `OPENAI_API_KEY` > `ANTHROPIC_API_KEY` > `GEMINI_API_KEY`). |
- | `-i, --additional-instructions`| Natural language description of specific instructions **or** path to a file containing detailed instructions to guide the LLM. | `None` |
- | `-l, --log-dir` | Path to the directory to log intermediate steps and final optimization result. | `.runs/` |
+ | Argument | Description | Default | Example |
+ | :--- | :--- | :--- | :--- |
+ | `-n, --steps` | Number of optimization steps (LLM iterations) to run. | 100 | `-n 50` |
+ | `-M, --model` | Model identifier for the LLM to use (e.g., `o4-mini`, `claude-sonnet-4-0`). | `o4-mini` when `OPENAI_API_KEY` is set; `claude-sonnet-4-0` when `ANTHROPIC_API_KEY` is set; `gemini-2.5-pro` when `GEMINI_API_KEY` is set. | `-M o4-mini` |
+ | `-i, --additional-instructions`| Natural language description of specific instructions **or** path to a file containing detailed instructions to guide the LLM. | `None` | `-i instructions.md` or `-i "Optimize the model for faster inference"`|
+ | `-l, --log-dir` | Path to the directory to log intermediate steps and final optimization result. | `.runs/` | `-l ./logs/` |

  ---

- ### Weco Dashboard
- To associate your optimization runs with your Weco account and view them on the Weco dashboard, you can log in. `weco` uses a device authentication flow
+ ### Authentication & Dashboard
+
+ Weco offers both **anonymous** and **authenticated** usage:
+
+ #### Anonymous Usage
+ You can use Weco without creating an account by providing LLM API keys via environment variables. This is perfect for trying out Weco or for users who prefer not to create accounts.
+
+ #### Authenticated Usage (Recommended)
+ To save your optimization runs and view them on the Weco dashboard, you can log in using Weco's secure device authentication flow:
+
+ 1. **During onboarding**: When you run `weco` for the first time, you'll be prompted to log in or skip
+ 2. **Manual login**: Use `weco logout` to clear credentials, then run `weco` again to re-authenticate
+ 3. **Device flow**: Weco will open your browser automatically and guide you through a secure OAuth-style authentication
+
  ![image (16)](https://github.com/user-attachments/assets/8a0a285b-4894-46fa-b6a2-4990017ca0c6)

+ **Benefits of authenticated usage:**
+ - **Run history**: View all your optimization runs on the Weco dashboard
+ - **Progress tracking**: Monitor long-running optimizations remotely
+ - **Enhanced support**: Get better assistance with your optimization challenges
+
+ ---
+
+ ## Command Reference
+
+ ### Basic Usage Patterns
+
+ | Command | Description | When to Use |
+ |---------|-------------|-------------|
+ | `weco` | Launch interactive onboarding | **Recommended for beginners** - Analyzes your codebase and guides you through setup |
+ | `weco /path/to/project` | Launch onboarding for specific project | When working with a project in a different directory |
+ | `weco run [options]` | Direct optimization execution | **For advanced users** - When you know exactly what to optimize and how |
+ | `weco logout` | Clear authentication credentials | To switch accounts or troubleshoot authentication issues |
+
+ ### Model Selection
+
+ You can specify which LLM model to use with the `-M` or `--model` flag:
+
+ ```bash
+ # Use with onboarding
+ weco --model gpt-4o
+
+ # Use with direct execution
+ weco run --model claude-3.5-sonnet --source optimize.py [other options...]
+ ```
+
+ **Available models:**
+ - `gpt-4o`, `o4-mini` (requires `OPENAI_API_KEY`)
+ - `claude-3.5-sonnet`, `claude-sonnet-4-20250514` (requires `ANTHROPIC_API_KEY`)
+ - `gemini-2.5-pro` (requires `GEMINI_API_KEY`)
+
+ If no model is specified, Weco automatically selects the best available model based on your API keys.
+
  ---

  ### Performance & Expectations
@@ -174,16 +249,16 @@ Weco will parse this output to extract the numerical value (1.5 in this case) as

  ## Contributing

- We welcome contributions! To get started:
+ We welcome your contributions! To get started:

- 1. **Fork and Clone the Repository:**
+ 1. **Fork & Clone the Repository:**

  ```bash
  git clone https://github.com/WecoAI/weco-cli.git
  cd weco-cli
  ```

- 2. **Install Development Dependencies:**
+ 2. **Install Dependencies:**

  ```bash
  pip install -e ".[dev]"
@@ -195,8 +270,8 @@ We welcome contributions! To get started:
  git checkout -b feature/your-feature-name
  ```

- 4. **Make Your Changes:** Ensure your code adheres to our style guidelines and includes relevant tests.
+ 4. **Make Changes:** Ensure your code adheres to our style guidelines and includes relevant tests.

- 5. **Commit and Push** your changes, then open a pull request with a clear description of your enhancements.
+ 5. **Commit, Push & Open a PR**: Commit your changes, push them, and open a pull request with a clear description of your enhancements.

  ---
@@ -0,0 +1,14 @@
+ weco/__init__.py,sha256=ClO0uT6GKOA0iSptvP0xbtdycf0VpoPTq37jHtvlhtw,303
+ weco/api.py,sha256=X2b9iKhpXkEqmQWjecHcQruj0bRIiTpMOGGJMLpkCug,13827
+ weco/auth.py,sha256=6bDQv07sx7uxA9CrN3HqUdCHV6nqXO41PGCicquvB00,9919
+ weco/chatbot.py,sha256=H6d5yK9MB3pqpE7XVh_HAi1YAmxKy0v_xdozVSlKPCc,36959
+ weco/cli.py,sha256=Jy7kQEsNKdV7Wds9Z0DIWBeLpEVyssIiOBiQ4zCl3Lw,7862
+ weco/optimizer.py,sha256=z86-js_rvLMv3J8zCqvtc1xJC0EA0WqrN9_BlmX2RK4,23259
+ weco/panels.py,sha256=Cnro4Q65n7GGh0FBXuB_OGSxRVobd4k5lOuBViTQaaM,15591
+ weco/utils.py,sha256=5Pbhv_5wbTRv93Ws7aJfIOtcxeeqNrDRT3bV6YFOdgM,6032
+ weco-0.2.22.dist-info/licenses/LICENSE,sha256=p_GQqJBvuZgkLNboYKyH-5dhpTDlKs2wq2TVM55WrWE,1065
+ weco-0.2.22.dist-info/METADATA,sha256=EQ6vcDHjDyU5MbyNucVQU9LccJtz2i6wzghjMB25M6U,15251
+ weco-0.2.22.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ weco-0.2.22.dist-info/entry_points.txt,sha256=ixJ2uClALbCpBvnIR6BXMNck8SHAab8eVkM9pIUowcs,39
+ weco-0.2.22.dist-info/top_level.txt,sha256=F0N7v6e2zBSlsorFv-arAq2yDxQbzX3KVO8GxYhPUeE,5
+ weco-0.2.22.dist-info/RECORD,,
@@ -1,12 +0,0 @@
- weco/__init__.py,sha256=npWmRgLxfVK69GdyxIujnI87xqmPCBrZWxxAxL_QQOc,478
- weco/api.py,sha256=xHCyQPto1Lv9QysiOFwVf5NnWDh6LBCNfPLyq-L7nys,5873
- weco/auth.py,sha256=IPfiLthcNRkPyM8pWHTyDLvikw83sigacpY1PmeA03Y,2343
- weco/cli.py,sha256=e4h5bxeg2n95AlYXanfxLbcURWchjWTES2Kwx5AjKn0,36115
- weco/panels.py,sha256=lsTHTh-XdYMH3ZV_WBteEcIt2hTWGGtqfUjGlYRHl70,13598
- weco/utils.py,sha256=LVTBo3dduJmhlbotcYoUW2nLx6IRtKs4eDFR52Qltcg,5244
- weco-0.2.20.dist-info/licenses/LICENSE,sha256=p_GQqJBvuZgkLNboYKyH-5dhpTDlKs2wq2TVM55WrWE,1065
- weco-0.2.20.dist-info/METADATA,sha256=rK-Y9Q0zwaKUBS0bNZfUNvL82RXUiijVcIbW3i_IKKk,10955
- weco-0.2.20.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- weco-0.2.20.dist-info/entry_points.txt,sha256=ixJ2uClALbCpBvnIR6BXMNck8SHAab8eVkM9pIUowcs,39
- weco-0.2.20.dist-info/top_level.txt,sha256=F0N7v6e2zBSlsorFv-arAq2yDxQbzX3KVO8GxYhPUeE,5
- weco-0.2.20.dist-info/RECORD,,