weco 0.2.7__tar.gz → 0.2.8__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (39)
  1. {weco-0.2.7 → weco-0.2.8}/.github/workflows/release.yml +2 -2
  2. {weco-0.2.7 → weco-0.2.8}/PKG-INFO +22 -129
  3. {weco-0.2.7 → weco-0.2.8}/README.md +20 -127
  4. weco-0.2.8/examples/cuda/README.md +40 -0
  5. weco-0.2.8/examples/metal/README.md +0 -0
  6. weco-0.2.8/examples/spaceship-titanic/README.md +62 -0
  7. weco-0.2.8/examples/triton/README.md +0 -0
  8. {weco-0.2.7 → weco-0.2.8}/pyproject.toml +2 -2
  9. {weco-0.2.7 → weco-0.2.8}/weco/__init__.py +1 -1
  10. {weco-0.2.7 → weco-0.2.8}/weco/api.py +3 -8
  11. {weco-0.2.7 → weco-0.2.8}/weco/cli.py +8 -8
  12. {weco-0.2.7 → weco-0.2.8}/weco/panels.py +12 -3
  13. {weco-0.2.7 → weco-0.2.8}/weco.egg-info/PKG-INFO +22 -129
  14. {weco-0.2.7 → weco-0.2.8}/weco.egg-info/SOURCES.txt +3 -0
  15. weco-0.2.7/examples/spaceship-titanic/README.md +0 -93
  16. {weco-0.2.7 → weco-0.2.8}/.github/workflows/lint.yml +0 -0
  17. {weco-0.2.7 → weco-0.2.8}/.gitignore +0 -0
  18. {weco-0.2.7 → weco-0.2.8}/LICENSE +0 -0
  19. {weco-0.2.7 → weco-0.2.8}/examples/cuda/evaluate.py +0 -0
  20. {weco-0.2.7 → weco-0.2.8}/examples/cuda/guide.md +0 -0
  21. {weco-0.2.7 → weco-0.2.8}/examples/cuda/optimize.py +0 -0
  22. {weco-0.2.7 → weco-0.2.8}/examples/hello-kernel-world/evaluate.py +0 -0
  23. {weco-0.2.7 → weco-0.2.8}/examples/hello-kernel-world/optimize.py +0 -0
  24. {weco-0.2.7 → weco-0.2.8}/examples/metal/evaluate.py +0 -0
  25. {weco-0.2.7 → weco-0.2.8}/examples/metal/examples.rst +0 -0
  26. {weco-0.2.7 → weco-0.2.8}/examples/metal/optimize.py +0 -0
  27. {weco-0.2.7 → weco-0.2.8}/examples/spaceship-titanic/baseline.py +0 -0
  28. {weco-0.2.7 → weco-0.2.8}/examples/spaceship-titanic/evaluate.py +0 -0
  29. {weco-0.2.7 → weco-0.2.8}/examples/spaceship-titanic/optimize.py +0 -0
  30. {weco-0.2.7 → weco-0.2.8}/examples/spaceship-titanic/requirements-test.txt +0 -0
  31. {weco-0.2.7 → weco-0.2.8}/examples/spaceship-titanic/utils.py +0 -0
  32. {weco-0.2.7 → weco-0.2.8}/examples/triton/evaluate.py +0 -0
  33. {weco-0.2.7 → weco-0.2.8}/examples/triton/optimize.py +0 -0
  34. {weco-0.2.7 → weco-0.2.8}/setup.cfg +0 -0
  35. {weco-0.2.7 → weco-0.2.8}/weco/utils.py +0 -0
  36. {weco-0.2.7 → weco-0.2.8}/weco.egg-info/dependency_links.txt +0 -0
  37. {weco-0.2.7 → weco-0.2.8}/weco.egg-info/entry_points.txt +0 -0
  38. {weco-0.2.7 → weco-0.2.8}/weco.egg-info/requires.txt +0 -0
  39. {weco-0.2.7 → weco-0.2.8}/weco.egg-info/top_level.txt +0 -0
{weco-0.2.7 → weco-0.2.8}/.github/workflows/release.yml
@@ -90,7 +90,7 @@ jobs:
  GITHUB_TOKEN: ${{ github.token }}
  run: >-
  gh release create
- 'v0.2.7'
+ 'v0.2.8'
  --repo '${{ github.repository }}'
  --notes ""
@@ -102,5 +102,5 @@ jobs:
  # sigstore-produced signatures and certificates.
  run: >-
  gh release upload
- 'v0.2.7' dist/**
+ 'v0.2.8' dist/**
  --repo '${{ github.repository }}'
{weco-0.2.7 → weco-0.2.8}/PKG-INFO
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.2.7
+ Version: 0.2.8
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <contact@weco.ai>
  License: MIT
@@ -9,7 +9,7 @@ Keywords: AI,Code Optimization,Code Generation
  Classifier: Programming Language :: Python :: 3
  Classifier: Operating System :: OS Independent
  Classifier: License :: OSI Approved :: MIT License
- Requires-Python: >=3.12
+ Requires-Python: >=3.8
  Description-Content-Type: text/markdown
  License-File: LICENSE
  Requires-Dist: requests
@@ -20,13 +20,19 @@ Requires-Dist: build; extra == "dev"
  Requires-Dist: setuptools_scm; extra == "dev"
  Dynamic: license-file

- # Weco CLI Code Optimizer for Machine Learning Engineers
+ # Weco: The Evaluation-Driven AI Code Optimizer

  [![Python](https://img.shields.io/badge/Python-3.12.0-blue)](https://www.python.org)
- [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
  [![PyPI version](https://badge.fury.io/py/weco.svg)](https://badge.fury.io/py/weco)
+ [![AIDE](https://img.shields.io/badge/AI--Driven_Exploration-arXiv-orange?style=flat-square&logo=arxiv)](https://arxiv.org/abs/2502.13138)

- `weco` is a command-line interface for interacting with Weco AI's code optimizer, powered by [AI-Driven Exploration](https://arxiv.org/abs/2502.13138). It helps you automate the improvement of your code for tasks like GPU kernel optimization, feature engineering, model development, and prompt engineering.
+ Weco systematically optimizes your code, guided directly by your evaluation metrics.
+
+ Example applications include:
+
+ - **GPU Kernel Optimization**: Reimplement PyTorch functions using CUDA, Triton or Metal, optimizing for `latency`, `throughput`, or `memory_bandwidth`.
+ - **Model Development**: Tune feature transformations or architectures, optimizing for `validation_accuracy`, `AUC`, or `Sharpe Ratio`.
+ - **Prompt Engineering**: Refine prompts for LLMs, optimizing for `win_rate`, `relevance`, or `format_adherence`.

  https://github.com/user-attachments/assets/cb724ef1-bff6-4757-b457-d3b2201ede81

@@ -40,37 +46,6 @@ The `weco` CLI leverages a tree search approach guided by Large Language Models

  ---

- ## Example Use Cases
-
- Here's how `weco` can be applied to common ML engineering tasks:
-
- * **GPU Kernel Optimization:**
-   * **Goal:** Improve the speed or efficiency of low-level GPU code.
-   * **How:** `weco` iteratively refines CUDA, Triton, Metal, or other kernel code specified in your `--source` file.
-   * **`--eval-command`:** Typically runs a script that compiles the kernel, executes it, and benchmarks performance (e.g., latency, throughput).
-   * **`--metric`:** Examples include `latency`, `throughput`, `TFLOPS`, `memory_bandwidth`. Optimize to `minimize` latency or `maximize` throughput.
-
- * **Feature Engineering:**
-   * **Goal:** Discover better data transformations or feature combinations for your machine learning models.
-   * **How:** `weco` explores different processing steps or parameters within your feature transformation code (`--source`).
-   * **`--eval-command`:** Executes a script that applies the features, trains/validates a model using those features, and prints a performance score.
-   * **`--metric`:** Examples include `accuracy`, `AUC`, `F1-score`, `validation_loss`. Usually optimized to `maximize` accuracy/AUC/F1 or `minimize` loss.
-
- * **Model Development:**
-   * **Goal:** Tune hyperparameters or experiment with small architectural changes directly within your model's code.
-   * **How:** `weco` modifies hyperparameter values (like learning rate, layer sizes if defined in the code) or structural elements in your model definition (`--source`).
-   * **`--eval-command`:** Runs your model training and evaluation script, printing the key performance indicator.
-   * **`--metric`:** Examples include `validation_accuracy`, `test_loss`, `inference_time`, `perplexity`. Optimize according to the metric's nature (e.g., `maximize` accuracy, `minimize` loss).
-
- * **Prompt Engineering:**
-   * **Goal:** Refine prompts used within larger systems (e.g., for LLM interactions) to achieve better or more consistent outputs.
-   * **How:** `weco` modifies prompt templates, examples, or instructions stored in the `--source` file.
-   * **`--eval-command`:** Executes a script that uses the prompt, generates an output, evaluates that output against desired criteria (e.g., using another LLM, checking for keywords, format validation), and prints a score.
-   * **`--metric`:** Examples include `quality_score`, `relevance`, `task_success_rate`, `format_adherence`. Usually optimized to `maximize`.
-
- ---
-
-
  ## Setup

  1. **Install the Package:**
@@ -97,13 +72,20 @@ Here's how `weco` can be applied to common ML engineering tasks:

  ---

- ### Examples
+ ### Example: Optimizing Simple PyTorch Operations
+
+ This basic example shows how to optimize a simple PyTorch function for speedup.

- **Example 1: Optimizing PyTorch simple operations**
+ For more advanced examples, including **[Metal/MLX](/examples/metal/README.md), [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md)**, and **[ML model optimization](/examples/spaceship-titanic/README.md)**, please see the `README.md` files within the corresponding subdirectories under the [`examples/`](./examples/) folder.

  ```bash
+ # Navigate to the example directory
  cd examples/hello-kernel-world
- pip install torch
+
+ # Install dependencies
+ pip install torch
+
+ # Run Weco
  weco --source optimize.py \
  --eval-command "python evaluate.py --solution-path optimize.py --device cpu" \
  --metric speedup \
@@ -113,96 +95,7 @@ weco --source optimize.py \
  --additional-instructions "Fuse operations in the forward method while ensuring the max float deviation remains small. Maintain the same format of the code."
  ```

- Note that if you have an NVIDIA gpu, change the device to `cuda`. If you are running this on Apple Silicon, set it to `mps`.
-
- **Example 2: Optimizing MLX operations with instructions from a file**
-
- Lets optimize a 2D convolution operation in [`mlx`](https://github.com/ml-explore/mlx) using [Metal](https://developer.apple.com/documentation/metal/). Sometimes, additional context or instructions are too complex for a single command-line string. You can provide a path to a file containing these instructions.
-
- ```bash
- cd examples/metal
- pip install mlx
- weco --source optimize.py \
- --eval-command "python evaluate.py --solution-path optimize.py" \
- --metric speedup \
- --maximize true \
- --steps 30 \
- --model gemini-2.5-pro-exp-03-25 \
- --additional-instructions examples.rst
- ```
-
- **Example 3: Level Agnostic Optimization: Causal Self Attention with Triton & CUDA**
-
- Given how useful causal multihead self attention is to transformers, we've seen its wide adoption across ML engineering and AI research. Its great to keep things at a high-level (in PyTorch) when doing research, but when moving to production you often need to write highly customized low-level kernels to make things run as fast as they can. The `weco` CLI can optimize kernels across a variety of different abstraction levels and frameworks. Example 2 uses Metal but lets explore two more frameworks:
-
- 1. [Triton](https://github.com/triton-lang/triton)
- ```bash
- cd examples/triton
- pip install torch triton
- weco --source optimize.py \
- --eval-command "python evaluate.py --solution-path optimize.py" \
- --metric speedup \
- --maximize true \
- --steps 30 \
- --model gemini-2.5-pro-exp-03-25 \
- --additional-instructions "Use triton to optimize the code while ensuring a small max float diff. Maintain the same code format."
- ```
-
- 2. [CUDA](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html)
- ```bash
- cd examples/cuda
- pip install torch
- weco --source optimize.py \
- --eval-command "python evaluate.py --solution-path optimize.py" \
- --metric speedup \
- --maximize true \
- --steps 30 \
- --model gemini-2.5-pro-exp-03-25 \
- --additional-instructions guide.md
- ```
-
- **Example 4: Optimizing a Classification Model**
-
- This example demonstrates optimizing a script for a Kaggle competition ([Spaceship Titanic](https://www.kaggle.com/competitions/spaceship-titanic/overview)) to improve classification accuracy. The additional instructions are provided via a separate file (`examples/spaceship-titanic/README.md`).
-
- First, install the requirements for the example environment:
- ```bash
- pip install -r examples/spaceship-titanic/requirements-test.txt
- ```
- And run utility function once to prepare the dataset
- ```bash
- python examples/spaceship-titanic/utils.py
- ```
-
- You should see the following structure at `examples/spaceship-titanic`. You need to prepare the kaggle credentials for downloading the dataset.
- ```
- .
- ├── baseline.py
- ├── evaluate.py
- ├── optimize.py
- ├── private
- │   └── test.csv
- ├── public
- │   ├── sample_submission.csv
- │   ├── test.csv
- │   └── train.csv
- ├── README.md
- ├── requirements-test.txt
- └── utils.py
- ```
-
- Then, execute the optimization command:
- ```bash
- weco --source examples/spaceship-titanic/optimize.py \
- --eval-command "python examples/spaceship-titanic/optimize.py && python examples/spaceship-titanic/evaluate.py" \
- --metric accuracy \
- --maximize true \
- --steps 10 \
- --model gemini-2.5-pro-exp-03-25 \
- --additional-instructions examples/spaceship-titanic/README.md
- ```
-
- *The [baseline.py](examples/spaceship-titanic/baseline.py) is provided as a start point for optimization*
+ **Note:** If you have an NVIDIA GPU, change the device in the `--eval-command` to `cuda`. If you are running this on Apple Silicon, set it to `mps`.

  ---

{weco-0.2.7 → weco-0.2.8}/README.md
@@ -1,10 +1,16 @@
- # Weco CLI Code Optimizer for Machine Learning Engineers
+ # Weco: The Evaluation-Driven AI Code Optimizer

  [![Python](https://img.shields.io/badge/Python-3.12.0-blue)](https://www.python.org)
- [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
  [![PyPI version](https://badge.fury.io/py/weco.svg)](https://badge.fury.io/py/weco)
+ [![AIDE](https://img.shields.io/badge/AI--Driven_Exploration-arXiv-orange?style=flat-square&logo=arxiv)](https://arxiv.org/abs/2502.13138)

- `weco` is a command-line interface for interacting with Weco AI's code optimizer, powered by [AI-Driven Exploration](https://arxiv.org/abs/2502.13138). It helps you automate the improvement of your code for tasks like GPU kernel optimization, feature engineering, model development, and prompt engineering.
+ Weco systematically optimizes your code, guided directly by your evaluation metrics.
+
+ Example applications include:
+
+ - **GPU Kernel Optimization**: Reimplement PyTorch functions using CUDA, Triton or Metal, optimizing for `latency`, `throughput`, or `memory_bandwidth`.
+ - **Model Development**: Tune feature transformations or architectures, optimizing for `validation_accuracy`, `AUC`, or `Sharpe Ratio`.
+ - **Prompt Engineering**: Refine prompts for LLMs, optimizing for `win_rate`, `relevance`, or `format_adherence`.

  https://github.com/user-attachments/assets/cb724ef1-bff6-4757-b457-d3b2201ede81

@@ -18,37 +24,6 @@ The `weco` CLI leverages a tree search approach guided by Large Language Models

  ---

- ## Example Use Cases
-
- Here's how `weco` can be applied to common ML engineering tasks:
-
- * **GPU Kernel Optimization:**
-   * **Goal:** Improve the speed or efficiency of low-level GPU code.
-   * **How:** `weco` iteratively refines CUDA, Triton, Metal, or other kernel code specified in your `--source` file.
-   * **`--eval-command`:** Typically runs a script that compiles the kernel, executes it, and benchmarks performance (e.g., latency, throughput).
-   * **`--metric`:** Examples include `latency`, `throughput`, `TFLOPS`, `memory_bandwidth`. Optimize to `minimize` latency or `maximize` throughput.
-
- * **Feature Engineering:**
-   * **Goal:** Discover better data transformations or feature combinations for your machine learning models.
-   * **How:** `weco` explores different processing steps or parameters within your feature transformation code (`--source`).
-   * **`--eval-command`:** Executes a script that applies the features, trains/validates a model using those features, and prints a performance score.
-   * **`--metric`:** Examples include `accuracy`, `AUC`, `F1-score`, `validation_loss`. Usually optimized to `maximize` accuracy/AUC/F1 or `minimize` loss.
-
- * **Model Development:**
-   * **Goal:** Tune hyperparameters or experiment with small architectural changes directly within your model's code.
-   * **How:** `weco` modifies hyperparameter values (like learning rate, layer sizes if defined in the code) or structural elements in your model definition (`--source`).
-   * **`--eval-command`:** Runs your model training and evaluation script, printing the key performance indicator.
-   * **`--metric`:** Examples include `validation_accuracy`, `test_loss`, `inference_time`, `perplexity`. Optimize according to the metric's nature (e.g., `maximize` accuracy, `minimize` loss).
-
- * **Prompt Engineering:**
-   * **Goal:** Refine prompts used within larger systems (e.g., for LLM interactions) to achieve better or more consistent outputs.
-   * **How:** `weco` modifies prompt templates, examples, or instructions stored in the `--source` file.
-   * **`--eval-command`:** Executes a script that uses the prompt, generates an output, evaluates that output against desired criteria (e.g., using another LLM, checking for keywords, format validation), and prints a score.
-   * **`--metric`:** Examples include `quality_score`, `relevance`, `task_success_rate`, `format_adherence`. Usually optimized to `maximize`.
-
- ---
-
-
  ## Setup

  1. **Install the Package:**
@@ -75,13 +50,20 @@ Here's how `weco` can be applied to common ML engineering tasks:

  ---

- ### Examples
+ ### Example: Optimizing Simple PyTorch Operations
+
+ This basic example shows how to optimize a simple PyTorch function for speedup.

- **Example 1: Optimizing PyTorch simple operations**
+ For more advanced examples, including **[Metal/MLX](/examples/metal/README.md), [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md)**, and **[ML model optimization](/examples/spaceship-titanic/README.md)**, please see the `README.md` files within the corresponding subdirectories under the [`examples/`](./examples/) folder.

  ```bash
+ # Navigate to the example directory
  cd examples/hello-kernel-world
- pip install torch
+
+ # Install dependencies
+ pip install torch
+
+ # Run Weco
  weco --source optimize.py \
  --eval-command "python evaluate.py --solution-path optimize.py --device cpu" \
  --metric speedup \
@@ -91,96 +73,7 @@ weco --source optimize.py \
  --additional-instructions "Fuse operations in the forward method while ensuring the max float deviation remains small. Maintain the same format of the code."
  ```

- Note that if you have an NVIDIA gpu, change the device to `cuda`. If you are running this on Apple Silicon, set it to `mps`.
-
- **Example 2: Optimizing MLX operations with instructions from a file**
-
- Lets optimize a 2D convolution operation in [`mlx`](https://github.com/ml-explore/mlx) using [Metal](https://developer.apple.com/documentation/metal/). Sometimes, additional context or instructions are too complex for a single command-line string. You can provide a path to a file containing these instructions.
-
- ```bash
- cd examples/metal
- pip install mlx
- weco --source optimize.py \
- --eval-command "python evaluate.py --solution-path optimize.py" \
- --metric speedup \
- --maximize true \
- --steps 30 \
- --model gemini-2.5-pro-exp-03-25 \
- --additional-instructions examples.rst
- ```
-
- **Example 3: Level Agnostic Optimization: Causal Self Attention with Triton & CUDA**
-
- Given how useful causal multihead self attention is to transformers, we've seen its wide adoption across ML engineering and AI research. Its great to keep things at a high-level (in PyTorch) when doing research, but when moving to production you often need to write highly customized low-level kernels to make things run as fast as they can. The `weco` CLI can optimize kernels across a variety of different abstraction levels and frameworks. Example 2 uses Metal but lets explore two more frameworks:
-
- 1. [Triton](https://github.com/triton-lang/triton)
- ```bash
- cd examples/triton
- pip install torch triton
- weco --source optimize.py \
- --eval-command "python evaluate.py --solution-path optimize.py" \
- --metric speedup \
- --maximize true \
- --steps 30 \
- --model gemini-2.5-pro-exp-03-25 \
- --additional-instructions "Use triton to optimize the code while ensuring a small max float diff. Maintain the same code format."
- ```
-
- 2. [CUDA](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html)
- ```bash
- cd examples/cuda
- pip install torch
- weco --source optimize.py \
- --eval-command "python evaluate.py --solution-path optimize.py" \
- --metric speedup \
- --maximize true \
- --steps 30 \
- --model gemini-2.5-pro-exp-03-25 \
- --additional-instructions guide.md
- ```
-
- **Example 4: Optimizing a Classification Model**
-
- This example demonstrates optimizing a script for a Kaggle competition ([Spaceship Titanic](https://www.kaggle.com/competitions/spaceship-titanic/overview)) to improve classification accuracy. The additional instructions are provided via a separate file (`examples/spaceship-titanic/README.md`).
-
- First, install the requirements for the example environment:
- ```bash
- pip install -r examples/spaceship-titanic/requirements-test.txt
- ```
- And run utility function once to prepare the dataset
- ```bash
- python examples/spaceship-titanic/utils.py
- ```
-
- You should see the following structure at `examples/spaceship-titanic`. You need to prepare the kaggle credentials for downloading the dataset.
- ```
- .
- ├── baseline.py
- ├── evaluate.py
- ├── optimize.py
- ├── private
- │   └── test.csv
- ├── public
- │   ├── sample_submission.csv
- │   ├── test.csv
- │   └── train.csv
- ├── README.md
- ├── requirements-test.txt
- └── utils.py
- ```
-
- Then, execute the optimization command:
- ```bash
- weco --source examples/spaceship-titanic/optimize.py \
- --eval-command "python examples/spaceship-titanic/optimize.py && python examples/spaceship-titanic/evaluate.py" \
- --metric accuracy \
- --maximize true \
- --steps 10 \
- --model gemini-2.5-pro-exp-03-25 \
- --additional-instructions examples/spaceship-titanic/README.md
- ```
-
- *The [baseline.py](examples/spaceship-titanic/baseline.py) is provided as a start point for optimization*
+ **Note:** If you have an NVIDIA GPU, change the device in the `--eval-command` to `cuda`. If you are running this on Apple Silicon, set it to `mps`.

  ---

weco-0.2.8/examples/cuda/README.md (new file)
@@ -0,0 +1,40 @@
+ # Example: Optimizing PyTorch Self-Attention with CUDA
+
+ This example showcases using Weco to optimize a PyTorch causal multi-head self-attention implementation by generating custom [CUDA](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html) kernels. This approach aims for low-level optimization beyond standard PyTorch or even Triton for potentially higher performance on NVIDIA GPUs.
+
+ This example uses a separate Markdown file (`guide.md`) to provide detailed instructions and context to the LLM.
+
+ ## Setup
+
+ 1. Ensure you are in the `examples/cuda` directory.
+ 2. Install the required dependency:
+    ```bash
+    pip install torch
+    ```
+    *(Note: This example requires a compatible NVIDIA GPU and the CUDA Toolkit installed on your system for compiling and running the generated CUDA code.)*
+
+ ## Optimization Command
+
+ Run the following command to start the optimization process:
+
+ ```bash
+ weco --source optimize.py \
+   --eval-command "python evaluate.py --solution-path optimize.py" \
+   --metric speedup \
+   --maximize true \
+   --steps 30 \
+   --model gemini-2.5-pro-exp-03-25 \
+   --additional-instructions guide.md
+ ```
+
+ ### Explanation
+
+ * `--source optimize.py`: The initial PyTorch self-attention code to be optimized with CUDA.
+ * `--eval-command "python evaluate.py --solution-path optimize.py"`: Runs the evaluation script, which compiles (if necessary) and benchmarks the CUDA-enhanced code in `optimize.py` against a baseline, printing the `speedup`.
+ * `--metric speedup`: The optimization target metric.
+ * `--maximize true`: Weco aims to increase the speedup.
+ * `--steps 30`: The number of optimization iterations.
+ * `--model gemini-2.5-pro-exp-03-25`: The LLM used for code generation.
+ * `--additional-instructions guide.md`: Points Weco to a file containing detailed instructions for the LLM on how to write the CUDA kernels, handle compilation (e.g., using `torch.utils.cpp_extension`), manage data types, and ensure correctness.
+
+ Weco will iteratively modify `optimize.py`, potentially generating and integrating CUDA C++ code, guided by the evaluation results and the instructions in `guide.md`.
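The `torch.utils.cpp_extension` workflow mentioned in the explanation can be pictured with the minimal sketch below. It is illustrative only, not Weco output and not the example's `optimize.py`; the kernel, extension name, and float32 input are assumptions, and it requires an NVIDIA GPU plus the CUDA Toolkit.

```python
# Minimal sketch: JIT-compile a tiny CUDA kernel and call it from Python.
import torch
from torch.utils.cpp_extension import load_inline

cuda_source = r"""
__global__ void square_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

torch::Tensor square(torch::Tensor x) {
    auto out = torch::empty_like(x);
    int n = x.numel();
    square_kernel<<<(n + 255) / 256, 256>>>(
        x.data_ptr<float>(), out.data_ptr<float>(), n);
    return out;
}
"""

# cpp_sources declares the function, cuda_sources defines it, and
# functions=[...] generates the Python bindings.
mod = load_inline(
    name="square_ext",  # hypothetical extension name
    cpp_sources="torch::Tensor square(torch::Tensor x);",
    cuda_sources=cuda_source,
    functions=["square"],
)

x = torch.randn(1 << 20, device="cuda")
torch.testing.assert_close(mod.square(x), x * x)  # correctness check
```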
weco-0.2.8/examples/metal/README.md
File without changes
weco-0.2.8/examples/spaceship-titanic/README.md (new file)
@@ -0,0 +1,62 @@
+ # Example: Optimizing a Kaggle Classification Model (Spaceship Titanic)
+
+ This example demonstrates using Weco to optimize a Python script designed for the [Spaceship Titanic Kaggle competition](https://www.kaggle.com/competitions/spaceship-titanic/overview). The goal is to improve the model's `accuracy` metric by modifying the feature engineering and modeling steps within the `optimize.py` script.
+
+ This example uses the `README.md` file (this file) to provide additional instructions to the LLM.
+
+ ## Setup
+
+ 1. Ensure you are in the `examples/spaceship-titanic` directory.
+ 2. **Kaggle Credentials:** You need your Kaggle API credentials (`kaggle.json`) configured to download the competition dataset. Place the `kaggle.json` file in `~/.kaggle/` or set the `KAGGLE_USERNAME` and `KAGGLE_KEY` environment variables. See [Kaggle API documentation](https://github.com/Kaggle/kaggle-api#api-credentials) for details.
+ 3. **Install Dependencies:** Install the required Python packages:
+    ```bash
+    pip install -r requirements-test.txt
+    ```
+ 4. **Prepare Data:** Run the utility script once to download the dataset from Kaggle and place it in the expected `public/` and `private/` subdirectories:
+    ```bash
+    python utils.py
+    ```
+    After running `utils.py`, your directory structure should look like this:
+    ```
+    .
+    ├── baseline.py
+    ├── evaluate.py
+    ├── optimize.py
+    ├── private
+    │   └── test.csv
+    ├── public
+    │   ├── sample_submission.csv
+    │   ├── test.csv
+    │   └── train.csv
+    ├── README.md            # This file
+    ├── requirements-test.txt
+    └── utils.py
+    ```
+
+ ## Optimization Command
+
+ Run the following command to start optimizing the model:
+
+ ```bash
+ weco --source optimize.py \
+   --eval-command "python optimize.py && python evaluate.py" \
+   --metric accuracy \
+   --maximize true \
+   --steps 10 \
+   --model gemini-2.5-pro-exp-03-25 \
+   --additional-instructions README.md
+ ```
+
+ ### Explanation
+
+ * `--source optimize.py`: The script containing the model training and prediction logic to be optimized. It starts identical to `baseline.py`.
+ * `--eval-command "python optimize.py && python evaluate.py"`: This is a multi-step evaluation.
+   * `python optimize.py`: Runs the modified script to generate predictions (`submission.csv`).
+   * `python evaluate.py`: Compares the generated `submission.csv` against the ground truth (using the training data as a proxy evaluation set in this example) and prints the `accuracy` metric.
+ * `--metric accuracy`: The target metric Weco should optimize.
+ * `--maximize true`: Weco aims to increase the accuracy.
+ * `--steps 10`: The number of optimization iterations.
+ * `--model gemini-2.5-pro-exp-03-25`: The LLM driving the optimization.
+ * `--additional-instructions README.md`: Provides this file as context to the LLM, which might include hints about feature engineering techniques, model types to try, or specific data columns to focus on (you can add such instructions to this file if desired).
+
+ Weco will iteratively modify the feature engineering or modeling code within `optimize.py`, run the evaluation pipeline, and use the resulting `accuracy` to guide further improvements. The `baseline.py` file is provided as a reference starting point.
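The two-step `--eval-command` described above can be sketched as follows. This is illustrative, not the shipped `evaluate.py`: the `submission.csv` name comes from the explanation, while the location of the ground-truth labels (`private/test.csv` with a `Transported` column) is an assumption; the example text says a proxy split of the training data is used.

```python
# Illustrative sketch of the evaluation step: compare the predictions written
# by optimize.py against held-out labels and print the metric weco parses.
import pandas as pd

submission = pd.read_csv("submission.csv")  # produced by `python optimize.py`
truth = pd.read_csv("private/test.csv")     # assumed to hold ground-truth Transported labels

merged = submission.merge(truth, on="PassengerId", suffixes=("_pred", "_true"))
accuracy = (merged["Transported_pred"] == merged["Transported_true"]).mean()

# weco reads the value of --metric accuracy from this line
print(f"accuracy: {accuracy:.4f}")
```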
weco-0.2.8/examples/triton/README.md
File without changes
{weco-0.2.7 → weco-0.2.8}/pyproject.toml
@@ -10,9 +10,9 @@ authors = [
  ]
  description = "Documentation for `weco`, a CLI for using Weco AI's code optimizer."
  readme = "README.md"
- version = "0.2.7"
+ version = "0.2.8"
  license = {text = "MIT"}
- requires-python = ">=3.12"
+ requires-python = ">=3.8"
  dependencies = ["requests", "rich"]
  keywords = ["AI", "Code Optimization", "Code Generation"]
  classifiers = [
{weco-0.2.7 → weco-0.2.8}/weco/__init__.py
@@ -1,4 +1,4 @@
  # DO NOT EDIT
- __pkg_version__ = "0.2.7"
+ __pkg_version__ = "0.2.8"
  __api_version__ = "v1"
  __base_url__ = f"https://api.aide.weco.ai/{__api_version__}"
{weco-0.2.7 → weco-0.2.8}/weco/api.py
@@ -6,14 +6,9 @@ import sys


  def handle_api_error(e: requests.exceptions.HTTPError, console: rich.console.Console) -> None:
-     """Extract and display error messages from API responses."""
-     try:
-         error_data = e.response.json()
-         error_message = error_data.get("detail", str(e))
-         console.print(f"[bold red]Server Error:[/] {error_message}")
-     except Exception:
-         # If we can't parse the JSON, just show the original error
-         console.print(f"[bold red]Server Error:[/] {str(e)}")
+     """Extract and display error messages from API responses in a structured format."""
+     error_message = str(e)  # Default message
+     console.print(f"[bold red]Error:[/] {error_message}")
      sys.exit(1)


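For context, a caller of the simplified handler would look roughly like this sketch. Only `handle_api_error` itself comes from the diff; the endpoint URL and payload are hypothetical.

```python
# Hypothetical caller: wrap an API request and delegate HTTP errors to
# handle_api_error, which prints the message and exits the process.
import requests
import rich.console

from weco.api import handle_api_error

console = rich.console.Console()

try:
    response = requests.post("https://api.example.com/v1/optimize", json={})  # placeholder endpoint
    response.raise_for_status()  # raises HTTPError on 4xx/5xx responses
except requests.exceptions.HTTPError as e:
    handle_api_error(e, console)  # prints "[bold red]Error:[/] ..." then sys.exit(1)
```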
{weco-0.2.7 → weco-0.2.8}/weco/cli.py
@@ -36,7 +36,7 @@ def main() -> None:
      parser = argparse.ArgumentParser(
          description="[bold cyan]Weco CLI[/]", formatter_class=argparse.RawDescriptionHelpFormatter
      )
-     parser.add_argument("--source", type=str, required=True, help="Path to the Python source code (e.g. optimize.py)")
+     parser.add_argument("--source", type=str, required=True, help="Path to the source code (e.g. optimize.py)")
      parser.add_argument(
          "--eval-command", type=str, required=True, help="Command to run for evaluation (e.g. 'python eval.py --arg1=val1')"
      )
@@ -88,7 +88,7 @@ def main() -> None:
          maximize=maximize, metric_name=metric_name, total_steps=steps, model=args.model, runs_dir=args.log_dir
      )
      plan_panel = PlanPanel()
-     solution_panels = SolutionPanels(metric_name=metric_name)
+     solution_panels = SolutionPanels(metric_name=metric_name, source_fp=source_fp)
      eval_output_panel = EvaluationOutputPanel()
      tree_panel = MetricTreePanel(maximize=maximize)
      layout = create_optimization_layout()
@@ -118,8 +118,8 @@ def main() -> None:
      runs_dir = pathlib.Path(args.log_dir) / session_id
      runs_dir.mkdir(parents=True, exist_ok=True)

-     # Save the original code (.runs/<session-id>/original.py)
-     runs_copy_source_fp = runs_dir / "original.py"
+     # Save the original code (.runs/<session-id>/original.<extension>)
+     runs_copy_source_fp = runs_dir / f"original.{source_fp.suffix}"
      write_to_path(fp=runs_copy_source_fp, content=source_code)

      # Write the code string to the source file path
@@ -200,8 +200,8 @@ def main() -> None:
          api_keys=api_keys,
          timeout=timeout,
      )
-     # Save next solution (.runs/<session-id>/step_<step>.py)
-     write_to_path(fp=runs_dir / f"step_{step}.py", content=eval_and_next_solution_response["code"])
+     # Save next solution (.runs/<session-id>/step_<step>.<extension>)
+     write_to_path(fp=runs_dir / f"step_{step}.{source_fp.suffix}", content=eval_and_next_solution_response["code"])

      # Write the next solution to the source file
      write_to_path(fp=source_fp, content=eval_and_next_solution_response["code"])
@@ -351,8 +351,8 @@ def main() -> None:
      )
      best_solution_content = f"# Best solution from Weco with a score of {best_score_str}\n\n{best_solution_code}"

-     # Save best solution to .runs/<session-id>/best.py
-     write_to_path(fp=runs_dir / "best.py", content=best_solution_content)
+     # Save best solution to .runs/<session-id>/best.<extension>
+     write_to_path(fp=runs_dir / f"best.{source_fp.suffix}", content=best_solution_content)

      # write the best solution to the source file
      write_to_path(fp=source_fp, content=best_solution_content)
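The hunks above replace hard-coded `.py` filenames with the source file's own extension. A quick look at the `pathlib` behavior involved (note that `Path.suffix` keeps its leading dot):

```python
import pathlib

source_fp = pathlib.Path("optimize.cu")
print(source_fp.suffix)              # ".cu" -- the leading dot is included
print(f"step_1.{source_fp.suffix}")  # "step_1..cu" with an explicit dot in the f-string
print(f"step_1{source_fp.suffix}")   # "step_1.cu"
```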
{weco-0.2.7 → weco-0.2.8}/weco/panels.py
@@ -6,6 +6,7 @@ from rich.panel import Panel
  from rich.syntax import Syntax
  from typing import Dict, List, Optional, Union, Tuple
  from .utils import format_number
+ import pathlib


  class SummaryPanel:
@@ -46,6 +47,8 @@ class SummaryPanel:
          """Create a summary panel with the relevant information."""
          layout = Layout(name="summary")
          summary_table = Table(show_header=False, box=None, padding=(0, 1))
+
+         summary_table.add_row("")
          # Goal
          if final_message is not None:
              summary_table.add_row(f"[bold cyan]Result:[/] {final_message}")
@@ -256,13 +259,19 @@ class EvaluationOutputPanel:
  class SolutionPanels:
      """Displays the current and best solutions side by side."""

-     def __init__(self, metric_name: str):
+     def __init__(self, metric_name: str, source_fp: pathlib.Path):
          # Current solution
          self.current_node = None
          # Best solution
          self.best_node = None
          # Metric name
          self.metric_name = metric_name.capitalize()
+         # Determine the lexer for the source file
+         self.lexer = self._determine_lexer(source_fp)
+
+     def _determine_lexer(self, source_fp: pathlib.Path) -> str:
+         """Determine the lexer for the source file."""
+         return Syntax.from_path(source_fp).lexer

      def update(self, current_node: Union[Node, None], best_node: Union[Node, None]):
          """Update the current and best solutions."""
@@ -280,7 +289,7 @@ class SolutionPanels:
          # Current solution (without score)
          current_title = f"[bold]💡 Current Solution (Step {current_step})"
          current_panel = Panel(
-             Syntax(str(current_code), "python", theme="monokai", line_numbers=True, word_wrap=False),
+             Syntax(str(current_code), self.lexer, theme="monokai", line_numbers=True, word_wrap=False),
              title=current_title,
              border_style="yellow",
              expand=True,
@@ -290,7 +299,7 @@ class SolutionPanels:
          # Best solution
          best_title = f"[bold]🏆 Best Solution ([green]{self.metric_name}: {f'{best_score:.4f}' if best_score is not None else 'N/A'}[/])"
          best_panel = Panel(
-             Syntax(str(best_code), "python", theme="monokai", line_numbers=True, word_wrap=False),
+             Syntax(str(best_code), self.lexer, theme="monokai", line_numbers=True, word_wrap=False),
              title=best_title,
              border_style="green",
              expand=True,
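The lexer detection added above leans on rich's `Syntax.from_path`, which reads a file and guesses a lexer from its path and contents; the guess can then be reused to highlight other code strings of the same language. A small sketch (the file name is assumed):

```python
from rich.console import Console
from rich.syntax import Syntax

console = Console()

# from_path() reads the file and guesses the lexer (e.g. Python for .py);
# the .lexer attribute exposes that guess for reuse with other code strings.
lexer = Syntax.from_path("optimize.py").lexer
console.print(Syntax("print('hello')", lexer, theme="monokai", line_numbers=True))
```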
{weco-0.2.7 → weco-0.2.8}/weco.egg-info/PKG-INFO
(identical to the {weco-0.2.7 → weco-0.2.8}/PKG-INFO diff above)
{weco-0.2.7 → weco-0.2.8}/weco.egg-info/SOURCES.txt
@@ -4,11 +4,13 @@ README.md
  pyproject.toml
  .github/workflows/lint.yml
  .github/workflows/release.yml
+ examples/cuda/README.md
  examples/cuda/evaluate.py
  examples/cuda/guide.md
  examples/cuda/optimize.py
  examples/hello-kernel-world/evaluate.py
  examples/hello-kernel-world/optimize.py
+ examples/metal/README.md
  examples/metal/evaluate.py
  examples/metal/examples.rst
  examples/metal/optimize.py
@@ -18,6 +20,7 @@ examples/spaceship-titanic/evaluate.py
  examples/spaceship-titanic/optimize.py
  examples/spaceship-titanic/requirements-test.txt
  examples/spaceship-titanic/utils.py
+ examples/triton/README.md
  examples/triton/evaluate.py
  examples/triton/optimize.py
  weco/__init__.py
weco-0.2.7/examples/spaceship-titanic/README.md (deleted)
@@ -1,93 +0,0 @@
- # Overview
-
- ## Description
- Welcome to the year 2912, where your data science skills are needed to solve a cosmic mystery. We've received a transmission from four lightyears away and things aren't looking good.
-
- The *Spaceship Titanic* was an interstellar passenger liner launched a month ago. With almost 13,000 passengers on board, the vessel set out on its maiden voyage transporting emigrants from our solar system to three newly habitable exoplanets orbiting nearby stars.
-
- While rounding Alpha Centauri en route to its first destination—the torrid 55 Cancri E—the unwary *Spaceship Titanic* collided with a spacetime anomaly hidden within a dust cloud. Sadly, it met a similar fate as its namesake from 1000 years before. Though the ship stayed intact, almost half of the passengers were transported to an alternate dimension!
-
- ![joel-filipe-QwoNAhbmLLo-unsplash.jpg](https://storage.googleapis.com/kaggle-media/competitions/Spaceship%20Titanic/joel-filipe-QwoNAhbmLLo-unsplash.jpg)
-
- To help rescue crews and retrieve the lost passengers, you are challenged to predict which passengers were transported by the anomaly using records recovered from the spaceship’s damaged computer system.
-
- Help save them and change history!
-
- ### Acknowledgments
-
- Photos by [Joel Filipe](https://unsplash.com/@joelfilip?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText), [Richard Gatley](https://unsplash.com/@uncle_rickie?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) and [ActionVance](https://unsplash.com/@actionvance?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on Unsplash.
-
- ## Evaluation
-
- ### Metric
-
- Submissions are evaluated based on their [classification accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy), the percentage of predicted labels that are correct.
-
- ### Submission Format
-
- The submission format for the competition is a csv file with the following format:
-
- ```
- PassengerId,Transported
- 0013_01,False
- 0018_01,False
- 0019_01,False
- 0021_01,False
- etc.
- ```
-
- ## Frequently Asked Questions
-
- ### What is a Getting Started competition?
-
- Getting Started competitions were created by Kaggle data scientists for people who have little to no machine learning background. They are a great place to begin if you are new to data science or just finished a MOOC and want to get involved in Kaggle.
-
- Getting Started competitions are a non-competitive way to get familiar with Kaggle’s platform, learn basic machine learning concepts, and start meeting people in the community. They have no cash prize and are on a rolling timeline.
-
- ### How do I create and manage a team?
-
- When you accept the competition rules, a team will be created for you. You can invite others to your team, accept a merger with another team, and update basic information like team name by going to the [Team](https://www.kaggle.com/c/spaceship-titanic/team) page.
-
- We've heard from many Kagglers that teaming up is the best way to learn new skills AND have fun. If you don't have a teammate already, consider asking if anyone wants to team up in the [discussion forum](https://www.kaggle.com/c/spaceship-titanic/discussion).
-
- ### What are Notebooks?
-
- Kaggle Notebooks is a cloud computational environment that enables reproducible and collaborative analysis. Notebooks support scripts in Python and R, Jupyter Notebooks, and RMarkdown reports. You can visit the [Notebooks](https://www.kaggle.com/c/spaceship-titanic/notebooks) tab to view all of the publicly shared code for the Spaceship Titanic competition. For more on how to use Notebooks to learn data science, check out our [Courses](https://www.kaggle.com/learn/overview)!
-
- ### Why did my team disappear from the leaderboard?
-
- To keep with the spirit of getting-started competitions, we have implemented a two month rolling window on submissions. Once a submission is more than two months old, it will be invalidated and no longer count towards the leaderboard.
-
- If your team has no submissions in the previous two months, the team will also drop from the leaderboard. This will keep the leaderboard at a manageable size, freshen it up, and prevent newcomers from getting lost in a sea of abandoned scores.
-
- *"I worked so hard to get that score! Give it back!"* Read more about our decision to implement a rolling leaderboard [here](https://www.kaggle.com/c/titanic/discussion/6240).
-
- ### How do I contact Support?
-
- Kaggle does not have a dedicated support team so you’ll typically find that you receive a response more quickly by asking your question in the appropriate forum. (For this competition, you’ll want to use the [Spaceship Titanic discussion forum](https://www.kaggle.com/c/spaceship-titanic/discussion)).
-
- Support is only able to help with issues that are being experienced by all participants. Before contacting support, please check the discussion forum for information on your problem. If you can’t find it, you can post your problem in the forum so a fellow participant or a Kaggle team member can provide help. The forums are full of useful information on the data, metric, and different approaches. We encourage you to use the forums often. If you share your knowledge, you'll find that others will share a lot in turn!
-
- If your problem persists or it seems to be effective all participants then please [contact us](https://www.kaggle.com/contact).
-
- # Dataset Description
-
- In this competition your task is to predict whether a passenger was transported to an alternate dimension during the Spaceship Titanic's collision with the spacetime anomaly. To help you make these predictions, you're given a set of personal records recovered from the ship's damaged computer system.
-
- ## File and Data Field Descriptions
-
- - **train.csv** - Personal records for about two-thirds (~8700) of the passengers, to be used as training data.
-   - `PassengerId` - A unique Id for each passenger. Each Id takes the form `gggg_pp` where `gggg` indicates a group the passenger is travelling with and `pp` is their number within the group. People in a group are often family members, but not always.
-   - `HomePlanet` - The planet the passenger departed from, typically their planet of permanent residence.
-   - `CryoSleep` - Indicates whether the passenger elected to be put into suspended animation for the duration of the voyage. Passengers in cryosleep are confined to their cabins.
-   - `Cabin` - The cabin number where the passenger is staying. Takes the form `deck/num/side`, where `side` can be either `P` for *Port* or `S` for *Starboard*.
-   - `Destination` - The planet the passenger will be debarking to.
-   - `Age` - The age of the passenger.
-   - `VIP` - Whether the passenger has paid for special VIP service during the voyage.
-   - `RoomService`, `FoodCourt`, `ShoppingMall`, `Spa`, `VRDeck` - Amount the passenger has billed at each of the *Spaceship Titanic*'s many luxury amenities.
-   - `Name` - The first and last names of the passenger.
-   - `Transported` - Whether the passenger was transported to another dimension. This is the target, the column you are trying to predict.
- - **test.csv** - Personal records for the remaining one-third (~4300) of the passengers, to be used as test data. Your task is to predict the value of `Transported` for the passengers in this set.
- - **sample_submission.csv** - A submission file in the correct format.
-   - `PassengerId` - Id for each passenger in the test set.
-   - `Transported` - The target. For each passenger, predict either `True` or `False`.
All remaining files are unchanged between weco-0.2.7 and weco-0.2.8 (the entries marked +0 -0 in the file list above).