weco 0.2.2__tar.gz → 0.2.4__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {weco-0.2.2 → weco-0.2.4}/.github/workflows/release.yml +2 -2
- {weco-0.2.2 → weco-0.2.4}/PKG-INFO +23 -13
- {weco-0.2.2 → weco-0.2.4}/README.md +22 -12
- {weco-0.2.2 → weco-0.2.4}/examples/simple-mlx/evaluate.py +1 -1
- {weco-0.2.2 → weco-0.2.4}/examples/simple-mlx/optimize.py +4 -4
- {weco-0.2.2 → weco-0.2.4}/examples/simple-torch/optimize.py +4 -4
- {weco-0.2.2 → weco-0.2.4}/pyproject.toml +1 -1
- {weco-0.2.2 → weco-0.2.4}/weco/__init__.py +1 -1
- {weco-0.2.2 → weco-0.2.4}/weco/cli.py +60 -49
- {weco-0.2.2 → weco-0.2.4}/weco/panels.py +16 -10
- {weco-0.2.2 → weco-0.2.4}/weco/utils.py +8 -4
- {weco-0.2.2 → weco-0.2.4}/weco.egg-info/PKG-INFO +23 -13
- {weco-0.2.2 → weco-0.2.4}/.github/workflows/lint.yml +0 -0
- {weco-0.2.2 → weco-0.2.4}/.gitignore +0 -0
- {weco-0.2.2 → weco-0.2.4}/LICENSE +0 -0
- {weco-0.2.2 → weco-0.2.4}/examples/simple-mlx/metal-examples.rst +0 -0
- {weco-0.2.2 → weco-0.2.4}/examples/simple-torch/evaluate.py +0 -0
- {weco-0.2.2 → weco-0.2.4}/setup.cfg +0 -0
- {weco-0.2.2 → weco-0.2.4}/weco/api.py +0 -0
- {weco-0.2.2 → weco-0.2.4}/weco.egg-info/SOURCES.txt +0 -0
- {weco-0.2.2 → weco-0.2.4}/weco.egg-info/dependency_links.txt +0 -0
- {weco-0.2.2 → weco-0.2.4}/weco.egg-info/entry_points.txt +0 -0
- {weco-0.2.2 → weco-0.2.4}/weco.egg-info/requires.txt +0 -0
- {weco-0.2.2 → weco-0.2.4}/weco.egg-info/top_level.txt +0 -0
{weco-0.2.2 → weco-0.2.4}/.github/workflows/release.yml

```diff
@@ -90,7 +90,7 @@ jobs:
         GITHUB_TOKEN: ${{ github.token }}
       run: >-
         gh release create
-        'v0.2.2'
+        'v0.2.4'
         --repo '${{ github.repository }}'
         --notes ""
 
@@ -102,5 +102,5 @@ jobs:
       # sigstore-produced signatures and certificates.
       run: >-
         gh release upload
-        'v0.2.2' dist/**
+        'v0.2.4' dist/**
         --repo '${{ github.repository }}'
```
{weco-0.2.2 → weco-0.2.4}/PKG-INFO

````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: weco
-Version: 0.2.2
+Version: 0.2.4
 Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
 Author-email: Weco AI Team <dhruv@weco.ai>
 License: MIT
@@ -20,18 +20,28 @@ Requires-Dist: build; extra == "dev"
 Requires-Dist: setuptools_scm; extra == "dev"
 Dynamic: license-file
 
-# Weco CLI –
+# Weco CLI – Code Optimizer for Machine Learning Engineers
 
 [](https://www.python.org)
 [](LICENSE)
 
-`weco` is a
+`weco` is a command-line interface for interacting with Weco AI's code optimizer, powerred by [AI-Driven Exploration](https://arxiv.org/abs/2502.13138).
+
+
+
+https://github.com/user-attachments/assets/cb724ef1-bff6-4757-b457-d3b2201ede81
+
+
 
 ---
 
 ## Overview
 
-The
+The weco CLI leverages a tree search approach with LLMs to iteratively improve your code.
+
+
+
+
 
 ---
 
@@ -59,13 +69,13 @@ The `weco` CLI leverages advanced optimization techniques and language model str
 
 | Argument | Description | Required |
 |-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|----------|
-| `--source` | Path to the Python source code that will be optimized (e.g. optimize.py).
+| `--source` | Path to the Python source code that will be optimized (e.g. optimize.py). | Yes |
 | `--eval-command` | Command to run for evaluation (e.g. 'python eval.py --arg1=val1'). | Yes |
 | `--metric` | Metric to optimize. | Yes |
-| `--maximize` |
+| `--maximize` | Whether to maximize ('true') or minimize ('false') the metric. | Yes |
 | `--steps` | Number of optimization steps to run. | Yes |
 | `--model` | Model to use for optimization. | Yes |
-| `--additional-instructions` | (Optional) Description of additional instructions
+| `--additional-instructions` | (Optional) Description of additional instructions OR path to a file containing additional instructions. | No |
 
 ---
 
@@ -75,22 +85,22 @@ Optimizing common operations in pytorch:
 ```bash
 weco --source examples/simple-torch/optimize.py \
 --eval-command "python examples/simple-torch/evaluate.py --solution-path examples/simple-torch/optimize.py --device mps" \
---metric
+--metric speedup \
 --maximize true \
 --steps 15 \
---model
+--model o3-mini \
 --additional-instructions "Fuse operations in the forward method while ensuring the max float deviation remains small. Maintain the same format of the code."
 ```
 
-
+Sometimes we have a bit more context we'd like to provide. Its not easy to fit all of this in a string like shown above with `additional-instructions`. Thats why you can also provide a path to any file you'd like to me read as in context. In this example, we optimize the same operations using mlx and metal with additional instructions:
 ```bash
 weco --source examples/simple-mlx/optimize.py \
 --eval-command "python examples/simple-mlx/evaluate.py --solution-path examples/simple-mlx/optimize.py" \
---metric
+--metric speedup \
 --maximize true \
 --steps 30 \
---model
+--model o3-mini \
---additional-instructions
+--additional-instructions examples/simple-mlx/metal-examples.rst
 ```
 
 ---
 
````
{weco-0.2.2 → weco-0.2.4}/README.md

````diff
@@ -1,15 +1,25 @@
-# Weco CLI –
+# Weco CLI – Code Optimizer for Machine Learning Engineers
 
 [](https://www.python.org)
 [](LICENSE)
 
-`weco` is a
+`weco` is a command-line interface for interacting with Weco AI's code optimizer, powerred by [AI-Driven Exploration](https://arxiv.org/abs/2502.13138).
+
+
+
+https://github.com/user-attachments/assets/cb724ef1-bff6-4757-b457-d3b2201ede81
+
+
 
 ---
 
 ## Overview
 
-The
+The weco CLI leverages a tree search approach with LLMs to iteratively improve your code.
+
+
+
+
 
 ---
 
@@ -37,13 +47,13 @@ The `weco` CLI leverages advanced optimization techniques and language model str
 
 | Argument | Description | Required |
 |-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|----------|
-| `--source` | Path to the Python source code that will be optimized (e.g. optimize.py).
+| `--source` | Path to the Python source code that will be optimized (e.g. optimize.py). | Yes |
 | `--eval-command` | Command to run for evaluation (e.g. 'python eval.py --arg1=val1'). | Yes |
 | `--metric` | Metric to optimize. | Yes |
-| `--maximize` |
+| `--maximize` | Whether to maximize ('true') or minimize ('false') the metric. | Yes |
 | `--steps` | Number of optimization steps to run. | Yes |
 | `--model` | Model to use for optimization. | Yes |
-| `--additional-instructions` | (Optional) Description of additional instructions
+| `--additional-instructions` | (Optional) Description of additional instructions OR path to a file containing additional instructions. | No |
 
 ---
 
@@ -53,22 +63,22 @@ Optimizing common operations in pytorch:
 ```bash
 weco --source examples/simple-torch/optimize.py \
 --eval-command "python examples/simple-torch/evaluate.py --solution-path examples/simple-torch/optimize.py --device mps" \
---metric
+--metric speedup \
 --maximize true \
 --steps 15 \
---model
+--model o3-mini \
 --additional-instructions "Fuse operations in the forward method while ensuring the max float deviation remains small. Maintain the same format of the code."
 ```
 
-
+Sometimes we have a bit more context we'd like to provide. Its not easy to fit all of this in a string like shown above with `additional-instructions`. Thats why you can also provide a path to any file you'd like to me read as in context. In this example, we optimize the same operations using mlx and metal with additional instructions:
 ```bash
 weco --source examples/simple-mlx/optimize.py \
 --eval-command "python examples/simple-mlx/evaluate.py --solution-path examples/simple-mlx/optimize.py" \
---metric
+--metric speedup \
 --maximize true \
 --steps 30 \
---model
+--model o3-mini \
---additional-instructions
+--additional-instructions examples/simple-mlx/metal-examples.rst
 ```
 
 ---
 
````
{weco-0.2.2 → weco-0.2.4}/examples/simple-mlx/optimize.py

```diff
@@ -19,8 +19,8 @@ class Model(nn.Module):
         Returns:
             mx.array: Output tensor of shape (batch_size, hidden_size).
         """
-        x = mx.matmul(x, mx.transpose(self.weight))
-        x = x / 2
-        x = mx.sum(x, axis=1, keepdims=True)
-        x = x * self.scaling_factor
+        x = mx.matmul(x, mx.transpose(self.weight))
+        x = x / 2
+        x = mx.sum(x, axis=1, keepdims=True)
+        x = x * self.scaling_factor
         return x
```
{weco-0.2.2 → weco-0.2.4}/examples/simple-torch/optimize.py

```diff
@@ -19,8 +19,8 @@ class Model(nn.Module):
         Returns:
             torch.Tensor: Output tensor of shape (batch_size, hidden_size).
         """
-        x = torch.matmul(x, self.weight.T)
-        x = x / 2
-        x = torch.sum(x, dim=1, keepdim=True)
-        x = x * self.scaling_factor
+        x = torch.matmul(x, self.weight.T)
+        x = x / 2
+        x = torch.sum(x, dim=1, keepdim=True)
+        x = x * self.scaling_factor
         return x
```
{weco-0.2.2 → weco-0.2.4}/pyproject.toml

```diff
@@ -10,7 +10,7 @@ authors = [
 ]
 description = "Documentation for `weco`, a CLI for using Weco AI's code optimizer."
 readme = "README.md"
-version = "0.2.2"
+version = "0.2.4"
 license = {text = "MIT"}
 requires-python = ">=3.12"
 dependencies = ["requests", "rich"]
```
{weco-0.2.2 → weco-0.2.4}/weco/cli.py

```diff
@@ -41,7 +41,13 @@ def main() -> None:
         "--eval-command", type=str, required=True, help="Command to run for evaluation (e.g. 'python eval.py --arg1=val1')"
     )
     parser.add_argument("--metric", type=str, required=True, help="Metric to optimize")
-    parser.add_argument(
+    parser.add_argument(
+        "--maximize",
+        type=str,
+        choices=["true", "false"],
+        required=True,
+        help="Specify 'true' to maximize the metric or 'false' to minimize.",
+    )
     parser.add_argument("--steps", type=int, required=True, help="Number of steps to run")
     parser.add_argument("--model", type=str, required=True, help="Model to use for optimization")
     parser.add_argument(

@@ -57,7 +63,7 @@ def main() -> None:
     # Define optimization session config
     evaluation_command = args.eval_command
     metric_name = args.metric
-    maximize = args.maximize
+    maximize = args.maximize == "true"
    steps = args.steps
    code_generator_config = {"model": args.model}
    evaluator_config = {"model": args.model}

@@ -75,9 +81,9 @@ def main() -> None:
     api_keys = read_api_keys_from_env()
 
     # Initialize panels
-    summary_panel = SummaryPanel(total_steps=steps, model=args.model)
+    summary_panel = SummaryPanel(maximize=maximize, metric_name=metric_name, total_steps=steps, model=args.model)
     plan_panel = PlanPanel()
-    solution_panels = SolutionPanels()
+    solution_panels = SolutionPanels(metric_name=metric_name)
     eval_output_panel = EvaluationOutputPanel()
     tree_panel = MetricTreePanel(maximize=maximize)
     layout = create_optimization_layout()

@@ -98,54 +104,59 @@ def main() -> None:
         api_keys=api_keys,
     )
 
-    # Define the runs directory (.runs/<session-id>)
-    session_id = session_response["session_id"]
-    runs_dir = pathlib.Path(".runs") / session_id
-    runs_dir.mkdir(parents=True, exist_ok=True)
-
-    # Save the original code (.runs/<session-id>/original.py)
-    runs_copy_source_fp = runs_dir / "original.py"
-    write_to_path(fp=runs_copy_source_fp, content=source_code)
-
-    # Write the code string to the source file path
-    # Do this after the original code is saved
-    write_to_path(fp=source_fp, content=session_response["code"])
-
-    # Update the panels with the initial solution
-    # Add session id now that we have it
-    summary_panel.session_id = session_id
-    # Set the step of the progress bar
-    summary_panel.set_step(step=0)
-    # Update the token counts
-    summary_panel.update_token_counts(usage=session_response["usage"])
-    # Update the plan
-    plan_panel.update(plan=session_response["plan"])
-    # Build the metric tree
-    tree_panel.build_metric_tree(
-        nodes=[
-            {
-                "solution_id": session_response["solution_id"],
-                "parent_id": None,
-                "code": session_response["code"],
-                "step": 0,
-                "metric_value": None,
-                "is_buggy": False,
-            }
-        ]
-    )
-    # Set the current solution as unevaluated since we haven't run the evaluation function and fed it back to the model yet
-    tree_panel.set_unevaluated_node(node_id=session_response["solution_id"])
-    # Update the solution panels with the initial solution and get the panel displays
-    solution_panels.update(
-        current_node=Node(
-            id=session_response["solution_id"], parent_id=None, code=session_response["code"], metric=None, is_buggy=False
-        ),
-        best_node=None,
-    )
-    current_solution_panel, best_solution_panel = solution_panels.get_display(current_step=0)
     # Define the refresh rate
     refresh_rate = 4
     with Live(layout, refresh_per_second=refresh_rate, screen=True) as live:
+        # Define the runs directory (.runs/<session-id>)
+        session_id = session_response["session_id"]
+        runs_dir = pathlib.Path(".runs") / session_id
+        runs_dir.mkdir(parents=True, exist_ok=True)
+
+        # Save the original code (.runs/<session-id>/original.py)
+        runs_copy_source_fp = runs_dir / "original.py"
+        write_to_path(fp=runs_copy_source_fp, content=source_code)
+
+        # Write the code string to the source file path
+        # Do this after the original code is saved
+        write_to_path(fp=source_fp, content=session_response["code"])
+
+        # Update the panels with the initial solution
+        # Add session id now that we have it
+        summary_panel.session_id = session_id
+        # Set the step of the progress bar
+        summary_panel.set_step(step=0)
+        # Update the token counts
+        summary_panel.update_token_counts(usage=session_response["usage"])
+        # Update the plan
+        plan_panel.update(plan=session_response["plan"])
+        # Build the metric tree
+        tree_panel.build_metric_tree(
+            nodes=[
+                {
+                    "solution_id": session_response["solution_id"],
+                    "parent_id": None,
+                    "code": session_response["code"],
+                    "step": 0,
+                    "metric_value": None,
+                    "is_buggy": False,
+                }
+            ]
+        )
+        # Set the current solution as unevaluated since we haven't run the evaluation function and fed it back to the model yet
+        tree_panel.set_unevaluated_node(node_id=session_response["solution_id"])
+        # Update the solution panels with the initial solution and get the panel displays
+        solution_panels.update(
+            current_node=Node(
+                id=session_response["solution_id"],
+                parent_id=None,
+                code=session_response["code"],
+                metric=None,
+                is_buggy=False,
+            ),
+            best_node=None,
+        )
+        current_solution_panel, best_solution_panel = solution_panels.get_display(current_step=0)
+
         # Update the entire layout
         smooth_update(
             live=live,
```
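The `--maximize` change above swaps a bare argument for an explicit `'true'`/`'false'` choice that the CLI then converts to a real boolean (`args.maximize == "true"`). A minimal sketch of the same pattern (flag name kept, everything else illustrative):

```python
import argparse

# argparse's type=bool is a classic pitfall: bool("false") is True because
# any non-empty string is truthy. Restricting to explicit string choices
# and converting afterwards sidesteps that.
parser = argparse.ArgumentParser()
parser.add_argument("--maximize", type=str, choices=["true", "false"], required=True)

args = parser.parse_args(["--maximize", "false"])
maximize = args.maximize == "true"  # validated string -> real bool
print(maximize)
```

With `choices`, an invalid value such as `--maximize yes` is rejected by argparse itself with a usage error instead of silently becoming `True`.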
{weco-0.2.2 → weco-0.2.4}/weco/panels.py

```diff
@@ -11,12 +11,13 @@ from .utils import format_number
 class SummaryPanel:
     """Holds a summary of the optimization session."""
 
-    def __init__(self, total_steps: int, model: str, session_id: str = None):
+    def __init__(self, maximize: bool, metric_name: str, total_steps: int, model: str, session_id: str = None):
+        self.goal = ("Maximizing" if maximize else "Minimizing") + f" {metric_name}..."
         self.total_input_tokens = 0
         self.total_output_tokens = 0
         self.total_steps = total_steps
         self.model = model
-        self.session_id = session_id or "
+        self.session_id = session_id or "N/A"
         self.progress = Progress(
             TextColumn("[progress.description]{task.description}"),
             BarColumn(bar_width=20),

@@ -42,12 +43,15 @@ class SummaryPanel:
         """Create a summary panel with the relevant information."""
         layout = Layout(name="summary")
         summary_table = Table(show_header=False, box=None, padding=(0, 1))
+        # Goal
+        summary_table.add_row(f"[bold cyan]Goal:[/] {self.goal}")
+        summary_table.add_row("")
         # Log directory
         runs_dir = f".runs/{self.session_id}"
-        summary_table.add_row(f"[bold cyan]Logs:[/] [blue]{runs_dir}[/]")
+        summary_table.add_row(f"[bold cyan]Logs:[/] [blue underline]{runs_dir}[/]")
         summary_table.add_row("")
         # Model used
-        summary_table.add_row(f"[bold cyan]Model:[/] {self.model}")
+        summary_table.add_row(f"[bold cyan]Model:[/] [yellow]{self.model}[/]")
         summary_table.add_row("")
         # Token counts
         summary_table.add_row(

@@ -172,7 +176,7 @@ class MetricTreePanel:
     def _build_rich_tree(self) -> Tree:
         """Get a Rich Tree representation of the solution tree using a DFS like traversal."""
         if len(self.metric_tree.nodes) == 0:
-            return Tree("[bold green]Building
+            return Tree("[bold green]Building first solution...")
 
         best_node = self.metric_tree.get_best_node()
 

@@ -193,7 +197,7 @@ class MetricTreePanel:
             # best node
             color = "green"
             style = "bold"
-            text = f"{node.metric:.3f}
+            text = f"{node.metric:.3f} 🏆"
         elif node.metric is None:
             # metric not extracted from evaluated solution
             color = "yellow"

@@ -220,7 +224,7 @@ class MetricTreePanel:
         """Get a panel displaying the solution tree."""
         # Make sure the metric tree is built before calling build_rich_tree
         return Panel(
-            self._build_rich_tree(), title="[bold]
+            self._build_rich_tree(), title="[bold]🔎 Exploring Solutions...", border_style="green", expand=True, padding=(0, 1)
         )
 
 

@@ -240,17 +244,19 @@ class EvaluationOutputPanel:
 
     def get_display(self) -> Panel:
         """Create a panel displaying the evaluation output with truncation if needed."""
-        return Panel(self.output, title="[bold]📋 Evaluation Output", border_style="
+        return Panel(self.output, title="[bold]📋 Evaluation Output", border_style="blue", expand=True, padding=(0, 1))
 
 
 class SolutionPanels:
     """Displays the current and best solutions side by side."""
 
-    def __init__(self):
+    def __init__(self, metric_name: str):
         # Current solution
         self.current_node = None
         # Best solution
         self.best_node = None
+        # Metric name
+        self.metric_name = metric_name.capitalize()
 
     def update(self, current_node: Union[Node, None], best_node: Union[Node, None]):
         """Update the current and best solutions."""

@@ -276,7 +282,7 @@ class SolutionPanels:
         )
 
         # Best solution
-        best_title = f"[bold]🏆 Best Solution ([green]
+        best_title = f"[bold]🏆 Best Solution ([green]{self.metric_name}: {f'{best_score:.4f}' if best_score is not None else 'N/A'}[/])"
         best_panel = Panel(
             Syntax(str(best_code), "python", theme="monokai", line_numbers=True, word_wrap=False),
             title=best_title,
```
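The new `SummaryPanel` derives its "Goal" row from the `maximize` flag and the metric name. The string construction in isolation, without the Rich table it is rendered into:

```python
# Same construction as the goal line added to SummaryPanel.__init__ in 0.2.4;
# the standalone function wrapper is just for illustration.
def goal_line(maximize: bool, metric_name: str) -> str:
    return ("Maximizing" if maximize else "Minimizing") + f" {metric_name}..."

print(goal_line(True, "speedup"))   # Maximizing speedup...
print(goal_line(False, "latency"))  # Minimizing latency...
```

Passing `maximize` and `metric_name` once at construction time keeps every later `get_display()` call free of flag plumbing.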
{weco-0.2.2 → weco-0.2.4}/weco/utils.py

```diff
@@ -28,10 +28,14 @@ def read_additional_instructions(additional_instructions: str | None) -> str | None:
 
     # Try interpreting as a path first
     potential_path = pathlib.Path(additional_instructions)
-
-
-
-
+    try:
+        if potential_path.exists() and potential_path.is_file():
+            return read_from_path(potential_path, is_json=False)  # type: ignore # read_from_path returns str when is_json=False
+        else:
+            # If it's not a valid file path, return the string itself
+            return additional_instructions
+    except OSError:
+        # If the path can't be read, return the string itself
         return additional_instructions
 
 
```
{weco-0.2.2 → weco-0.2.4}/weco.egg-info/PKG-INFO

````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: weco
-Version: 0.2.2
+Version: 0.2.4
 Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
 Author-email: Weco AI Team <dhruv@weco.ai>
 License: MIT
@@ -20,18 +20,28 @@ Requires-Dist: build; extra == "dev"
 Requires-Dist: setuptools_scm; extra == "dev"
 Dynamic: license-file
 
-# Weco CLI –
+# Weco CLI – Code Optimizer for Machine Learning Engineers
 
 [](https://www.python.org)
 [](LICENSE)
 
-`weco` is a
+`weco` is a command-line interface for interacting with Weco AI's code optimizer, powerred by [AI-Driven Exploration](https://arxiv.org/abs/2502.13138).
+
+
+
+https://github.com/user-attachments/assets/cb724ef1-bff6-4757-b457-d3b2201ede81
+
+
 
 ---
 
 ## Overview
 
-The
+The weco CLI leverages a tree search approach with LLMs to iteratively improve your code.
+
+
+
+
 
 ---
 
@@ -59,13 +69,13 @@ The `weco` CLI leverages advanced optimization techniques and language model str
 
 | Argument | Description | Required |
 |-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|----------|
-| `--source` | Path to the Python source code that will be optimized (e.g. optimize.py).
+| `--source` | Path to the Python source code that will be optimized (e.g. optimize.py). | Yes |
 | `--eval-command` | Command to run for evaluation (e.g. 'python eval.py --arg1=val1'). | Yes |
 | `--metric` | Metric to optimize. | Yes |
-| `--maximize` |
+| `--maximize` | Whether to maximize ('true') or minimize ('false') the metric. | Yes |
 | `--steps` | Number of optimization steps to run. | Yes |
 | `--model` | Model to use for optimization. | Yes |
-| `--additional-instructions` | (Optional) Description of additional instructions
+| `--additional-instructions` | (Optional) Description of additional instructions OR path to a file containing additional instructions. | No |
 
 ---
 
@@ -75,22 +85,22 @@ Optimizing common operations in pytorch:
 ```bash
 weco --source examples/simple-torch/optimize.py \
 --eval-command "python examples/simple-torch/evaluate.py --solution-path examples/simple-torch/optimize.py --device mps" \
---metric
+--metric speedup \
 --maximize true \
 --steps 15 \
---model
+--model o3-mini \
 --additional-instructions "Fuse operations in the forward method while ensuring the max float deviation remains small. Maintain the same format of the code."
 ```
 
-
+Sometimes we have a bit more context we'd like to provide. Its not easy to fit all of this in a string like shown above with `additional-instructions`. Thats why you can also provide a path to any file you'd like to me read as in context. In this example, we optimize the same operations using mlx and metal with additional instructions:
 ```bash
 weco --source examples/simple-mlx/optimize.py \
 --eval-command "python examples/simple-mlx/evaluate.py --solution-path examples/simple-mlx/optimize.py" \
---metric
+--metric speedup \
 --maximize true \
 --steps 30 \
---model
+--model o3-mini \
---additional-instructions
+--additional-instructions examples/simple-mlx/metal-examples.rst
 ```
 
 ---
 
````
The remaining 12 files (listed above with +0 −0) are without changes.