weco 0.2.1.tar.gz → 0.2.3.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -90,7 +90,7 @@ jobs:
  GITHUB_TOKEN: ${{ github.token }}
  run: >-
  gh release create
- 'v0.2.1'
+ 'v0.2.3'
  --repo '${{ github.repository }}'
  --notes ""

@@ -102,5 +102,5 @@ jobs:
  # sigstore-produced signatures and certificates.
  run: >-
  gh release upload
- 'v0.2.1' dist/**
+ 'v0.2.3' dist/**
  --repo '${{ github.repository }}'
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.2.1
+ Version: 0.2.3
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <dhruv@weco.ai>
  License: MIT
@@ -20,18 +20,28 @@ Requires-Dist: build; extra == "dev"
  Requires-Dist: setuptools_scm; extra == "dev"
  Dynamic: license-file

- # Weco CLI – Optimize Your Code Effortlessly
+ # Weco CLI – Code Optimizer for Machine Learning Engineers

  [![Python](https://img.shields.io/badge/Python-3.12.0-blue)](https://www.python.org)
  [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)

- `weco` is a powerful command-line interface for interacting with Weco AI's code optimizer. Whether you are looking to improve performance or refine code quality, our CLI streamlines your workflow for a better development experience.
+ `weco` is a command-line interface for interacting with Weco AI's code optimizer, powerred by [AI-Driven Exploration](https://arxiv.org/abs/2502.13138).
+
+
+
+ https://github.com/user-attachments/assets/cb724ef1-bff6-4757-b457-d3b2201ede81
+
+

  ---

  ## Overview

- The `weco` CLI leverages advanced optimization techniques and language model strategies to iteratively improve your source code. It supports multiple language models and offers a flexible configuration to suit different optimization tasks.
+ The weco CLI leverages a tree search approach with LLMs to iteratively improve your code.
+
+ ![image](https://github.com/user-attachments/assets/a6ed63fa-9c40-498e-aa98-a873e5786509)
+
+

  ---

@@ -59,13 +69,13 @@ The `weco` CLI leverages advanced optimization techniques and language model str

  | Argument | Description | Required |
  |-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|----------|
- | `--source` | Path to the Python source code that will be optimized (e.g. optimize.py). | Yes |
+ | `--source` | Path to the Python source code that will be optimized (e.g. optimize.py). | Yes |
  | `--eval-command` | Command to run for evaluation (e.g. 'python eval.py --arg1=val1'). | Yes |
  | `--metric` | Metric to optimize. | Yes |
- | `--maximize` | Boolean flag indicating whether to maximize the metric. | Yes |
+ | `--maximize` | Whether to maximize ('true') or minimize ('false') the metric. | Yes |
  | `--steps` | Number of optimization steps to run. | Yes |
  | `--model` | Model to use for optimization. | Yes |
- | `--additional-instructions` | (Optional) Description of additional instructions or path to a file containing additional instructions. | No |
+ | `--additional-instructions` | (Optional) Description of additional instructions OR path to a file containing additional instructions. | No |

  ---

@@ -75,7 +85,7 @@ Optimizing common operations in pytorch:
  ```bash
  weco --source examples/simple-torch/optimize.py \
  --eval-command "python examples/simple-torch/evaluate.py --solution-path examples/simple-torch/optimize.py --device mps" \
- --metric "speedup" \
+ --metric speedup \
  --maximize true \
  --steps 15 \
  --model "o3-mini" \
@@ -86,7 +96,7 @@ Optimizing these same using mlx and metal:
  ```bash
  weco --source examples/simple-mlx/optimize.py \
  --eval-command "python examples/simple-mlx/evaluate.py --solution-path examples/simple-mlx/optimize.py" \
- --metric "speedup" \
+ --metric speedup \
  --maximize true \
  --steps 30 \
  --model "o3-mini" \
@@ -1,15 +1,25 @@
- # Weco CLI – Optimize Your Code Effortlessly
+ # Weco CLI – Code Optimizer for Machine Learning Engineers

  [![Python](https://img.shields.io/badge/Python-3.12.0-blue)](https://www.python.org)
  [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)

- `weco` is a powerful command-line interface for interacting with Weco AI's code optimizer. Whether you are looking to improve performance or refine code quality, our CLI streamlines your workflow for a better development experience.
+ `weco` is a command-line interface for interacting with Weco AI's code optimizer, powerred by [AI-Driven Exploration](https://arxiv.org/abs/2502.13138).
+
+
+
+ https://github.com/user-attachments/assets/cb724ef1-bff6-4757-b457-d3b2201ede81
+
+

  ---

  ## Overview

- The `weco` CLI leverages advanced optimization techniques and language model strategies to iteratively improve your source code. It supports multiple language models and offers a flexible configuration to suit different optimization tasks.
+ The weco CLI leverages a tree search approach with LLMs to iteratively improve your code.
+
+ ![image](https://github.com/user-attachments/assets/a6ed63fa-9c40-498e-aa98-a873e5786509)
+
+

  ---

@@ -37,13 +47,13 @@ The `weco` CLI leverages advanced optimization techniques and language model str

  | Argument | Description | Required |
  |-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|----------|
- | `--source` | Path to the Python source code that will be optimized (e.g. optimize.py). | Yes |
+ | `--source` | Path to the Python source code that will be optimized (e.g. optimize.py). | Yes |
  | `--eval-command` | Command to run for evaluation (e.g. 'python eval.py --arg1=val1'). | Yes |
  | `--metric` | Metric to optimize. | Yes |
- | `--maximize` | Boolean flag indicating whether to maximize the metric. | Yes |
+ | `--maximize` | Whether to maximize ('true') or minimize ('false') the metric. | Yes |
  | `--steps` | Number of optimization steps to run. | Yes |
  | `--model` | Model to use for optimization. | Yes |
- | `--additional-instructions` | (Optional) Description of additional instructions or path to a file containing additional instructions. | No |
+ | `--additional-instructions` | (Optional) Description of additional instructions OR path to a file containing additional instructions. | No |

  ---

@@ -53,7 +63,7 @@ Optimizing common operations in pytorch:
  ```bash
  weco --source examples/simple-torch/optimize.py \
  --eval-command "python examples/simple-torch/evaluate.py --solution-path examples/simple-torch/optimize.py --device mps" \
- --metric "speedup" \
+ --metric speedup \
  --maximize true \
  --steps 15 \
  --model "o3-mini" \
@@ -64,7 +74,7 @@ Optimizing these same using mlx and metal:
  ```bash
  weco --source examples/simple-mlx/optimize.py \
  --eval-command "python examples/simple-mlx/evaluate.py --solution-path examples/simple-mlx/optimize.py" \
- --metric "speedup" \
+ --metric speedup \
  --maximize true \
  --steps 30 \
  --model "o3-mini" \
@@ -19,8 +19,8 @@ class Model(nn.Module):
  Returns:
  mx.array: Output tensor of shape (batch_size, hidden_size).
  """
- x = mx.matmul(x, mx.transpose(self.weight)) # Gemm
- x = x / 2 # Divide
- x = mx.sum(x, axis=1, keepdims=True) # Sum
- x = x * self.scaling_factor # Scaling
+ x = mx.matmul(x, mx.transpose(self.weight))
+ x = x / 2
+ x = mx.sum(x, axis=1, keepdims=True)
+ x = x * self.scaling_factor
  return x
@@ -19,8 +19,8 @@ class Model(nn.Module):
  Returns:
  torch.Tensor: Output tensor of shape (batch_size, hidden_size).
  """
- x = torch.matmul(x, self.weight.T) # Gemm
- x = x / 2 # Divide
- x = torch.sum(x, dim=1, keepdim=True) # Sum
- x = x * self.scaling_factor # Scaling
+ x = torch.matmul(x, self.weight.T)
+ x = x / 2
+ x = torch.sum(x, dim=1, keepdim=True)
+ x = x * self.scaling_factor
  return x
@@ -10,7 +10,7 @@ authors = [
  ]
  description = "Documentation for `weco`, a CLI for using Weco AI's code optimizer."
  readme = "README.md"
- version = "0.2.1"
+ version = "0.2.3"
  license = {text = "MIT"}
  requires-python = ">=3.12"
  dependencies = ["requests", "rich"]
@@ -1,4 +1,4 @@
  # DO NOT EDIT
- __pkg_version__ = "0.2.1"
+ __pkg_version__ = "0.2.3"
  __api_version__ = "v1"
  __base_url__ = f"https://api.aide.weco.ai/{__api_version__}"
@@ -41,7 +41,13 @@ def main() -> None:
  "--eval-command", type=str, required=True, help="Command to run for evaluation (e.g. 'python eval.py --arg1=val1')"
  )
  parser.add_argument("--metric", type=str, required=True, help="Metric to optimize")
- parser.add_argument("--maximize", type=bool, required=True, help="Maximize the metric")
+ parser.add_argument(
+ "--maximize",
+ type=str,
+ choices=["true", "false"],
+ required=True,
+ help="Specify 'true' to maximize the metric or 'false' to minimize.",
+ )
  parser.add_argument("--steps", type=int, required=True, help="Number of steps to run")
  parser.add_argument("--model", type=str, required=True, help="Model to use for optimization")
  parser.add_argument(
@@ -57,7 +63,7 @@ def main() -> None:
  # Define optimization session config
  evaluation_command = args.eval_command
  metric_name = args.metric
- maximize = args.maximize
+ maximize = args.maximize == "true"
  steps = args.steps
  code_generator_config = {"model": args.model}
  evaluator_config = {"model": args.model}
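The two `--maximize` hunks above work around a classic argparse pitfall: `type=bool` applies Python's `bool()` to the raw string, and any non-empty string, including `"false"`, is truthy. A minimal sketch of the bug and of the string-choices pattern the release adopts (parser objects here are illustrative, not the package's own code):

```python
import argparse

# The 0.2.1 behaviour: bool("false") is True, so the flag could never
# actually be turned off from the command line.
broken = argparse.ArgumentParser()
broken.add_argument("--maximize", type=bool)
assert broken.parse_args(["--maximize", "false"]).maximize is True  # the bug

# The 0.2.3 fix: parse an explicit string choice, then convert once.
fixed = argparse.ArgumentParser()
fixed.add_argument("--maximize", type=str, choices=["true", "false"], required=True)
args = fixed.parse_args(["--maximize", "false"])
maximize = args.maximize == "true"
print(maximize)  # False
```

On Python 3.9+ an alternative is `argparse.BooleanOptionalAction`, which produces paired `--maximize/--no-maximize` flags, but the string-choices form keeps the CLI surface (`--maximize true`) unchanged.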
@@ -75,9 +81,9 @@ def main() -> None:
  api_keys = read_api_keys_from_env()

  # Initialize panels
- summary_panel = SummaryPanel(total_steps=steps, model=args.model)
+ summary_panel = SummaryPanel(maximize=maximize, metric_name=metric_name, total_steps=steps, model=args.model)
  plan_panel = PlanPanel()
- solution_panels = SolutionPanels()
+ solution_panels = SolutionPanels(metric_name=metric_name)
  eval_output_panel = EvaluationOutputPanel()
  tree_panel = MetricTreePanel(maximize=maximize)
  layout = create_optimization_layout()
@@ -185,6 +191,9 @@ def main() -> None:
  # Save next solution (.runs/<session-id>/step_<step>.py)
  write_to_path(fp=runs_dir / f"step_{step}.py", content=eval_and_next_solution_response["code"])

+ # Write the next solution to the source file
+ write_to_path(fp=source_fp, content=eval_and_next_solution_response["code"])
+
  # Get the optimization session status for
  # the best solution, its score, and the history to plot the tree
  status_response = get_optimization_session_status(console=console, session_id=session_id, include_history=True)
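The added write-back above changes each optimization step to overwrite the `--source` file with the newly proposed candidate, in addition to logging it under `.runs/<session-id>/`, so the next `--eval-command` run evaluates the latest code. A sketch under the assumption that `write_to_path` is a thin pathlib wrapper (the real helper lives inside the weco package and may differ):

```python
from pathlib import Path
import tempfile

def write_to_path(fp: Path, content: str) -> None:
    # Hypothetical stand-in for weco's helper: create parent dirs, write text.
    fp.parent.mkdir(parents=True, exist_ok=True)
    fp.write_text(content)

with tempfile.TemporaryDirectory() as tmp:
    runs_dir = Path(tmp) / ".runs" / "session-123"
    source_fp = Path(tmp) / "optimize.py"
    candidate = "print('candidate solution')\n"
    write_to_path(runs_dir / "step_1.py", candidate)  # per-step log copy
    write_to_path(source_fp, candidate)               # write-back added in 0.2.3
    print(source_fp.read_text() == candidate)  # True
```

The design consequence is that the source file always holds the most recent candidate rather than the best one, which is why each step is also archived under `.runs/` for recovery.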
@@ -11,12 +11,13 @@ from .utils import format_number
  class SummaryPanel:
  """Holds a summary of the optimization session."""

- def __init__(self, total_steps: int, model: str, session_id: str = None):
+ def __init__(self, maximize: bool, metric_name: str, total_steps: int, model: str, session_id: str = None):
+ self.goal = ("Maximizing" if maximize else "Minimizing") + f" {metric_name}"
  self.total_input_tokens = 0
  self.total_output_tokens = 0
  self.total_steps = total_steps
  self.model = model
- self.session_id = session_id or "unknown"
+ self.session_id = session_id or "N/A"
  self.progress = Progress(
  TextColumn("[progress.description]{task.description}"),
  BarColumn(bar_width=20),
@@ -42,6 +43,9 @@ class SummaryPanel:
  """Create a summary panel with the relevant information."""
  layout = Layout(name="summary")
  summary_table = Table(show_header=False, box=None, padding=(0, 1))
+ # Goal
+ summary_table.add_row(f"[bold cyan]Goal:[/] {self.goal}")
+ summary_table.add_row("")
  # Log directory
  runs_dir = f".runs/{self.session_id}"
  summary_table.add_row(f"[bold cyan]Logs:[/] [blue]{runs_dir}[/]")
@@ -172,7 +176,7 @@ class MetricTreePanel:
  def _build_rich_tree(self) -> Tree:
  """Get a Rich Tree representation of the solution tree using a DFS like traversal."""
  if len(self.metric_tree.nodes) == 0:
- return Tree("[bold green]Building 🌳")
+ return Tree("[bold green]Building first solution...")

  best_node = self.metric_tree.get_best_node()

@@ -193,7 +197,7 @@
  # best node
  color = "green"
  style = "bold"
- text = f"{node.metric:.3f} (best)"
+ text = f"🏆 {node.metric:.3f}"
  elif node.metric is None:
  # metric not extracted from evaluated solution
  color = "yellow"
@@ -210,7 +214,7 @@
  for child in node.children:
  append_rec(child, subtree)

- tree = Tree("🌳")
+ tree = Tree("solutions")
  for n in self.metric_tree.get_draft_nodes():
  append_rec(n, tree)

@@ -219,9 +223,7 @@
  def get_display(self) -> Panel:
  """Get a panel displaying the solution tree."""
  # Make sure the metric tree is built before calling build_rich_tree
- return Panel(
-     self._build_rich_tree(), title="[bold]🌳 Solution Tree", border_style="green", expand=True, padding=(0, 1)
- )
+ return Panel(self._build_rich_tree(), title="[bold]🌳 Exploring...", border_style="green", expand=True, padding=(0, 1))


  class EvaluationOutputPanel:
@@ -246,11 +248,13 @@ class EvaluationOutputPanel:
  class SolutionPanels:
  """Displays the current and best solutions side by side."""

- def __init__(self):
+ def __init__(self, metric_name: str):
  # Current solution
  self.current_node = None
  # Best solution
  self.best_node = None
+ # Metric name
+ self.metric_name = metric_name.capitalize()

  def update(self, current_node: Union[Node, None], best_node: Union[Node, None]):
  """Update the current and best solutions."""
@@ -276,7 +280,7 @@
  )

  # Best solution
- best_title = f"[bold]🏆 Best Solution ([green]Score: {f'{best_score:.4f}' if best_score is not None else 'N/A'}[/])"
+ best_title = f"[bold]🏆 Best Solution ([green]{self.metric_name}: {f'{best_score:.4f}' if best_score is not None else 'N/A'}[/])"
  best_panel = Panel(
  Syntax(str(best_code), "python", theme="monokai", line_numbers=True, word_wrap=False),
  title=best_title,
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.2.1
+ Version: 0.2.3
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <dhruv@weco.ai>
  License: MIT
@@ -20,18 +20,28 @@ Requires-Dist: build; extra == "dev"
  Requires-Dist: setuptools_scm; extra == "dev"
  Dynamic: license-file

- # Weco CLI – Optimize Your Code Effortlessly
+ # Weco CLI – Code Optimizer for Machine Learning Engineers

  [![Python](https://img.shields.io/badge/Python-3.12.0-blue)](https://www.python.org)
  [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)

- `weco` is a powerful command-line interface for interacting with Weco AI's code optimizer. Whether you are looking to improve performance or refine code quality, our CLI streamlines your workflow for a better development experience.
+ `weco` is a command-line interface for interacting with Weco AI's code optimizer, powerred by [AI-Driven Exploration](https://arxiv.org/abs/2502.13138).
+
+
+
+ https://github.com/user-attachments/assets/cb724ef1-bff6-4757-b457-d3b2201ede81
+
+

  ---

  ## Overview

- The `weco` CLI leverages advanced optimization techniques and language model strategies to iteratively improve your source code. It supports multiple language models and offers a flexible configuration to suit different optimization tasks.
+ The weco CLI leverages a tree search approach with LLMs to iteratively improve your code.
+
+ ![image](https://github.com/user-attachments/assets/a6ed63fa-9c40-498e-aa98-a873e5786509)
+
+

  ---

@@ -59,13 +69,13 @@ The `weco` CLI leverages advanced optimization techniques and language model str

  | Argument | Description | Required |
  |-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|----------|
- | `--source` | Path to the Python source code that will be optimized (e.g. optimize.py). | Yes |
+ | `--source` | Path to the Python source code that will be optimized (e.g. optimize.py). | Yes |
  | `--eval-command` | Command to run for evaluation (e.g. 'python eval.py --arg1=val1'). | Yes |
  | `--metric` | Metric to optimize. | Yes |
- | `--maximize` | Boolean flag indicating whether to maximize the metric. | Yes |
+ | `--maximize` | Whether to maximize ('true') or minimize ('false') the metric. | Yes |
  | `--steps` | Number of optimization steps to run. | Yes |
  | `--model` | Model to use for optimization. | Yes |
- | `--additional-instructions` | (Optional) Description of additional instructions or path to a file containing additional instructions. | No |
+ | `--additional-instructions` | (Optional) Description of additional instructions OR path to a file containing additional instructions. | No |

  ---

@@ -75,7 +85,7 @@ Optimizing common operations in pytorch:
  ```bash
  weco --source examples/simple-torch/optimize.py \
  --eval-command "python examples/simple-torch/evaluate.py --solution-path examples/simple-torch/optimize.py --device mps" \
- --metric "speedup" \
+ --metric speedup \
  --maximize true \
  --steps 15 \
  --model "o3-mini" \
@@ -86,7 +96,7 @@ Optimizing these same using mlx and metal:
  ```bash
  weco --source examples/simple-mlx/optimize.py \
  --eval-command "python examples/simple-mlx/evaluate.py --solution-path examples/simple-mlx/optimize.py" \
- --metric "speedup" \
+ --metric speedup \
  --maximize true \
  --steps 30 \
  --model "o3-mini" \
9 files without changes