weco 0.2.16.tar.gz → 0.2.17.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (41)
  1. {weco-0.2.16 → weco-0.2.17}/.github/workflows/release.yml +0 -2
  2. {weco-0.2.16 → weco-0.2.17}/PKG-INFO +13 -5
  3. {weco-0.2.16 → weco-0.2.17}/README.md +12 -4
  4. {weco-0.2.16 → weco-0.2.17}/examples/prompt/README.md +1 -2
  5. {weco-0.2.16 → weco-0.2.17}/pyproject.toml +1 -1
  6. {weco-0.2.16 → weco-0.2.17}/weco/__init__.py +1 -1
  7. {weco-0.2.16 → weco-0.2.17}/weco/cli.py +4 -12
  8. {weco-0.2.16 → weco-0.2.17}/weco.egg-info/PKG-INFO +13 -5
  9. {weco-0.2.16 → weco-0.2.17}/.github/workflows/lint.yml +0 -0
  10. {weco-0.2.16 → weco-0.2.17}/.gitignore +0 -0
  11. {weco-0.2.16 → weco-0.2.17}/.repomixignore +0 -0
  12. {weco-0.2.16 → weco-0.2.17}/LICENSE +0 -0
  13. {weco-0.2.16 → weco-0.2.17}/assets/example-optimization.gif +0 -0
  14. {weco-0.2.16 → weco-0.2.17}/examples/cuda/README.md +0 -0
  15. {weco-0.2.16 → weco-0.2.17}/examples/cuda/evaluate.py +0 -0
  16. {weco-0.2.16 → weco-0.2.17}/examples/cuda/guide.md +0 -0
  17. {weco-0.2.16 → weco-0.2.17}/examples/cuda/optimize.py +0 -0
  18. {weco-0.2.16 → weco-0.2.17}/examples/hello-kernel-world/evaluate.py +0 -0
  19. {weco-0.2.16 → weco-0.2.17}/examples/hello-kernel-world/optimize.py +0 -0
  20. {weco-0.2.16 → weco-0.2.17}/examples/prompt/eval.py +0 -0
  21. {weco-0.2.16 → weco-0.2.17}/examples/prompt/optimize.py +0 -0
  22. {weco-0.2.16 → weco-0.2.17}/examples/prompt/prompt_guide.md +0 -0
  23. {weco-0.2.16 → weco-0.2.17}/examples/spaceship-titanic/README.md +0 -0
  24. {weco-0.2.16 → weco-0.2.17}/examples/spaceship-titanic/competition_description.md +0 -0
  25. {weco-0.2.16 → weco-0.2.17}/examples/spaceship-titanic/evaluate.py +0 -0
  26. {weco-0.2.16 → weco-0.2.17}/examples/spaceship-titanic/get_data.py +0 -0
  27. {weco-0.2.16 → weco-0.2.17}/examples/spaceship-titanic/requirements-test.txt +0 -0
  28. {weco-0.2.16 → weco-0.2.17}/examples/spaceship-titanic/submit.py +0 -0
  29. {weco-0.2.16 → weco-0.2.17}/examples/triton/README.md +0 -0
  30. {weco-0.2.16 → weco-0.2.17}/examples/triton/evaluate.py +0 -0
  31. {weco-0.2.16 → weco-0.2.17}/examples/triton/optimize.py +0 -0
  32. {weco-0.2.16 → weco-0.2.17}/setup.cfg +0 -0
  33. {weco-0.2.16 → weco-0.2.17}/weco/api.py +0 -0
  34. {weco-0.2.16 → weco-0.2.17}/weco/auth.py +0 -0
  35. {weco-0.2.16 → weco-0.2.17}/weco/panels.py +0 -0
  36. {weco-0.2.16 → weco-0.2.17}/weco/utils.py +0 -0
  37. {weco-0.2.16 → weco-0.2.17}/weco.egg-info/SOURCES.txt +0 -0
  38. {weco-0.2.16 → weco-0.2.17}/weco.egg-info/dependency_links.txt +0 -0
  39. {weco-0.2.16 → weco-0.2.17}/weco.egg-info/entry_points.txt +0 -0
  40. {weco-0.2.16 → weco-0.2.17}/weco.egg-info/requires.txt +0 -0
  41. {weco-0.2.16 → weco-0.2.17}/weco.egg-info/top_level.txt +0 -0
{weco-0.2.16 → weco-0.2.17}/.github/workflows/release.yml

@@ -43,8 +43,6 @@ jobs:
  OLD_VERSION=""
  fi

- OLD_VERSION="0.2.15"
-
  echo "Previous version: $OLD_VERSION"
  echo "Current version: $NEW_VERSION"

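With the hardcoded `OLD_VERSION="0.2.15"` override gone, the workflow now compares whatever previous version it detected (or an empty string) against the new one. The decision this implies can be sketched as follows; this is an illustrative Python reconstruction under the assumption that the job only publishes when the version string changes, not the workflow's actual shell logic.

```python
# Illustrative sketch only: the real check is shell inside
# .github/workflows/release.yml; "publish on change" is an assumption.
old_version = ""        # falls back to "" when no previous release is found
new_version = "0.2.17"  # the version being released

print(f"Previous version: {old_version}")
print(f"Current version: {new_version}")

if new_version != old_version:
    print("Version changed; a release would be published.")
else:
    print("Version unchanged; nothing to publish.")
```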
{weco-0.2.16 → weco-0.2.17}/PKG-INFO

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.2.16
+ Version: 0.2.17
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <contact@weco.ai>
  License: MIT

@@ -20,12 +20,21 @@ Requires-Dist: build; extra == "dev"
  Requires-Dist: setuptools_scm; extra == "dev"
  Dynamic: license-file

- # Weco: The AI Research Engineer
+ <div align="center">

- [![Python](https://img.shields.io/badge/Python-3.12.0-blue)](https://www.python.org)
+ # Weco: The AI Code Optimizer
+
+ [![Python](https://img.shields.io/badge/Python-3.8.0+-blue)](https://www.python.org)
+ [![docs](https://img.shields.io/website?url=https://docs.weco.ai/&label=docs)](https://docs.weco.ai/)
  [![PyPI version](https://badge.fury.io/py/weco.svg)](https://badge.fury.io/py/weco)
  [![AIDE](https://img.shields.io/badge/AI--Driven_Exploration-arXiv-orange?style=flat-square&logo=arxiv)](https://arxiv.org/abs/2502.13138)

+ <code>pip install weco</code>
+
+ </div>
+
+ ---
+
  Weco systematically optimizes your code, guided directly by your evaluation metrics.

  Example applications include:

@@ -101,7 +110,7 @@ This command starts the optimization process.

  This basic example shows how to optimize a simple PyTorch function for speedup.

- For more advanced examples, including [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md)**, and **[ML model optimization](/examples/spaceship-titanic/README.md)**, please see the `README.md` files within the corresponding subdirectories under the [`examples/`](./examples/) folder.
+ For more advanced examples, including [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md), [ML model optimization](/examples/spaceship-titanic/README.md), and [prompt engineering for math problems](https://github.com/WecoAI/weco-cli/tree/main/examples/prompt), please see the `README.md` files within the corresponding subdirectories under the [`examples/`](./examples/) folder.

  ```bash
  # Navigate to the example directory

@@ -136,7 +145,6 @@ weco run --source optimize.py \
  | `--model` | Model identifier for the LLM to use (e.g., `gpt-4o`, `claude-3.5-sonnet`). Recommended models to try include `o3-mini`, `claude-3-haiku`, and `gemini-2.5-pro-exp-03-25`. | Yes |
  | `--additional-instructions` | (Optional) Natural language description of specific instructions OR path to a file containing detailed instructions to guide the LLM. | No |
  | `--log-dir` | (Optional) Path to the directory to log intermediate steps and final optimization result. Defaults to `.runs/`. | No |
- | `--preserve-source` | (Optional) If set, do not overwrite the original `--source` file. Modifications and the best solution will still be saved in the `--log-dir`. | No |

  ---

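With the `--preserve-source` row removed from the options table (and the flag removed from the CLI further below), `weco run` in 0.2.17 always writes candidate code back to the `--source` file. A user who wants to keep the untouched original can copy it aside before starting a run; a minimal sketch, where `optimize.py` stands in for whatever file is passed to `--source`:

```python
import shutil
from pathlib import Path

source = Path("optimize.py")                    # placeholder for the file passed to --source
backup = source.with_name(source.name + ".orig")

# weco 0.2.17 overwrites --source in place, so keep a pristine copy first.
shutil.copy2(source, backup)
print(f"Backed up {source} -> {backup}")
```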
{weco-0.2.16 → weco-0.2.17}/README.md

@@ -1,9 +1,18 @@
- # Weco: The AI Research Engineer
+ <div align="center">

- [![Python](https://img.shields.io/badge/Python-3.12.0-blue)](https://www.python.org)
+ # Weco: The AI Code Optimizer
+
+ [![Python](https://img.shields.io/badge/Python-3.8.0+-blue)](https://www.python.org)
+ [![docs](https://img.shields.io/website?url=https://docs.weco.ai/&label=docs)](https://docs.weco.ai/)
  [![PyPI version](https://badge.fury.io/py/weco.svg)](https://badge.fury.io/py/weco)
  [![AIDE](https://img.shields.io/badge/AI--Driven_Exploration-arXiv-orange?style=flat-square&logo=arxiv)](https://arxiv.org/abs/2502.13138)

+ <code>pip install weco</code>
+
+ </div>
+
+ ---
+
  Weco systematically optimizes your code, guided directly by your evaluation metrics.

  Example applications include:

@@ -79,7 +88,7 @@ This command starts the optimization process.

  This basic example shows how to optimize a simple PyTorch function for speedup.

- For more advanced examples, including [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md)**, and **[ML model optimization](/examples/spaceship-titanic/README.md)**, please see the `README.md` files within the corresponding subdirectories under the [`examples/`](./examples/) folder.
+ For more advanced examples, including [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md), [ML model optimization](/examples/spaceship-titanic/README.md), and [prompt engineering for math problems](https://github.com/WecoAI/weco-cli/tree/main/examples/prompt), please see the `README.md` files within the corresponding subdirectories under the [`examples/`](./examples/) folder.

  ```bash
  # Navigate to the example directory

@@ -114,7 +123,6 @@ weco run --source optimize.py \
  | `--model` | Model identifier for the LLM to use (e.g., `gpt-4o`, `claude-3.5-sonnet`). Recommended models to try include `o3-mini`, `claude-3-haiku`, and `gemini-2.5-pro-exp-03-25`. | Yes |
  | `--additional-instructions` | (Optional) Natural language description of specific instructions OR path to a file containing detailed instructions to guide the LLM. | No |
  | `--log-dir` | (Optional) Path to the directory to log intermediate steps and final optimization result. Defaults to `.runs/`. | No |
- | `--preserve-source` | (Optional) If set, do not overwrite the original `--source` file. Modifications and the best solution will still be saved in the `--log-dir`. | No |

  ---

{weco-0.2.16 → weco-0.2.17}/examples/prompt/README.md

@@ -1,4 +1,3 @@
- # weco-cli/examples/prompt/README.md
  # AIME Prompt Engineering Example with Weco

  This example shows how **Weco** can iteratively improve a prompt for solving American Invitational Mathematics Examination (AIME) problems. The experiment runs locally, requires only two short Python files, and aims to improve the accuracy metric.

@@ -97,4 +96,4 @@ Weco then mutates the config, tries again, and gradually pushes the accuracy hig
  * `eval_aime.py` slices the **Maxwell‑Jia/AIME_2024** dataset to twenty problems for fast feedback. You can change the slice in one line.
  * The script sends model calls in parallel via `ThreadPoolExecutor`, so network latency is hidden.
  * Every five completed items, the script logs progress and elapsed time.
- * The final line `accuracy: value` is the only part Weco needs for guidance.
+ * The final line `accuracy: value` is the only part Weco needs for guidance.
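That last bullet is the contract between the evaluation script and Weco: the metric printed on the final line is what guides the search. A minimal sketch of an evaluation script honoring that contract; the toy problems and the stand-in `solve` function are placeholders, not the example's actual evaluation code.

```python
# Toy stand-in for the prompt example's evaluation script: only the final
# "accuracy: <value>" line matters to Weco.
problems = [("2 + 2", "4"), ("3 * 3", "9"), ("10 - 7", "3")]

def solve(question: str) -> str:
    # Placeholder for the (parallel) LLM calls the real script makes.
    return str(eval(question))

correct = sum(solve(q) == answer for q, answer in problems)
accuracy = correct / len(problems)

print(f"accuracy: {accuracy}")  # the only line Weco needs for guidance
```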
{weco-0.2.16 → weco-0.2.17}/pyproject.toml

@@ -10,7 +10,7 @@ authors = [
  ]
  description = "Documentation for `weco`, a CLI for using Weco AI's code optimizer."
  readme = "README.md"
- version = "0.2.16"
+ version = "0.2.17"
  license = {text = "MIT"}
  requires-python = ">=3.8"
  dependencies = ["requests", "rich"]
{weco-0.2.16 → weco-0.2.17}/weco/__init__.py

@@ -1,7 +1,7 @@
  import os

  # DO NOT EDIT
- __pkg_version__ = "0.2.16"
+ __pkg_version__ = "0.2.17"
  __api_version__ = "v1"

  __base_url__ = f"https://api.weco.ai/{__api_version__}"
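The release number lives in two places, `pyproject.toml` and `weco/__init__.py`, and both are bumped here. A small consistency check along these lines can catch a missed bump before tagging; this is a hypothetical helper, not part of the repository, and it assumes the version sits under the standard `[project]` table.

```python
import tomllib  # Python 3.11+; the third-party "tomli" package matches this API on 3.8-3.10
from pathlib import Path

import weco

pyproject = tomllib.loads(Path("pyproject.toml").read_text())
pyproject_version = pyproject["project"]["version"]

# Both spots in this diff move from 0.2.16 to 0.2.17 and must stay in sync.
assert weco.__pkg_version__ == pyproject_version, (
    f"pyproject.toml says {pyproject_version}, "
    f"weco.__pkg_version__ says {weco.__pkg_version__}"
)
print(f"Version in sync: {pyproject_version}")
```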
{weco-0.2.16 → weco-0.2.17}/weco/cli.py

@@ -194,11 +194,6 @@ def main() -> None:
  type=str,
  help="Description of additional instruction or path to a file containing additional instructions",
  )
- run_parser.add_argument(
- "--preserve-source",
- action="store_true",
- help="If set, do not overwrite the original source file; only save modified versions in the runs directory",
- )

  # --- Logout Command ---
  _ = subparsers.add_parser("logout", help="Log out from Weco and clear saved API key.")

@@ -310,9 +305,8 @@ def main() -> None:
  # Write the initial code string to the logs
  write_to_path(fp=runs_dir / f"step_0{source_fp.suffix}", content=session_response["code"])

- # Write the initial code string to the source file path (if not preserving)
- if not args.preserve_source:
- write_to_path(fp=source_fp, content=session_response["code"])
+ # Write the initial code string to the source file path
+ write_to_path(fp=source_fp, content=session_response["code"])

  # Update the panels with the initial solution
  summary_panel.set_session_id(session_id=session_id) # Add session id now that we have it
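These hunks funnel every solution through the same `write_to_path(fp=..., content=...)` helper (defined in `weco/utils.py`, unchanged in this release). Its assumed behaviour is sketched below purely for orientation; this is not taken from the actual source.

```python
from pathlib import Path

def write_to_path(fp: Path, content: str) -> None:
    # Assumed behaviour of the helper used above: ensure the parent
    # directory exists, then write the text content to the file.
    fp.parent.mkdir(parents=True, exist_ok=True)
    fp.write_text(content)

# e.g. write_to_path(fp=Path(".runs/step_0.py"), content="print('hello')")
```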

@@ -398,8 +392,7 @@ def main() -> None:
  )

  # Write the next solution to the source file
- if not args.preserve_source:
- write_to_path(fp=source_fp, content=eval_and_next_solution_response["code"])
+ write_to_path(fp=source_fp, content=eval_and_next_solution_response["code"])

  # Get the optimization session status for
  # the best solution, its score, and the history to plot the tree

@@ -560,8 +553,7 @@ def main() -> None:
  write_to_path(fp=runs_dir / f"best{source_fp.suffix}", content=best_solution_content)

  # write the best solution to the source file
- if not args.preserve_source:
- write_to_path(fp=source_fp, content=best_solution_content)
+ write_to_path(fp=source_fp, content=best_solution_content)

  console.print(end_optimization_layout)

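Because the best solution is still written to the log directory as `best<suffix>` right before it overwrites `--source`, it can be recovered from there after a run. A sketch, assuming each session gets its own subdirectory under the default `.runs/` log dir and that `--source` was a `.py` file:

```python
from pathlib import Path

log_dir = Path(".runs")  # default --log-dir from the options table

# Assumption: each optimization session writes into its own subdirectory;
# pick the most recently modified one.
latest_run = max((p for p in log_dir.iterdir() if p.is_dir()),
                 key=lambda p: p.stat().st_mtime)

best = latest_run / "best.py"  # "best" + the --source file's suffix, per the hunk above
print(best.read_text())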
{weco-0.2.16 → weco-0.2.17}/weco.egg-info/PKG-INFO

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.2.16
+ Version: 0.2.17
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <contact@weco.ai>
  License: MIT

@@ -20,12 +20,21 @@ Requires-Dist: build; extra == "dev"
  Requires-Dist: setuptools_scm; extra == "dev"
  Dynamic: license-file

- # Weco: The AI Research Engineer
+ <div align="center">

- [![Python](https://img.shields.io/badge/Python-3.12.0-blue)](https://www.python.org)
+ # Weco: The AI Code Optimizer
+
+ [![Python](https://img.shields.io/badge/Python-3.8.0+-blue)](https://www.python.org)
+ [![docs](https://img.shields.io/website?url=https://docs.weco.ai/&label=docs)](https://docs.weco.ai/)
  [![PyPI version](https://badge.fury.io/py/weco.svg)](https://badge.fury.io/py/weco)
  [![AIDE](https://img.shields.io/badge/AI--Driven_Exploration-arXiv-orange?style=flat-square&logo=arxiv)](https://arxiv.org/abs/2502.13138)

+ <code>pip install weco</code>
+
+ </div>
+
+ ---
+
  Weco systematically optimizes your code, guided directly by your evaluation metrics.

  Example applications include:

@@ -101,7 +110,7 @@ This command starts the optimization process.

  This basic example shows how to optimize a simple PyTorch function for speedup.

- For more advanced examples, including [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md)**, and **[ML model optimization](/examples/spaceship-titanic/README.md)**, please see the `README.md` files within the corresponding subdirectories under the [`examples/`](./examples/) folder.
+ For more advanced examples, including [Triton](/examples/triton/README.md), [CUDA kernel optimization](/examples/cuda/README.md), [ML model optimization](/examples/spaceship-titanic/README.md), and [prompt engineering for math problems](https://github.com/WecoAI/weco-cli/tree/main/examples/prompt), please see the `README.md` files within the corresponding subdirectories under the [`examples/`](./examples/) folder.

  ```bash
  # Navigate to the example directory

@@ -136,7 +145,6 @@ weco run --source optimize.py \
  | `--model` | Model identifier for the LLM to use (e.g., `gpt-4o`, `claude-3.5-sonnet`). Recommended models to try include `o3-mini`, `claude-3-haiku`, and `gemini-2.5-pro-exp-03-25`. | Yes |
  | `--additional-instructions` | (Optional) Natural language description of specific instructions OR path to a file containing detailed instructions to guide the LLM. | No |
  | `--log-dir` | (Optional) Path to the directory to log intermediate steps and final optimization result. Defaults to `.runs/`. | No |
- | `--preserve-source` | (Optional) If set, do not overwrite the original `--source` file. Modifications and the best solution will still be saved in the `--log-dir`. | No |

  ---
