weco 0.2.22__tar.gz → 0.2.23__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (46)
  1. {weco-0.2.22 → weco-0.2.23}/PKG-INFO +8 -10
  2. {weco-0.2.22 → weco-0.2.23}/README.md +7 -9
  3. {weco-0.2.22 → weco-0.2.23}/examples/cuda/README.md +7 -1
  4. {weco-0.2.22 → weco-0.2.23}/examples/cuda/evaluate.py +3 -0
  5. {weco-0.2.22 → weco-0.2.23}/examples/hello-kernel-world/evaluate.py +3 -0
  6. {weco-0.2.22 → weco-0.2.23}/examples/prompt/README.md +8 -1
  7. {weco-0.2.22 → weco-0.2.23}/examples/spaceship-titanic/README.md +5 -1
  8. {weco-0.2.22 → weco-0.2.23}/examples/triton/README.md +5 -1
  9. {weco-0.2.22 → weco-0.2.23}/examples/triton/evaluate.py +3 -0
  10. {weco-0.2.22 → weco-0.2.23}/pyproject.toml +1 -1
  11. {weco-0.2.22 → weco-0.2.23}/weco/api.py +22 -62
  12. {weco-0.2.22 → weco-0.2.23}/weco.egg-info/PKG-INFO +8 -10
  13. {weco-0.2.22 → weco-0.2.23}/.github/workflows/lint.yml +0 -0
  14. {weco-0.2.22 → weco-0.2.23}/.github/workflows/release.yml +0 -0
  15. {weco-0.2.22 → weco-0.2.23}/.gitignore +0 -0
  16. {weco-0.2.22 → weco-0.2.23}/.repomixignore +0 -0
  17. {weco-0.2.22 → weco-0.2.23}/LICENSE +0 -0
  18. {weco-0.2.22 → weco-0.2.23}/assets/example-optimization.gif +0 -0
  19. {weco-0.2.22 → weco-0.2.23}/assets/weco.svg +0 -0
  20. {weco-0.2.22 → weco-0.2.23}/examples/cuda/guide.md +0 -0
  21. {weco-0.2.22 → weco-0.2.23}/examples/cuda/optimize.py +0 -0
  22. {weco-0.2.22 → weco-0.2.23}/examples/hello-kernel-world/colab_notebook_walkthrough.ipynb +0 -0
  23. {weco-0.2.22 → weco-0.2.23}/examples/hello-kernel-world/optimize.py +0 -0
  24. {weco-0.2.22 → weco-0.2.23}/examples/prompt/eval.py +0 -0
  25. {weco-0.2.22 → weco-0.2.23}/examples/prompt/optimize.py +0 -0
  26. {weco-0.2.22 → weco-0.2.23}/examples/prompt/prompt_guide.md +0 -0
  27. {weco-0.2.22 → weco-0.2.23}/examples/spaceship-titanic/competition_description.md +0 -0
  28. {weco-0.2.22 → weco-0.2.23}/examples/spaceship-titanic/data/sample_submission.csv +0 -0
  29. {weco-0.2.22 → weco-0.2.23}/examples/spaceship-titanic/data/test.csv +0 -0
  30. {weco-0.2.22 → weco-0.2.23}/examples/spaceship-titanic/data/train.csv +0 -0
  31. {weco-0.2.22 → weco-0.2.23}/examples/spaceship-titanic/evaluate.py +0 -0
  32. {weco-0.2.22 → weco-0.2.23}/examples/spaceship-titanic/train.py +0 -0
  33. {weco-0.2.22 → weco-0.2.23}/examples/triton/optimize.py +0 -0
  34. {weco-0.2.22 → weco-0.2.23}/setup.cfg +0 -0
  35. {weco-0.2.22 → weco-0.2.23}/weco/__init__.py +0 -0
  36. {weco-0.2.22 → weco-0.2.23}/weco/auth.py +0 -0
  37. {weco-0.2.22 → weco-0.2.23}/weco/chatbot.py +0 -0
  38. {weco-0.2.22 → weco-0.2.23}/weco/cli.py +0 -0
  39. {weco-0.2.22 → weco-0.2.23}/weco/optimizer.py +0 -0
  40. {weco-0.2.22 → weco-0.2.23}/weco/panels.py +0 -0
  41. {weco-0.2.22 → weco-0.2.23}/weco/utils.py +0 -0
  42. {weco-0.2.22 → weco-0.2.23}/weco.egg-info/SOURCES.txt +0 -0
  43. {weco-0.2.22 → weco-0.2.23}/weco.egg-info/dependency_links.txt +0 -0
  44. {weco-0.2.22 → weco-0.2.23}/weco.egg-info/entry_points.txt +0 -0
  45. {weco-0.2.22 → weco-0.2.23}/weco.egg-info/requires.txt +0 -0
  46. {weco-0.2.22 → weco-0.2.23}/weco.egg-info/top_level.txt +0 -0
{weco-0.2.22 → weco-0.2.23}/PKG-INFO
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.2.22
+ Version: 0.2.23
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <contact@weco.ai>
  License: MIT
@@ -30,10 +30,11 @@ Dynamic: license-file
  </div>

  [![Python](https://img.shields.io/badge/Python-3.8.0+-blue)](https://www.python.org)
+ [![PyPI version](https://img.shields.io/pypi/v/weco?label=PyPI%20version&color=f05138&labelColor=555555)](https://badge.fury.io/py/weco)
  [![docs](https://img.shields.io/website?url=https://docs.weco.ai/&label=docs)](https://docs.weco.ai/)
- [![PyPI version](https://badge.fury.io/py/weco.svg)](https://badge.fury.io/py/weco)
- [![AIDE](https://img.shields.io/badge/AI--Driven_Exploration-arXiv-orange?style=flat-square&logo=arxiv)](https://arxiv.org/abs/2502.13138)
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/WecoAI/weco-cli/blob/main/examples/hello-kernel-world/colab_notebook_walkthrough.ipynb)
+ [![PyPI Downloads](https://static.pepy.tech/badge/weco?color=4c1)](https://pepy.tech/projects/weco)
+ [![arXiv on AIDE](https://img.shields.io/badge/arXiv-AIDE-b31b1b?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2502.13138)
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg?labelColor=ffffff&color=F17E01)](https://colab.research.google.com/github/WecoAI/weco-cli/blob/main/examples/hello-kernel-world/colab_notebook_walkthrough.ipynb)

  `pip install weco`

@@ -73,9 +74,9 @@ The `weco` CLI leverages a tree search approach guided by LLMs to iteratively ex

  `weco` requires API keys for the LLMs it uses internally. You **must** provide these keys via environment variables:

- - **OpenAI:** `export OPENAI_API_KEY="your_key_here"` (Create your API key [here](https://platform.openai.com/api-keys))
- - **Anthropic:** `export ANTHROPIC_API_KEY="your_key_here"` (Create your API key [here](https://console.anthropic.com/settings/keys))
- - **Google:** `export GEMINI_API_KEY="your_key_here"` (Google AI Studio has a free API usage quota. Create your API key [here](https://aistudio.google.com/apikey) to use `weco` for free.)
+ - **OpenAI:** `export OPENAI_API_KEY="your_key_here"` (Create your OpenAI API key [here](https://platform.openai.com/api-keys))
+ - **Anthropic:** `export ANTHROPIC_API_KEY="your_key_here"` (Create your Anthropic API key [here](https://console.anthropic.com/settings/keys))
+ - **Google:** `export GEMINI_API_KEY="your_key_here"` (Google AI Studio has a free API usage quota. Create your Gemini API key [here](https://aistudio.google.com/apikey) to use `weco` for free.)

  ---

@@ -252,20 +253,17 @@ Weco will parse this output to extract the numerical value (1.5 in this case) as
  We welcome your contributions! To get started:

  1. **Fork & Clone the Repository:**
-
     ```bash
     git clone https://github.com/WecoAI/weco-cli.git
     cd weco-cli
     ```

  2. **Install Dependencies:**
-
     ```bash
     pip install -e ".[dev]"
     ```

  3. **Create a Feature Branch:**
-
     ```bash
     git checkout -b feature/your-feature-name
     ```
{weco-0.2.22 → weco-0.2.23}/README.md
@@ -6,10 +6,11 @@
  </div>

  [![Python](https://img.shields.io/badge/Python-3.8.0+-blue)](https://www.python.org)
+ [![PyPI version](https://img.shields.io/pypi/v/weco?label=PyPI%20version&color=f05138&labelColor=555555)](https://badge.fury.io/py/weco)
  [![docs](https://img.shields.io/website?url=https://docs.weco.ai/&label=docs)](https://docs.weco.ai/)
- [![PyPI version](https://badge.fury.io/py/weco.svg)](https://badge.fury.io/py/weco)
- [![AIDE](https://img.shields.io/badge/AI--Driven_Exploration-arXiv-orange?style=flat-square&logo=arxiv)](https://arxiv.org/abs/2502.13138)
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/WecoAI/weco-cli/blob/main/examples/hello-kernel-world/colab_notebook_walkthrough.ipynb)
+ [![PyPI Downloads](https://static.pepy.tech/badge/weco?color=4c1)](https://pepy.tech/projects/weco)
+ [![arXiv on AIDE](https://img.shields.io/badge/arXiv-AIDE-b31b1b?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2502.13138)
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg?labelColor=ffffff&color=F17E01)](https://colab.research.google.com/github/WecoAI/weco-cli/blob/main/examples/hello-kernel-world/colab_notebook_walkthrough.ipynb)

  `pip install weco`

@@ -49,9 +50,9 @@ The `weco` CLI leverages a tree search approach guided by LLMs to iteratively ex

  `weco` requires API keys for the LLMs it uses internally. You **must** provide these keys via environment variables:

- - **OpenAI:** `export OPENAI_API_KEY="your_key_here"` (Create your API key [here](https://platform.openai.com/api-keys))
- - **Anthropic:** `export ANTHROPIC_API_KEY="your_key_here"` (Create your API key [here](https://console.anthropic.com/settings/keys))
- - **Google:** `export GEMINI_API_KEY="your_key_here"` (Google AI Studio has a free API usage quota. Create your API key [here](https://aistudio.google.com/apikey) to use `weco` for free.)
+ - **OpenAI:** `export OPENAI_API_KEY="your_key_here"` (Create your OpenAI API key [here](https://platform.openai.com/api-keys))
+ - **Anthropic:** `export ANTHROPIC_API_KEY="your_key_here"` (Create your Anthropic API key [here](https://console.anthropic.com/settings/keys))
+ - **Google:** `export GEMINI_API_KEY="your_key_here"` (Google AI Studio has a free API usage quota. Create your Gemini API key [here](https://aistudio.google.com/apikey) to use `weco` for free.)

  ---

@@ -228,20 +229,17 @@ Weco will parse this output to extract the numerical value (1.5 in this case) as
  We welcome your contributions! To get started:

  1. **Fork & Clone the Repository:**
-
     ```bash
     git clone https://github.com/WecoAI/weco-cli.git
     cd weco-cli
     ```

  2. **Install Dependencies:**
-
     ```bash
     pip install -e ".[dev]"
     ```

  3. **Create a Feature Branch:**
-
     ```bash
     git checkout -b feature/your-feature-name
     ```
{weco-0.2.22 → weco-0.2.23}/examples/cuda/README.md
@@ -11,7 +11,7 @@ Install the CLI using `pip`:
  pip install weco
  ```

- Set up your API key:
+ Create your OpenAI API key [here](https://platform.openai.com/api-keys), then run:
  ```bash
  export OPENAI_API_KEY="your_key_here"
  ```
@@ -46,3 +46,9 @@ weco run --source optimize.py \
  * `--additional-instructions guide.md`: Provides guidance to the LLM on the optimization approach.

  Weco will iteratively modify `optimize.py`, generating and integrating CUDA C++ code, guided by the evaluation results and the instructions in `guide.md`.
+
+ ## Next Steps
+
+ Now that you've optimized your code with CUDA kernels, try [Triton Optimization](/examples/triton/README.md) for a higher-level GPU programming approach. If you're more interested in [Model Development](/examples/spaceship-titanic/README.md) or [Prompt Engineering](/examples/prompt/README.md), we've got you covered!
+
+ You can check out our [CLI Reference](https://docs.weco.ai/cli/cli-reference) to learn more about what you can do with the tool.
{weco-0.2.22 → weco-0.2.23}/examples/cuda/evaluate.py
@@ -110,6 +110,7 @@ if __name__ == "__main__":

  # benchmarking parameters
  n_correctness_trials = 10
+ correctness_tolerance = 1e-5
  n_warmup = 1000
  n_rep = 5000

@@ -152,6 +153,8 @@ if __name__ == "__main__":
  max_diff_avg += torch.max(torch.abs(optimized_output - baseline_output))
  max_diff_avg /= n_correctness_trials
  print(f"max float diff between values of baseline and optimized model: {max_diff_avg}")
+ if max_diff_avg > correctness_tolerance:
+     print("invalid solution: max float diff is too high")

  # measure performance
  inputs = get_inputs(batch_size=batch_size, seq_len=seq_len, n_embd=n_embd, device="cuda")
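The tolerance gate added to this evaluator (and repeated in the hello-kernel-world and Triton evaluators below) follows one simple pattern. Here is a minimal, framework-free sketch of it; the function name `check_correctness` and the plain-list inputs are illustrative, while the real scripts compare CUDA `torch` tensors.

```python
# Sketch of the correctness gate added in 0.2.23's example evaluators.
# Assumption: outputs are flat lists of floats, one list per trial
# (the real scripts operate on torch tensors with torch.max/torch.abs).
def check_correctness(baseline_outputs, optimized_outputs, tolerance=1e-5):
    """Average the per-trial max absolute difference; flag it if above tolerance."""
    max_diff_avg = 0.0
    for base, opt in zip(baseline_outputs, optimized_outputs):
        max_diff_avg += max(abs(b - o) for b, o in zip(base, opt))
    max_diff_avg /= len(baseline_outputs)
    print(f"max float diff between values of baseline and optimized model: {max_diff_avg}")
    if max_diff_avg > tolerance:
        print("invalid solution: max float diff is too high")
        return False
    return True
```

Printing "invalid solution" rather than raising lets Weco's search read the evaluator output and discard candidates whose numerics drift beyond the tolerance.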
{weco-0.2.22 → weco-0.2.23}/examples/hello-kernel-world/evaluate.py
@@ -101,6 +101,7 @@ if __name__ == "__main__":

  # benchmark parameters
  n_correctness_trials = 10
+ correctness_tolerance = 1e-5
  n_warmup = 1000
  n_rep = 5000

@@ -130,6 +131,8 @@ if __name__ == "__main__":
  max_diff_avg += torch.max(torch.abs(optimized_output - baseline_output))
  max_diff_avg /= n_correctness_trials
  print(f"max float diff between values of baseline and optimized model: {max_diff_avg}")
+ if max_diff_avg > correctness_tolerance:
+     print("invalid solution: max float diff is too high")

  # measure performance
  inputs = get_inputs(batch_size, input_size, args.device)
{weco-0.2.22 → weco-0.2.23}/examples/prompt/README.md
@@ -16,7 +16,7 @@ The experiment runs locally, requires only two short Python files and a prompt g
  pip install weco openai datasets
  ```

- 3. This example uses `o4-mini` via the OpenAI API by default. Set up your API key:
+ 3. This example uses `o4-mini` via the OpenAI API by default. Create your OpenAI API key [here](https://platform.openai.com/api-keys), then run:
  ```bash
  export OPENAI_API_KEY="your_key_here"
  ```
@@ -62,3 +62,10 @@ Weco then mutates the prompt instructions in `optimize.py`, tries again, and gra
  * The script sends model calls in parallel via `ThreadPoolExecutor`, so network latency is hidden.
  * Every five completed items, the script logs progress and elapsed time.
  * The final line `accuracy: value` is the only part Weco needs for guidance.
+
+ ## Next Steps
+
+ Now that you've automated prompt engineering for yourself, check out our guide on [Model Development](/examples/spaceship-titanic/README.md) or [CUDA Kernel Engineering](/examples/cuda/README.md).
+
+ You can check out our [CLI Reference](https://docs.weco.ai/cli/cli-reference) to learn more about what you can do with the tool.
+
{weco-0.2.22 → weco-0.2.23}/examples/spaceship-titanic/README.md
@@ -10,7 +10,7 @@ The goal is to improve the model's `accuracy` metric by optimizing the `train.py
  ```bash
  pip install weco pandas numpy scikit-learn torch xgboost lightgbm catboost
  ```
- 3. Set up your API key:
+ 3. Create your OpenAI API key [here](https://platform.openai.com/api-keys), then run:
  ```bash
  export OPENAI_API_KEY="your_key_here"
  ```
@@ -44,3 +44,7 @@ weco run --source train.py \
  * `--log-dir .runs/spaceship-titanic`: Specifies the directory where Weco should save logs and results for this run.

  Weco will iteratively update the feature engineering or modeling code within `train.py` guided by the evaluation method defined in `evaluate.py`
+
+ ## Next Steps
+
+ With model development covered, you might be curious to see how you can make your AI code run faster, saving you time and more importantly GPU credits. Check out our example on automating kernel engineering in [CUDA](/examples/cuda/README.md) and [Triton](/examples/triton/README.md), or dive into the [CLI Reference](https://docs.weco.ai/cli/cli-reference).
{weco-0.2.22 → weco-0.2.23}/examples/triton/README.md
@@ -12,7 +12,7 @@ Install the CLI using `pip`:
  pip install weco
  ```

- Set up your API key:
+ Create your OpenAI API key [here](https://platform.openai.com/api-keys), then run:
  ```bash
  export OPENAI_API_KEY="your_key_here"
  ```
@@ -47,3 +47,7 @@ weco run --source optimize.py \
  * `--additional-instructions "..."`: Provides specific guidance to the LLM. In this case, it directs the model to use Triton for optimization, ensure the numerical difference ("max float diff") between the original and optimized code remains small, and keep the overall code structure consistent.

  Weco will iteratively modify `optimize.py`, incorporating Triton kernels, guided by the performance feedback (`speedup`) from the evaluation script and the instructions provided.
+
+ ## Next Steps
+
+ After mastering Triton kernels, explore [CUDA Optimization](/examples/cuda/README.md) for even lower-level GPU programming, or check the [CLI Reference](https://docs.weco.ai/cli/cli-reference) to improve the results you get with Weco.
{weco-0.2.22 → weco-0.2.23}/examples/triton/evaluate.py
@@ -105,6 +105,7 @@ if __name__ == "__main__":

  # benchmarking parameters
  n_correctness_trials = 10
+ correctness_tolerance = 1e-5
  n_warmup = 1000
  n_rep = 5000

@@ -147,6 +148,8 @@ if __name__ == "__main__":
  max_diff_avg += torch.max(torch.abs(optimized_output - baseline_output))
  max_diff_avg /= n_correctness_trials
  print(f"max float diff between values of baseline and optimized model: {max_diff_avg}")
+ if max_diff_avg > correctness_tolerance:
+     print("invalid solution: max float diff is too high")

  # measure performance
  inputs = get_inputs(batch_size=batch_size, seq_len=seq_len, n_embd=n_embd, device="cuda")
{weco-0.2.22 → weco-0.2.23}/pyproject.toml
@@ -8,7 +8,7 @@ name = "weco"
  authors = [{ name = "Weco AI Team", email = "contact@weco.ai" }]
  description = "Documentation for `weco`, a CLI for using Weco AI's code optimizer."
  readme = "README.md"
- version = "0.2.22"
+ version = "0.2.23"
  license = { text = "MIT" }
  requires-python = ">=3.8"
  dependencies = [
{weco-0.2.22 → weco-0.2.23}/weco/api.py
@@ -1,29 +1,11 @@
  import sys
  from typing import Dict, Any, Optional, Union, Tuple, List
-
  import requests
- from requests.adapters import HTTPAdapter
- from urllib3.util.retry import Retry
  from rich.console import Console

  from weco import __pkg_version__, __base_url__


- # --- Session Configuration ---
- def _get_weco_session() -> requests.Session:
-     session = requests.Session()
-     retry_strategy = Retry(
-         total=3,
-         status_forcelist=[429, 500, 502, 503, 504],  # Retry on these server errors and rate limiting
-         allowed_methods=["HEAD", "GET", "PUT", "POST", "DELETE", "OPTIONS"],  # Case-insensitive
-         backoff_factor=1,  # e.g., sleep for 0s, 2s, 4s between retries (factor * (2 ** ({number of total retries} - 1)))
-     )
-     adapter = HTTPAdapter(max_retries=retry_strategy)
-     session.mount("http://", adapter)
-     session.mount("https://", adapter)
-     return session
-
-
  def handle_api_error(e: requests.exceptions.HTTPError, console: Console) -> None:
      """Extract and display error messages from API responses in a structured format."""
      try:
@@ -53,8 +35,7 @@ def start_optimization_run(
      """Start the optimization run."""
      with console.status("[bold green]Starting Optimization..."):
          try:
-             session = _get_weco_session()
-             response = session.post(
+             response = requests.post(
                  f"{__base_url__}/runs",
                  json={
                      "source_code": source_code,
@@ -76,8 +57,8 @@ def start_optimization_run(
          except requests.exceptions.HTTPError as e:
              handle_api_error(e, console)
              sys.exit(1)
-         except requests.exceptions.RequestException as e:
-             console.print(f"[bold red]Network Error starting run: {e}[/]")
+         except Exception as e:
+             console.print(f"[bold red]Error starting run: {e}[/]")
              sys.exit(1)


@@ -91,8 +72,7 @@ def evaluate_feedback_then_suggest_next_solution(
  ) -> Dict[str, Any]:
      """Evaluate the feedback and suggest the next solution."""
      try:
-         session = _get_weco_session()
-         response = session.post(
+         response = requests.post(
              f"{__base_url__}/runs/{run_id}/suggest",
              json={
                  "execution_output": execution_output,
@@ -108,8 +88,8 @@
          # Allow caller to handle suggest errors, maybe retry or terminate
          handle_api_error(e, Console())  # Use default console if none passed
          raise  # Re-raise the exception
-     except requests.exceptions.RequestException as e:
-         print(f"Network Error during suggest: {e}")  # Use print as console might not be available
+     except Exception as e:
+         print(f"Error: {e}")  # Use print as console might not be available
          raise  # Re-raise the exception


@@ -118,8 +98,7 @@ def get_optimization_run_status(
  ) -> Dict[str, Any]:
      """Get the current status of the optimization run."""
      try:
-         session = _get_weco_session()
-         response = session.get(
+         response = requests.get(
              f"{__base_url__}/runs/{run_id}", params={"include_history": include_history}, headers=auth_headers, timeout=timeout
          )
          response.raise_for_status()
@@ -127,16 +106,15 @@
      except requests.exceptions.HTTPError as e:
          handle_api_error(e, Console())  # Use default console
          raise  # Re-raise
-     except requests.exceptions.RequestException as e:
-         print(f"Network Error getting status: {e}")
+     except Exception as e:
+         print(f"Error getting run status: {e}")
          raise  # Re-raise


  def send_heartbeat(run_id: str, auth_headers: dict = {}, timeout: Union[int, Tuple[int, int]] = 10) -> bool:
      """Send a heartbeat signal to the backend."""
      try:
-         session = _get_weco_session()
-         response = session.put(f"{__base_url__}/runs/{run_id}/heartbeat", headers=auth_headers, timeout=timeout)
+         response = requests.put(f"{__base_url__}/runs/{run_id}/heartbeat", headers=auth_headers, timeout=timeout)
          response.raise_for_status()
          return True
      except requests.exceptions.HTTPError as e:
@@ -145,8 +123,8 @@ def send_heartbeat(run_id: str, auth_headers: dict = {}, timeout: Union[int, Tup
          else:
              print(f"Heartbeat failed for run {run_id}: HTTP {e.response.status_code}", file=sys.stderr)
          return False
-     except requests.exceptions.RequestException as e:
-         print(f"Heartbeat network error for run {run_id}: {e}", file=sys.stderr)
+     except Exception as e:
+         print(f"Error sending heartbeat for run {run_id}: {e}", file=sys.stderr)
          return False


@@ -160,8 +138,7 @@ def report_termination(
  ) -> bool:
      """Report the termination reason to the backend."""
      try:
-         session = _get_weco_session()
-         response = session.post(
+         response = requests.post(
              f"{__base_url__}/runs/{run_id}/terminate",
              json={"status_update": status_update, "termination_reason": reason, "termination_details": details},
              headers=auth_headers,
@@ -169,7 +146,7 @@
          )
          response.raise_for_status()
          return True
-     except requests.exceptions.RequestException as e:
+     except Exception as e:
          print(f"Warning: Failed to report termination to backend for run {run_id}: {e}", file=sys.stderr)
          return False

@@ -211,8 +188,7 @@
      """Analyze codebase and get optimization suggestions using the model-agnostic backend API."""
      try:
          model, api_key_dict = _determine_model_and_api_key()
-         session = _get_weco_session()
-         response = session.post(
+         response = requests.post(
              f"{__base_url__}/onboard/analyze-codebase",
              json={
                  "gitingest_summary": gitingest_summary,
@@ -231,11 +207,8 @@
      except requests.exceptions.HTTPError as e:
          handle_api_error(e, console)
          return None
-     except requests.exceptions.RequestException as e:
-         console.print(f"[bold red]Network Error getting optimization suggestions: {e}[/]")
-         return None
      except Exception as e:
-         console.print(f"[bold red]Error calling backend API: {e}[/]")
+         console.print(f"[bold red]Error: {e}[/]")
          return None


@@ -250,8 +223,7 @@
      """Generate evaluation script and determine metrics using the model-agnostic backend API."""
      try:
          model, api_key_dict = _determine_model_and_api_key()
-         session = _get_weco_session()
-         response = session.post(
+         response = requests.post(
              f"{__base_url__}/onboard/generate-script",
              json={
                  "target_file": target_file,
@@ -266,15 +238,11 @@
          response.raise_for_status()
          result = response.json()
          return result.get("script_content"), result.get("metric_name"), result.get("goal"), result.get("reasoning")
-
      except requests.exceptions.HTTPError as e:
          handle_api_error(e, console)
          return None, None, None, None
-     except requests.exceptions.RequestException as e:
-         console.print(f"[bold red]Network Error generating evaluation script: {e}[/]")
-         return None, None, None, None
      except Exception as e:
-         console.print(f"[bold red]Error calling backend API: {e}[/]")
+         console.print(f"[bold red]Error: {e}[/]")
          return None, None, None, None


@@ -291,8 +259,7 @@ def analyze_evaluation_environment(
      """Analyze existing evaluation scripts and environment using the model-agnostic backend API."""
      try:
          model, api_key_dict = _determine_model_and_api_key()
-         session = _get_weco_session()
-         response = session.post(
+         response = requests.post(
              f"{__base_url__}/onboard/analyze-environment",
              json={
                  "target_file": target_file,
@@ -312,11 +279,8 @@
      except requests.exceptions.HTTPError as e:
          handle_api_error(e, console)
          return None
-     except requests.exceptions.RequestException as e:
-         console.print(f"[bold red]Network Error analyzing evaluation environment: {e}[/]")
-         return None
      except Exception as e:
-         console.print(f"[bold red]Error calling backend API: {e}[/]")
+         console.print(f"[bold red]Error: {e}[/]")
          return None


@@ -331,8 +295,7 @@ def analyze_script_execution_requirements(
      """Analyze script to determine proper execution command using the model-agnostic backend API."""
      try:
          model, api_key_dict = _determine_model_and_api_key()
-         session = _get_weco_session()
-         response = session.post(
+         response = requests.post(
              f"{__base_url__}/onboard/analyze-script",
              json={
                  "script_content": script_content,
@@ -351,9 +314,6 @@
      except requests.exceptions.HTTPError as e:
          handle_api_error(e, console)
          return f"python {script_path}"
-     except requests.exceptions.RequestException as e:
-         console.print(f"[bold red]Network Error analyzing script execution: {e}[/]")
-         return f"python {script_path}"
      except Exception as e:
-         console.print(f"[bold red]Error calling backend API: {e}[/]")
+         console.print(f"[bold red]Error: {e}[/]")
          return f"python {script_path}"
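For reference, the retry behavior that `weco/api.py` drops in this release can be recreated outside the package. The sketch below reconstructs it from the deleted `_get_weco_session` lines; the helper name `make_retrying_session` is illustrative, and it assumes urllib3 ≥ 1.26 (the `allowed_methods` parameter).

```python
# Sketch of the retry-enabled session that 0.2.22 built in _get_weco_session()
# and that 0.2.23 removes in favor of plain requests.* calls.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def make_retrying_session(total=3, backoff_factor=1.0):
    """Return a requests.Session that retries rate-limit and server errors."""
    session = requests.Session()
    retry_strategy = Retry(
        total=total,
        status_forcelist=[429, 500, 502, 503, 504],  # rate limiting and server errors
        allowed_methods=["HEAD", "GET", "PUT", "POST", "DELETE", "OPTIONS"],
        backoff_factor=backoff_factor,  # sleeps ~ backoff_factor * 2**(retries - 1)
    )
    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session
```

After the change, transient failures surface immediately through the broadened `except Exception` handlers instead of being retried inside the client.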
{weco-0.2.22 → weco-0.2.23}/weco.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.2.22
+ Version: 0.2.23
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <contact@weco.ai>
  License: MIT
@@ -30,10 +30,11 @@ Dynamic: license-file
  </div>

  [![Python](https://img.shields.io/badge/Python-3.8.0+-blue)](https://www.python.org)
+ [![PyPI version](https://img.shields.io/pypi/v/weco?label=PyPI%20version&color=f05138&labelColor=555555)](https://badge.fury.io/py/weco)
  [![docs](https://img.shields.io/website?url=https://docs.weco.ai/&label=docs)](https://docs.weco.ai/)
- [![PyPI version](https://badge.fury.io/py/weco.svg)](https://badge.fury.io/py/weco)
- [![AIDE](https://img.shields.io/badge/AI--Driven_Exploration-arXiv-orange?style=flat-square&logo=arxiv)](https://arxiv.org/abs/2502.13138)
- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/WecoAI/weco-cli/blob/main/examples/hello-kernel-world/colab_notebook_walkthrough.ipynb)
+ [![PyPI Downloads](https://static.pepy.tech/badge/weco?color=4c1)](https://pepy.tech/projects/weco)
+ [![arXiv on AIDE](https://img.shields.io/badge/arXiv-AIDE-b31b1b?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2502.13138)
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg?labelColor=ffffff&color=F17E01)](https://colab.research.google.com/github/WecoAI/weco-cli/blob/main/examples/hello-kernel-world/colab_notebook_walkthrough.ipynb)

  `pip install weco`

@@ -73,9 +74,9 @@ The `weco` CLI leverages a tree search approach guided by LLMs to iteratively ex

  `weco` requires API keys for the LLMs it uses internally. You **must** provide these keys via environment variables:

- - **OpenAI:** `export OPENAI_API_KEY="your_key_here"` (Create your API key [here](https://platform.openai.com/api-keys))
- - **Anthropic:** `export ANTHROPIC_API_KEY="your_key_here"` (Create your API key [here](https://console.anthropic.com/settings/keys))
- - **Google:** `export GEMINI_API_KEY="your_key_here"` (Google AI Studio has a free API usage quota. Create your API key [here](https://aistudio.google.com/apikey) to use `weco` for free.)
+ - **OpenAI:** `export OPENAI_API_KEY="your_key_here"` (Create your OpenAI API key [here](https://platform.openai.com/api-keys))
+ - **Anthropic:** `export ANTHROPIC_API_KEY="your_key_here"` (Create your Anthropic API key [here](https://console.anthropic.com/settings/keys))
+ - **Google:** `export GEMINI_API_KEY="your_key_here"` (Google AI Studio has a free API usage quota. Create your Gemini API key [here](https://aistudio.google.com/apikey) to use `weco` for free.)

  ---

@@ -252,20 +253,17 @@ Weco will parse this output to extract the numerical value (1.5 in this case) as
  We welcome your contributions! To get started:

  1. **Fork & Clone the Repository:**
-
     ```bash
     git clone https://github.com/WecoAI/weco-cli.git
     cd weco-cli
     ```

  2. **Install Dependencies:**
-
     ```bash
     pip install -e ".[dev]"
     ```

  3. **Create a Feature Branch:**
-
     ```bash
     git checkout -b feature/your-feature-name
     ```