weco 0.2.18.tar.gz → 0.2.19.tar.gz
This diff shows the changes between two publicly released versions of the package, as they appear in their respective public registries. It is provided for informational purposes only.
- {weco-0.2.18 → weco-0.2.19}/PKG-INFO +23 -15
- {weco-0.2.18 → weco-0.2.19}/README.md +22 -14
- {weco-0.2.18 → weco-0.2.19}/examples/cuda/README.md +2 -2
- {weco-0.2.18 → weco-0.2.19}/examples/prompt/README.md +2 -2
- {weco-0.2.18 → weco-0.2.19}/examples/spaceship-titanic/README.md +2 -2
- {weco-0.2.18 → weco-0.2.19}/examples/triton/README.md +2 -2
- {weco-0.2.18 → weco-0.2.19}/pyproject.toml +1 -1
- {weco-0.2.18 → weco-0.2.19}/weco/cli.py +50 -12
- {weco-0.2.18 → weco-0.2.19}/weco.egg-info/PKG-INFO +23 -15
- {weco-0.2.18 → weco-0.2.19}/.github/workflows/lint.yml +0 -0
- {weco-0.2.18 → weco-0.2.19}/.github/workflows/release.yml +0 -0
- {weco-0.2.18 → weco-0.2.19}/.gitignore +0 -0
- {weco-0.2.18 → weco-0.2.19}/.repomixignore +0 -0
- {weco-0.2.18 → weco-0.2.19}/LICENSE +0 -0
- {weco-0.2.18 → weco-0.2.19}/assets/example-optimization.gif +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/cuda/evaluate.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/cuda/guide.md +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/cuda/optimize.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/hello-kernel-world/evaluate.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/hello-kernel-world/optimize.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/prompt/eval.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/prompt/optimize.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/prompt/prompt_guide.md +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/spaceship-titanic/competition_description.md +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/spaceship-titanic/data/sample_submission.csv +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/spaceship-titanic/data/test.csv +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/spaceship-titanic/data/train.csv +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/spaceship-titanic/evaluate.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/spaceship-titanic/requirements-test.txt +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/triton/evaluate.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/examples/triton/optimize.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/setup.cfg +0 -0
- {weco-0.2.18 → weco-0.2.19}/weco/__init__.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/weco/api.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/weco/auth.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/weco/panels.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/weco/utils.py +0 -0
- {weco-0.2.18 → weco-0.2.19}/weco.egg-info/SOURCES.txt +0 -0
- {weco-0.2.18 → weco-0.2.19}/weco.egg-info/dependency_links.txt +0 -0
- {weco-0.2.18 → weco-0.2.19}/weco.egg-info/entry_points.txt +0 -0
- {weco-0.2.18 → weco-0.2.19}/weco.egg-info/requires.txt +0 -0
- {weco-0.2.18 → weco-0.2.19}/weco.egg-info/top_level.txt +0 -0
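Most of the churn below comes from one breaking CLI change: the boolean `--maximize` flag becomes `--goal maximize|minimize` (with `max`/`min` shorthands), every option gains a short alias, `--steps` gains a default, and the `run` subparser sets `allow_abbrev=False`. A reduced sketch of the new interface — flag names, choices, and the `--steps` default are taken from the diff; the prog name and the omitted flags are illustrative:

```python
import argparse

# Reduced model of the 0.2.19 `weco run` interface (not the shipped parser).
parser = argparse.ArgumentParser(prog="weco run", allow_abbrev=False)
parser.add_argument("-s", "--source", required=True)
parser.add_argument("-m", "--metric", required=True)
parser.add_argument("-g", "--goal", required=True,
                    choices=["maximize", "max", "minimize", "min"])
parser.add_argument("-n", "--steps", type=int, default=100)  # was required in 0.2.18

args = parser.parse_args(["-s", "optimize.py", "-m", "speedup", "--goal", "max"])
print(args.goal, args.steps)  # → max 100
```

With `allow_abbrev=False`, a prefix such as `--sou` is now rejected with a usage error instead of silently expanding to `--source`.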
{weco-0.2.18 → weco-0.2.19}/PKG-INFO

````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: weco
-Version: 0.2.18
+Version: 0.2.19
 Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
 Author-email: Weco AI Team <contact@weco.ai>
 License: MIT
@@ -98,9 +98,8 @@ pip install torch
 weco run --source optimize.py \
     --eval-command "python evaluate.py --solution-path optimize.py --device cpu" \
     --metric speedup \
-    --maximize \
+    --goal maximize \
     --steps 15 \
-    --model gemini-2.5-pro-exp-03-25 \
     --additional-instructions "Fuse operations in the forward method while ensuring the max float deviation remains small. Maintain the same format of the code."
 ```
 
@@ -108,18 +107,27 @@ weco run --source optimize.py \
 
 ---
 
-[12 removed lines; their content is truncated in this rendering]
+### Arguments for `weco run`
+
+**Required:**
+
+| Argument | Description |
+| :--- | :--- |
+| `-s, --source` | Path to the source code file that will be optimized (e.g., `optimize.py`). |
+| `-c, --eval-command`| Command to run for evaluating the code in `--source`. This command should print the target `--metric` and its value to the terminal (stdout/stderr). See note below. |
+| `-m, --metric` | The name of the metric you want to optimize (e.g., 'accuracy', 'speedup', 'loss'). This metric name should match what's printed by your `--eval-command`. |
+| `-g, --goal` | `maximize`/`max` to maximize the `--metric` or `minimize`/`min` to minimize it. |
+
+<br>
+
+**Optional:**
+
+| Argument | Description | Default |
+| :--- | :--- | :--- |
+| `-n, --steps` | Number of optimization steps (LLM iterations) to run. | 100 |
+| `-M, --model` | Model identifier for the LLM to use (e.g., `gpt-4o`, `claude-3.5-sonnet`). | `o4-mini` when `OPENAI_API_KEY` is set; `claude-3-7-sonnet-20250219` when `ANTHROPIC_API_KEY` is set; `gemini-2.5-pro-exp-03-25` when `GEMINI_API_KEY` is set (priority: `OPENAI_API_KEY` > `ANTHROPIC_API_KEY` > `GEMINI_API_KEY`). |
+| `-i, --additional-instructions`| Natural language description of specific instructions **or** path to a file containing detailed instructions to guide the LLM. | `None` |
+| `-l, --log-dir` | Path to the directory to log intermediate steps and final optimization result. | `.runs/` |
 
 ---
 
````
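The `-g, --goal` option replaces 0.2.18's boolean `--maximize` flag; internally the CLI still reduces it to a boolean (the diff's `maximize = args.goal in ["maximize", "max"]`). A one-function sketch of that normalization:

```python
def wants_maximize(goal: str) -> bool:
    # Mirrors cli.py's `maximize = args.goal in ["maximize", "max"]`;
    # the other accepted choices ("minimize"/"min") mean minimize.
    return goal in ("maximize", "max")

assert wants_maximize("max") and wants_maximize("maximize")
assert not wants_maximize("min") and not wants_maximize("minimize")
```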
{weco-0.2.18 → weco-0.2.19}/README.md

````diff
@@ -75,9 +75,8 @@ pip install torch
 weco run --source optimize.py \
     --eval-command "python evaluate.py --solution-path optimize.py --device cpu" \
     --metric speedup \
-    --maximize \
+    --goal maximize \
     --steps 15 \
-    --model gemini-2.5-pro-exp-03-25 \
     --additional-instructions "Fuse operations in the forward method while ensuring the max float deviation remains small. Maintain the same format of the code."
 ```
 
@@ -85,18 +84,27 @@ weco run --source optimize.py \
 
 ---
 
-[12 removed lines; their content is truncated in this rendering]
+### Arguments for `weco run`
+
+**Required:**
+
+| Argument | Description |
+| :--- | :--- |
+| `-s, --source` | Path to the source code file that will be optimized (e.g., `optimize.py`). |
+| `-c, --eval-command`| Command to run for evaluating the code in `--source`. This command should print the target `--metric` and its value to the terminal (stdout/stderr). See note below. |
+| `-m, --metric` | The name of the metric you want to optimize (e.g., 'accuracy', 'speedup', 'loss'). This metric name should match what's printed by your `--eval-command`. |
+| `-g, --goal` | `maximize`/`max` to maximize the `--metric` or `minimize`/`min` to minimize it. |
+
+<br>
+
+**Optional:**
+
+| Argument | Description | Default |
+| :--- | :--- | :--- |
+| `-n, --steps` | Number of optimization steps (LLM iterations) to run. | 100 |
+| `-M, --model` | Model identifier for the LLM to use (e.g., `gpt-4o`, `claude-3.5-sonnet`). | `o4-mini` when `OPENAI_API_KEY` is set; `claude-3-7-sonnet-20250219` when `ANTHROPIC_API_KEY` is set; `gemini-2.5-pro-exp-03-25` when `GEMINI_API_KEY` is set (priority: `OPENAI_API_KEY` > `ANTHROPIC_API_KEY` > `GEMINI_API_KEY`). |
+| `-i, --additional-instructions`| Natural language description of specific instructions **or** path to a file containing detailed instructions to guide the LLM. | `None` |
+| `-l, --log-dir` | Path to the directory to log intermediate steps and final optimization result. | `.runs/` |
 
 ---
 
````
{weco-0.2.18 → weco-0.2.19}/examples/cuda/README.md

````diff
@@ -21,7 +21,7 @@ Run the following command to start the optimization process:
 weco run --source optimize.py \
     --eval-command "python evaluate.py --solution-path optimize.py" \
     --metric speedup \
-    --maximize \
+    --goal maximize \
     --steps 30 \
     --model gemini-2.5-pro-exp-03-25 \
     --additional-instructions guide.md
@@ -32,7 +32,7 @@ weco run --source optimize.py \
 * `--source optimize.py`: The initial PyTorch self-attention code to be optimized with CUDA.
 * `--eval-command "python evaluate.py --solution-path optimize.py"`: Runs the evaluation script, which compiles (if necessary) and benchmarks the CUDA-enhanced code in `optimize.py` against a baseline, printing the `speedup`.
 * `--metric speedup`: The optimization target metric.
-* `--maximize
+* `--goal maximize`: Weco aims to increase the speedup.
 * `--steps 30`: The number of optimization iterations.
 * `--model gemini-2.5-pro-exp-03-25`: The LLM used for code generation.
 * `--additional-instructions guide.md`: Points Weco to a file containing detailed instructions for the LLM on how to write the CUDA kernels, handle compilation (e.g., using `torch.utils.cpp_extension`), manage data types, and ensure correctness.
````
{weco-0.2.18 → weco-0.2.19}/examples/prompt/README.md

````diff
@@ -24,10 +24,10 @@ This example uses `gpt-4o-mini` via the OpenAI API by default. Ensure your `OPEN
 weco --source optimize.py \
     --eval-command "python eval.py" \
     --metric accuracy \
-    --maximize \
+    --goal maximize \
     --steps 40 \
     --model gemini-2.5-flash-preview-04-17 \
-    --
+    --additional-instructions prompt_guide.md
 ```
 
 During each evaluation round you will see log lines similar to the following.
````
{weco-0.2.18 → weco-0.2.19}/examples/spaceship-titanic/README.md

````diff
@@ -20,7 +20,7 @@ Run the following command to start optimizing the model:
 weco run --source evaluate.py \
     --eval-command "python evaluate.py --data-dir ./data" \
     --metric accuracy \
-    --maximize \
+    --goal maximize \
     --steps 20 \
     --model o4-mini \
     --additional-instructions "Improve feature engineering, model choice and hyper-parameters."
@@ -34,7 +34,7 @@ weco run --source evaluate.py \
 * [optional] `--data-dir`: path to the train and test data.
 * [optional] `--seed`: Seed for reproduce the experiment.
 * `--metric accuracy`: The target metric Weco should optimize.
-* `--maximize
+* `--goal maximize`: Weco aims to increase the accuracy.
 * `--steps 10`: The number of optimization iterations.
 * `--model gemini-2.5-pro-exp-03-25`: The LLM driving the optimization.
 * `--additional-instructions "Improve feature engineering, model choice and hyper-parameters."`: A simple instruction for model improvement or you can put the path to [`comptition_description.md`](./competition_description.md) within the repo to feed the agent more detailed information.
````
{weco-0.2.18 → weco-0.2.19}/examples/triton/README.md

````diff
@@ -19,7 +19,7 @@ Run the following command to start the optimization process:
 weco run --source optimize.py \
     --eval-command "python evaluate.py --solution-path optimize.py" \
     --metric speedup \
-    --maximize \
+    --goal maximize \
     --steps 30 \
     --model gemini-2.5-pro-exp-03-25 \
     --additional-instructions "Use triton to optimize the code while ensuring a small max float diff. Maintain the same code format."
@@ -30,7 +30,7 @@ weco run --source optimize.py \
 * `--source optimize.py`: The PyTorch self-attention implementation to be optimized.
 * `--eval-command "python evaluate.py --solution-path optimize.py"`: Executes the evaluation script, which benchmarks the `optimize.py` code against a baseline and prints the `speedup`.
 * `--metric speedup`: The target metric for optimization.
-* `--maximize
+* `--goal maximize`: The agent should maximize the speedup.
 * `--steps 30`: The number of optimization iterations.
 * `--model gemini-2.5-pro-exp-03-25`: The LLM driving the optimization.
 * `--additional-instructions "..."`: Provides specific guidance to the LLM, instructing it to use Triton, maintain numerical accuracy ("small max float diff"), and preserve the code structure.
````
{weco-0.2.18 → weco-0.2.19}/pyproject.toml

````diff
@@ -8,7 +8,7 @@ name = "weco"
 authors = [{ name = "Weco AI Team", email = "contact@weco.ai" }]
 description = "Documentation for `weco`, a CLI for using Weco AI's code optimizer."
 readme = "README.md"
-version = "0.2.18"
+version = "0.2.19"
 license = { text = "MIT" }
 requires-python = ">=3.8"
 dependencies = ["requests", "rich", "packaging"]
````
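The only change here is the version bump. For plain `X.Y.Z` strings like these, the ordering can be checked with a simple tuple comparison — a toy sketch only; weco's own declared dependency `packaging` implements the full PEP 440 rules and is what real tooling should use:

```python
def release_tuple(version: str) -> tuple[int, ...]:
    # Toy parser: only handles dotted integer releases like "0.2.19".
    return tuple(int(part) for part in version.split("."))

assert release_tuple("0.2.19") > release_tuple("0.2.18")
print(release_tuple("0.2.19"))  # → (0, 2, 19)
```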
{weco-0.2.18 → weco-0.2.19}/weco/cli.py

````diff
@@ -247,29 +247,55 @@ def main() -> None:
 
     # --- Run Command ---
     run_parser = subparsers.add_parser(
-        "run", help="Run code optimization", formatter_class=argparse.RawDescriptionHelpFormatter
+        "run", help="Run code optimization", formatter_class=argparse.RawDescriptionHelpFormatter, allow_abbrev=False
     )
     # Add arguments specific to the 'run' command to the run_parser
-    run_parser.add_argument("--source", type=str, required=True, help="Path to the source code file (e.g. optimize.py)")
     run_parser.add_argument(
-        "
+        "-s",
+        "--source",
+        type=str,
+        required=True,
+        help="Path to the source code file that will be optimized (e.g., `optimize.py`)",
+    )
+    run_parser.add_argument(
+        "-c",
+        "--eval-command",
+        type=str,
+        required=True,
+        help="Command to run for evaluation (e.g. 'python eval.py --arg1=val1').",
+    )
+    run_parser.add_argument(
+        "-m",
+        "--metric",
+        type=str,
+        required=True,
+        help="Metric to optimize (e.g. 'accuracy', 'loss', 'f1_score') that is printed to the terminal by the eval command.",
     )
-    run_parser.add_argument("--metric", type=str, required=True, help="Metric to optimize")
     run_parser.add_argument(
-        "
+        "-g",
+        "--goal",
         type=str,
-        choices=["
+        choices=["maximize", "max", "minimize", "min"],
         required=True,
-        help="Specify '
+        help="Specify 'maximize'/'max' to maximize the metric or 'minimize'/'min' to minimize it.",
+    )
+    run_parser.add_argument("-n", "--steps", type=int, default=100, help="Number of steps to run. Defaults to 100.")
+    run_parser.add_argument(
+        "-M",
+        "--model",
+        type=str,
+        default=None,
+        help="Model to use for optimization. Defaults to `o4-mini` when `OPENAI_API_KEY` is set, `claude-3-7-sonnet-20250219` when `ANTHROPIC_API_KEY` is set, and `gemini-2.5-pro-exp-03-25` when `GEMINI_API_KEY` is set. When multiple keys are set, the priority is `OPENAI_API_KEY` > `ANTHROPIC_API_KEY` > `GEMINI_API_KEY`.",
     )
-    run_parser.add_argument("--steps", type=int, required=True, help="Number of steps to run")
-    run_parser.add_argument("--model", type=str, required=True, help="Model to use for optimization")
-    run_parser.add_argument("--log-dir", type=str, default=".runs", help="Directory to store logs and results")
     run_parser.add_argument(
+        "-l", "--log-dir", type=str, default=".runs", help="Directory to store logs and results. Defaults to `.runs`."
+    )
+    run_parser.add_argument(
+        "-i",
         "--additional-instructions",
         default=None,
         type=str,
-        help="Description of additional instruction or path to a file containing additional instructions",
+        help="Description of additional instruction or path to a file containing additional instructions. Defaults to None.",
     )
 
     # --- Logout Command ---
````
````diff
@@ -328,8 +354,20 @@ def main() -> None:
     # --- Configuration Loading ---
     evaluation_command = args.eval_command
     metric_name = args.metric
-    maximize = args.
+    maximize = args.goal in ["maximize", "max"]
     steps = args.steps
+    # Determine the model to use
+    if args.model is None:
+        if "OPENAI_API_KEY" in llm_api_keys:
+            args.model = "o4-mini"
+        elif "ANTHROPIC_API_KEY" in llm_api_keys:
+            args.model = "claude-3-7-sonnet-20250219"
+        elif "GEMINI_API_KEY" in llm_api_keys:
+            args.model = "gemini-2.5-pro-exp-03-25"
+        else:
+            raise ValueError(
+                "No LLM API keys found in environment. Please set one of the following: OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY."
+            )
     code_generator_config = {"model": args.model}
     evaluator_config = {"model": args.model, "include_analysis": True}
     search_policy_config = {
````
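This hunk is where 0.2.19 makes `--model` optional: when it is omitted, a default is chosen from whichever API keys are present. The same priority chain can be exercised in isolation — a sketch, where `llm_api_keys` stands in for the CLI's own key detection (which is not shown in this diff):

```python
# Priority order from the diff: OPENAI_API_KEY > ANTHROPIC_API_KEY > GEMINI_API_KEY.
DEFAULT_MODELS = [
    ("OPENAI_API_KEY", "o4-mini"),
    ("ANTHROPIC_API_KEY", "claude-3-7-sonnet-20250219"),
    ("GEMINI_API_KEY", "gemini-2.5-pro-exp-03-25"),
]

def pick_default_model(llm_api_keys: dict) -> str:
    for env_var, model in DEFAULT_MODELS:
        if env_var in llm_api_keys:
            return model
    raise ValueError(
        "No LLM API keys found in environment. Please set one of the "
        "following: OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY."
    )

# With both an Anthropic and an OpenAI key present, the OpenAI default wins:
print(pick_default_model({"OPENAI_API_KEY": "k1", "ANTHROPIC_API_KEY": "k2"}))  # → o4-mini
```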
{weco-0.2.18 → weco-0.2.19}/weco.egg-info/PKG-INFO

This generated file receives the same changes as PKG-INFO above: the version bump to 0.2.19, `--goal maximize` in the example command, and the new "Arguments for `weco run`" tables.