weco 0.3.3__tar.gz → 0.3.4__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (58)
  1. {weco-0.3.3 → weco-0.3.4}/PKG-INFO +3 -1
  2. {weco-0.3.3 → weco-0.3.4}/README.md +2 -0
  3. {weco-0.3.3 → weco-0.3.4}/examples/README.md +2 -0
  4. {weco-0.3.3 → weco-0.3.4}/pyproject.toml +1 -1
  5. {weco-0.3.3 → weco-0.3.4}/weco/cli.py +12 -1
  6. {weco-0.3.3 → weco-0.3.4}/weco/optimizer.py +99 -31
  7. {weco-0.3.3 → weco-0.3.4}/weco/panels.py +26 -0
  8. {weco-0.3.3 → weco-0.3.4}/weco/utils.py +35 -0
  9. {weco-0.3.3 → weco-0.3.4}/weco.egg-info/PKG-INFO +3 -1
  10. {weco-0.3.3 → weco-0.3.4}/.github/workflows/lint.yml +0 -0
  11. {weco-0.3.3 → weco-0.3.4}/.github/workflows/release.yml +0 -0
  12. {weco-0.3.3 → weco-0.3.4}/.gitignore +0 -0
  13. {weco-0.3.3 → weco-0.3.4}/LICENSE +0 -0
  14. {weco-0.3.3 → weco-0.3.4}/assets/example-optimization.gif +0 -0
  15. {weco-0.3.3 → weco-0.3.4}/assets/weco.svg +0 -0
  16. {weco-0.3.3 → weco-0.3.4}/contributing.md +0 -0
  17. {weco-0.3.3 → weco-0.3.4}/examples/cuda/README.md +0 -0
  18. {weco-0.3.3 → weco-0.3.4}/examples/cuda/evaluate.py +0 -0
  19. {weco-0.3.3 → weco-0.3.4}/examples/cuda/module.py +0 -0
  20. {weco-0.3.3 → weco-0.3.4}/examples/cuda/requirements.txt +0 -0
  21. {weco-0.3.3 → weco-0.3.4}/examples/extract-line-plot/README.md +0 -0
  22. {weco-0.3.3 → weco-0.3.4}/examples/extract-line-plot/eval.py +0 -0
  23. {weco-0.3.3 → weco-0.3.4}/examples/extract-line-plot/guide.md +0 -0
  24. {weco-0.3.3 → weco-0.3.4}/examples/extract-line-plot/optimize.py +0 -0
  25. {weco-0.3.3 → weco-0.3.4}/examples/extract-line-plot/prepare_data.py +0 -0
  26. {weco-0.3.3 → weco-0.3.4}/examples/extract-line-plot/pyproject.toml +0 -0
  27. {weco-0.3.3 → weco-0.3.4}/examples/hello-world/README.md +0 -0
  28. {weco-0.3.3 → weco-0.3.4}/examples/hello-world/colab_notebook_walkthrough.ipynb +0 -0
  29. {weco-0.3.3 → weco-0.3.4}/examples/hello-world/evaluate.py +0 -0
  30. {weco-0.3.3 → weco-0.3.4}/examples/hello-world/module.py +0 -0
  31. {weco-0.3.3 → weco-0.3.4}/examples/hello-world/requirements.txt +0 -0
  32. {weco-0.3.3 → weco-0.3.4}/examples/prompt/README.md +0 -0
  33. {weco-0.3.3 → weco-0.3.4}/examples/prompt/eval.py +0 -0
  34. {weco-0.3.3 → weco-0.3.4}/examples/prompt/optimize.py +0 -0
  35. {weco-0.3.3 → weco-0.3.4}/examples/prompt/prompt_guide.md +0 -0
  36. {weco-0.3.3 → weco-0.3.4}/examples/spaceship-titanic/README.md +0 -0
  37. {weco-0.3.3 → weco-0.3.4}/examples/spaceship-titanic/competition_description.md +0 -0
  38. {weco-0.3.3 → weco-0.3.4}/examples/spaceship-titanic/data/sample_submission.csv +0 -0
  39. {weco-0.3.3 → weco-0.3.4}/examples/spaceship-titanic/data/test.csv +0 -0
  40. {weco-0.3.3 → weco-0.3.4}/examples/spaceship-titanic/data/train.csv +0 -0
  41. {weco-0.3.3 → weco-0.3.4}/examples/spaceship-titanic/evaluate.py +0 -0
  42. {weco-0.3.3 → weco-0.3.4}/examples/spaceship-titanic/train.py +0 -0
  43. {weco-0.3.3 → weco-0.3.4}/examples/triton/README.md +0 -0
  44. {weco-0.3.3 → weco-0.3.4}/examples/triton/evaluate.py +0 -0
  45. {weco-0.3.3 → weco-0.3.4}/examples/triton/module.py +0 -0
  46. {weco-0.3.3 → weco-0.3.4}/examples/triton/requirements.txt +0 -0
  47. {weco-0.3.3 → weco-0.3.4}/setup.cfg +0 -0
  48. {weco-0.3.3 → weco-0.3.4}/weco/__init__.py +0 -0
  49. {weco-0.3.3 → weco-0.3.4}/weco/api.py +0 -0
  50. {weco-0.3.3 → weco-0.3.4}/weco/auth.py +0 -0
  51. {weco-0.3.3 → weco-0.3.4}/weco/chatbot.py +0 -0
  52. {weco-0.3.3 → weco-0.3.4}/weco/constants.py +0 -0
  53. {weco-0.3.3 → weco-0.3.4}/weco/credits.py +0 -0
  54. {weco-0.3.3 → weco-0.3.4}/weco.egg-info/SOURCES.txt +0 -0
  55. {weco-0.3.3 → weco-0.3.4}/weco.egg-info/dependency_links.txt +0 -0
  56. {weco-0.3.3 → weco-0.3.4}/weco.egg-info/entry_points.txt +0 -0
  57. {weco-0.3.3 → weco-0.3.4}/weco.egg-info/requires.txt +0 -0
  58. {weco-0.3.3 → weco-0.3.4}/weco.egg-info/top_level.txt +0 -0
{weco-0.3.3 → weco-0.3.4}/PKG-INFO

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.3.3
+ Version: 0.3.4
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <contact@weco.ai>
  License:
@@ -322,6 +322,7 @@ For more advanced examples, including [Triton](/examples/triton/README.md), [CUD
  | `-l, --log-dir` | Path to the directory to log intermediate steps and final optimization result. | `.runs/` | `-l ./logs/` |
  | `--eval-timeout` | Timeout in seconds for each step in evaluation. | No timeout (unlimited) | `--eval-timeout 3600` |
  | `--save-logs` | Save execution output from each optimization step to disk. Creates timestamped directories with raw output files and a JSONL index for tracking execution history. | `False` | `--save-logs` |
+ | `--apply-change` | Automatically apply the best solution to the source file without prompting. | `False` | `--apply-change` |

  ---

@@ -375,6 +376,7 @@ Arguments for `weco resume`:
  | Argument | Description | Example |
  |----------|-------------|---------|
  | `run-id` | The UUID of the run to resume (shown at the start of each run) | `0002e071-1b67-411f-a514-36947f0c4b31` |
+ | `--apply-change` | Automatically apply the best solution to the source file without prompting | `--apply-change` |

  Notes:
  - Works only for interrupted runs (status: `error`, `terminated`, etc.).
{weco-0.3.3 → weco-0.3.4}/README.md

@@ -94,6 +94,7 @@ For more advanced examples, including [Triton](/examples/triton/README.md), [CUD
  | `-l, --log-dir` | Path to the directory to log intermediate steps and final optimization result. | `.runs/` | `-l ./logs/` |
  | `--eval-timeout` | Timeout in seconds for each step in evaluation. | No timeout (unlimited) | `--eval-timeout 3600` |
  | `--save-logs` | Save execution output from each optimization step to disk. Creates timestamped directories with raw output files and a JSONL index for tracking execution history. | `False` | `--save-logs` |
+ | `--apply-change` | Automatically apply the best solution to the source file without prompting. | `False` | `--apply-change` |

  ---

@@ -147,6 +148,7 @@ Arguments for `weco resume`:
  | Argument | Description | Example |
  |----------|-------------|---------|
  | `run-id` | The UUID of the run to resume (shown at the start of each run) | `0002e071-1b67-411f-a514-36947f0c4b31` |
+ | `--apply-change` | Automatically apply the best solution to the source file without prompting | `--apply-change` |

  Notes:
  - Works only for interrupted runs (status: `error`, `terminated`, etc.).
{weco-0.3.3 → weco-0.3.4}/examples/README.md

@@ -40,6 +40,8 @@ pip install weco

  Minimal commands to run each example. For full context and explanations, see the linked READMEs.

+ > **Tip**: Add `--apply-change` to any command below to automatically apply the best solution to your source file without prompting.
+
  ### 🧭 Hello World

  ```bash
{weco-0.3.3 → weco-0.3.4}/pyproject.toml

@@ -8,7 +8,7 @@ name = "weco"
  authors = [{ name = "Weco AI Team", email = "contact@weco.ai" }]
  description = "Documentation for `weco`, a CLI for using Weco AI's code optimizer."
  readme = "README.md"
- version = "0.3.3"
+ version = "0.3.4"
  license = { file = "LICENSE" }
  requires-python = ">=3.8"
  dependencies = [
{weco-0.3.3 → weco-0.3.4}/weco/cli.py

@@ -72,6 +72,11 @@ def configure_run_parser(run_parser: argparse.ArgumentParser) -> None:
  action="store_true",
  help="Save execution output to .runs/<run-id>/outputs/step_<n>.out.txt with JSONL index",
  )
+ run_parser.add_argument(
+     "--apply-change",
+     action="store_true",
+     help="Automatically apply the best solution to the source file without prompting",
+ )


  def configure_credits_parser(credits_parser: argparse.ArgumentParser) -> None:
@@ -118,6 +123,11 @@ def configure_resume_parser(resume_parser: argparse.ArgumentParser) -> None:
  resume_parser.add_argument(
      "run_id", type=str, help="The UUID of the run to resume (e.g., '0002e071-1b67-411f-a514-36947f0c4b31')"
  )
+ resume_parser.add_argument(
+     "--apply-change",
+     action="store_true",
+     help="Automatically apply the best solution to the source file without prompting",
+ )


  def execute_run_command(args: argparse.Namespace) -> None:
@@ -136,6 +146,7 @@ def execute_run_command(args: argparse.Namespace) -> None:
  console=console,
  eval_timeout=args.eval_timeout,
  save_logs=args.save_logs,
+ apply_change=args.apply_change,
  )
  exit_code = 0 if success else 1
  sys.exit(exit_code)
@@ -145,7 +156,7 @@ def execute_resume_command(args: argparse.Namespace) -> None:
  """Execute the 'weco resume' command with all its logic."""
  from .optimizer import resume_optimization

- success = resume_optimization(run_id=args.run_id, console=console)
+ success = resume_optimization(run_id=args.run_id, console=console, apply_change=args.apply_change)
  sys.exit(0 if success else 1)

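The cli.py hunks above register the new flag on both the `run` and `resume` subcommands and thread `args.apply_change` through to the optimizer entry points. Below is a minimal, self-contained sketch of that argparse wiring; the prog name and subparser scaffolding are assumptions for illustration, and only the `--apply-change` flag itself mirrors the diff.

```python
# Hypothetical, stripped-down sketch of the CLI wiring; not the actual weco/cli.py.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="weco")
    subcommands = parser.add_subparsers(dest="command", required=True)

    run_parser = subcommands.add_parser("run", help="Start an optimization run")
    run_parser.add_argument(
        "--apply-change",
        action="store_true",  # False by default, True when the flag is passed
        help="Automatically apply the best solution to the source file without prompting",
    )

    resume_parser = subcommands.add_parser("resume", help="Resume an interrupted run")
    resume_parser.add_argument("run_id", type=str, help="The UUID of the run to resume")
    resume_parser.add_argument(
        "--apply-change",
        action="store_true",
        help="Automatically apply the best solution to the source file without prompting",
    )
    return parser


if __name__ == "__main__":
    # argparse exposes --apply-change as args.apply_change, which is what the real CLI
    # forwards to execute_optimization / resume_optimization.
    args = build_parser().parse_args(["resume", "0002e071-1b67-411f-a514-36947f0c4b31", "--apply-change"])
    print(args.command, args.run_id, args.apply_change)  # resume 0002e071-... True
```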
{weco-0.3.3 → weco-0.3.4}/weco/optimizer.py

@@ -30,7 +30,7 @@ from .panels import (
      create_optimization_layout,
      create_end_optimization_layout,
  )
- from .utils import read_additional_instructions, read_from_path, write_to_path, run_evaluation, smooth_update
+ from .utils import read_additional_instructions, read_from_path, write_to_path, run_evaluation_with_file_swap, smooth_update


  def save_execution_output(runs_dir: pathlib.Path, step: int, output: str) -> None:
@@ -134,6 +134,7 @@ def execute_optimization(
      console: Optional[Console] = None,
      eval_timeout: Optional[int] = None,
      save_logs: bool = False,
+     apply_change: bool = False,
  ) -> bool:
      """
      Execute the core optimization logic.
@@ -182,6 +183,8 @@
  optimization_completed_normally = False
  user_stop_requested_flag = False

+ best_solution_code = None
+ original_source_code = None  # Make available to the finally block
  try:
      # --- Login/Authentication Handling (now mandatory) ---
      weco_api_key, auth_headers = handle_authentication(console)
@@ -209,6 +212,7 @@
  processed_additional_instructions = read_additional_instructions(additional_instructions=additional_instructions)
  source_fp = pathlib.Path(source)
  source_code = read_from_path(fp=source_fp, is_json=False)
+ original_source_code = source_code

  # --- Panel Initialization ---
  summary_panel = SummaryPanel(maximize=maximize, metric_name=metric, total_steps=steps, model=model, runs_dir=log_dir)
@@ -272,10 +276,6 @@
  }
  with open(jsonl_file, "w", encoding="utf-8") as f:
      f.write(json.dumps(metadata) + "\n")
- # Write the initial code string to the logs
- write_to_path(fp=runs_dir / f"step_0{source_fp.suffix}", content=run_response["code"])
- # Write the initial code string to the source file path
- write_to_path(fp=source_fp, content=run_response["code"])

  # Update the panels with the initial solution
  # Add run id and run name now that we have it
@@ -307,6 +307,7 @@
  best_node=None,
  )
  current_solution_panel, best_solution_panel = solution_panels.get_display(current_step=0)
+
  # Update the live layout with the initial solution panels
  smooth_update(
      live=live,
@@ -321,8 +322,17 @@
  transition_delay=0.1,
  )

- # Run evaluation on the initial solution
- term_out = run_evaluation(eval_command=eval_command, timeout=eval_timeout)
+ # Write the initial code string to the logs
+ write_to_path(fp=runs_dir / f"step_0{source_fp.suffix}", content=run_response["code"])
+ # Run evaluation on the initial solution (file swap ensures original is restored)
+ term_out = run_evaluation_with_file_swap(
+     file_path=source_fp,
+     new_content=run_response["code"],
+     original_content=source_code,
+     eval_command=eval_command,
+     timeout=eval_timeout,
+ )
+
  # Save logs if requested
  if save_logs:
      save_execution_output(runs_dir, step=0, output=term_out)
@@ -358,8 +368,7 @@
  )
  # Save next solution (.runs/<run-id>/step_<step>.<extension>)
  write_to_path(fp=runs_dir / f"step_{step}{source_fp.suffix}", content=eval_and_next_solution_response["code"])
- # Write the next solution to the source file
- write_to_path(fp=source_fp, content=eval_and_next_solution_response["code"])
+
  status_response = get_optimization_run_status(
      console=console, run_id=run_id, include_history=True, auth_headers=auth_headers
  )
@@ -395,7 +404,16 @@
  ],
  transition_delay=0.08,  # Slightly longer delay for more noticeable transitions
  )
- term_out = run_evaluation(eval_command=eval_command, timeout=eval_timeout)
+
+ # Run evaluation and restore original code after
+ term_out = run_evaluation_with_file_swap(
+     file_path=source_fp,
+     new_content=eval_and_next_solution_response["code"],
+     original_content=source_code,
+     eval_command=eval_command,
+     timeout=eval_timeout,
+ )
+
  # Save logs if requested
  if save_logs:
      save_execution_output(runs_dir, step=step, output=term_out)
@@ -444,15 +462,14 @@
  # Save optimization results
  # If the best solution does not exist or is has not been measured at the end of the optimization
  # save the original solution as the best solution
- if best_solution_node is not None:
-     best_solution_content = best_solution_node.code
- else:
-     best_solution_content = read_from_path(fp=runs_dir / f"step_0{source_fp.suffix}", is_json=False)
+ try:
+     best_solution_code = best_solution_node.code
+ except AttributeError:
+     best_solution_code = read_from_path(fp=runs_dir / f"step_0{source_fp.suffix}", is_json=False)

  # Save best solution to .runs/<run-id>/best.<extension>
- write_to_path(fp=runs_dir / f"best{source_fp.suffix}", content=best_solution_content)
- # write the best solution to the source file
- write_to_path(fp=source_fp, content=best_solution_content)
+ write_to_path(fp=runs_dir / f"best{source_fp.suffix}", content=best_solution_code)
+
  # Mark as completed normally for the finally block
  optimization_completed_normally = True
  live.update(end_optimization_layout)
@@ -491,6 +508,21 @@
  else "CLI terminated unexpectedly without a specific exception captured."
  )

+ # raise Exception(best_solution_code, original_source_code)
+ if best_solution_code and best_solution_code != original_source_code:
+     # Determine whether to apply: automatically if --apply-change is set, otherwise ask user
+     should_apply = apply_change or summary_panel.ask_user_feedback(
+         live=live,
+         layout=end_optimization_layout,
+         question="Would you like to apply the best solution to the source file?",
+         default=True,
+     )
+     if should_apply:
+         write_to_path(fp=source_fp, content=best_solution_code)
+         console.print("[green]Best solution applied to the source file.[/]\n")
+ else:
+     console.print("[green]A better solution was not found. No changes to apply.[/]\n")
+
  report_termination(
      run_id=run_id,
      status_update=status,
@@ -507,7 +539,7 @@
  return optimization_completed_normally or user_stop_requested_flag


- def resume_optimization(run_id: str, console: Optional[Console] = None) -> bool:
+ def resume_optimization(run_id: str, console: Optional[Console] = None, apply_change: bool = False) -> bool:
  """Resume an interrupted run from the most recent node and continue optimization."""
  if console is None:
      console = Console()
@@ -543,6 +575,9 @@ def resume_optimization(run_id: str, console: Optional[Console] = None) -> bool:
  optimization_completed_normally = False
  user_stop_requested_flag = False

+ best_solution_code = None
+ original_source_code = None
+
  try:
      # --- Login/Authentication Handling (now mandatory) ---
      weco_api_key, auth_headers = handle_authentication(console)
@@ -619,9 +654,12 @@ def resume_optimization(run_id: str, console: Optional[Console] = None) -> bool:
  save_logs = bool(resume_resp.get("save_logs", False))
  eval_timeout = resume_resp.get("eval_timeout")

- # Write last solution code to source path
+ # Read the original source code from the file before we start modifying it
  source_fp = pathlib.Path(source_path)
  source_fp.parent.mkdir(parents=True, exist_ok=True)
+ # Store the original content to restore after each evaluation
+ original_source_code = read_from_path(fp=source_fp, is_json=False) if source_fp.exists() else ""
+
  code_to_restore = resume_resp.get("code") or resume_resp.get("source_code") or ""
  write_to_path(fp=source_fp, content=code_to_restore)

@@ -689,7 +727,13 @@ def resume_optimization(run_id: str, console: Optional[Console] = None) -> bool:

  # If missing output, evaluate once before first suggest
  if term_out is None or len(term_out.strip()) == 0:
-     term_out = run_evaluation(eval_command=eval_command, timeout=eval_timeout)
+     term_out = run_evaluation_with_file_swap(
+         file_path=source_fp,
+         new_content=code_to_restore,
+         original_content=original_source_code,
+         eval_command=eval_command,
+         timeout=eval_timeout,
+     )
  eval_output_panel.update(output=term_out)
  # Update the evaluation output panel
  smooth_update(
@@ -727,9 +771,8 @@ def resume_optimization(run_id: str, console: Optional[Console] = None) -> bool:
      auth_headers=auth_headers,
  )

- # Save next solution file(s)
+ # Save next solution to logs
  write_to_path(fp=runs_dir / f"step_{step}{source_fp.suffix}", content=eval_and_next_solution_response["code"])
- write_to_path(fp=source_fp, content=eval_and_next_solution_response["code"])

  # Refresh status with history and update panels
  status_response = get_optimization_run_status(
@@ -744,6 +787,7 @@ def resume_optimization(run_id: str, console: Optional[Console] = None) -> bool:
  current_solution_node = get_node_from_status(
      status_response=status_response, solution_id=eval_and_next_solution_response["solution_id"]
  )
+
  solution_panels.update(current_node=current_solution_node, best_node=best_solution_node)
  current_solution_panel, best_solution_panel = solution_panels.get_display(current_step=step)
  eval_output_panel.clear()
@@ -760,8 +804,14 @@ def resume_optimization(run_id: str, console: Optional[Console] = None) -> bool:
  transition_delay=0.08,
  )

- # Evaluate this new solution
- term_out = run_evaluation(eval_command=eval_command, timeout=eval_timeout)
+ # Evaluate this new solution and restore original code after
+ term_out = run_evaluation_with_file_swap(
+     file_path=source_fp,
+     new_content=eval_and_next_solution_response["code"],
+     original_content=original_source_code,
+     eval_command=eval_command,
+     timeout=eval_timeout,
+ )
  if save_logs:
      save_execution_output(runs_dir, step=step, output=term_out)
  eval_output_panel.update(output=term_out)
@@ -801,13 +851,13 @@ def resume_optimization(run_id: str, console: Optional[Console] = None) -> bool:
  end_optimization_layout["best_solution"].update(best_solution_panel)

  # Save best
- if best_solution_node is not None:
-     best_solution_content = best_solution_node.code
- else:
-     best_solution_content = read_from_path(fp=runs_dir / f"step_0{source_fp.suffix}", is_json=False)
+ try:
+     best_solution_code = best_solution_node.code
+ except AttributeError:
+     best_solution_code = read_from_path(fp=runs_dir / f"step_0{source_fp.suffix}", is_json=False)
+
+ write_to_path(fp=runs_dir / f"best{source_fp.suffix}", content=best_solution_code)

- write_to_path(fp=runs_dir / f"best{source_fp.suffix}", content=best_solution_content)
- write_to_path(fp=source_fp, content=best_solution_content)
  optimization_completed_normally = True
  live.update(end_optimization_layout)

@@ -826,7 +876,11 @@ def resume_optimization(run_id: str, console: Optional[Console] = None) -> bool:
  if heartbeat_thread and heartbeat_thread.is_alive():
      heartbeat_thread.join(timeout=2)

- run_id = resume_resp.get("run_id")
+ try:
+     run_id = resume_resp.get("run_id")
+ except Exception:
+     run_id = None
+
  # Report final status if run exists
  if run_id:
      if optimization_completed_normally:
@@ -840,6 +894,20 @@ def resume_optimization(run_id: str, console: Optional[Console] = None) -> bool:
  if "e" in locals() and isinstance(locals()["e"], Exception)
  else "CLI terminated unexpectedly without a specific exception captured."
  )
+
+ if best_solution_code and best_solution_code != original_source_code:
+     should_apply = apply_change or summary_panel.ask_user_feedback(
+         live=live,
+         layout=end_optimization_layout,
+         question="Would you like to apply the best solution to the source file?",
+         default=True,
+     )
+     if should_apply:
+         write_to_path(fp=source_fp, content=best_solution_code)
+         console.print("[green]Best solution applied to the source file.[/]\n")
+ else:
+     console.print("[green]A better solution was not found. No changes to apply.[/]\n")
+
  report_termination(
      run_id=run_id,
      status_update=status,
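Taken together, the optimizer.py changes stop rewriting the user's source file on every step: candidates are evaluated through a temporary file swap, and the best solution is only written back at the end of the run, either automatically when `apply_change` is set or after a confirmation prompt. The sketch below condenses that end-of-run decision under stated assumptions: the real logic runs inside a `finally` block with Rich panels, and `confirm` here is a stand-in for `SummaryPanel.ask_user_feedback`.

```python
# Condensed sketch of the 0.3.4 apply-or-prompt decision; not the package's exact code.
from pathlib import Path
from typing import Callable, Optional


def maybe_apply_best_solution(
    source_fp: Path,
    best_solution_code: Optional[str],
    original_source_code: Optional[str],
    apply_change: bool,
    confirm: Callable[[str], bool],  # stand-in for SummaryPanel.ask_user_feedback
) -> bool:
    """Write the best solution back only if it differs and the flag or the user agrees."""
    if not best_solution_code or best_solution_code == original_source_code:
        # Nothing better than the original was found, so the source file stays untouched.
        return False
    should_apply = apply_change or confirm("Would you like to apply the best solution to the source file?")
    if should_apply:
        source_fp.write_text(best_solution_code, encoding="utf-8")
    return should_apply
```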
{weco-0.3.3 → weco-0.3.4}/weco/panels.py

@@ -5,6 +5,10 @@ from rich.layout import Layout
  from rich.panel import Panel
  from rich.syntax import Syntax
  from rich import box
+ from rich.console import Console
+ from rich.live import Live
+
+ from rich.prompt import Confirm
  from typing import Dict, List, Optional, Union, Tuple
  from pathlib import Path
  from .__init__ import __dashboard_url__
@@ -22,6 +26,7 @@ class SummaryPanel:
  runs_dir: str,
  run_id: str = None,
  run_name: str = None,
+ console: Optional[Console] = None,
  ):
  self.maximize = maximize
  self.metric_name = metric_name
@@ -32,6 +37,8 @@
  self.run_name = run_name if run_name is not None else "N/A"
  self.dashboard_url = "N/A"
  self.thinking_content = ""
+ self.user_input = ""
+ self.console = Console()
  self.progress = Progress(
      TextColumn("[progress.description]{task.description}"),
      BarColumn(bar_width=20),
@@ -67,6 +74,25 @@
  """Clear the thinking content."""
  self.thinking_content = ""

+ def ask_user_feedback(self, live: Live, layout: Layout, question: str, default: bool = True) -> bool:
+     """
+     Ask a yes/no question while keeping the main layout fixed.
+     Uses Rich's Confirm for a clean user experience.
+     """
+     # Stop live updates temporarily to prevent layout from moving
+     live.stop()
+
+     try:
+         # Use Rich's built-in Confirm
+         result = Confirm.ask(question, default=default)
+     except (KeyboardInterrupt, EOFError):
+         result = default
+     finally:
+         # Resume live updates
+         live.start()
+
+     return result
+
  def get_display(self, final_message: Optional[str] = None) -> Panel:
      """Return a Rich panel summarising the current run."""
      # ───────────────────── summary grid ──────────────────────
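`ask_user_feedback` works by stopping the Rich `Live` display, asking a `Confirm` question on a stable screen, and restarting the display afterwards. A standalone sketch of that pause/ask/resume pattern is shown below; the layout content is a placeholder, and only the `Live`/`Confirm` handling mirrors the method added in the diff.

```python
# Standalone sketch of the pause-prompt-resume pattern; panel contents are placeholders.
import time

from rich.live import Live
from rich.panel import Panel
from rich.prompt import Confirm


def ask_while_live(live: Live, question: str, default: bool = True) -> bool:
    live.stop()  # freeze the display so the prompt renders on a stable screen
    try:
        return Confirm.ask(question, default=default)
    except (KeyboardInterrupt, EOFError):
        return default  # fall back to the default answer if input is interrupted
    finally:
        live.start()  # resume refreshing the layout either way


if __name__ == "__main__":
    with Live(Panel("optimizing..."), refresh_per_second=4) as live:
        time.sleep(1)
        answer = ask_while_live(live, "Would you like to apply the best solution to the source file?")
        live.update(Panel(f"applied: {answer}"))
        time.sleep(1)
```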
{weco-0.3.3 → weco-0.3.4}/weco/utils.py

@@ -106,6 +106,41 @@ def truncate_output(output: str) -> str:
  return output


+ def run_evaluation_with_file_swap(
+     file_path: pathlib.Path, new_content: str, original_content: str, eval_command: str, timeout: int | None = None
+ ) -> str:
+     """
+     Temporarily write new content to a file, run evaluation, then restore original.
+
+     This function ensures the file is always restored to its original state,
+     even if an exception occurs during evaluation.
+
+     Args:
+         file_path: Path to the file to temporarily modify
+         new_content: The new content to write for evaluation
+         original_content: The original content to restore after evaluation
+         eval_command: The shell command to run for evaluation
+         timeout: Optional timeout for the evaluation command
+
+     Returns:
+         The output from running the evaluation command
+
+     Raises:
+         Any exception raised by run_evaluation will be re-raised after
+         the file is restored to its original state.
+     """
+     # Write the new content
+     write_to_path(fp=file_path, content=new_content)
+
+     try:
+         # Run the evaluation
+         output = run_evaluation(eval_command=eval_command, timeout=timeout)
+         return output
+     finally:
+         # Always restore the original file, even if evaluation fails
+         write_to_path(fp=file_path, content=original_content)
+
+
  def run_evaluation(eval_command: str, timeout: int | None = None) -> str:
      """Run the evaluation command on the code and return the output."""
      process = subprocess.Popen(
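As a usage illustration of the swap-evaluate-restore guarantee added above, the sketch below does the same thing with `subprocess` directly; the file name and eval command are made up for illustration, and this is not weco's own helper.

```python
# Hypothetical usage sketch: evaluate candidate code via a temporary file swap,
# restoring the original file even if the evaluation command fails.
import pathlib
import subprocess


def evaluate_with_file_swap(file_path: pathlib.Path, new_content: str, eval_command: str) -> str:
    original_content = file_path.read_text(encoding="utf-8")
    file_path.write_text(new_content, encoding="utf-8")  # candidate goes in
    try:
        result = subprocess.run(eval_command, shell=True, capture_output=True, text=True)
        return result.stdout + result.stderr
    finally:
        # Always put the original content back, mirroring the finally block above.
        file_path.write_text(original_content, encoding="utf-8")


if __name__ == "__main__":
    fp = pathlib.Path("module_under_test.py")
    fp.write_text("VALUE = 1\n", encoding="utf-8")
    out = evaluate_with_file_swap(fp, "VALUE = 2\n", 'python -c "import module_under_test as m; print(m.VALUE)"')
    print(out.strip())             # 2 -> the candidate was the code that ran
    print(fp.read_text().strip())  # VALUE = 1 -> the original file is back
```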
{weco-0.3.3 → weco-0.3.4}/weco.egg-info/PKG-INFO

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: weco
- Version: 0.3.3
+ Version: 0.3.4
  Summary: Documentation for `weco`, a CLI for using Weco AI's code optimizer.
  Author-email: Weco AI Team <contact@weco.ai>
  License:
@@ -322,6 +322,7 @@ For more advanced examples, including [Triton](/examples/triton/README.md), [CUD
  | `-l, --log-dir` | Path to the directory to log intermediate steps and final optimization result. | `.runs/` | `-l ./logs/` |
  | `--eval-timeout` | Timeout in seconds for each step in evaluation. | No timeout (unlimited) | `--eval-timeout 3600` |
  | `--save-logs` | Save execution output from each optimization step to disk. Creates timestamped directories with raw output files and a JSONL index for tracking execution history. | `False` | `--save-logs` |
+ | `--apply-change` | Automatically apply the best solution to the source file without prompting. | `False` | `--apply-change` |

  ---

@@ -375,6 +376,7 @@ Arguments for `weco resume`:
  | Argument | Description | Example |
  |----------|-------------|---------|
  | `run-id` | The UUID of the run to resume (shown at the start of each run) | `0002e071-1b67-411f-a514-36947f0c4b31` |
+ | `--apply-change` | Automatically apply the best solution to the source file without prompting | `--apply-change` |

  Notes:
  - Works only for interrupted runs (status: `error`, `terminated`, etc.).