claude-turing 4.8.0 → 4.8.1

This diff compares the contents of two publicly released versions of the package, as published to a supported registry. It is provided for informational purposes only and reflects the packages exactly as they appear in their public registry.
Files changed (166)
  1. package/.claude-plugin/plugin.json +1 -1
  2. package/README.md +1 -1
  3. package/agents/ml-evaluator.md +4 -4
  4. package/agents/ml-researcher.md +2 -2
  5. package/bin/turing-init.sh +2 -2
  6. package/commands/ablate.md +3 -3
  7. package/commands/annotate.md +2 -2
  8. package/commands/archive.md +2 -2
  9. package/commands/audit.md +3 -3
  10. package/commands/baseline.md +3 -3
  11. package/commands/brief.md +5 -5
  12. package/commands/budget.md +3 -3
  13. package/commands/calibrate.md +3 -3
  14. package/commands/card.md +3 -3
  15. package/commands/changelog.md +2 -2
  16. package/commands/checkpoint.md +3 -3
  17. package/commands/cite.md +2 -2
  18. package/commands/compare.md +1 -1
  19. package/commands/counterfactual.md +2 -2
  20. package/commands/curriculum.md +3 -3
  21. package/commands/design.md +3 -3
  22. package/commands/diagnose.md +4 -4
  23. package/commands/diff.md +3 -3
  24. package/commands/distill.md +3 -3
  25. package/commands/doctor.md +2 -2
  26. package/commands/ensemble.md +3 -3
  27. package/commands/explore.md +4 -4
  28. package/commands/export.md +3 -3
  29. package/commands/feature.md +3 -3
  30. package/commands/flashback.md +2 -2
  31. package/commands/fork.md +3 -3
  32. package/commands/frontier.md +3 -3
  33. package/commands/init.md +5 -5
  34. package/commands/leak.md +3 -3
  35. package/commands/lit.md +3 -3
  36. package/commands/logbook.md +5 -5
  37. package/commands/merge.md +2 -2
  38. package/commands/mode.md +1 -1
  39. package/commands/onboard.md +2 -2
  40. package/commands/paper.md +3 -3
  41. package/commands/plan.md +2 -2
  42. package/commands/poster.md +3 -3
  43. package/commands/postmortem.md +2 -2
  44. package/commands/preflight.md +5 -5
  45. package/commands/present.md +2 -2
  46. package/commands/profile.md +3 -3
  47. package/commands/prune.md +2 -2
  48. package/commands/quantize.md +2 -2
  49. package/commands/queue.md +3 -3
  50. package/commands/registry.md +2 -2
  51. package/commands/regress.md +3 -3
  52. package/commands/replay.md +2 -2
  53. package/commands/report.md +3 -3
  54. package/commands/reproduce.md +3 -3
  55. package/commands/retry.md +3 -3
  56. package/commands/review.md +2 -2
  57. package/commands/rules/loop-protocol.md +11 -11
  58. package/commands/sanity.md +3 -3
  59. package/commands/scale.md +4 -4
  60. package/commands/search.md +2 -2
  61. package/commands/seed.md +3 -3
  62. package/commands/sensitivity.md +3 -3
  63. package/commands/share.md +2 -2
  64. package/commands/simulate.md +2 -2
  65. package/commands/status.md +1 -1
  66. package/commands/stitch.md +3 -3
  67. package/commands/suggest.md +5 -5
  68. package/commands/surgery.md +2 -2
  69. package/commands/sweep.md +8 -8
  70. package/commands/template.md +2 -2
  71. package/commands/train.md +5 -5
  72. package/commands/transfer.md +3 -3
  73. package/commands/trend.md +2 -2
  74. package/commands/try.md +4 -4
  75. package/commands/update.md +2 -2
  76. package/commands/validate.md +4 -4
  77. package/commands/warm.md +3 -3
  78. package/commands/watch.md +4 -4
  79. package/commands/whatif.md +2 -2
  80. package/commands/xray.md +3 -3
  81. package/config/commands.yaml +1 -1
  82. package/package.json +1 -1
  83. package/skills/turing/ablate/SKILL.md +3 -3
  84. package/skills/turing/annotate/SKILL.md +2 -2
  85. package/skills/turing/archive/SKILL.md +2 -2
  86. package/skills/turing/audit/SKILL.md +3 -3
  87. package/skills/turing/baseline/SKILL.md +3 -3
  88. package/skills/turing/brief/SKILL.md +5 -5
  89. package/skills/turing/budget/SKILL.md +3 -3
  90. package/skills/turing/calibrate/SKILL.md +3 -3
  91. package/skills/turing/card/SKILL.md +3 -3
  92. package/skills/turing/changelog/SKILL.md +2 -2
  93. package/skills/turing/checkpoint/SKILL.md +3 -3
  94. package/skills/turing/cite/SKILL.md +2 -2
  95. package/skills/turing/compare/SKILL.md +1 -1
  96. package/skills/turing/counterfactual/SKILL.md +2 -2
  97. package/skills/turing/curriculum/SKILL.md +3 -3
  98. package/skills/turing/design/SKILL.md +3 -3
  99. package/skills/turing/diagnose/SKILL.md +4 -4
  100. package/skills/turing/diff/SKILL.md +3 -3
  101. package/skills/turing/distill/SKILL.md +3 -3
  102. package/skills/turing/doctor/SKILL.md +2 -2
  103. package/skills/turing/ensemble/SKILL.md +3 -3
  104. package/skills/turing/explore/SKILL.md +4 -4
  105. package/skills/turing/export/SKILL.md +3 -3
  106. package/skills/turing/feature/SKILL.md +3 -3
  107. package/skills/turing/flashback/SKILL.md +2 -2
  108. package/skills/turing/fork/SKILL.md +3 -3
  109. package/skills/turing/frontier/SKILL.md +3 -3
  110. package/skills/turing/init/SKILL.md +5 -5
  111. package/skills/turing/leak/SKILL.md +3 -3
  112. package/skills/turing/lit/SKILL.md +3 -3
  113. package/skills/turing/logbook/SKILL.md +5 -5
  114. package/skills/turing/merge/SKILL.md +2 -2
  115. package/skills/turing/mode/SKILL.md +1 -1
  116. package/skills/turing/onboard/SKILL.md +2 -2
  117. package/skills/turing/paper/SKILL.md +3 -3
  118. package/skills/turing/plan/SKILL.md +2 -2
  119. package/skills/turing/poster/SKILL.md +3 -3
  120. package/skills/turing/postmortem/SKILL.md +2 -2
  121. package/skills/turing/preflight/SKILL.md +5 -5
  122. package/skills/turing/present/SKILL.md +2 -2
  123. package/skills/turing/profile/SKILL.md +3 -3
  124. package/skills/turing/prune/SKILL.md +2 -2
  125. package/skills/turing/quantize/SKILL.md +2 -2
  126. package/skills/turing/queue/SKILL.md +3 -3
  127. package/skills/turing/registry/SKILL.md +2 -2
  128. package/skills/turing/regress/SKILL.md +3 -3
  129. package/skills/turing/replay/SKILL.md +2 -2
  130. package/skills/turing/report/SKILL.md +3 -3
  131. package/skills/turing/reproduce/SKILL.md +3 -3
  132. package/skills/turing/retry/SKILL.md +3 -3
  133. package/skills/turing/review/SKILL.md +2 -2
  134. package/skills/turing/rules/loop-protocol.md +11 -11
  135. package/skills/turing/sanity/SKILL.md +3 -3
  136. package/skills/turing/scale/SKILL.md +4 -4
  137. package/skills/turing/search/SKILL.md +2 -2
  138. package/skills/turing/seed/SKILL.md +3 -3
  139. package/skills/turing/sensitivity/SKILL.md +3 -3
  140. package/skills/turing/share/SKILL.md +2 -2
  141. package/skills/turing/simulate/SKILL.md +2 -2
  142. package/skills/turing/status/SKILL.md +1 -1
  143. package/skills/turing/stitch/SKILL.md +3 -3
  144. package/skills/turing/suggest/SKILL.md +5 -5
  145. package/skills/turing/surgery/SKILL.md +2 -2
  146. package/skills/turing/sweep/SKILL.md +8 -8
  147. package/skills/turing/template/SKILL.md +2 -2
  148. package/skills/turing/train/SKILL.md +5 -5
  149. package/skills/turing/transfer/SKILL.md +3 -3
  150. package/skills/turing/trend/SKILL.md +2 -2
  151. package/skills/turing/try/SKILL.md +4 -4
  152. package/skills/turing/update/SKILL.md +2 -2
  153. package/skills/turing/validate/SKILL.md +4 -4
  154. package/skills/turing/warm/SKILL.md +3 -3
  155. package/skills/turing/watch/SKILL.md +4 -4
  156. package/skills/turing/whatif/SKILL.md +2 -2
  157. package/skills/turing/xray/SKILL.md +3 -3
  158. package/templates/README.md +5 -8
  159. package/templates/program.md +18 -18
  160. package/templates/pyproject.toml +10 -0
  161. package/templates/requirements.txt +4 -1
  162. package/templates/scripts/generate_onboarding.py +1 -1
  163. package/templates/scripts/post-train-hook.sh +7 -8
  164. package/templates/scripts/scaffold.py +24 -26
  165. package/templates/scripts/stop-hook.sh +2 -3
  166. package/templates/scripts/turing-run-python.sh +9 -0
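The change running through nearly all 166 files is a switch from manual virtualenv activation (`source .venv/bin/activate` plus bare `python`) to uv-managed execution (`uv sync` plus `uv run python`), backed by the new `templates/pyproject.toml` and `turing-run-python.sh`. A minimal before/after sketch of the pattern the hunks below repeat (the script path is one example taken from the diff):

```bash
# 4.8.0 pattern: activate the committed virtualenv, then call python directly
source .venv/bin/activate
python scripts/show_metrics.py --last 10

# 4.8.1 pattern: sync the environment from project metadata, then run inside it
uv sync
uv run python scripts/show_metrics.py --last 10
```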
@@ -9,9 +9,9 @@ CI for your model. After any change to code, dependencies, or data, verify metri

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -23,7 +23,7 @@ CI for your model. After any change to code, dependencies, or data, verify metri

  3. **Run regression gate:**
  ```bash
- python scripts/regression_gate.py $ARGUMENTS
+ uv run python scripts/regression_gate.py $ARGUMENTS
  ```

  4. **Report results:**
@@ -8,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
  Should you revisit old ideas? Infrastructure changes may make failed approaches work now.

  ## Steps
- 1. **Activate environment:** `source .venv/bin/activate`
- 2. **Run:** `python scripts/experiment_replay.py $ARGUMENTS`
+ 1. **Sync environment:** `uv sync`
+ 2. **Run:** `uv run python scripts/experiment_replay.py $ARGUMENTS`
  3. **Modes:** default (current code+data), --with-current-data, --with-current-preprocessing
  4. **Report:** original vs replayed metrics, delta, verdict
  5. **Saved output:** `experiments/replays/`
@@ -2,7 +2,7 @@
  name: report
  description: Generate a markdown research report from experiment history — structured for sharing, archiving, or including in documentation. More detailed than a brief, less visual than a poster.
  argument-hint: "[--since YYYY-MM-DD] [--output path]"
- allowed-tools: Read, Bash(python scripts/*:*, source .venv/bin/activate:*, mkdir:*), Grep, Glob
+ allowed-tools: Read, Bash(uv run python scripts/*:*, uv sync:*, mkdir:*), Grep, Glob
  ---

  Generate a structured markdown research report summarizing the experiment campaign.
@@ -14,12 +14,12 @@ Generate a structured markdown research report summarizing the experiment campai
  Use the logbook generator in markdown mode as the data backbone:

  ```bash
- source .venv/bin/activate && python scripts/generate_logbook.py --format markdown
+ uv run python scripts/generate_logbook.py --format markdown
  ```

  Also gather supplementary data:
  ```bash
- source .venv/bin/activate && python scripts/generate_brief.py
+ uv run python scripts/generate_brief.py
  cat experiment_state.yaml 2>/dev/null || true
  cat RESEARCH_PLAN.md 2>/dev/null || true
  ```
@@ -9,9 +9,9 @@ Verify that a logged experiment can be reproduced with consistent results.

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -22,7 +22,7 @@ Verify that a logged experiment can be reproduced with consistent results.

  3. **Run reproducibility verification:**
  ```bash
- python scripts/reproduce_experiment.py $ARGUMENTS
+ uv run python scripts/reproduce_experiment.py $ARGUMENTS
  ```

  4. **Report results:**
@@ -9,9 +9,9 @@ Auto-diagnose and recover from experiment failures.

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -21,7 +21,7 @@ Auto-diagnose and recover from experiment failures.

  3. **Run smart retry:**
  ```bash
- python scripts/smart_retry.py $ARGUMENTS
+ uv run python scripts/smart_retry.py $ARGUMENTS
  ```

  4. **Report results:**
@@ -8,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
  Simulate a conference reviewer before you submit. Each weakness links to the command that fixes it.

  ## Steps
- 1. `source .venv/bin/activate`
- 2. `python scripts/simulate_review.py $ARGUMENTS`
+ 1. `uv sync`
+ 2. `uv run python scripts/simulate_review.py $ARGUMENTS`
  3. **Saved:** `experiments/reviews/`

  ## Examples
@@ -16,9 +16,9 @@ The autoresearch harness enforces a strict separation between the **hypothesis s

  ## Execution Rules

- - **ALWAYS redirect training output:** `python train.py > run.log 2>&1`
+ - **ALWAYS redirect training output:** `uv run python train.py > run.log 2>&1`
  - **ALWAYS parse metrics with grep** between `---` delimiters: `grep -A 10 "^---" run.log | head -10`
- - **ALWAYS activate the venv first:** `source .venv/bin/activate`
+ - **ALWAYS run Python through uv:** `uv run python ...`
  - **NEVER install new packages** without human approval

  ## Git Discipline
@@ -40,16 +40,16 @@ The autoresearch harness enforces a strict separation between the **hypothesis s

  ## Sweep Workflow

- 1. Generate queue: `python scripts/sweep.py`
- 2. Check status: `python scripts/sweep.py --status`
- 3. Get next: `python scripts/sweep.py --next`
+ 1. Generate queue: `uv run python scripts/sweep.py`
+ 2. Check status: `uv run python scripts/sweep.py --status`
+ 3. Get next: `uv run python scripts/sweep.py --next`
  4. Apply overrides, create branch, run training
- 5. Mark: `python scripts/sweep.py --mark <name> complete|failed`
+ 5. Mark: `uv run python scripts/sweep.py --mark <name> complete|failed`
  6. Repeat until queue is empty

  ## Logging Rules

- - **Log every experiment** to `experiments/log.jsonl` via `python scripts/log_experiment.py` — kept and discarded alike.
+ - **Log every experiment** to `experiments/log.jsonl` via `uv run python scripts/log_experiment.py` — kept and discarded alike.
  - **Include all metrics, config, and description** of the hypothesis and its outcome.

  ## Convergence Rules
@@ -64,11 +64,11 @@ The researcher agent's Bash access is restricted to a whitelist of necessary com

  | Allowed Pattern | Purpose |
  |-----------------|---------|
- | `python train.py:*` | Execute training |
- | `python scripts/*:*` | Run utility scripts (logging, metrics, sweep) |
+ | `uv run python train.py:*` | Execute training |
+ | `uv run python scripts/*:*` | Run utility scripts (logging, metrics, sweep) |
  | `git:*` | Branch, commit, merge, reset operations |
- | `source .venv/bin/activate:*` | Virtual environment activation |
- | `pip:*` | Package installation (requires human approval) |
+ | `uv sync:*` | Virtual environment activation |
+ | `uv add:*` | Package installation (requires human approval) |

  **Blocked by omission:** `cat`, `head`, `tail`, `less` (prevents reading hidden files via shell), `curl`, `wget` (prevents data exfiltration), arbitrary command execution.
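The grep rule above implies a contract with `train.py`: final metrics are printed between `---` delimiter lines so they can be extracted without reading the full log. A minimal sketch of both sides of that contract (the metric names are illustrative; only the delimiter convention comes from the protocol):

```bash
# Run training without streaming its output into the agent's context
uv run python train.py > run.log 2>&1

# train.py is expected to end run.log with a delimited block such as:
#   ---
#   val_accuracy: 0.873
#   val_f1: 0.861
#   ---
# (metric names above are made up for illustration)

# Pull out just the delimited metrics block
grep -A 10 "^---" run.log | head -10
```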
@@ -9,9 +9,9 @@ Run a battery of fast checks before committing to a full training run. Catches w

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -21,7 +21,7 @@ Run a battery of fast checks before committing to a full training run. Catches w

  3. **Run sanity checks:**
  ```bash
- python scripts/sanity_checks.py $ARGUMENTS
+ uv run python scripts/sanity_checks.py $ARGUMENTS
  ```

  4. **Checks performed:**
@@ -9,9 +9,9 @@ Predict full-scale performance from a handful of small experiments. Answers "is

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -24,11 +24,11 @@ Predict full-scale performance from a handful of small experiments. Answers "is
  3. **Plan or analyze:**
  - **Plan mode (default):** generates scale point configs to run
  ```bash
- python scripts/scaling_estimator.py --axis data --points 4
+ uv run python scripts/scaling_estimator.py --axis data --points 4
  ```
  - **Analyze mode:** fits power law to completed results
  ```bash
- python scripts/scaling_estimator.py --analyze experiments/scaling/results.yaml
+ uv run python scripts/scaling_estimator.py --analyze experiments/scaling/results.yaml
  ```

  4. **Scaling axes:**
@@ -8,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
  Find specific experiments in a large history with natural language and structured filters.

  ## Steps
- 1. **Activate environment:** `source .venv/bin/activate`
- 2. **Run:** `python scripts/experiment_search.py $ARGUMENTS`
+ 1. **Sync environment:** `uv sync`
+ 2. **Run:** `uv run python scripts/experiment_search.py $ARGUMENTS`
  3. **Filters:** `accuracy>0.85`, `status:kept`, `family:baseline`, `date:last-week`
  4. **Report:** ranked table of matching experiments
@@ -9,9 +9,9 @@ Run a multi-seed study to verify that experiment results are robust across rando

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -22,7 +22,7 @@ Run a multi-seed study to verify that experiment results are robust across rando

  3. **Run seed study:**
  ```bash
- python scripts/seed_runner.py $ARGUMENTS
+ uv run python scripts/seed_runner.py $ARGUMENTS
  ```

  4. **Report results:**
@@ -9,9 +9,9 @@ Which hyperparameters actually matter? Stop wasting time on the ones that don't.

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -21,7 +21,7 @@ Which hyperparameters actually matter? Stop wasting time on the ones that don't.

  3. **Run sensitivity analysis:**
  ```bash
- python scripts/sensitivity_analysis.py $ARGUMENTS
+ uv run python scripts/sensitivity_analysis.py $ARGUMENTS
  ```

  4. **Report includes:**
@@ -8,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
  Package experiments for collaborator handoff or paper supplementary material.

  ## Steps
- 1. `source .venv/bin/activate`
- 2. `python scripts/package_experiments.py $ARGUMENTS`
+ 1. `uv sync`
+ 2. `uv run python scripts/package_experiments.py $ARGUMENTS`
  3. **Saved:** `exports/packages/<name>/`

  ## Examples
@@ -8,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
  Predict outcomes before spending compute. Ranks proposed configs and recommends which to run vs skip.

  ## Steps
- 1. `source .venv/bin/activate`
- 2. `python scripts/experiment_simulator.py $ARGUMENTS`
+ 1. `uv sync`
+ 2. `uv run python scripts/experiment_simulator.py $ARGUMENTS`
  3. **Saved:** `experiments/simulations/`

  ## How it works
@@ -10,7 +10,7 @@ Show the current state of the ML training pipeline. This is an observation-only

  1. **Run metrics display:**
  ```bash
- source .venv/bin/activate && python scripts/show_metrics.py --last 10
+ uv run python scripts/show_metrics.py --last 10
  ```

  2. **Summarize for the user:**
@@ -9,9 +9,9 @@ Decompose your ML pipeline into stages that can be independently varied, cached,

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -23,7 +23,7 @@ Decompose your ML pipeline into stages that can be independently varied, cached,

  3. **Run pipeline manager:**
  ```bash
- python scripts/pipeline_manager.py $ARGUMENTS
+ uv run python scripts/pipeline_manager.py $ARGUMENTS
  ```

  4. **Report results:**
@@ -2,7 +2,7 @@
  name: suggest
  description: Literature-grounded model selection. Reads the ML task context, searches recent literature, and suggests model architectures worth trying — with citations. Suggestions are auto-queued as hypotheses.
  argument-hint: "[task description override]"
- allowed-tools: Read, Write, Bash(python scripts/*:*, source .venv/bin/activate:*), Grep, Glob, WebSearch, WebFetch
+ allowed-tools: Read, Write, Bash(uv run python scripts/*:*, uv sync:*), Grep, Glob, WebSearch, WebFetch
  ---

  Suggest model architectures for the current ML task. Supports two strategies:
@@ -25,7 +25,7 @@ cat config.yaml
  ```

  ```bash
- source .venv/bin/activate && python scripts/show_metrics.py --last 10 2>/dev/null || echo "No experiments yet"
+ uv run python scripts/show_metrics.py --last 10 2>/dev/null || echo "No experiments yet"
  ```

  If `$ARGUMENTS` is provided, use that as the task description. Otherwise, infer from `config.yaml` (model type, primary metric, data source, target column).
@@ -66,7 +66,7 @@ From the literature, synthesize **3-5 concrete model architecture suggestions**.
  For each suggestion, add to the hypothesis queue:

  ```bash
- source .venv/bin/activate && python scripts/manage_hypotheses.py add "<model>: <rationale> (source: <citation>)" --priority medium --source literature
+ uv run python scripts/manage_hypotheses.py add "<model>: <rationale> (source: <citation>)" --priority medium --source literature
  ```

  ### 5. Show Results
@@ -105,7 +105,7 @@ Same detection logic as the literature strategy — find `config.yaml` + `train.
  ### 2. Run Tree Search

  ```bash
- source .venv/bin/activate && python scripts/treequest_suggest.py \
+ uv run python scripts/treequest_suggest.py \
    --log experiments/log.jsonl \
    --config config.yaml \
    --top 5 \
@@ -120,7 +120,7 @@ If TreeQuest is not installed, the script automatically falls back to greedy bes
  For each result from the tree search, queue as a hypothesis:

  ```bash
- source .venv/bin/activate && python scripts/manage_hypotheses.py add "<description>" --priority medium --source treequest
+ uv run python scripts/manage_hypotheses.py add "<description>" --priority medium --source treequest
  ```

  ### 4. Show Results
@@ -9,8 +9,8 @@ Programmatic architecture changes with auto warm-start from existing weights.

  ## Steps

- 1. **Activate environment:** `source .venv/bin/activate`
- 2. **Run:** `python scripts/architecture_surgery.py $ARGUMENTS`
+ 1. **Sync environment:** `uv sync`
+ 2. **Run:** `uv run python scripts/architecture_surgery.py $ARGUMENTS`
  3. **Operations:** add-layer, remove-layer, widen, narrow, swap-activation, add-skip, add-norm, deepen, swap-objective
  4. **For tree models:** deepen (increase max_depth), widen (more estimators), swap-objective
  5. **Report:** operation details, config changes, parameter count delta, warm-start source
@@ -2,38 +2,38 @@
  name: sweep
  description: Generate and run a systematic hyperparameter sweep. Computes the cartesian product of configured parameter ranges and processes the queue sequentially with full experiment logging.
  argument-hint: "[sweep_config.yaml]"
- allowed-tools: Read, Write, Edit, Bash(python train.py:*, python scripts/*:*, git:*, source .venv/bin/activate:*, pip:*), Grep, Glob
+ allowed-tools: Read, Write, Edit, Bash(uv run python train.py:*, uv run python scripts/*:*, git:*, uv sync:*, uv add:*), Grep, Glob
  ---

  Run a systematic hyperparameter sweep using the sweep configuration.

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Resolve config:** Use `$ARGUMENTS` as sweep config path, or default to `sweep_config.yaml`.

  3. **Generate queue** (if not already generated):
  ```bash
- python scripts/sweep.py [sweep_config.yaml]
+ uv run python scripts/sweep.py [sweep_config.yaml]
  ```

  4. **Check queue status:**
  ```bash
- python scripts/sweep.py --status
+ uv run python scripts/sweep.py --status
  ```

  5. **Process queue sequentially:**
- - Get next: `python scripts/sweep.py --next`
+ - Get next: `uv run python scripts/sweep.py --next`
  - Apply config overrides to `config.yaml`
  - Create experiment branch: `git checkout -b exp/NNN-description`
- - Run training: `python train.py > run.log 2>&1`
+ - Run training: `uv run python train.py > run.log 2>&1`
  - Parse metrics: `grep -A 10 "^---" run.log | head -10`
  - Log the experiment
- - Mark complete: `python scripts/sweep.py --mark <name> complete`
+ - Mark complete: `uv run python scripts/sweep.py --mark <name> complete`
  - If improved, merge to main. If not, return to main.
  - Repeat until queue is empty
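Strung together, one pass over the sweep queue under the new tooling looks roughly like the following. This is a sketch: the branch name is illustrative, and it assumes `scripts/sweep.py --next` prints the next entry's name, which this diff does not show.

```bash
# One sweep-queue iteration (illustrative sketch, not part of the package)
uv sync
uv run python scripts/sweep.py sweep_config.yaml     # generate the queue once
uv run python scripts/sweep.py --status              # inspect progress
next=$(uv run python scripts/sweep.py --next)        # assumption: prints the entry name
git checkout -b "exp/001-${next}"                    # exp/NNN-description, per the hunk above
uv run python train.py > run.log 2>&1                # redirect all training output
grep -A 10 "^---" run.log | head -10                 # parse delimited metrics
uv run python scripts/sweep.py --mark "${next}" complete
git checkout main                                    # or merge, if metrics improved
```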
@@ -8,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
  Turn your best experiment configs into reusable recipes that persist across projects.

  ## Steps
- 1. **Activate environment:** `source .venv/bin/activate`
- 2. **Run:** `python scripts/experiment_templates.py $ARGUMENTS`
+ 1. **Sync environment:** `uv sync`
+ 2. **Run:** `uv run python scripts/experiment_templates.py $ARGUMENTS`
  3. **Operations:** save (from experiment), list (all templates), apply (to current project), share (export)
  4. **Stored at:** `~/.turing/templates/` (cross-project)
@@ -2,7 +2,7 @@
  name: train
  description: Run the autonomous ML experiment loop. Iteratively hypothesizes, trains, evaluates, and decides — keeping only improvements. Implements the autoresearch pattern with formal convergence detection and git-disciplined rollback.
  argument-hint: "[max_iterations]"
- allowed-tools: Read, Write, Edit, Bash(python train.py:*, python scripts/*:*, git:*, source .venv/bin/activate:*, pip:*), Grep, Glob
+ allowed-tools: Read, Write, Edit, Bash(uv run python train.py:*, uv run python scripts/*:*, git:*, uv sync:*, uv add:*), Grep, Glob
  ---

  You are an autonomous ML researcher. Your goal: iteratively improve a model by following the experiment loop protocol — the scientific method applied to machine learning.
@@ -26,9 +26,9 @@ Read `program.md` in the ML project directory for the complete protocol. Follow

  1. **Restore memory:** Read `.claude/agent-memory/ml-researcher-{project_name}/MEMORY.md` for prior observations and best results.
  2. **Read protocol:** Read `program.md` completely — it defines the experiment loop, constraints, and output format.
- 3. **Bootstrap data:** Check for training data at `config.yaml` → `data.source`. If no splits exist, run `python prepare.py`.
- 4. **Bootstrap venv:** `test -d .venv || (python3 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt)`
- 5. **Assess state:** `source .venv/bin/activate && python scripts/show_metrics.py --last 5`
+ 3. **Bootstrap data:** Check for training data at `config.yaml` → `data.source`. If no splits exist, run `uv run python prepare.py`.
+ 4. **Bootstrap uv environment:** `uv sync`
+ 5. **Assess state:** `uv run python scripts/show_metrics.py --last 5`
  6. **Begin the loop** from program.md.

  ## The Loop
@@ -47,7 +47,7 @@ Use `@ml-evaluator` for analysis tasks. It is read-only (no Write/Edit) and cann

  ## Context Management

- - Redirect all training output: `python train.py > run.log 2>&1`
+ - Redirect all training output: `uv run python train.py > run.log 2>&1`
  - Parse metrics with grep, never read full output
  - Persist observations to MEMORY.md after each experiment
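Steps 3-5 of the updated startup reduce to a short shell sequence. A sketch, assuming the dependencies now live in the `templates/pyproject.toml` added by this release; the conditional data check is only implied, since the split location is project-specific:

```bash
# Bootstrap and assess state before entering the loop (sketch)
uv sync                                          # replaces venv creation + pip install -r requirements.txt
uv run python prepare.py                         # only when config.yaml's data.source has no splits yet
uv run python scripts/show_metrics.py --last 5   # recover prior results, if any
```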
@@ -9,9 +9,9 @@ Find similar prior projects and surface what worked. "Last time you had tabular

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -22,7 +22,7 @@ Find similar prior projects and surface what worked. "Last time you had tabular

  3. **Run knowledge transfer:**
  ```bash
- python scripts/knowledge_transfer.py $ARGUMENTS
+ uv run python scripts/knowledge_transfer.py $ARGUMENTS
  ```

  4. **Report includes:**
@@ -8,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
  See the arc of your research, not just the latest results. Strategic view over 100+ experiments.

  ## Steps
- 1. **Activate environment:** `source .venv/bin/activate`
- 2. **Run:** `python scripts/trend_analysis.py $ARGUMENTS`
+ 1. **Sync environment:** `uv sync`
+ 2. **Run:** `uv run python scripts/trend_analysis.py $ARGUMENTS`
  3. **Report:** improvement velocity over time windows, family ROI ranking, diminishing returns prediction, phase transitions
  4. **Saved output:** `experiments/trends/trend-*.yaml`
@@ -2,7 +2,7 @@
  name: try
  description: Inject a hypothesis into the agent's experiment queue. This is how research taste reaches the agent — the human selects which coins to flip, the agent flips them.
  argument-hint: "<hypothesis description>"
- allowed-tools: Read, Write, Edit, Bash(python scripts/*:*, source .venv/bin/activate:*), Grep, Glob
+ allowed-tools: Read, Write, Edit, Bash(uv run python scripts/*:*, uv sync:*), Grep, Glob
  ---

  Inject a human hypothesis into the experiment queue for the next `/turing:train` iteration.
@@ -15,18 +15,18 @@ This is the taste-leverage mechanism: you provide judgment about what's worth tr

  2. **Check for archetype syntax.** If the argument starts with `archetype:`, expand it:
  ```bash
- source .venv/bin/activate && python scripts/manage_hypotheses.py add --archetype <name> --priority high --source human
+ uv run python scripts/manage_hypotheses.py add --archetype <name> --priority high --source human
  ```

  Otherwise, use the raw description:
  ```bash
- source .venv/bin/activate && python scripts/manage_hypotheses.py add "$ARGUMENTS" --priority high --source human
+ uv run python scripts/manage_hypotheses.py add "$ARGUMENTS" --priority high --source human
  ```

  3. **Confirm** with the hypothesis ID and instructions:
  - "Queued as hyp-NNN (high priority, human-injected)"
  - "The agent will prioritize this on the next `/turing:train` iteration"
- - Show current queue: `python scripts/manage_hypotheses.py list --status queued`
+ - Show current queue: `uv run python scripts/manage_hypotheses.py list --status queued`

  ## Examples
@@ -8,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
  Add new data to an existing model without starting from scratch. Detects catastrophic forgetting.

  ## Steps
- 1. `source .venv/bin/activate`
- 2. `python scripts/incremental_update.py $ARGUMENTS`
+ 1. `uv sync`
+ 2. `uv run python scripts/incremental_update.py $ARGUMENTS`
  3. **Saved:** `experiments/updates/`

  ## Model-specific strategies
@@ -9,19 +9,19 @@ Validate the stability of the current ML pipeline by running it multiple times a

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Run stability check:**
  ```bash
- python scripts/validate_stability.py
+ uv run python scripts/validate_stability.py
  ```

  3. **If `$ARGUMENTS` contains `--auto`:**
  ```bash
- python scripts/validate_stability.py --auto
+ uv run python scripts/validate_stability.py --auto
  ```
  This auto-writes `evaluation.n_runs: 3` to `config.yaml` if CV > 5%.
@@ -9,9 +9,9 @@ Take a trained checkpoint and use it as initialization for a new experiment. Aut

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -23,7 +23,7 @@ Take a trained checkpoint and use it as initialization for a new experiment. Aut

  3. **Run warm-start planner:**
  ```bash
- python scripts/warm_start.py $ARGUMENTS
+ uv run python scripts/warm_start.py $ARGUMENTS
  ```

  4. **Report results:**
@@ -9,9 +9,9 @@ Stream metrics during training with early-warning alerts. Catches problems mid-r

  ## Steps

- 1. **Activate environment:**
+ 1. **Sync environment:**
  ```bash
- source .venv/bin/activate
+ uv sync
  ```

  2. **Parse arguments from `$ARGUMENTS`:**
@@ -23,13 +23,13 @@ Stream metrics during training with early-warning alerts. Catches problems mid-r

  3. **For post-hoc analysis:**
  ```bash
- python scripts/training_monitor.py --analyze run.log
+ uv run python scripts/training_monitor.py --analyze run.log
  ```

  4. **For live monitoring (inform user):**
  Live monitoring requires a running training process. Suggest the user run in a separate terminal:
  ```bash
- python scripts/training_monitor.py --log run.log --interval 10
+ uv run python scripts/training_monitor.py --log run.log --interval 10
  ```

  5. **Alert types:**
@@ -8,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
  Answer "what if?" questions using existing experiment data. Routes to the right estimator automatically.

  ## Steps
- 1. `source .venv/bin/activate`
- 2. `python scripts/whatif_engine.py $ARGUMENTS`
+ 1. `uv sync`
+ 2. `uv run python scripts/whatif_engine.py $ARGUMENTS`
  3. **Saved:** `experiments/whatif/`

  ## Supported question types