claude-turing 4.7.0 → 4.8.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/plugin.json +2 -2
- package/README.md +1 -1
- package/agents/ml-evaluator.md +4 -4
- package/agents/ml-researcher.md +2 -2
- package/bin/turing-init.sh +2 -2
- package/commands/ablate.md +3 -4
- package/commands/annotate.md +2 -3
- package/commands/archive.md +2 -3
- package/commands/audit.md +3 -4
- package/commands/baseline.md +3 -4
- package/commands/brief.md +5 -6
- package/commands/budget.md +3 -4
- package/commands/calibrate.md +3 -4
- package/commands/card.md +3 -4
- package/commands/changelog.md +2 -3
- package/commands/checkpoint.md +3 -4
- package/commands/cite.md +2 -3
- package/commands/compare.md +1 -2
- package/commands/counterfactual.md +2 -3
- package/commands/curriculum.md +3 -4
- package/commands/design.md +3 -4
- package/commands/diagnose.md +4 -5
- package/commands/diff.md +3 -4
- package/commands/distill.md +3 -4
- package/commands/doctor.md +2 -3
- package/commands/ensemble.md +3 -4
- package/commands/explore.md +4 -5
- package/commands/export.md +3 -4
- package/commands/feature.md +3 -4
- package/commands/flashback.md +2 -3
- package/commands/fork.md +3 -4
- package/commands/frontier.md +3 -4
- package/commands/init.md +5 -6
- package/commands/leak.md +3 -4
- package/commands/lit.md +3 -4
- package/commands/logbook.md +5 -6
- package/commands/merge.md +2 -3
- package/commands/mode.md +1 -2
- package/commands/onboard.md +2 -3
- package/commands/paper.md +3 -4
- package/commands/plan.md +2 -3
- package/commands/poster.md +3 -4
- package/commands/postmortem.md +2 -3
- package/commands/preflight.md +5 -6
- package/commands/present.md +2 -3
- package/commands/profile.md +3 -4
- package/commands/prune.md +2 -3
- package/commands/quantize.md +2 -3
- package/commands/queue.md +3 -4
- package/commands/registry.md +2 -3
- package/commands/regress.md +3 -4
- package/commands/replay.md +2 -3
- package/commands/report.md +3 -4
- package/commands/reproduce.md +3 -4
- package/commands/retry.md +3 -4
- package/commands/review.md +2 -3
- package/commands/rules/loop-protocol.md +11 -11
- package/commands/sanity.md +3 -4
- package/commands/scale.md +4 -5
- package/commands/search.md +2 -3
- package/commands/seed.md +3 -4
- package/commands/sensitivity.md +3 -4
- package/commands/share.md +2 -3
- package/commands/simulate.md +2 -3
- package/commands/status.md +1 -2
- package/commands/stitch.md +3 -4
- package/commands/suggest.md +5 -6
- package/commands/surgery.md +2 -3
- package/commands/sweep.md +8 -9
- package/commands/template.md +2 -3
- package/commands/train.md +5 -6
- package/commands/transfer.md +3 -4
- package/commands/trend.md +2 -3
- package/commands/try.md +4 -5
- package/commands/turing.md +3 -3
- package/commands/update.md +2 -3
- package/commands/validate.md +4 -5
- package/commands/warm.md +3 -4
- package/commands/watch.md +4 -5
- package/commands/whatif.md +2 -3
- package/commands/xray.md +3 -4
- package/config/commands.yaml +75 -75
- package/package.json +3 -2
- package/skills/turing/SKILL.md +3 -3
- package/skills/turing/ablate/SKILL.md +3 -4
- package/skills/turing/annotate/SKILL.md +2 -3
- package/skills/turing/archive/SKILL.md +2 -3
- package/skills/turing/audit/SKILL.md +3 -4
- package/skills/turing/baseline/SKILL.md +3 -4
- package/skills/turing/brief/SKILL.md +5 -6
- package/skills/turing/budget/SKILL.md +3 -4
- package/skills/turing/calibrate/SKILL.md +3 -4
- package/skills/turing/card/SKILL.md +3 -4
- package/skills/turing/changelog/SKILL.md +2 -3
- package/skills/turing/checkpoint/SKILL.md +3 -4
- package/skills/turing/cite/SKILL.md +2 -3
- package/skills/turing/compare/SKILL.md +1 -2
- package/skills/turing/counterfactual/SKILL.md +2 -3
- package/skills/turing/curriculum/SKILL.md +3 -4
- package/skills/turing/design/SKILL.md +3 -4
- package/skills/turing/diagnose/SKILL.md +4 -5
- package/skills/turing/diff/SKILL.md +3 -4
- package/skills/turing/distill/SKILL.md +3 -4
- package/skills/turing/doctor/SKILL.md +2 -3
- package/skills/turing/ensemble/SKILL.md +3 -4
- package/skills/turing/explore/SKILL.md +4 -5
- package/skills/turing/export/SKILL.md +3 -4
- package/skills/turing/feature/SKILL.md +3 -4
- package/skills/turing/flashback/SKILL.md +2 -3
- package/skills/turing/fork/SKILL.md +3 -4
- package/skills/turing/frontier/SKILL.md +3 -4
- package/skills/turing/init/SKILL.md +5 -6
- package/skills/turing/leak/SKILL.md +3 -4
- package/skills/turing/lit/SKILL.md +3 -4
- package/skills/turing/logbook/SKILL.md +5 -6
- package/skills/turing/merge/SKILL.md +2 -3
- package/skills/turing/mode/SKILL.md +1 -2
- package/skills/turing/onboard/SKILL.md +2 -3
- package/skills/turing/paper/SKILL.md +3 -4
- package/skills/turing/plan/SKILL.md +2 -3
- package/skills/turing/poster/SKILL.md +3 -4
- package/skills/turing/postmortem/SKILL.md +2 -3
- package/skills/turing/preflight/SKILL.md +5 -6
- package/skills/turing/present/SKILL.md +2 -3
- package/skills/turing/profile/SKILL.md +3 -4
- package/skills/turing/prune/SKILL.md +2 -3
- package/skills/turing/quantize/SKILL.md +2 -3
- package/skills/turing/queue/SKILL.md +3 -4
- package/skills/turing/registry/SKILL.md +2 -3
- package/skills/turing/regress/SKILL.md +3 -4
- package/skills/turing/replay/SKILL.md +2 -3
- package/skills/turing/report/SKILL.md +3 -4
- package/skills/turing/reproduce/SKILL.md +3 -4
- package/skills/turing/retry/SKILL.md +3 -4
- package/skills/turing/review/SKILL.md +2 -3
- package/skills/turing/rules/loop-protocol.md +11 -11
- package/skills/turing/sanity/SKILL.md +3 -4
- package/skills/turing/scale/SKILL.md +4 -5
- package/skills/turing/search/SKILL.md +2 -3
- package/skills/turing/seed/SKILL.md +3 -4
- package/skills/turing/sensitivity/SKILL.md +3 -4
- package/skills/turing/share/SKILL.md +2 -3
- package/skills/turing/simulate/SKILL.md +2 -3
- package/skills/turing/status/SKILL.md +1 -2
- package/skills/turing/stitch/SKILL.md +3 -4
- package/skills/turing/suggest/SKILL.md +5 -6
- package/skills/turing/surgery/SKILL.md +2 -3
- package/skills/turing/sweep/SKILL.md +8 -9
- package/skills/turing/template/SKILL.md +2 -3
- package/skills/turing/train/SKILL.md +5 -6
- package/skills/turing/transfer/SKILL.md +3 -4
- package/skills/turing/trend/SKILL.md +2 -3
- package/skills/turing/try/SKILL.md +4 -5
- package/skills/turing/update/SKILL.md +2 -3
- package/skills/turing/validate/SKILL.md +4 -5
- package/skills/turing/warm/SKILL.md +3 -4
- package/skills/turing/watch/SKILL.md +4 -5
- package/skills/turing/whatif/SKILL.md +2 -3
- package/skills/turing/xray/SKILL.md +3 -4
- package/src/command-registry.js +12 -0
- package/src/install.js +4 -3
- package/src/sync-commands-layout.js +149 -0
- package/src/sync-skills-layout.js +4 -133
- package/templates/README.md +5 -8
- package/templates/program.md +18 -18
- package/templates/pyproject.toml +10 -0
- package/templates/requirements.txt +4 -1
- package/templates/scripts/generate_onboarding.py +1 -1
- package/templates/scripts/post-train-hook.sh +7 -8
- package/templates/scripts/scaffold.py +24 -26
- package/templates/scripts/stop-hook.sh +2 -3
- package/templates/scripts/turing-run-python.sh +9 -0
package/.claude-plugin/plugin.json
CHANGED

@@ -1,7 +1,7 @@
 {
 "name": "turing",
-"version": "4.7.0",
-"description": "Autonomous ML research harness — the autoresearch loop as a formal protocol. 74 commands, 2 specialized agents,
+"version": "4.8.1",
+"description": "Autonomous ML research harness — the autoresearch loop as a formal protocol. 74 commands, 2 specialized agents, skills/turing source layout, operational intelligence (postmortem + doctor + plan), model lifecycle (update + registry), what-if analysis (whatif + counterfactual + simulate), collaboration (onboard + share + review), research communication (cite + present + changelog), experiment archaeology (trend + flashback + archive + annotate + search + template + replay), model surgery (prune + quantize + merge + surgery), feature & training intelligence, model debugging, pre-training intelligence, meta-intelligence, scaling & efficiency, model composition, deep analysis, experiment orchestration, literature + paper, model export, profiling, checkpoints, experiment intelligence, statistical rigor, tree-search, cost-performance, model cards, hypothesis database, novelty guard, anti-cheating, taste-leverage loop. Inspired by Karpathy's autoresearch and the scientific method itself.",
 "author": {
 "name": "Prannaya Gupta"
 },
package/README.md
CHANGED

@@ -3,7 +3,7 @@
 *The research assistant that can't fool itself.*
 
 <p align="center">
-<img src="https://img.shields.io/badge/version-4.7.0-ffb74d?style=flat-square&labelColor=1a1a2e" alt="Version" />
+<img src="https://img.shields.io/badge/version-4.8.1-ffb74d?style=flat-square&labelColor=1a1a2e" alt="Version" />
 <img src="https://img.shields.io/badge/license-MIT-ff4d4d?style=flat-square&labelColor=1a1a2e" alt="License" />
 <img src="https://img.shields.io/badge/Claude_Code-plugin-ff4d4d?style=flat-square&labelColor=1a1a2e" alt="Claude Code" />
 <img src="https://img.shields.io/badge/Node.js-20%2B-ff4d4d?style=flat-square&labelColor=1a1a2e" alt="Node.js" />
package/agents/ml-evaluator.md
CHANGED

@@ -22,13 +22,13 @@ In quantum mechanics, observation changes the system. In ML experimentation, the
 
 ## Useful Commands
 
-Always
+Always run Python through uv from the ML directory.
 
 | Command | Purpose |
 |---------|---------|
-| `python scripts/show_metrics.py --last 10` | Recent experiment summary |
-| `python scripts/compare_runs.py <a> <b>` | Side-by-side comparison |
-| `python evaluate.py` | Run evaluation on current model |
+| `uv run python scripts/show_metrics.py --last 10` | Recent experiment summary |
+| `uv run python scripts/compare_runs.py <a> <b>` | Side-by-side comparison |
+| `uv run python evaluate.py` | Run evaluation on current model |
 | `cat experiments/results.tsv` | Quick-reference TSV |
 
 ## Analysis Framework
package/agents/ml-researcher.md
CHANGED

@@ -27,8 +27,8 @@ Read `program.md` in the ML directory for the complete experiment loop protocol.
 ## Constraints
 
 - **Only modify `train.py` and `config.yaml`.** `evaluate.py` is HIDDEN (do not read or reference). Other pipeline files are READ-ONLY.
-- **Always
-- **Redirect training output:** `python train.py > run.log 2>&1`
+- **Always run Python through uv:** `uv run python ...`
+- **Redirect training output:** `uv run python train.py > run.log 2>&1`
 - **Parse metrics with grep:** `grep -A 10 "^---" run.log | head -10`
 - **Use @ml-evaluator** for analysis tasks — it has no Write/Edit tools and cannot accidentally break the pipeline.
 
package/bin/turing-init.sh
CHANGED

@@ -33,8 +33,8 @@ echo ""
 if [[ $# -eq 0 ]] || [[ "${1:-}" == "--interactive" ]]; then
 python3 "$SCAFFOLD_SCRIPT" --interactive --templates-dir "$TEMPLATES_DIR" --no-venv
 echo ""
-echo " To set up the
-echo " cd <ml_dir> &&
+echo " To set up the uv environment:"
+echo " cd <ml_dir> && uv sync"
 exit 0
 fi
 
package/commands/ablate.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: ablate
 description: Run systematic ablation study — remove components one at a time, measure impact, produce publication-ready table with dead-weight flagging.
-disable-model-invocation: true
 argument-hint: "[exp-id] [--components \"X,Y\"] [--seeds 3] [--latex]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,9 +9,9 @@ Run a systematic ablation study to measure the contribution of each model compon
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Parse arguments from `$ARGUMENTS`:**
@@ -23,7 +22,7 @@ Run a systematic ablation study to measure the contribution of each model compon
 
 3. **Run ablation study:**
 ```bash
-python scripts/ablation_study.py $ARGUMENTS
+uv run python scripts/ablation_study.py $ARGUMENTS
 ```
 
 4. **Report results:**
package/commands/annotate.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: annotate
 description: Retrospective experiment annotations — add human notes, tags, and context that automated metrics can't capture.
-disable-model-invocation: true
 argument-hint: "<exp-id> \"note\" [--tag fragile] | --list | --search \"keyword\""
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -9,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
 Add context that experiment logs can't capture. "This only worked because the data was pre-sorted."
 
 ## Steps
-1. **
-2. **Run:** `python scripts/experiment_annotations.py $ARGUMENTS`
+1. **Sync environment:** `uv sync`
+2. **Run:** `uv run python scripts/experiment_annotations.py $ARGUMENTS`
 3. **Operations:** add (text + tags), list (per-experiment or all), search (keyword or tag)
 4. **Stored in:** `experiments/annotations.yaml`
 
package/commands/archive.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: archive
 description: Experiment lifecycle cleanup — compress old artifacts, prune checkpoints, create queryable summary index. Reclaim disk space.
-disable-model-invocation: true
 argument-hint: "[--older-than 30d] [--keep-best 10] [--dry-run]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -9,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
 Keep your project directory manageable after 200+ experiments.
 
 ## Steps
-1. **
-2. **Run:** `python scripts/experiment_archive.py $ARGUMENTS`
+1. **Sync environment:** `uv sync`
+2. **Run:** `uv run python scripts/experiment_archive.py $ARGUMENTS`
 3. **Protected experiments:** Pareto-optimal, current best, recent, top-N by metric
 4. **Report:** archived count, preserved count, space reclaimed
 5. **Saved output:** `experiments/archive/index.yaml`
package/commands/audit.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: audit
 description: Pre-submission methodology audit — catch data leakage, missing baselines, cherry-picked seeds, and incomplete ablations before a reviewer does.
-disable-model-invocation: true
 argument-hint: "[--strict] [--checklist neurips]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,9 +9,9 @@ A reviewer checklist you run before submitting. Catches methodology mistakes tha
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Parse arguments from `$ARGUMENTS`:**
@@ -22,7 +21,7 @@ A reviewer checklist you run before submitting. Catches methodology mistakes tha
 
 3. **Run methodology audit:**
 ```bash
-python scripts/methodology_audit.py $ARGUMENTS
+uv run python scripts/methodology_audit.py $ARGUMENTS
 ```
 
 4. **Checks performed:**
package/commands/baseline.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: baseline
 description: Automatic baseline generation — random, majority/mean, linear, k-NN baselines in 60 seconds. Every experiment needs a "is this better than dumb?" reference.
-disable-model-invocation: true
 argument-hint: "[--methods all|simple|linear] [--data data.npz]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,9 +9,9 @@ Generate trivial baselines so you always know if your model is meaningfully bett
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Parse arguments from `$ARGUMENTS`:**
@@ -22,7 +21,7 @@ Generate trivial baselines so you always know if your model is meaningfully bett
 
 3. **Run baseline generation:**
 ```bash
-python scripts/generate_baselines.py $ARGUMENTS
+uv run python scripts/generate_baselines.py $ARGUMENTS
 ```
 
 4. **Baselines generated:**
package/commands/brief.md
CHANGED

@@ -1,9 +1,8 @@
 ---
 name: brief
 description: Generate a structured research intelligence report from experiment history — what's been learned, what's promising, what's exhausted, and what the human should consider next. Use --deep for literature-grounded suggestions.
-disable-model-invocation: true
 argument-hint: "[ml/project] [--deep]"
-allowed-tools: Read, Bash(python scripts/*:*,
+allowed-tools: Read, Bash(uv run python scripts/*:*, uv sync:*), Grep, Glob, WebSearch, WebFetch
 ---
 
 Generate a research briefing that a human can read in 2 minutes and immediately decide what to inject next.
@@ -24,14 +23,14 @@ Before generating the briefing, detect which project to report on:
 
 1. **Generate the briefing:**
 ```bash
-
+uv run python scripts/generate_brief.py
 ```
 
 2. **Self-critique the briefing** before presenting. Review the generated output and check:
 - **Recommendations specificity:** Are they concrete enough to act on? "Try a different model" is bad. "Try LightGBM with leaf-wise growth because exp-004 showed depth sensitivity" is good. If vague, rewrite them with specific model/hyperparameter suggestions grounded in the experiment data.
 - **Exhausted directions coverage:** Cross-reference the "Model Types Explored" section against `experiments/log.jsonl`. Are there discarded experiments missing from the summary? If so, add them.
 - **Convergence estimate grounding:** If the briefing says "close to convergence" or "further improvement possible", verify against the actual metric trajectory. Is the claim supported by the numbers?
-- **Metric accuracy:** Spot-check that the "Current Best" metrics match the actual log. Run `python scripts/show_metrics.py --last 1` if uncertain.
+- **Metric accuracy:** Spot-check that the "Current Best" metrics match the actual log. Run `uv run python scripts/show_metrics.py --last 1` if uncertain.
 
 If any section fails the check, regenerate just that section. Max 1 revision round — don't over-polish.
 
@@ -76,7 +75,7 @@ When `--deep` is requested, add a 7th section: **Literature-Grounded Suggestions
 
 4. **Queue suggestions** as hypotheses:
 ```bash
-
+uv run python scripts/manage_hypotheses.py add "<technique>: <rationale> (source: <citation>)" --priority medium --source literature
 ```
 
 5. **Format as a section** appended to the briefing.
@@ -84,7 +83,7 @@ When `--deep` is requested, add a 7th section: **Literature-Grounded Suggestions
 ## Saving Briefs
 
 ```bash
-mkdir -p briefs && python scripts/generate_brief.py > briefs/brief-$(date +%Y-%m-%d).md
+mkdir -p briefs && uv run python scripts/generate_brief.py > briefs/brief-$(date +%Y-%m-%d).md
 ```
 
 ## When to Use
package/commands/budget.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: budget
 description: Compute budget manager — set experiment/time limits, track allocation across explore/exploit phases, auto-shift modes, hard stop.
-disable-model-invocation: true
 argument-hint: "<set|status|reset> [--experiments 50] [--hours 8]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,9 +9,9 @@ Set a compute ceiling and let the system optimize within it. Prevents runaway ex
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Parse arguments from `$ARGUMENTS`:**
@@ -23,7 +22,7 @@ Set a compute ceiling and let the system optimize within it. Prevents runaway ex
 
 3. **Run budget manager:**
 ```bash
-python scripts/budget_manager.py $ARGUMENTS
+uv run python scripts/budget_manager.py $ARGUMENTS
 ```
 
 4. **Actions:**
package/commands/calibrate.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: calibrate
 description: Probability calibration — measure ECE, plot reliability diagrams, apply Platt scaling or isotonic regression.
-disable-model-invocation: true
 argument-hint: "[exp-id] [--method platt|isotonic|temperature|auto]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,9 +9,9 @@ Make model probabilities trustworthy. Does 80% confidence actually mean 80% corr
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Parse arguments from `$ARGUMENTS`:**
@@ -22,7 +21,7 @@ Make model probabilities trustworthy. Does 80% confidence actually mean 80% corr
 
 3. **Run calibration:**
 ```bash
-python scripts/calibration.py $ARGUMENTS
+uv run python scripts/calibration.py $ARGUMENTS
 ```
 
 4. **Report includes:**
package/commands/card.md
CHANGED

@@ -1,8 +1,7 @@
 ---
 name: card
 description: Generate a standardized model card documenting the trained model — type, performance, training data, limitations, intended use, and artifact contract.
-
-allowed-tools: Read, Bash(python scripts/*:*, source .venv/bin/activate:*), Grep, Glob
+allowed-tools: Read, Bash(uv run python scripts/*:*, uv sync:*), Grep, Glob
 ---
 
 You generate a standardized model card from the experiment log, model contract, and config.
@@ -11,12 +10,12 @@ You generate a standardized model card from the experiment log, model contract,
 
 1. **Activate the virtual environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Run the model card generator:**
 ```bash
-python scripts/generate_model_card.py --config config.yaml --log experiments/log.jsonl --contract model_contract.md --output MODEL_CARD.md
+uv run python scripts/generate_model_card.py --config config.yaml --log experiments/log.jsonl --contract model_contract.md --output MODEL_CARD.md
 ```
 
 3. **Read and present the generated card:**
package/commands/changelog.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: changelog
 description: Model changelog generation — auto-generate human-readable progress narrative from experiment history for stakeholders.
-disable-model-invocation: true
 argument-hint: "[--since exp-id|date] [--audience technical|stakeholder]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -9,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
 Translate experiment logs into a narrative that PMs and stakeholders can read in 2 minutes.
 
 ## Steps
-1. **
-2. **Run:** `python scripts/generate_changelog.py $ARGUMENTS`
+1. **Sync environment:** `uv sync`
+2. **Run:** `uv run python scripts/generate_changelog.py $ARGUMENTS`
 3. **Audience:** technical (experiment IDs, configs), stakeholder (plain English, percentages)
 4. **Saved output:** `paper/CHANGELOG.md`
 
package/commands/checkpoint.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: checkpoint
 description: Smart checkpoint management — list, prune (Pareto-based), average top-K, resume from any point, disk usage stats.
-disable-model-invocation: true
 argument-hint: "<list|prune|average|resume|stats> [exp-id] [--top 3] [--dry-run]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,9 +9,9 @@ Manage model checkpoints intelligently using Pareto dominance.
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Parse arguments from `$ARGUMENTS`:**
@@ -23,7 +22,7 @@ Manage model checkpoints intelligently using Pareto dominance.
 
 3. **Run checkpoint manager:**
 ```bash
-python scripts/checkpoint_manager.py $ARGUMENTS
+uv run python scripts/checkpoint_manager.py $ARGUMENTS
 ```
 
 4. **Report results by action:**
package/commands/cite.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: cite
 description: Citation & attribution manager — track papers, datasets, methods. Audit for missing citations, generate BibTeX.
-disable-model-invocation: true
 argument-hint: "<add|list|check|bib> [--key Chen2016 --title XGBoost --url ...]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -9,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
 Track which papers and methods influenced each experiment. Catch missing citations before submission.
 
 ## Steps
-1. **
-2. **Run:** `python scripts/citation_manager.py $ARGUMENTS`
+1. **Sync environment:** `uv sync`
+2. **Run:** `uv run python scripts/citation_manager.py $ARGUMENTS`
 3. **Operations:** add (associate citation with experiment), list (group by type), check (audit missing), bib (BibTeX)
 4. **Stored in:** `experiments/citations.yaml`
 
package/commands/compare.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: compare
 description: Compare two ML experiment runs side-by-side — metrics, configuration deltas, and a verdict on which approach is more promising.
-disable-model-invocation: true
 argument-hint: "<exp-id-1> <exp-id-2>"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -12,7 +11,7 @@ Compare two ML experiment runs side-by-side to understand what changed and why o
 
 1. **Run comparison:**
 ```bash
-
+uv run python scripts/compare_runs.py $0 $1
 ```
 
 2. **Analyze the delta:**
package/commands/counterfactual.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: counterfactual
 description: Input-level counterfactual explanations — find the smallest input change to flip a prediction.
-disable-model-invocation: true
 argument-hint: "<exp-id> --sample <index> [--target <class>]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -9,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
 What would need to change to flip this prediction? Minimum-change counterfactual for individual predictions.
 
 ## Steps
-1. `
-2. `python scripts/counterfactual_explanation.py $ARGUMENTS`
+1. `uv sync`
+2. `uv run python scripts/counterfactual_explanation.py $ARGUMENTS`
 3. **Saved:** `experiments/counterfactuals/`
 
 ## Methods
package/commands/curriculum.md
CHANGED

@@ -1,7 +1,6 @@
 ---
 name: curriculum
 description: Training curriculum optimization — order data by difficulty, compare easy-to-hard vs hard-to-easy vs self-paced strategies.
-disable-model-invocation: true
 argument-hint: "[exp-id] [--strategies easy-to-hard,random]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,9 +9,9 @@ Does the order your model sees data matter? Find out systematically.
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Parse arguments from `$ARGUMENTS`:**
@@ -22,7 +21,7 @@ Does the order your model sees data matter? Find out systematically.
 
 3. **Run curriculum analysis:**
 ```bash
-python scripts/curriculum_optimizer.py $ARGUMENTS
+uv run python scripts/curriculum_optimizer.py $ARGUMENTS
 ```
 
 4. **Strategies tested:**
package/commands/design.md
CHANGED

@@ -1,9 +1,8 @@
 ---
 name: design
 description: Generate a structured experiment design for a hypothesis. Reads experiment history, searches literature for methodology, produces a scored design document at experiments/designs/.
-disable-model-invocation: true
 argument-hint: "<hypothesis-id or description>"
-allowed-tools: Read, Write, Bash(python scripts/*:*,
+allowed-tools: Read, Write, Bash(uv run python scripts/*:*, uv sync:*, mkdir:*), Grep, Glob, WebSearch, WebFetch
 ---
 
 Front-load the thinking before the coding. Given a hypothesis, produce a structured experiment design grounded in methodology from the literature.
@@ -14,7 +13,7 @@ Front-load the thinking before the coding. Given a hypothesis, produce a structu
 
 If `$ARGUMENTS` matches `hyp-NNN`, load the hypothesis:
 ```bash
-
+uv run python scripts/manage_hypotheses.py show $ARGUMENTS
 ```
 
 If freeform text, use it directly as the hypothesis description.
@@ -24,7 +23,7 @@ Read the current config and experiment state:
 cat config.yaml
 ```
 ```bash
-
+uv run python scripts/show_metrics.py --last 10 2>/dev/null || echo "No experiments yet"
 ```
 ```bash
 cat experiment_state.yaml 2>/dev/null || echo "No experiment state yet"
package/commands/diagnose.md CHANGED

````diff
@@ -1,7 +1,6 @@
 ---
 name: diagnose
 description: Error analysis — cluster failure cases, identify systematic failure modes, and suggest targeted fixes with auto-queued hypotheses.
-disable-model-invocation: true
 argument-hint: "[exp-id] [--auto-queue] [--top 5]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,15 +9,15 @@ Analyze where and why the model fails, beyond aggregate metrics.
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Generate predictions if needed:**
 Check if `experiments/predictions/exp-NNN-preds.yaml` exists. If not, run:
 ```bash
-python train.py --predict-only --output experiments/predictions/
+uv run python train.py --predict-only --output experiments/predictions/
 ```
 The predictions file must contain `y_true`, `y_pred`, `task_type`, and optionally `features`.
 
@@ -29,7 +28,7 @@ Analyze where and why the model fails, beyond aggregate metrics.
 
 4. **Run error analysis:**
 ```bash
-python scripts/diagnose_errors.py $ARGUMENTS
+uv run python scripts/diagnose_errors.py $ARGUMENTS
 ```
 
 5. **Report results:**
````
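The predictions-file contract above (required `y_true`, `y_pred`, `task_type`; optional `features`) is easy to check up front. A minimal sketch, shown as the equivalent Python dict; the sample values and the `missing_fields` helper are hypothetical, not part of `diagnose_errors.py`.

```python
# Hypothetical shape of experiments/predictions/exp-NNN-preds.yaml as a dict.
preds = {
    "task_type": "classification",
    "y_true": [1, 0, 1, 1],
    "y_pred": [1, 0, 0, 1],
    "features": {"length": [12, 7, 30, 4]},  # optional per the doc
}

REQUIRED = {"y_true", "y_pred", "task_type"}

def missing_fields(d):
    """Return required keys absent from a predictions mapping, sorted."""
    return sorted(REQUIRED - d.keys())

print(missing_fields(preds))           # []
print(missing_fields({"y_true": []}))  # ['task_type', 'y_pred']
```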
package/commands/diff.md CHANGED

````diff
@@ -1,7 +1,6 @@
 ---
 name: diff
 description: Deep experiment comparison — config diffs, metric significance, per-class regressions, training curve divergence, feature importance shifts.
-disable-model-invocation: true
 argument-hint: "<exp-a> <exp-b> [--code]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,9 +9,9 @@ Deep diagnostic comparison of two experiments. Goes beyond "which metric is high
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Parse arguments from `$ARGUMENTS`:**
@@ -22,7 +21,7 @@ Deep diagnostic comparison of two experiments. Goes beyond "which metric is high
 
 3. **Run deep comparison:**
 ```bash
-python scripts/experiment_diff.py $ARGUMENTS
+uv run python scripts/experiment_diff.py $ARGUMENTS
 ```
 
 4. **Report results — the diff includes:**
````
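The "per-class regressions" idea mentioned in the description can be sketched in a few lines: given per-class scores from two experiments, flag classes where the second underperforms the first. This is not `experiment_diff.py`'s implementation; the class names, scores, and tolerance are illustrative.

```python
def per_class_regressions(scores_a, scores_b, tol=0.01):
    """Classes where exp-b scores more than `tol` below exp-a, with the drop."""
    return {c: round(scores_a[c] - scores_b[c], 4)
            for c in scores_a
            if scores_b.get(c, 0.0) < scores_a[c] - tol}

a = {"cat": 0.92, "dog": 0.88, "bird": 0.75}  # exp-a per-class accuracy
b = {"cat": 0.93, "dog": 0.81, "bird": 0.74}  # exp-b per-class accuracy
print(per_class_regressions(a, b))  # {'dog': 0.07}
```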
package/commands/distill.md CHANGED

````diff
@@ -1,7 +1,6 @@
 ---
 name: distill
 description: Model compression via distillation — train a smaller student model to match a larger teacher's predictions.
-disable-model-invocation: true
 argument-hint: "<teacher-exp-id> [--compression 4] [--method soft-labels]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,9 +9,9 @@ Compress a large model into a smaller, faster one for production. Measures the a
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Parse arguments from `$ARGUMENTS`:**
@@ -24,7 +23,7 @@ Compress a large model into a smaller, faster one for production. Measures the a
 
 3. **Run distillation planner:**
 ```bash
-python scripts/model_distiller.py $ARGUMENTS
+uv run python scripts/model_distiller.py $ARGUMENTS
 ```
 
 4. **Report includes:**
````
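The `soft-labels` method in the argument hint conventionally means training the student on the teacher's temperature-softened output distribution. A pure-Python sketch of that softening step (not `model_distiller.py`; the logits and temperature are illustrative):

```python
import math

def soft_targets(logits, T=2.0):
    """Teacher logits -> softened probabilities via temperature-scaled softmax."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Higher T flattens the distribution, exposing the teacher's "dark knowledge"
# about relative class similarities to the student.
probs = soft_targets([4.0, 1.0, 0.0], T=2.0)
```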
package/commands/doctor.md CHANGED

````diff
@@ -1,7 +1,6 @@
 ---
 name: doctor
 description: Harness self-diagnosis — check environment, project, resources, and git state. Auto-fix common issues.
-disable-model-invocation: true
 argument-hint: "[--fix] [--verbose]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -9,8 +8,8 @@ allowed-tools: Read, Bash(*), Grep, Glob
 Is Turing healthy? Check everything and get a score.
 
 ## Steps
-1. `
-2. `python scripts/harness_doctor.py $ARGUMENTS`
+1. `uv sync`
+2. `uv run python scripts/harness_doctor.py $ARGUMENTS`
 3. **Saved:** `experiments/doctor/`
 
 ## Checks
````
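How `harness_doctor.py` computes its score is not shown in this diff; a plausible sketch of "check everything and get a score" is a map of named boolean checks reduced to a percentage. The check names and the two PATH probes here are assumptions.

```python
import shutil

# Each check is a name -> pass/fail boolean; real checks would cover
# environment, project layout, resources, and git state.
checks = {
    "uv on PATH": shutil.which("uv") is not None,
    "git on PATH": shutil.which("git") is not None,
}
score = 100 * sum(checks.values()) // len(checks)  # integer percent, 0-100
```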
package/commands/ensemble.md CHANGED

````diff
@@ -1,7 +1,6 @@
 ---
 name: ensemble
 description: Automated ensemble construction — combines top-K models via voting, stacking, and blending for zero-cost improvement.
-disable-model-invocation: true
 argument-hint: "[--top-k 5] [--methods voting,stacking,blending]"
 allowed-tools: Read, Bash(*), Grep, Glob
 ---
@@ -10,9 +9,9 @@ Build ensembles from your best experiments automatically. Often yields 1-3% impr
 
 ## Steps
 
-1. **
+1. **Sync environment:**
 ```bash
-
+uv sync
 ```
 
 2. **Parse arguments from `$ARGUMENTS`:**
@@ -23,7 +22,7 @@ Build ensembles from your best experiments automatically. Often yields 1-3% impr
 
 3. **Run ensemble construction:**
 ```bash
-python scripts/build_ensemble.py $ARGUMENTS
+uv run python scripts/build_ensemble.py $ARGUMENTS
 ```
 
 4. **Report results:**
````
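Of the three methods in the argument hint, voting is the simplest: a hard majority vote across the top-K models' predictions. A minimal sketch of that idea (not `build_ensemble.py`; the three model outputs are made up):

```python
from collections import Counter

def majority_vote(model_preds):
    """model_preds: list of per-model prediction lists of equal length.
    Returns the per-example majority label (ties broken by first seen)."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*model_preds)]

preds = [
    [1, 0, 1, 1],  # model A
    [1, 1, 1, 0],  # model B
    [0, 0, 1, 1],  # model C
]
print(majority_vote(preds))  # [1, 0, 1, 1]
```

Stacking and blending instead learn a meta-model over the base models' outputs, which is why they need held-out predictions rather than just labels.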