claude-turing 2.3.0 → 2.5.0

@@ -1,7 +1,7 @@
  {
  "name": "turing",
- "version": "2.3.0",
- "description": "Autonomous ML research harness — the autoresearch loop as a formal protocol. 33 commands, 2 specialized agents, deep analysis (experiment diff + live training monitor + regression gate), experiment orchestration (batch queue + smart retry + branching), literature integration + paper drafting, production model export, performance profiling, smart checkpoints, experiment intelligence, statistical rigor, tree-search hypothesis exploration, cost-performance frontier, model cards, model registry, hypothesis database with novelty guard, anti-cheating guardrails, and the taste-leverage loop. Inspired by Karpathy's autoresearch and the scientific method itself.",
+ "version": "2.5.0",
+ "description": "Autonomous ML research harness — the autoresearch loop as a formal protocol. 39 commands, 2 specialized agents, scaling & efficiency (scaling laws + compute budget + model distillation), model composition (ensemble + pipeline stitch + warm-start), deep analysis (experiment diff + live training monitor + regression gate), experiment orchestration (batch queue + smart retry + branching), literature integration + paper drafting, production model export, performance profiling, smart checkpoints, experiment intelligence, statistical rigor, tree-search hypothesis exploration, cost-performance frontier, model cards, model registry, hypothesis database with novelty guard, anti-cheating guardrails, and the taste-leverage loop. Inspired by Karpathy's autoresearch and the scientific method itself.",
  "author": {
  "name": "pragnition"
  },
package/README.md CHANGED
@@ -344,6 +344,12 @@ The index (`hypotheses.yaml`) is the lightweight queue. The detail files (`hypot
  | `/turing:diff <a> <b>` | Deep experiment comparison — config diffs, metric significance, per-class regressions, curve divergence |
  | `/turing:watch [--analyze]` | Live training monitor — loss spikes, NaN detection, overfitting, plateau alerts |
  | `/turing:regress [--tolerance]` | Performance regression gate — verify metrics haven't degraded after changes |
+ | `/turing:ensemble [--top-k]` | Automated ensemble — voting, stacking, blending from top-K models |
+ | `/turing:stitch <action>` | Pipeline composition — show, swap, cache, and run stages independently |
+ | `/turing:warm <exp-id>` | Warm-start from prior model — load checkpoint, freeze layers, adjust LR |
+ | `/turing:scale [--axis]` | Scaling law estimator — power-law fit, full-scale predictions, diminishing returns verdict |
+ | `/turing:budget <action>` | Compute budget manager — set limits, track allocation, auto-shift explore/exploit |
+ | `/turing:distill <exp-id>` | Model compression — distill teacher into smaller student with accuracy/size tradeoff |
 
  And for fully hands-off operation:
 
@@ -528,11 +534,11 @@ Each project gets independent config, data, experiments, models, and agent memor
 
  ## Architecture of Turing Itself
 
- 33 commands, 2 agents, 10 config files, 52 template scripts, model registry, artifact contract, cost-performance frontier, model cards, tree-search exploration, statistical rigor, experiment intelligence, performance profiling, smart checkpoints, production model export, literature integration, paper section drafting, experiment orchestration (queue + retry + fork), deep analysis (diff + watch + regress), 16 ADRs. See [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) for the full codemap.
+ 39 commands, 2 agents, 10 config files, 58 template scripts, model registry, artifact contract, cost-performance frontier, model cards, tree-search exploration, statistical rigor, experiment intelligence, performance profiling, smart checkpoints, production model export, literature integration, paper section drafting, experiment orchestration (queue + retry + fork), deep analysis (diff + watch + regress), model composition (ensemble + stitch + warm), scaling & efficiency (scale + budget + distill), 16 ADRs. See [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) for the full codemap.
 
  ```
  turing/
- ├── commands/ 32 skill files (core + taste-leverage + reporting + exploration + statistical rigor + experiment intelligence + performance + deployment + research workflow + orchestration + deep analysis)
+ ├── commands/ 38 skill files (core + taste-leverage + reporting + exploration + statistical rigor + experiment intelligence + performance + deployment + research workflow + orchestration + deep analysis + model composition + scaling & efficiency)
  ├── agents/ 2 agents (researcher: read/write, evaluator: read-only)
  ├── config/ 8 files (lifecycle, taxonomy, archetypes, novelty aliases)
  ├── templates/ Scaffolded into user projects by /turing:init
package/commands/budget/SKILL.md ADDED
@@ -0,0 +1,52 @@
+ ---
+ name: budget
+ description: Compute budget manager — set experiment/time limits, track allocation across explore/exploit phases, auto-shift modes, hard stop.
+ disable-model-invocation: true
+ argument-hint: "<set|status|reset> [--experiments 50] [--hours 8]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+ 
+ Set a compute ceiling and let the system optimize within it. Prevents runaway experiment loops.
+ 
+ ## Steps
+ 
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+ 
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - First argument is action: `set`, `status`, `reset`, or `check`
+ - `--experiments 50` — max experiment count
+ - `--hours 8` — max wall-clock hours
+ - `--json` — raw JSON output
+ 
+ 3. **Run budget manager:**
+ ```bash
+ python scripts/budget_manager.py $ARGUMENTS
+ ```
+ 
+ 4. **Actions:**
+ - **set:** create a budget with experiment and/or time constraints
+ - **status:** show usage, burn rate, projected exhaustion, allocation breakdown
+ - **reset:** deactivate the current budget
+ - **check:** returns whether another experiment is allowed (used by `/turing:train`)
+ 
+ 5. **Budget allocation policy:**
+ - **0-50% budget:** EXPLORE — try diverse hypotheses
+ - **50-80% budget:** MIXED — explore promising, exploit best
+ - **80-100% budget:** EXPLOIT ONLY — refine the winner
+ - **100% budget:** HARD STOP — `/turing:train` refuses new experiments
+ 
+ 6. **Budget state** stored in `experiment_state.yaml` under the `budget` key.
+ 
+ 7. **If no budget exists:** `/turing:train` runs without limits.
+ 
+ ## Examples
+ 
+ ```
+ /turing:budget set --experiments 50 --hours 8 # Set both constraints
+ /turing:budget set --experiments 30 # Experiment count only
+ /turing:budget status # Show usage and projections
+ /turing:budget reset # Remove budget limits
+ ```
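
The explore/exploit schedule in step 5 is simple to state precisely. A minimal sketch of the policy, assuming `budget_manager.py` tracks usage roughly like this (the field names and most-constrained-axis rule are illustrative assumptions, not turing's actual schema):

```python
# Illustrative sketch of the budget allocation policy above — not budget_manager.py.
from dataclasses import dataclass

@dataclass
class Budget:
    max_experiments: int | None = None
    max_hours: float | None = None
    used_experiments: int = 0
    used_hours: float = 0.0

    def fraction_used(self) -> float:
        # Assumed rule: the tightest constraint decides how spent the budget is.
        fractions = []
        if self.max_experiments:
            fractions.append(self.used_experiments / self.max_experiments)
        if self.max_hours:
            fractions.append(self.used_hours / self.max_hours)
        return max(fractions, default=0.0)

    def phase(self) -> str:
        f = self.fraction_used()
        if f >= 1.0:
            return "HARD_STOP"     # /turing:train refuses new experiments
        if f >= 0.8:
            return "EXPLOIT_ONLY"  # refine the winner
        if f >= 0.5:
            return "MIXED"         # explore promising, exploit best
        return "EXPLORE"           # try diverse hypotheses

b = Budget(max_experiments=50, max_hours=8, used_experiments=42, used_hours=5.5)
print(b.phase())  # EXPLOIT_ONLY — 42/50 = 84% used dominates 5.5/8 hours
```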
package/commands/distill/SKILL.md ADDED
@@ -0,0 +1,56 @@
+ ---
+ name: distill
+ description: Model compression via distillation — train a smaller student model to match a larger teacher's predictions.
+ disable-model-invocation: true
+ argument-hint: "<teacher-exp-id> [--compression 4] [--method soft_labels]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+ 
+ Compress a large model into a smaller, faster one for production. Measures the accuracy/size/latency tradeoff.
+ 
+ ## Steps
+ 
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+ 
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - First argument is teacher experiment ID (required)
+ - `--compression 4` — compression ratio (default: 4x)
+ - `--method soft_labels|feature_matching|dataset_distillation` — distillation method
+ - `--target-latency 5` — auto-adjust compression to meet latency target (ms)
+ - `--json` — raw JSON output
+ 
+ 3. **Run distillation planner:**
+ ```bash
+ python scripts/model_distiller.py $ARGUMENTS
+ ```
+ 
+ 4. **Report includes:**
+ - Teacher model metrics
+ - Auto-selected student architecture (fewer trees/layers/width)
+ - Estimated size reduction and latency improvement
+ - Distillation configuration (temperature, alpha, loss function)
+ - Verdict: EXCELLENT / ACCEPTABLE / MARGINAL / TOO MUCH LOSS
+ 
+ 5. **Student selection by model type:**
+ - **Tree models:** fewer estimators, shallower depth
+ - **Neural networks:** fewer layers, narrower hidden dims
+ - **scikit-learn:** simpler model family (RandomForest → DecisionTree)
+ 
+ 6. **Distillation methods:**
+ - **soft_labels:** train on teacher's probability outputs with temperature scaling
+ - **feature_matching:** align intermediate representations (neural only)
+ - **dataset_distillation:** train on teacher-labeled synthetic data
+ 
+ 7. **Saved output:** report written to `experiments/distillations/distill-<exp-id>.yaml`
+ 
+ ## Examples
+ 
+ ```
+ /turing:distill exp-042 # 4x compression, soft labels
+ /turing:distill exp-042 --compression 8 # Aggressive compression
+ /turing:distill exp-042 --method feature_matching # Neural feature alignment
+ /turing:distill exp-042 --target-latency 5 # Meet 5ms latency target
+ ```
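
For the `soft_labels` method, the standard temperature-scaled distillation loss (Hinton-style, combining a softened-teacher term with ordinary cross-entropy via `alpha`) looks like the sketch below. This is illustrative only — `model_distiller.py`'s actual loss and defaults are not shown in this diff:

```python
# Minimal soft-label distillation loss: KL to the temperature-softened
# teacher distribution, blended with hard-label cross-entropy.
import torch
import torch.nn.functional as F

def soft_label_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft term: match the teacher's softened probabilities. The T*T factor
    # keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard term: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```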
package/commands/ensemble/SKILL.md ADDED
@@ -0,0 +1,54 @@
+ ---
+ name: ensemble
+ description: Automated ensemble construction — combines top-K models via voting, stacking, and blending for zero-cost improvement.
+ disable-model-invocation: true
+ argument-hint: "[--top-k 5] [--methods voting,stacking,blending]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+ 
+ Build ensembles from your best experiments automatically. Often yields 1-3% improvement with zero additional training.
+ 
+ ## Steps
+ 
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+ 
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - `--top-k 5` — number of top models to include (default: 5)
+ - `--methods voting,stacking,blending` — ensemble methods to try
+ - `--predictions-dir experiments/predictions` — directory with saved predictions
+ - `--json` — raw JSON output
+ 
+ 3. **Run ensemble construction:**
+ ```bash
+ python scripts/build_ensemble.py $ARGUMENTS
+ ```
+ 
+ 4. **Report results:**
+ - Table of all ensemble methods tried with metric deltas vs best single model
+ - Best ensemble method highlighted with improvement amount
+ - Diversity analysis: prediction correlation matrix, diversity assessment
+ - Base model summary: which experiments were combined
+ 
+ 5. **Ensemble methods:**
+ - **Voting:** majority vote (classification) or mean (regression)
+ - **Weighted voting:** weights proportional to individual model performance
+ - **Stacking:** cross-validated meta-learner (ridge/logistic) on out-of-fold predictions
+ - **Blending:** holdout-based meta-learner (simpler, less data-efficient)
+ 
+ 6. **Prerequisites:** experiments must have saved predictions in `experiments/predictions/`. Each experiment needs `<exp-id>-predictions.npy` and a shared `labels.npy`.
+ 
+ 7. **If no predictions exist:** suggest saving predictions during training by adding prediction logging to `evaluate.py`.
+ 
+ 8. **Saved output:** report written to `experiments/ensembles/ensemble-*.yaml`
+ 
+ ## Examples
+ 
+ ```
+ /turing:ensemble # Default: top-5, all methods
+ /turing:ensemble --top-k 3 # Top-3 models only
+ /turing:ensemble --methods voting,stacking # Specific methods
+ /turing:ensemble --json # Machine-readable output
+ ```
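
Given the prediction contract in step 6 (`<exp-id>-predictions.npy` plus a shared `labels.npy`), the simplest method — soft voting — reduces to a few lines. A sketch under the assumption that each file holds per-class probabilities; `build_ensemble.py`'s real logic (stacking, blending, diversity analysis) is more involved:

```python
# Soft voting over saved prediction arrays: average the class probabilities
# of the top models and compare against the best single model.
from pathlib import Path
import numpy as np

pred_dir = Path("experiments/predictions")
labels = np.load(pred_dir / "labels.npy")

# Assumed shape per file: (n_samples, n_classes) probability matrix.
preds = {p.stem.removesuffix("-predictions"): np.load(p)
         for p in pred_dir.glob("*-predictions.npy")}

def accuracy(probs):
    return (probs.argmax(axis=1) == labels).mean()

best_single = max(accuracy(p) for p in preds.values())
vote = accuracy(np.mean(list(preds.values()), axis=0))  # mean probability = soft vote
print(f"best single: {best_single:.4f}  soft vote: {vote:.4f}  delta: {vote - best_single:+.4f}")
```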
package/commands/scale/SKILL.md ADDED
@@ -0,0 +1,55 @@
+ ---
+ name: scale
+ description: Scaling law estimator — run small experiments at different sizes, fit a power law, and predict full-scale performance before committing compute.
+ disable-model-invocation: true
+ argument-hint: "[--axis data|compute|params] [--points 4] [--analyze results.yaml]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+ 
+ Predict full-scale performance from a handful of small experiments. Answers "is it worth training on the full dataset?" in 30 minutes instead of 3 days.
+ 
+ ## Steps
+ 
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+ 
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - `--axis data|compute|params` — scaling axis (default: data)
+ - `--points 4` — number of scale points (default: 4)
+ - `--analyze results.yaml` — analyze existing results instead of planning
+ - `--plot` — include ASCII scaling plot
+ - `--json` — raw JSON output
+ 
+ 3. **Plan or analyze:**
+ - **Plan mode (default):** generates scale point configs to run
+ ```bash
+ python scripts/scaling_estimator.py --axis data --points 4
+ ```
+ - **Analyze mode:** fits power law to completed results
+ ```bash
+ python scripts/scaling_estimator.py --analyze experiments/scaling/results.yaml
+ ```
+ 
+ 4. **Scaling axes:**
+ - **data:** train on 10%, 25%, 50%, 75% of dataset
+ - **compute:** train for 10%, 25%, 50%, 75% of max epochs
+ - **params:** scale model size (fewer estimators, shallower depth)
+ 
+ 5. **After planning:** run each scale point experiment, record results in YAML, then use `--analyze` to fit the curve
+ 
+ 6. **Report includes:**
+ - Power law fit: `metric = a × n^b` with R²
+ - Predictions for 100%, 150%, 200% scale
+ - Verdict: DIMINISHING RETURNS / MARGINAL GAINS / WORTH SCALING
+ 
+ 7. **Saved output:** report written to `experiments/scaling/scale-YYYY-MM-DD.yaml`
+ 
+ ## Examples
+ 
+ ```
+ /turing:scale # Plan: data axis, 4 points
+ /turing:scale --axis compute --points 3 # Plan: compute axis, 3 points
+ /turing:scale --analyze results.yaml --plot # Analyze with ASCII plot
+ ```
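
The power-law fit `metric = a × n^b` in step 6 is a linear least-squares fit in log-log space. A minimal sketch of the analyze step — the scale-point numbers below are made up for illustration, and `scaling_estimator.py`'s real fit and YAML schema may differ:

```python
# Fit metric = a * n^b by least squares on log(metric) vs log(n), then
# extrapolate to full and beyond-full scale.
import numpy as np

frac = np.array([0.10, 0.25, 0.50, 0.75])      # data fractions from the plan
err  = np.array([0.210, 0.168, 0.141, 0.128])  # e.g. validation error per point

b, log_a = np.polyfit(np.log(frac), np.log(err), 1)  # slope = exponent b
a = np.exp(log_a)

resid = np.log(err) - (log_a + b * np.log(frac))
r2 = 1 - resid.var() / np.log(err).var()       # goodness of fit in log space

for f in (1.0, 1.5, 2.0):
    print(f"{f:>4.0%} scale -> predicted error {a * f**b:.4f}")
print(f"fit: err = {a:.3f} * n^{b:.3f}  (R^2 = {r2:.3f})")
```

A near-zero exponent means more data barely moves the metric (diminishing returns); a strongly negative exponent on an error metric supports the WORTH SCALING verdict.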
package/commands/stitch/SKILL.md ADDED
@@ -0,0 +1,49 @@
+ ---
+ name: stitch
+ description: Pipeline composition — decompose ML pipelines into swappable stages. Show, swap, cache, and run stages independently.
+ disable-model-invocation: true
+ argument-hint: "<show|swap|cache|run> [stage] [--from exp-id]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+ 
+ Decompose your ML pipeline into stages that can be independently varied, cached, and reused across experiments.
+ 
+ ## Steps
+ 
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+ 
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - First argument is the action: `show`, `swap`, `cache`, `run`
+ - `show` — display pipeline stages with hash and cache status
+ - `swap <stage> --from <exp-id>` — replace a stage with one from another experiment
+ - `cache` — save intermediate stage outputs to disk
+ - `run` — execute pipeline, skipping cached stages
+ 
+ 3. **Run pipeline manager:**
+ ```bash
+ python scripts/pipeline_manager.py $ARGUMENTS
+ ```
+ 
+ 4. **Report results:**
+ - **show:** numbered stage list with description, content hash, and cache status
+ - **swap:** what changed, old vs new stage config, updated pipeline
+ - **cache:** per-stage cache paths and status
+ - **run:** which stages will be skipped (cached) vs re-run
+ 
+ 5. **Stage types:** preprocess, features, model, postprocess (configurable in `config.yaml` under `pipeline.stages`)
+ 
+ 6. **Cache benefit:** when only the model stage changes, preprocessing and feature engineering are skipped — experiments run faster
+ 
+ 7. **If no pipeline config:** falls back to default 4-stage pipeline
+ 
+ ## Examples
+ 
+ ```
+ /turing:stitch show # Display pipeline stages
+ /turing:stitch swap model --from exp-031 # Keep features, swap model
+ /turing:stitch cache # Cache intermediate outputs
+ /turing:stitch run # Run with cached stages
+ ```
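
The cache behavior in steps 4-6 follows naturally from content-addressed stage outputs: key each stage by a hash of its own config plus the upstream stage's hash, so changing only the model stage leaves the earlier keys (and their caches) intact. A sketch of that idea — the layout and helper names are assumptions, not `pipeline_manager.py`'s actual design:

```python
# Content-hash stage caching: re-run a stage only when its config or any
# upstream stage changed.
import hashlib, json, pickle
from pathlib import Path

CACHE = Path(".turing_cache")  # hypothetical cache directory

def stage_hash(config: dict, upstream: str) -> str:
    # Chain the upstream hash so downstream keys change when inputs change.
    blob = json.dumps(config, sort_keys=True) + upstream
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

def run_stage(name: str, config: dict, upstream_hash: str, fn, data):
    h = stage_hash(config, upstream_hash)
    path = CACHE / f"{name}-{h}.pkl"
    if path.exists():                  # cache hit: skip re-computation
        print(f"[skip] {name} ({h}, cached)")
        return pickle.loads(path.read_bytes()), h
    out = fn(data, **config)           # cache miss: compute and persist
    CACHE.mkdir(exist_ok=True)
    path.write_bytes(pickle.dumps(out))
    print(f"[run ] {name} ({h})")
    return out, h
```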
@@ -42,6 +42,12 @@ You are the Turing ML research router. Detect the user's intent and route to the
  | "diff", "deep compare", "what changed", "why did it diverge", "experiment diff" | `/turing:diff` | Analyze |
  | "watch", "monitor", "live training", "loss spike", "is it overfitting", "training progress" | `/turing:watch` | Monitor |
  | "regress", "regression", "did metrics degrade", "check for regression", "CI gate", "stability check" | `/turing:regress` | Validate |
+ | "ensemble", "combine models", "voting", "stacking", "blending", "merge models" | `/turing:ensemble` | Compose |
+ | "stitch", "pipeline", "swap stage", "cache stage", "pipeline composition" | `/turing:stitch` | Compose |
+ | "warm", "warm start", "fine-tune", "continue training", "transfer learning", "from checkpoint" | `/turing:warm` | Compose |
+ | "scale", "scaling law", "how much data", "is more data worth it", "power law", "data efficiency" | `/turing:scale` | Analyze |
+ | "budget", "compute budget", "how many experiments", "spending limit", "stop after" | `/turing:budget` | Manage |
+ | "distill", "compress", "smaller model", "student model", "knowledge distillation", "model compression" | `/turing:distill` | Deploy |
 
  ## Sub-commands
 
@@ -80,6 +86,12 @@ You are the Turing ML research router. Detect the user's intent and route to the
  | `/turing:diff <exp-a> <exp-b>` | Deep experiment comparison: config diff, metric significance, per-class regressions, curve divergence | (inline) |
  | `/turing:watch [--analyze]` | Live training monitor with early-warning alerts (loss spike, NaN, overfitting, plateau) | (inline) |
  | `/turing:regress [--tolerance]` | Performance regression gate: re-run best experiment, verify metrics haven't degraded | (inline) |
+ | `/turing:ensemble [--top-k] [--methods]` | Automated ensemble: voting, weighted voting, stacking, blending from top-K models | (inline) |
+ | `/turing:stitch <action> [stage]` | Pipeline composition: show/swap/cache/run stages independently | (inline) |
+ | `/turing:warm <exp-id>` | Warm-start from prior model: load checkpoint, freeze layers, adjust LR | (inline) |
+ | `/turing:scale [--axis]` | Scaling law estimator: fit power law, predict full-scale performance | (inline) |
+ | `/turing:budget <action>` | Compute budget manager: set limits, track allocation, auto-shift modes | (inline) |
+ | `/turing:distill <exp-id>` | Model compression: distill teacher into smaller student model | (inline) |
 
  ## Proactive Detection
 
package/commands/warm/SKILL.md ADDED
@@ -0,0 +1,53 @@
+ ---
+ name: warm
+ description: Warm-start from a prior model — load checkpoint, optionally freeze layers, adjust learning rate, and continue training.
+ disable-model-invocation: true
+ argument-hint: "<exp-id> [--freeze-layers encoder] [--unfreeze-after 5]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+ 
+ Take a trained checkpoint and use it as initialization for a new experiment. Automates the "start from here but change X" pattern.
+ 
+ ## Steps
+ 
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+ 
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - First argument is the source experiment ID (required)
+ - `--freeze-layers encoder decoder` — layer names to freeze (neural only)
+ - `--unfreeze-after 5` — unfreeze all layers after N epochs (gradual unfreezing)
+ - `--lr-factor 0.1` — learning rate reduction factor (default: 0.1x)
+ - `--json` — raw JSON output
+ 
+ 3. **Run warm-start planner:**
+ ```bash
+ python scripts/warm_start.py $ARGUMENTS
+ ```
+ 
+ 4. **Report results:**
+ - Model type detection (tree, neural, sklearn)
+ - Strategy: continue_boosting, load_weights, or warm_start_param
+ - Numbered step-by-step instructions
+ - Config changes to apply
+ - Checkpoint info (path, format, size)
+ 
+ 5. **Strategies by model type:**
+ - **Tree models (XGBoost/LightGBM):** continue boosting from existing trees with more estimators
+ - **Neural networks:** load weights, optionally freeze layers, reset optimizer, reduce LR
+ - **scikit-learn:** use `warm_start=True` parameter for incremental learning
+ 
+ 6. **If no checkpoint found:** plan is still generated, but warns that checkpoint is needed
+ 
+ 7. **Saved output:** report written to `experiments/warm_starts/warm-<exp-id>.yaml`
+ 
+ ## Examples
+ 
+ ```
+ /turing:warm exp-042 # Auto-detect strategy
+ /turing:warm exp-042 --freeze-layers encoder # Freeze encoder layers
+ /turing:warm exp-042 --freeze-layers encoder --unfreeze-after 5 # Gradual unfreezing
+ /turing:warm exp-042 --lr-factor 0.01 # Very small fine-tuning LR
+ ```
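
For the neural-network strategy in step 5, the plan amounts to three moves: load weights, freeze the requested layer prefixes, and rebuild the optimizer at `lr × lr-factor` (which also resets optimizer state). A PyTorch sketch under those assumptions — the checkpoint format and names are placeholders, not turing's real artifact contract:

```python
# Warm-start a neural model: load checkpoint, freeze named layers,
# fresh optimizer at a reduced learning rate.
import torch

def warm_start(model, checkpoint_path, freeze_prefixes=("encoder",), lr=1e-3, lr_factor=0.1):
    # Assumes the checkpoint is a plain state_dict.
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    for name, param in model.named_parameters():
        # Freeze any parameter whose name matches a requested layer prefix.
        if name.startswith(tuple(freeze_prefixes)):
            param.requires_grad = False
    # New optimizer over trainable params only: discards stale momentum/Adam
    # state and applies the reduced fine-tuning LR.
    trainable = (p for p in model.parameters() if p.requires_grad)
    return torch.optim.Adam(trainable, lr=lr * lr_factor)
```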
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "claude-turing",
- "version": "2.3.0",
+ "version": "2.5.0",
  "type": "module",
  "description": "Autonomous ML research harness for Claude Code. The autoresearch loop as a formal protocol — iteratively trains, evaluates, and improves ML models with structured experiment tracking, convergence detection, immutable evaluation infrastructure, and safety guardrails.",
  "bin": {
@@ -26,6 +26,8 @@ const SUB_COMMANDS = [
26
26
  "diagnose", "ablate", "frontier", "profile", "checkpoint", "export",
27
27
  "lit", "paper", "queue", "retry", "fork",
28
28
  "diff", "watch", "regress",
29
+ "ensemble", "stitch", "warm",
30
+ "scale", "budget", "distill",
29
31
  ];
30
32
 
31
33
  export async function install(opts = {}) {
package/src/verify.js CHANGED
@@ -47,6 +47,12 @@ const EXPECTED_COMMANDS = [
  "diff/SKILL.md",
  "watch/SKILL.md",
  "regress/SKILL.md",
+ "ensemble/SKILL.md",
+ "stitch/SKILL.md",
+ "warm/SKILL.md",
+ "scale/SKILL.md",
+ "budget/SKILL.md",
+ "distill/SKILL.md",
  ];
 
  const EXPECTED_AGENTS = ["ml-researcher.md", "ml-evaluator.md"];