claude-turing 2.5.0 → 3.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,7 +1,7 @@
  {
  "name": "turing",
- "version": "2.5.0",
- "description": "Autonomous ML research harness — the autoresearch loop as a formal protocol. 39 commands, 2 specialized agents, scaling & efficiency (scaling laws + compute budget + model distillation), model composition (ensemble + pipeline stitch + warm-start), deep analysis (experiment diff + live training monitor + regression gate), experiment orchestration (batch queue + smart retry + branching), literature integration + paper drafting, production model export, performance profiling, smart checkpoints, experiment intelligence, statistical rigor, tree-search hypothesis exploration, cost-performance frontier, model cards, model registry, hypothesis database with novelty guard, anti-cheating guardrails, and the taste-leverage loop. Inspired by Karpathy's autoresearch and the scientific method itself.",
+ "version": "3.1.0",
+ "description": "Autonomous ML research harness — the autoresearch loop as a formal protocol. 44 commands, 2 specialized agents, pre-training intelligence (sanity checks + baseline generation + leakage detection), meta-intelligence (cross-project knowledge transfer + methodology audit), scaling & efficiency (scaling laws + compute budget + model distillation), model composition (ensemble + pipeline stitch + warm-start), deep analysis (experiment diff + live training monitor + regression gate), experiment orchestration (batch queue + smart retry + branching), literature integration + paper drafting, production model export, performance profiling, smart checkpoints, experiment intelligence, statistical rigor, tree-search hypothesis exploration, cost-performance frontier, model cards, model registry, hypothesis database with novelty guard, anti-cheating guardrails, and the taste-leverage loop. Inspired by Karpathy's autoresearch and the scientific method itself.",
  "author": {
  "name": "pragnition"
  },
package/README.md CHANGED
@@ -350,6 +350,11 @@ The index (`hypotheses.yaml`) is the lightweight queue. The detail files (`hypot
  | `/turing:scale [--axis]` | Scaling law estimator — power-law fit, full-scale predictions, diminishing returns verdict |
  | `/turing:budget <action>` | Compute budget manager — set limits, track allocation, auto-shift explore/exploit |
  | `/turing:distill <exp-id>` | Model compression — distill teacher into smaller student with accuracy/size tradeoff |
+ | `/turing:transfer [--from]` | Cross-project knowledge transfer — find similar projects, surface what worked |
+ | `/turing:audit [--strict]` | Pre-submission methodology audit — data leakage, baselines, seeds, ablations, reproducibility |
+ | `/turing:sanity [--quick]` | Pre-training sanity checks — initial loss, single-batch overfit, gradient flow, output validation |
+ | `/turing:baseline [--methods]` | Automatic baseline generation — random, majority/mean, linear, k-NN |
+ | `/turing:leak [--deep]` | Targeted leakage detection — single-feature tests, correlation, train/test overlap |
 
  And for fully hands-off operation:
 
@@ -534,11 +539,11 @@ Each project gets independent config, data, experiments, models, and agent memor

  ## Architecture of Turing Itself

- 39 commands, 2 agents, 10 config files, 58 template scripts, model registry, artifact contract, cost-performance frontier, model cards, tree-search exploration, statistical rigor, experiment intelligence, performance profiling, smart checkpoints, production model export, literature integration, paper section drafting, experiment orchestration (queue + retry + fork), deep analysis (diff + watch + regress), model composition (ensemble + stitch + warm), scaling & efficiency (scale + budget + distill), 16 ADRs. See [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) for the full codemap.
+ 44 commands, 2 agents, 10 config files, 63 template scripts, model registry, artifact contract, cost-performance frontier, model cards, tree-search exploration, statistical rigor, experiment intelligence, performance profiling, smart checkpoints, production model export, literature integration, paper section drafting, experiment orchestration (queue + retry + fork), deep analysis (diff + watch + regress), model composition (ensemble + stitch + warm), scaling & efficiency (scale + budget + distill), meta-intelligence (transfer + audit), pre-training intelligence (sanity + baseline + leak), 16 ADRs. See [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) for the full codemap.

  ```
  turing/
- ├── commands/ 38 skill files (core + taste-leverage + reporting + exploration + statistical rigor + experiment intelligence + performance + deployment + research workflow + orchestration + deep analysis + model composition + scaling & efficiency)
+ ├── commands/ 43 skill files (core + taste-leverage + reporting + exploration + statistical rigor + experiment intelligence + performance + deployment + research workflow + orchestration + deep analysis + model composition + scaling & efficiency + meta-intelligence + pre-training intelligence)
  ├── agents/ 2 agents (researcher: read/write, evaluator: read-only)
  ├── config/ 8 files (lifecycle, taxonomy, archetypes, novelty aliases)
  ├── templates/ Scaffolded into user projects by /turing:init
@@ -0,0 +1,56 @@
+ ---
+ name: audit
+ description: Pre-submission methodology audit — catch data leakage, missing baselines, cherry-picked seeds, and incomplete ablations before a reviewer does.
+ disable-model-invocation: true
+ argument-hint: "[--strict] [--checklist neurips]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+
+ A reviewer checklist you run before submitting. Catches methodology mistakes that cause desk rejections.
+
+ ## Steps
+
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - `--strict` — treat warnings as failures
+ - `--checklist neurips|icml|iclr` — add venue-specific checks
+ - `--json` — raw JSON output
+
+ 3. **Run methodology audit:**
+ ```bash
+ python scripts/methodology_audit.py $ARGUMENTS
+ ```
+
+ 4. **Checks performed:**
+ - **Data leakage** (critical): verify prepare.py/evaluate.py separation
+ - **CV strategy** (critical): verify appropriate cross-validation for data type
+ - **Seed sensitivity** (high): seed studies exist for best experiments
+ - **Ablation completeness** (high): ablation studies performed
+ - **Baseline comparison** (high): simple baselines in experiment log
+ - **Reproducibility** (high): best result successfully reproduced
+ - **Hyperparameter budget** (medium): total tuning cost documented
+ - **Regression stability** (medium): regression checks performed
+
+ 5. **Verdicts:**
+ - **PASS** — ready for submission
+ - **PASS (with warnings)** — address before submission
+ - **NEEDS WORK** — fix failures first
+ - **FAIL** — critical issues found
+
+ 6. **Actions:** each failure suggests the `/turing:` command to fix it
+
+ 7. **Venue checklists:** `--checklist neurips` adds NeurIPS-specific checks (broader impact, reproducibility checklist, code availability)
+
+ 8. **Saved output:** report in `experiments/audits/audit-YYYY-MM-DD.yaml`
+
+ ## Examples
+
+ ```
+ /turing:audit # Standard audit
+ /turing:audit --strict # Warnings become failures
+ /turing:audit --checklist neurips # NeurIPS submission checklist
+ ```
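
To make the strict-mode behavior concrete, here is a minimal sketch of how check results like the ones above could roll up into a verdict. The `Check` structure, names, and severities are illustrative assumptions, not the actual output of `scripts/methodology_audit.py`.

```python
from dataclasses import dataclass

@dataclass
class Check:
    name: str
    severity: str   # "critical", "high", or "medium"
    passed: bool

def verdict(checks: list[Check], strict: bool = False) -> str:
    """Roll individual check results up into an audit verdict."""
    critical = [c for c in checks if not c.passed and c.severity == "critical"]
    failures = [c for c in checks if not c.passed and c.severity == "high"]
    warnings = [c for c in checks if not c.passed and c.severity == "medium"]
    if strict:               # --strict: warnings count as failures
        failures += warnings
        warnings = []
    if critical:
        return "FAIL"
    if failures:
        return "NEEDS WORK"
    if warnings:
        return "PASS (with warnings)"
    return "PASS"

checks = [
    Check("data_leakage", "critical", True),
    Check("baseline_comparison", "high", True),
    Check("hyperparameter_budget", "medium", False),
]
print(verdict(checks))                 # PASS (with warnings)
print(verdict(checks, strict=True))    # NEEDS WORK
```
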
@@ -0,0 +1,45 @@
+ ---
+ name: baseline
+ description: Automatic baseline generation — random, majority/mean, linear, k-NN baselines in 60 seconds. Every experiment needs an "is this better than dumb?" reference.
+ disable-model-invocation: true
+ argument-hint: "[--methods all|simple|linear] [--data data.npz]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+
+ Generate trivial baselines so you always know if your model is meaningfully better than simple approaches.
+
+ ## Steps
+
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - `--methods all|simple|linear` — baseline group (default: all)
+ - `--data data.npz` — data file with X and y arrays
+ - `--json` — raw JSON output
+
+ 3. **Run baseline generation:**
+ ```bash
+ python scripts/generate_baselines.py $ARGUMENTS
+ ```
+
+ 4. **Baselines generated:**
+ - **Classification:** Random, Majority class, Stratified random, Logistic Regression, k-NN
+ - **Regression:** Random, Mean predictor, Median predictor, Ridge Regression, k-NN
+ - Each evaluated with the same protocol as real experiments
+
+ 5. **Report includes:** comparison table with metric values and notes (floor, ceiling, reference)
+
+ 6. **Integration:** satisfies the "baseline comparison" check in `/turing:audit`
+
+ 7. **Saved output:** report in `experiments/baselines/baselines-*.yaml`
+
+ ## Examples
+
+ ```
+ /turing:baseline # All baselines
+ /turing:baseline --methods simple # Just random + majority
+ /turing:baseline --data data/processed.npz # With actual data
+ ```
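
For illustration, the classification baselines listed above can be approximated with scikit-learn's dummy and linear models. This is a rough sketch under assumed data shapes, not the packaged `scripts/generate_baselines.py`:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def classification_baselines(X, y, cv=5):
    """Score trivial reference models with the same CV protocol as real experiments."""
    models = {
        "random": DummyClassifier(strategy="uniform", random_state=0),
        "majority": DummyClassifier(strategy="most_frequent"),
        "stratified": DummyClassifier(strategy="stratified", random_state=0),
        "logistic": LogisticRegression(max_iter=1000),
        "knn": KNeighborsClassifier(n_neighbors=5),
    }
    return {name: cross_val_score(m, X, y, cv=cv, scoring="accuracy").mean()
            for name, m in models.items()}

# Synthetic stand-in for the X/y arrays that --data would load.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
print(classification_baselines(X, y))
```
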
@@ -0,0 +1,47 @@
+ ---
+ name: leak
+ description: Targeted leakage detection — probe for data leakage with single-feature tests, correlation checks, and train/test overlap detection.
+ disable-model-invocation: true
+ argument-hint: "[--deep] [--features feature_1,feature_2]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+
+ Actively probe for data leakage. The #1 cause of "too good to be true" results.
+
+ ## Steps
+
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - `--deep` — run full single-feature analysis (slow but thorough)
+ - `--features "feat_1,feat_2"` — check specific features
+ - `--json` — raw JSON output
+
+ 3. **Run leakage scan:**
+ ```bash
+ python scripts/leakage_detector.py $ARGUMENTS
+ ```
+
+ 4. **Checks performed:**
+ - **Feature-target correlation:** flag features with >0.95 correlation to target
+ - **Single-feature predictiveness (--deep):** train on each feature alone, flag any that achieve >80% of full model performance
+ - **Train/test overlap:** hash-based deduplication across splits
+
+ 5. **Verdicts:**
+ - **CLEAN** — no leakage detected
+ - **SUSPICIOUS** — warnings to review
+ - **LEAKAGE DETECTED** — critical flags found
+
+ 6. **Integration:** satisfies the "data leakage" check in `/turing:audit`
+
+ 7. **Saved output:** report in `experiments/leakage/leak-*.yaml`
+
+ ## Examples
+
+ ```
+ /turing:leak # Standard correlation + overlap checks
+ /turing:leak --deep # Full single-feature analysis
+ ```
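
A rough sketch of the correlation and train/test-overlap checks, assuming the splits are pandas DataFrames. The threshold mirrors the one listed above, but this is not the packaged `scripts/leakage_detector.py`:

```python
import pandas as pd

def correlation_flags(df: pd.DataFrame, target: str, threshold: float = 0.95) -> dict:
    """Flag numeric features whose absolute correlation with the target exceeds the threshold."""
    corr = df.corr(numeric_only=True)[target].drop(target)
    return corr[corr.abs() > threshold].to_dict()

def train_test_overlap(train: pd.DataFrame, test: pd.DataFrame) -> int:
    """Count test rows whose row hash collides with a training row (exact duplicates)."""
    train_hashes = set(pd.util.hash_pandas_object(train, index=False))
    test_hashes = pd.util.hash_pandas_object(test, index=False)
    return int(test_hashes.isin(train_hashes).sum())

# Toy data: f1 is a verbatim copy of the target, so it gets flagged.
df = pd.DataFrame({"f1": [0, 1, 2, 3], "f2": [1, 0, 1, 0], "y": [0, 1, 2, 3]})
print(correlation_flags(df, "y"))                    # flags f1
print(train_test_overlap(df.iloc[:3], df.iloc[1:]))  # 2 duplicated rows across splits
```
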
@@ -0,0 +1,48 @@
+ ---
+ name: sanity
+ description: Pre-training sanity checks — catch broken data loaders, misconfigured losses, and dead gradients in 30 seconds before wasting hours.
+ disable-model-invocation: true
+ argument-hint: "[--quick] [--verbose]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+
+ Run a battery of fast checks before committing to a full training run. Catches wiring bugs in seconds.
+
+ ## Steps
+
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - `--quick` — skip single-batch overfit test (fastest, ~5 seconds)
+ - `--verbose` — show detailed check output
+ - `--json` — raw JSON output
+
+ 3. **Run sanity checks:**
+ ```bash
+ python scripts/sanity_checks.py $ARGUMENTS
+ ```
+
+ 4. **Checks performed:**
+ - **Data pipeline** (critical): first batch loads, shapes match, no NaN/Inf
+ - **Initial loss** (high): loss at initialization matches theory (e.g., -log(1/C) for cross-entropy)
+ - **Gradient flow** (high): all parameters have non-zero, non-exploding gradients
+ - **Single-batch overfit** (critical): model can memorize 1 batch in 50 steps — if not, something is broken
+ - **Output validation** (high): predictions are non-NaN, non-constant, reasonable range
+ - **Config consistency** (medium): learning rate, batch size in reasonable ranges
+
+ 5. **Verdicts:**
+ - **PASS** — safe to proceed
+ - **PASS (with warnings)** — review before training
+ - **FAIL** — do not proceed, fix issues first
+
+ 6. **Saved output:** report in `experiments/sanity/sanity-*.yaml`
+
+ ## Examples
+
+ ```
+ /turing:sanity # Full check (~30 seconds)
+ /turing:sanity --quick # Skip overfit test (~5 seconds)
+ ```
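
Two of the checks above are easy to sketch in PyTorch: the initial-loss check (cross-entropy at initialization should sit near -log(1/C) = log(C)) and the single-batch overfit test. The model, shapes, and thresholds below are illustrative assumptions, not the packaged `scripts/sanity_checks.py`:

```python
import math
import torch
import torch.nn as nn

def check_initial_loss(model, xb, yb, num_classes, tol=0.5):
    """Cross-entropy at initialization should be close to log(C) for C classes."""
    model.eval()
    with torch.no_grad():
        loss = nn.functional.cross_entropy(model(xb), yb).item()
    return abs(loss - math.log(num_classes)) < tol

def check_single_batch_overfit(model, xb, yb, steps=50, target_loss=0.05):
    """A healthy model/optimizer wiring should memorize one small batch in ~50 steps."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(xb), yb)
        loss.backward()
        opt.step()
    return loss.item() < target_loss

# Toy example with assumed shapes: a 4-class classifier on one random batch.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
xb, yb = torch.randn(32, 20), torch.randint(0, 4, (32,))
print("initial loss ok:", check_initial_loss(model, xb, yb, num_classes=4))
print("overfits one batch:", check_single_batch_overfit(model, xb, yb))
```
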
@@ -0,0 +1,54 @@
+ ---
+ name: transfer
+ description: Cross-project knowledge transfer — find similar prior projects and surface what worked. Builds institutional ML memory.
+ disable-model-invocation: true
+ argument-hint: "[--from project-path] [--auto]"
+ allowed-tools: Read, Bash(*), Grep, Glob
+ ---
+
+ Find similar prior projects and surface what worked. "Last time you had tabular classification with class imbalance, LightGBM beat everything by 3%."
+
+ ## Steps
+
+ 1. **Activate environment:**
+ ```bash
+ source .venv/bin/activate
+ ```
+
+ 2. **Parse arguments from `$ARGUMENTS`:**
+ - `--from ~/projects/fraud-detection` — transfer from a specific project
+ - `--auto` — auto-queue hypotheses from recommendations
+ - `--index ~/.turing/project_index.yaml` — custom index path
+ - `--json` — raw JSON output
+
+ 3. **Run knowledge transfer:**
+ ```bash
+ python scripts/knowledge_transfer.py $ARGUMENTS
+ ```
+
+ 4. **Report includes:**
+ - Similar prior projects ranked by similarity score
+ - Per project: task type, winning model, key insights
+ - Suggested hypotheses from winning strategies
+ - Auto-queued hypotheses (with `--auto`)
+
+ 5. **Similarity matching** uses:
+ - Task type (classification/regression) — highest weight
+ - Dataset size (log-scale comparison)
+ - Feature types (tabular/image/text)
+ - Class balance characteristics
+ - Dimensionality
+
+ 6. **Project index** at `~/.turing/project_index.yaml` — local only, never uploaded
+
+ 7. **If no similar projects found:** suggest running Turing on more projects first or specifying one with `--from`
+
+ 8. **Saved output:** report in `experiments/transfers/transfer-*.yaml`
+
+ ## Examples
+
+ ```
+ /turing:transfer # Search index for similar projects
+ /turing:transfer --from ~/projects/fraud-detection # Transfer from specific project
+ /turing:transfer --auto # Auto-queue hypotheses
+ ```
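
As a sketch of the similarity matching described above, a weighted score over project fingerprints might look like the following. Field names and weights are assumptions for illustration, not the format used by `scripts/knowledge_transfer.py`:

```python
import math

def similarity(a: dict, b: dict) -> float:
    """Weighted similarity between two project fingerprints, in [0, 1]."""
    score, total = 0.0, 0.0
    # Task type: highest weight, exact match only.
    score += 3.0 * (a["task"] == b["task"]); total += 3.0
    # Dataset size on a log scale, so 50k vs 80k is "close" and 50k vs 50M is not.
    size_gap = abs(math.log10(a["n_samples"]) - math.log10(b["n_samples"]))
    score += 2.0 * max(0.0, 1.0 - size_gap / 3.0); total += 2.0
    # Feature modality (tabular / image / text).
    score += 2.0 * (a["modality"] == b["modality"]); total += 2.0
    # Class balance, only meaningful when both are classification.
    if a["task"] == "classification" == b["task"]:
        score += 1.0 - abs(a["minority_frac"] - b["minority_frac"]); total += 1.0
    # Dimensionality, also log-scale.
    dim_gap = abs(math.log10(a["n_features"]) - math.log10(b["n_features"]))
    score += max(0.0, 1.0 - dim_gap / 2.0); total += 1.0
    return score / total

current = {"task": "classification", "n_samples": 50_000, "modality": "tabular",
           "minority_frac": 0.08, "n_features": 120}
prior = {"task": "classification", "n_samples": 80_000, "modality": "tabular",
         "minority_frac": 0.05, "n_features": 90}
print(round(similarity(current, prior), 3))
```
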
@@ -48,6 +48,11 @@ You are the Turing ML research router. Detect the user's intent and route to the
  | "scale", "scaling law", "how much data", "is more data worth it", "power law", "data efficiency" | `/turing:scale` | Analyze |
  | "budget", "compute budget", "how many experiments", "spending limit", "stop after" | `/turing:budget` | Manage |
  | "distill", "compress", "smaller model", "student model", "knowledge distillation", "model compression" | `/turing:distill` | Deploy |
+ | "transfer", "what worked before", "similar project", "cross-project", "institutional knowledge", "prior projects" | `/turing:transfer` | Research |
+ | "audit", "methodology check", "pre-submission", "reviewer checklist", "data leakage", "missing baselines" | `/turing:audit` | Validate |
+ | "sanity", "sanity check", "pre-training", "is it broken", "before training", "quick check" | `/turing:sanity` | Check |
+ | "baseline", "baselines", "trivial baseline", "majority class", "is it better than random" | `/turing:baseline` | Analyze |
+ | "leak", "leakage", "data leakage scan", "suspicious feature", "train test overlap" | `/turing:leak` | Validate |
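
For intuition, the intent table above behaves roughly like a keyword lookup. A minimal sketch follows; the actual routing is performed by the model reading this table, and no such code ships in the package:

```python
INTENT_MAP = {
    "/turing:transfer": ["transfer", "what worked before", "similar project", "cross-project"],
    "/turing:audit": ["audit", "methodology check", "pre-submission", "reviewer checklist"],
    "/turing:sanity": ["sanity", "pre-training", "is it broken", "quick check"],
    "/turing:baseline": ["baseline", "majority class", "is it better than random"],
    "/turing:leak": ["leak", "leakage", "suspicious feature", "train test overlap"],
}

def route(message: str) -> str | None:
    """Return the first command whose trigger phrases appear in the user's message."""
    text = message.lower()
    for command, phrases in INTENT_MAP.items():
        if any(p in text for p in phrases):
            return command
    return None

print(route("can you run a quick sanity check before training?"))  # /turing:sanity
```
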
 
  ## Sub-commands
 
@@ -92,6 +97,11 @@ You are the Turing ML research router. Detect the user's intent and route to the
  | `/turing:scale [--axis]` | Scaling law estimator: fit power law, predict full-scale performance | (inline) |
  | `/turing:budget <action>` | Compute budget manager: set limits, track allocation, auto-shift modes | (inline) |
  | `/turing:distill <exp-id>` | Model compression: distill teacher into smaller student model | (inline) |
+ | `/turing:transfer [--from]` | Cross-project knowledge transfer: find similar prior projects, surface what worked | (inline) |
+ | `/turing:audit [--strict]` | Pre-submission methodology audit: data leakage, baselines, seeds, ablations, reproducibility | (inline) |
+ | `/turing:sanity [--quick]` | Pre-training sanity checks: initial loss, overfit test, gradient flow, output validation | (inline) |
+ | `/turing:baseline [--methods]` | Automatic baseline generation: random, majority/mean, linear, k-NN | (inline) |
+ | `/turing:leak [--deep]` | Targeted leakage detection: single-feature tests, correlation, train/test overlap | (inline) |

  ## Proactive Detection
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "claude-turing",
- "version": "2.5.0",
+ "version": "3.1.0",
  "type": "module",
  "description": "Autonomous ML research harness for Claude Code. The autoresearch loop as a formal protocol — iteratively trains, evaluates, and improves ML models with structured experiment tracking, convergence detection, immutable evaluation infrastructure, and safety guardrails.",
  "bin": {
package/src/install.js CHANGED
@@ -28,6 +28,8 @@ const SUB_COMMANDS = [
  "diff", "watch", "regress",
  "ensemble", "stitch", "warm",
  "scale", "budget", "distill",
+ "transfer", "audit",
+ "sanity", "baseline", "leak",
  ];

  export async function install(opts = {}) {
package/src/verify.js CHANGED
@@ -53,6 +53,11 @@ const EXPECTED_COMMANDS = [
  "scale/SKILL.md",
  "budget/SKILL.md",
  "distill/SKILL.md",
+ "transfer/SKILL.md",
+ "audit/SKILL.md",
+ "sanity/SKILL.md",
+ "baseline/SKILL.md",
+ "leak/SKILL.md",
  ];

  const EXPECTED_AGENTS = ["ml-researcher.md", "ml-evaluator.md"];