livepilot 1.18.3 → 1.19.0

package/CHANGELOG.md CHANGED
@@ -1,5 +1,186 @@
1
1
  # Changelog
2
2
 
3
+ ## 1.19.0 — Experiment baseline + hybrid packet compilation (April 24 2026)
4
+
5
+ Minor version bump. Ships two of the three open items documented in
6
+ `docs/plans/v1.19-structural-plan.md`. Item C (full architectural
7
+ routing of director Phase 6 through `apply_semantic_move`) is
8
+ deferred to v1.20 per the plan's blast-radius rationale.
9
+
10
+ Both items shipped under strict TDD: 52 new unit tests, zero
11
+ regressions across the 2854-test suite. Both items live-tested in
12
+ production (real Ableton session, Live 12.4.0, 13 live-test
13
+ scenarios green).
14
+
15
+ ### Item A — Experiment baseline transport snapshot/restore
16
+
17
+ Live-verified in v1.18.0 Test 8: running a 3-branch experiment
18
+ sequentially produced inconsistent `before_snapshot` values
19
+ because playback position, mute/solo/arm, and playing-clip state
20
+ drifted across branches. `undo()` reverts command history but
21
+ doesn't guarantee transport state is identical when each branch's
22
+ `before_snapshot` fires. `track_meters[0].level` values of 0.764 /
23
+ 0.000 / 0.873 across three branches rendered the before/after
24
+ comparisons meaningless.
25
+
26
+ Fix — snapshot-and-restore pattern, experiment-level:
27
+
28
+ - NEW `mcp_server/experiment/baseline.py` — `BaselineTransportState`
29
+ dataclass + `capture_baseline(ableton)` +
30
+ `restore_baseline(ableton, baseline, stabilize_ms=300)`.
31
+ Captures `is_playing`, `song_time`, and per-track
32
+ `mute`/`solo`/`arm` via a single `get_session_info` round-trip.
33
+ Restore issues `stop_playback` → per-track
34
+ `set_track_mute`/`set_track_solo`/`set_track_arm` → 300 ms
35
+ stabilize sleep. Per-track failures are logged, not fatal (a
36
+ single flaky track never aborts restore for the rest).
37
+ - `ExperimentSet` gains a `baseline_transport: Optional[BaselineTransportState]`
38
+ field. `to_dict()` surfaces it when populated.
39
+ - `engine.prepare_for_next_branch(ableton, baseline, stabilize_ms)`
40
+ — thin wrapper called by `run_experiment` between branches.
41
+ No-op when baseline is None (first branch).
42
+ - `run_experiment` captures the baseline once before the branch
43
+ loop starts, stashes it on the experiment, and calls
44
+ `prepare_for_next_branch` before every branch after the first.
45
+ Capture failure logs + degrades to None (pre-v1.19 behavior).
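The snapshot-and-restore pattern above can be sketched standalone. This is an illustrative stub only — the client object and method names mirror the changelog text, but the shipped module is `mcp_server/experiment/baseline.py` and its exact signatures may differ:

```python
# Sketch of Item A's capture/restore flow (assumed method names, not the real client).
import time
from dataclasses import dataclass, field

@dataclass
class BaselineTransportState:
    is_playing: bool = False
    song_time: float = 0.0
    track_states: list = field(default_factory=list)

def capture_baseline(ableton) -> BaselineTransportState:
    info = ableton.get_session_info()  # single round-trip, as described above
    return BaselineTransportState(
        is_playing=info.get("is_playing", False),
        song_time=info.get("song_time", 0.0),
        track_states=[
            {"index": i, "mute": t.get("mute", False),
             "solo": t.get("solo", False), "arm": t.get("arm", False)}
            for i, t in enumerate(info.get("tracks", []))
        ],
    )

def restore_baseline(ableton, baseline, stabilize_ms=300):
    ableton.stop_playback()  # transport first
    for ts in baseline.track_states:
        try:  # a single flaky track must not abort the rest
            ableton.set_track_mute(ts["index"], ts["mute"])
            ableton.set_track_solo(ts["index"], ts["solo"])
            ableton.set_track_arm(ts["index"], ts["arm"])
        except Exception:
            pass  # the real module logs here instead of swallowing silently
    time.sleep(stabilize_ms / 1000.0)  # let meters settle
```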
46
+
47
+ **Stabilize window defaults to 300 ms** — within plan §2's
48
+ 200-500 ms empirical range. Per-branch overhead stayed at
49
+ ~1.04 s amortized under live 5-branch testing (well under the
50
+ plan's 2-second-per-branch success criterion target).
51
+
52
+ **Live evidence of state preservation:** 5-branch test with two
53
+ mutations on track 0 "Dub Chord" (pan -0.35 by `widen_stereo`,
54
+ then volume 0.4 by `darken_without_losing_width`) returned the
55
+ track to identical pre-experiment state (arm=true, mute=false,
56
+ solo=false) after every branch cycle.
57
+
58
+ Known limitations (accepted per plan §2):
59
+ - Automation drift is not frozen — deeper refactor out of scope.
60
+ - Send values + device parameters mutated outside a branch's own
61
+ steps fall back to `undo()` alone — no explicit restore.
62
+ - Transport position is NOT re-seeked; `song_time` is captured
63
+ but unused (stopping is enough).
64
+
65
+ 21 unit tests added: capture (transport fields, empty tracks,
66
+ missing-field defaults, epoch-ms timestamp), restore (command
67
+ sequence, per-track mute/solo/arm restoration, stabilize sleep
68
+ with monkey-patched time.sleep, flaky-track resilience,
69
+ return-track arm skip), `ExperimentSet.baseline_transport`
70
+ (default None, to_dict surfacing/omission), engine helper
71
+ (None no-op, delegation), tool-level wiring (`run_experiment`
72
+ populates baseline once + idempotent on second run).
73
+
74
+ ### Item B — Hybrid concept packet compilation
75
+
76
+ Pre-v1.19, the director handled "Basic Channel meets Dilla swing"
77
+ via LLM ad-hoc reasoning — no explicit rule for contradictions
78
+ (e.g., Gas deprioritizes rhythmic, Dilla emphasizes rhythmic;
79
+ what survives the hybrid?). v1.18.0 Test 7 verified plausible
80
+ output but entirely improvisational, with no guarantee either
81
+ source packet's `avoid` list or tempo constraints would persist.
82
+
83
+ Fix — explicit merge algorithm with canonical rules per plan §3:
84
+
85
+ - NEW `mcp_server/creative_director/hybrid.py` —
86
+ `compile_hybrid_brief(packet_ids, weights=None)` loads concept
87
+ packets from `livepilot/skills/livepilot-core/references/concepts/`
88
+ and applies merge rules:
89
+ * `sonic_identity` / `avoid` / `reach_for.*` / `*_idioms` /
90
+ `sample_roles` / `dimensions_in_scope`: UNION, deduplicated,
91
+ first-packet order preserved.
92
+ * `dimensions_deprioritized` / `move_family_bias.deprioritize`:
93
+ INTERSECTION — only deprioritize if ALL packets agree.
94
+ Safer default: one packet's ignored dimension shouldn't
95
+ starve another packet's wanted one.
96
+ * `move_family_bias.favor`: INTERSECTION when non-empty
97
+ (hybrid focuses where both agree), UNION fallback with
98
+ warning when empty.
99
+ * `evaluation_bias.target_dimensions`: WEIGHTED AVERAGE
100
+ (default uniform; override via `weights`).
101
+ * `evaluation_bias.protect`: MAX per dimension (stricter
102
+ floor wins).
103
+ * `novelty_budget_default`: MAX (hybrid briefs skew
104
+ exploratory).
105
+ * `tempo_hint`: NEAREST-OVERLAP — intersect overlapping
106
+ ranges, else midpoint + `disjoint: true` flag + warning.
107
+
108
+ - NEW MCP tool `compile_hybrid_brief` in
109
+ `mcp_server/creative_director/tools.py` (tool count 428 → 429).
110
+ Accepts packet IDs as filename stems (`"basic-channel"`),
111
+ aliases (`"dilla"`), or packet `id` values
112
+ (`"dub_techno__basic_channel"`). Surfaces ValueError as an
113
+ error-dict response (doesn't raise).
114
+
115
+ - NEW reference doc
116
+ `livepilot/skills/livepilot-creative-director/references/hybrid-compilation.md`
117
+ — canonical merge-rule table, output shape, interop notes,
118
+ guidance for handling the `warnings` list.
119
+
120
+ - Director SKILL.md Phase 1 — explicit guidance to call
121
+ `compile_hybrid_brief` when the user names 2+ references,
122
+ with a mandate to surface any `warnings` entries (don't
123
+ silently average disjoint tempos).
124
+
125
+ - Output exposes merged `avoid` also as `anti_patterns` alias
126
+ for drop-in compat with `check_brief_compliance` (v1.18.3).
127
+ Live interop test: Basic Channel × J Dilla hybrid correctly
128
+ flagged a Hi Gain boost via `check_brief_compliance`.
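The three main rule families above (UNION / INTERSECTION / WEIGHTED AVERAGE) compress to a few lines on two hypothetical mini-packets — the field values below are invented for illustration; the canonical implementation is `mcp_server/creative_director/hybrid.py`:

```python
def union(*lists):
    seen, out = set(), []
    for lst in lists:
        for x in lst:
            if x not in seen:
                seen.add(x)
                out.append(x)          # first-packet order preserved
    return out

def intersection(first, *rest):
    keep = set(first).intersection(*map(set, rest)) if rest else set(first)
    return [x for x in first if x in keep]

# Toy packets (invented values, not the real YAML content)
bc = {"avoid": ["hi-gain"], "deprioritize": ["rhythmic", "vocal"],
      "targets": {"space": 0.9, "rhythmic": 0.2}}
dilla = {"avoid": ["quantized grid"], "deprioritize": ["vocal"],
         "targets": {"space": 0.3, "rhythmic": 0.9}}
w = [0.5, 0.5]                         # default uniform weights

avoid = union(bc["avoid"], dilla["avoid"])                       # envelope of both
depri = intersection(bc["deprioritize"], dilla["deprioritize"])  # all must agree
targets = {k: round(w[0] * bc["targets"].get(k, 0.0)
               + w[1] * dilla["targets"].get(k, 0.0), 4)
           for k in sorted({*bc["targets"], *dilla["targets"]})}
```

Note how `rhythmic` survives: Basic Channel deprioritizes it, but Dilla wants it, so the intersection rule keeps only `vocal` deprioritized.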
129
+
130
+ 31 unit tests added: packet loading (stem / alias / id /
131
+ underscore-to-hyphen normalization / missing), input validation
132
+ (min 2 packets / missing packet / weights length mismatch),
133
+ UNION rules (avoid / sonic_identity / reach_for /
134
+ dimensions_in_scope), INTERSECTION rules (deprioritized
135
+ dimensions / `move_family_bias.deprioritize` /
136
+ `move_family_bias.favor` non-empty case / UNION fallback with
137
+ warning), WEIGHTED AVERAGE (default + custom weights), MAX rules
138
+ (protect / novelty_budget), tempo_hint (overlap intersection /
139
+ disjoint midpoint with warning), 3+ packet composition, output
140
+ metadata (`type` / `source_packets` / hybrid name /
141
+ `locked_dimensions=[]` / warnings list), and interoperability
142
+ (hybrid brief passed through `check_brief_compliance`).
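The input-validation cases above surface at the tool layer as an error dict rather than a raised exception. A minimal sketch of that wrapper convention — the function name and the stand-in success body are illustrative, not the shipped code:

```python
def compile_hybrid_brief_tool(packet_ids, weights=None):
    try:
        pid_list = [str(x) for x in (packet_ids or [])]
        if len(pid_list) < 2:                       # same checks the core performs
            raise ValueError("Hybrid requires at least 2 packets")
        if weights is not None and len(weights) != len(pid_list):
            raise ValueError("weights length must match packet_ids length")
        return {"type": "hybrid", "source_packets": pid_list}  # stand-in result
    except ValueError as exc:
        return {"error": str(exc)}                  # surfaced, never raised
```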
143
+
144
+ ### Live test coverage (13 scenarios)
145
+
146
+ Item B: BC × Dilla (disjoint tempos) · BC × Villalobos
147
+ (overlapping tempos, NO disjoint flag) · alias + spaced-name
148
+ resolution · invalid packet error · 3-packet hybrid
149
+ (BC + Dilla + Villalobos) · weighted average 75/25 · genre ×
150
+ artist (ambient × basinski, tempo=0 case) · full hybrid brief
151
+ → `check_brief_compliance` interop (quantize_clip flagged).
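The overlap vs. disjoint tempo outcomes exercised by the two tests above reduce to a short function in the two-range case (BPM numbers below are illustrative, not the packets' actual values; the shipped code also handles 3+ ranges):

```python
def merge_tempo(ranges):
    """NEAREST-OVERLAP sketch for (min, max) tempo tuples, two-range case."""
    overlap_lo = max(lo for lo, _ in ranges)
    overlap_hi = min(hi for _, hi in ranges)
    if overlap_lo <= overlap_hi:                      # ranges overlap: intersect
        return {"min": overlap_lo, "max": overlap_hi}
    gap_lo = min(hi for _, hi in ranges)              # disjoint: midpoint of the gap
    gap_hi = max(lo for lo, _ in ranges)
    mid = (gap_lo + gap_hi) / 2.0
    return {"min": mid - 2.5, "max": mid + 2.5, "disjoint": True}

merge_tempo([(120, 130), (125, 135)])   # overlapping: intersection, no flag
merge_tempo([(85, 95), (125, 130)])     # disjoint: gap midpoint + disjoint flag
```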
152
+
153
+ Item A: 3-branch experiment (all snapshots populated, ranking
154
+ produced) · 5-branch experiment (1.04s/branch amortized
155
+ overhead) · state preservation under 2 mutations on track 0
156
+ (Dub Chord) across 5-branch cycle · `discard_experiment` cleanup.
157
+
158
+ ### Known gaps deferred to v1.19.1
159
+
160
+ - `experiment.baseline_transport` populated internally but not
161
+ surfaced through `compare_experiments` response. 3-line fix
162
+ for operator visibility; not a correctness issue.
163
+ - `warnings` message rounds tempo midpoint to int display (128
164
+ BPM) while range returned is exact (125-130, centered 127.5).
165
+ Two rounding conventions. Cosmetic.
166
+ - `weights` in response show full float precision
167
+ (`0.3333333333333333`) instead of rounding to 4 dp like
168
+ `target_dimensions` already does. Cosmetic.
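Both cosmetic items reduce to one rounding call each — a sketch of the intended v1.19.1 behavior (variable names hypothetical):

```python
# weights: round for display the way target_dimensions already does
weights = [1 / 3, 1 / 3, 1 / 3]
display_weights = [round(w, 4) for w in weights]   # 0.3333, not 0.3333333333333333

# tempo warning vs. returned range: today, two rounding conventions
lo, hi = 125.0, 130.0
midpoint = (lo + hi) / 2.0        # 127.5 exact in the returned range
warned_bpm = round(midpoint)      # 128 in the warning string
```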
169
+
170
+ ### Still open for v1.20 (Item C from the plan)
171
+
172
+ - Route director's Phase 6 execution through `apply_semantic_move`
173
+ / `create_experiment + commit_experiment` so the action ledger
174
+ populates automatically and anti-repetition becomes reliable.
175
+ Doc-level fix shipped in v1.18.1; architectural fix deferred
176
+ to v1.20 per plan §5 blast-radius rationale. Requires 5-10
177
+ new semantic_moves to cover current Phase 6 patterns
178
+ (return-chain builds, multi-param device presets, chord
179
+ source loading, send routing, etc.).
180
+
181
+ Test suite: 2854 pass, 1 skipped (from 2792 pre-v1.19). Zero
182
+ regressions. `sync_metadata --check` clean.
183
+
3
184
  ## 1.18.3 — Brief compliance runtime check (#7 + #8) (April 24 2026)
4
185
 
5
186
  Third v1.18.x patch. Bundles two Known Issues items (#7 + #8) that
package/README.md CHANGED
@@ -17,7 +17,7 @@
17
17
 
18
18
  <p align="center">
19
19
  An agentic production system for Ableton Live 12.<br>
20
- 428 tools. 53 domains. Device atlas. Plan-aware Splice integration. Auto-composition. Spectral perception. Technique memory. Drum-rack pad builder. Live dead-device detection.
20
+ 429 tools. 53 domains. Device atlas. Plan-aware Splice integration. Auto-composition. Spectral perception. Technique memory. Drum-rack pad builder. Live dead-device detection.
21
21
  </p>
22
22
 
23
23
  <br>
@@ -80,7 +80,7 @@ Most MCP servers are tool collections — they execute commands. LivePilot is an
80
80
  │ └─────────────────┼──────────────────┘ │
81
81
  │ ▼ │
82
82
  │ ┌─────────────────┐ │
83
- │ │ 428 MCP Tools │ │
83
+ │ │ 429 MCP Tools │ │
84
84
  │ │ 53 domains │ │
85
85
  │ └────────┬────────┘ │
86
86
  │ │ │
@@ -121,7 +121,7 @@ Most MCP servers are tool collections — they execute commands. LivePilot is an
121
121
 
122
122
  ## The Intelligence Layer
123
123
 
124
- 12 engines sit on top of the 428 tools. They give the AI musical judgment, not just musical execution.
124
+ 12 engines sit on top of the 429 tools. They give the AI musical judgment, not just musical execution.
125
125
 
126
126
  ### SongBrain — What the Song Is
127
127
 
@@ -173,7 +173,7 @@ Every engine follows: **measure before → act → measure after → compare**.
173
173
 
174
174
  ## Tools
175
175
 
176
- 428 tools across 53 domains. Highlights below — [full catalog here](docs/manual/tool-catalog.md).
176
+ 429 tools across 53 domains. Highlights below — [full catalog here](docs/manual/tool-catalog.md).
177
177
 
178
178
  <br>
179
179
 
@@ -362,7 +362,7 @@ The V2 intelligence layer. These tools analyze, diagnose, plan, evaluate, and le
362
362
  | Creative Constraints | 5 | constraint activation, reference-inspired variants |
363
363
  | Preview Studio | 5 | variant creation, preview rendering, comparison, commit |
364
364
 
365
- > **[View all 428 tools →](docs/manual/tool-catalog.md)**
365
+ > **[View all 429 tools →](docs/manual/tool-catalog.md)**
366
366
 
367
367
  <br>
368
368
 
@@ -589,7 +589,7 @@ See [CONTRIBUTING.md](CONTRIBUTING.md) for architecture details, code guidelines
589
589
 
590
590
  | Document | What's inside |
591
591
  |----------|---------------|
592
- | [Manual](docs/manual/index.md) | Complete reference: architecture, all 428 tools, workflows |
592
+ | [Manual](docs/manual/index.md) | Complete reference: architecture, all 429 tools, workflows |
593
593
  | [Intelligence Layer](docs/manual/intelligence.md) | How the 12 engines connect — conductor, moves, preview, evaluation |
594
594
  | [Device Atlas](docs/manual/device-atlas.md) | 1305 devices indexed — search, suggest, chain building |
595
595
  | [Samples & Slicing](docs/manual/samples.md) | 3-source search, fitness critics, slice workflows |
@@ -1,2 +1,2 @@
1
1
  """LivePilot MCP Server — bridges MCP protocol to Ableton Live."""
2
- __version__ = "1.18.3"
2
+ __version__ = "1.19.0"
@@ -0,0 +1,429 @@
1
+ """Hybrid concept packet compilation (v1.19 Item B).
2
+
3
+ When the user says "Basic Channel meets Dilla swing" or
4
+ "Villalobos but sparse like Gas", the director needs to merge
5
+ two (or more) concept packets into a single brief. Pre-v1.19
6
+ this was LLM ad-hoc reasoning with no guarantees about
7
+ contradiction handling.
8
+
9
+ ``compile_hybrid_brief(packet_ids, weights=None)`` loads the
10
+ named packets from
11
+ ``livepilot/skills/livepilot-core/references/concepts/`` and
12
+ merges them per the rules in
13
+ ``docs/plans/v1.19-structural-plan.md §3``.
14
+
15
+ Design invariants:
16
+
17
+ 1. **UNION** the descriptive fields (sonic_identity, avoid,
18
+ reach_for.*, *_idioms) — hybrids describe the envelope of
19
+ BOTH sources, not the intersection.
20
+ 2. **INTERSECTION** the deprioritization fields
21
+ (dimensions_deprioritized, move_family_bias.deprioritize) —
22
+ a hybrid only deprioritizes something if BOTH sources agree
23
+ it should be deprioritized. Otherwise the other packet is
24
+ asking for it and the hybrid must honor that.
25
+ 3. **INTERSECTION (with UNION fallback + warning)** for
26
+ move_family_bias.favor — hybrids focus where both packets
27
+ agree when possible; when they don't overlap at all, fall
28
+ back to UNION but warn (the hybrid spans more families
29
+ than either source intends).
30
+ 4. **MAX** for stricter-wins fields (protect floors,
31
+ novelty_budget_default).
32
+ 5. **WEIGHTED AVERAGE** for continuous blends
33
+ (target_dimensions weights).
34
+ 6. **NEAREST-OVERLAP** for tempo_hint — intersect when ranges
35
+ overlap; warn and use midpoint when they don't.
36
+ 7. **Surface ambiguity** — all warnings go on the ``warnings``
37
+ list so the caller (director) can read them back to the
38
+ user.
39
+
40
+ Output is a dict that is structurally compatible with
41
+ :func:`mcp_server.creative_director.compliance.check_brief_compliance`:
42
+ the merged ``avoid`` list is also exposed as ``anti_patterns``,
43
+ and ``locked_dimensions`` defaults to ``[]`` (hybrids don't lock
44
+ dimensions by default — that's a per-turn choice).
45
+ """
46
+
47
+ from __future__ import annotations
48
+
49
+ import logging
50
+ import pathlib
51
+ from typing import Iterable, Optional
52
+
53
+ import yaml
54
+
55
+ logger = logging.getLogger(__name__)
56
+
57
+
58
+ # Resolve the concepts root relative to this file. Layout:
59
+ # mcp_server/creative_director/hybrid.py
60
+ # livepilot/skills/livepilot-core/references/concepts/
61
+ # Three parents up from this file → repo root.
62
+ _REPO_ROOT = pathlib.Path(__file__).resolve().parents[2]
63
+ _CONCEPTS_ROOT = (
64
+ _REPO_ROOT / "livepilot" / "skills" / "livepilot-core"
65
+ / "references" / "concepts"
66
+ )
67
+
68
+
69
+ # ── Packet loader ────────────────────────────────────────────────────────────
70
+
71
+
72
+ def _normalize(s: str) -> str:
73
+ """Lowercase, hyphenate whitespace and underscores for lookup."""
74
+ return s.strip().lower().replace("_", "-").replace(" ", "-")
75
+
76
+
77
+ def load_packet(packet_id: str) -> Optional[dict]:
78
+ """Load a concept packet by filename stem, alias, or packet.id.
79
+
80
+ Resolution order (first hit wins):
81
+ 1. Normalize the given id (lowercase, underscores → hyphens).
82
+ 2. Try ``artists/<norm>.yaml`` then ``genres/<norm>.yaml``.
83
+ 3. If still not found, scan all packets and match on ``id``
84
+ or any alias (normalized).
85
+ 4. Return None on miss.
86
+ """
87
+ norm = _normalize(packet_id)
88
+
89
+ for subdir in ("artists", "genres"):
90
+ candidate = _CONCEPTS_ROOT / subdir / f"{norm}.yaml"
91
+ if candidate.exists():
92
+ try:
93
+ return yaml.safe_load(candidate.read_text())
94
+ except Exception as exc:
95
+ logger.debug("load_packet parse failed for %s: %s", candidate, exc)
96
+ return None
97
+
98
+ # Fallback: scan for alias / id match
99
+ for subdir in ("artists", "genres"):
100
+ subpath = _CONCEPTS_ROOT / subdir
101
+ if not subpath.is_dir():
102
+ continue
103
+ for p in sorted(subpath.glob("*.yaml")):
104
+ try:
105
+ d = yaml.safe_load(p.read_text())
106
+ except Exception as exc:
107
+ logger.debug("load_packet scan-parse failed for %s: %s", p, exc)
108
+ continue
109
+ if not isinstance(d, dict):
110
+ continue
111
+ if d.get("id") == packet_id:
112
+ return d
113
+ aliases = [_normalize(a) for a in (d.get("aliases") or []) if isinstance(a, str)]
114
+ if norm in aliases:
115
+ return d
116
+
117
+ return None
118
+
119
+
120
+ # ── Merge helpers ────────────────────────────────────────────────────────────
121
+
122
+
123
+ def _union_preserve_order(lists: Iterable[Iterable[str]]) -> list[str]:
124
+ seen: set = set()
125
+ out: list[str] = []
126
+ for lst in lists:
127
+ for item in (lst or []):
128
+ if item not in seen:
129
+ seen.add(item)
130
+ out.append(item)
131
+ return out
132
+
133
+
134
+ def _intersection_preserve_order(
135
+ lists: list[list[str]], reference_order: list[str],
136
+ ) -> list[str]:
137
+ """Intersect across all lists; ordering follows ``reference_order``
138
+ (typically the first packet's list)."""
139
+ if not lists:
140
+ return []
141
+ sets = [set(lst or []) for lst in lists]
142
+ intersection = sets[0]
143
+ for s in sets[1:]:
144
+ intersection = intersection & s
145
+ return [item for item in (reference_order or []) if item in intersection]
146
+
147
+
148
+ # ── Core compile function (packet-level, no disk I/O) ───────────────────────
149
+
150
+
151
+ def _compile_from_packets(
152
+ packets: list[dict],
153
+ packet_ids: list[str],
154
+ weights: Optional[list[float]] = None,
155
+ ) -> dict:
156
+ """Compile a hybrid brief from already-loaded packet dicts.
157
+
158
+ Public callers should use :func:`compile_hybrid_brief`. This split
159
+ exists so tests can inject synthetic packets (e.g., to force an
160
+ empty favor-intersection and exercise the UNION fallback).
161
+ """
162
+ if len(packets) < 2:
163
+ raise ValueError("Hybrid requires at least 2 packets")
164
+ if weights is not None and len(weights) != len(packets):
165
+ raise ValueError(
166
+ f"weights length ({len(weights)}) must match packets "
167
+ f"length ({len(packets)})"
168
+ )
169
+
170
+ if weights is None:
171
+ weights = [1.0 / len(packets)] * len(packets)
172
+ else:
173
+ total = sum(weights) or 1.0
174
+ weights = [w / total for w in weights]
175
+
176
+ warnings: list[str] = []
177
+
178
+ # ── UNION fields ─────────────────────────────────────────────────────
179
+ sonic_identity = _union_preserve_order(
180
+ p.get("sonic_identity") or [] for p in packets
181
+ )
182
+ avoid = _union_preserve_order(p.get("avoid") or [] for p in packets)
183
+ rhythm_idioms = _union_preserve_order(p.get("rhythm_idioms") or [] for p in packets)
184
+ harmony_idioms = _union_preserve_order(p.get("harmony_idioms") or [] for p in packets)
185
+ arrangement_idioms = _union_preserve_order(
186
+ p.get("arrangement_idioms") or [] for p in packets
187
+ )
188
+ texture_idioms = _union_preserve_order(p.get("texture_idioms") or [] for p in packets)
189
+ sample_roles = _union_preserve_order(p.get("sample_roles") or [] for p in packets)
190
+ dimensions_in_scope = _union_preserve_order(
191
+ p.get("dimensions_in_scope") or [] for p in packets
192
+ )
193
+
194
+ reach_for = {
195
+ "instruments": _union_preserve_order(
196
+ (p.get("reach_for") or {}).get("instruments") or [] for p in packets
197
+ ),
198
+ "effects": _union_preserve_order(
199
+ (p.get("reach_for") or {}).get("effects") or [] for p in packets
200
+ ),
201
+ "packs": _union_preserve_order(
202
+ (p.get("reach_for") or {}).get("packs") or [] for p in packets
203
+ ),
204
+ "utilities": _union_preserve_order(
205
+ (p.get("reach_for") or {}).get("utilities") or [] for p in packets
206
+ ),
207
+ }
208
+
209
+ # ── INTERSECTION fields (safety defaults — be cautious) ─────────────
210
+ # deprioritize only if ALL packets agree → a hybrid with one packet
211
+ # asking for rhythmic must NOT deprioritize rhythmic just because the
212
+ # other packet's aesthetic does.
213
+ dimensions_deprioritized = _intersection_preserve_order(
214
+ [p.get("dimensions_deprioritized") or [] for p in packets],
215
+ packets[0].get("dimensions_deprioritized") or [],
216
+ )
217
+
218
+ deprioritize = _intersection_preserve_order(
219
+ [(p.get("move_family_bias") or {}).get("deprioritize") or []
220
+ for p in packets],
221
+ (packets[0].get("move_family_bias") or {}).get("deprioritize") or [],
222
+ )
223
+
224
+ # ── favor: INTERSECTION preferred, UNION fallback with warning ──────
225
+ favor_lists = [
226
+ (p.get("move_family_bias") or {}).get("favor") or [] for p in packets
227
+ ]
228
+ favor_intersection = _intersection_preserve_order(
229
+ favor_lists, favor_lists[0],
230
+ )
231
+ if favor_intersection:
232
+ favor = favor_intersection
233
+ else:
234
+ favor = _union_preserve_order(favor_lists)
235
+ warnings.append(
236
+ "move_family_bias.favor intersection was empty — falling back "
237
+ "to UNION. Hybrid plans may span more families than either "
238
+ "source packet intends; prioritize explicit user framing."
239
+ )
240
+
241
+ # ── Numeric rules ───────────────────────────────────────────────────
242
+ # target_dimensions: WEIGHTED AVERAGE
243
+ all_dim_keys: set = set()
244
+ for p in packets:
245
+ td = (p.get("evaluation_bias") or {}).get("target_dimensions") or {}
246
+ all_dim_keys.update(td.keys())
247
+ target_dimensions: dict[str, float] = {}
248
+ for dim in sorted(all_dim_keys):
249
+ accum = 0.0
250
+ for w, p in zip(weights, packets):
251
+ td = (p.get("evaluation_bias") or {}).get("target_dimensions") or {}
252
+ val = td.get(dim, 0.0)
253
+ try:
254
+ accum += float(w) * float(val)
255
+ except (TypeError, ValueError):
256
+ continue
257
+ if accum > 0:
258
+ target_dimensions[dim] = round(accum, 4)
259
+
260
+ # protect: MAX per dimension (stricter wins)
261
+ all_protect_keys: set = set()
262
+ for p in packets:
263
+ pr = (p.get("evaluation_bias") or {}).get("protect") or {}
264
+ all_protect_keys.update(pr.keys())
265
+ protect: dict[str, float] = {}
266
+ for dim in sorted(all_protect_keys):
267
+ values = []
268
+ for p in packets:
269
+ pr = (p.get("evaluation_bias") or {}).get("protect") or {}
270
+ val = pr.get(dim, 0.0)
271
+ try:
272
+ values.append(float(val))
273
+ except (TypeError, ValueError):
274
+ continue
275
+ if values:
276
+ protect[dim] = max(values)
277
+
278
+ # novelty_budget_default: MAX (hybrids lean exploratory)
279
+ novelty_values: list[float] = []
280
+ for p in packets:
281
+ nb = p.get("novelty_budget_default")
282
+ if nb is None:
283
+ continue
284
+ try:
285
+ novelty_values.append(float(nb))
286
+ except (TypeError, ValueError):
287
+ continue
288
+ novelty_budget = max(novelty_values) if novelty_values else 0.5
289
+
290
+ # ── tempo_hint: NEAREST-OVERLAP ─────────────────────────────────────
291
+ tempo_ranges: list[tuple[float, float, str]] = []
292
+ for p in packets:
293
+ th = p.get("tempo_hint") or {}
294
+ lo, hi = th.get("min"), th.get("max")
295
+ if lo is None or hi is None:
296
+ continue
297
+ try:
298
+ tempo_ranges.append((float(lo), float(hi), p.get("name", "")))
299
+ except (TypeError, ValueError):
300
+ continue
301
+
302
+ tempo_hint: Optional[dict]
303
+ if not tempo_ranges:
304
+ tempo_hint = None
305
+ elif len(tempo_ranges) == 1:
306
+ lo, hi, _ = tempo_ranges[0]
307
+ tempo_hint = {"min": lo, "max": hi, "time_signature": "4/4"}
308
+ else:
309
+ overlap_lo = max(r[0] for r in tempo_ranges)
310
+ overlap_hi = min(r[1] for r in tempo_ranges)
311
+ if overlap_lo <= overlap_hi:
312
+ tempo_hint = {
313
+ "min": overlap_lo, "max": overlap_hi,
314
+ "time_signature": "4/4",
315
+ }
316
+ else:
317
+ # Disjoint ranges — compute gap midpoint, surface warning.
318
+ # The gap runs from the highest range-max among the lower
319
+ # ranges up to the topmost range-min. For 2 ranges this is
320
+ # (min of the his, max of the los). For 3+ ranges this still
321
+ # reads as "the gap in the middle of the sorted range set".
322
+ sorted_ranges = sorted(tempo_ranges, key=lambda r: r[0])
323
+ gap_lo = max(r[1] for r in sorted_ranges if r[0] < sorted_ranges[-1][0])
324
+ gap_hi = sorted_ranges[-1][0]
325
+ midpoint = (gap_lo + gap_hi) / 2.0
326
+ tempo_hint = {
327
+ "min": midpoint - 2.5,
328
+ "max": midpoint + 2.5,
329
+ "time_signature": "4/4",
330
+ "disjoint": True,
331
+ }
332
+ range_desc = "; ".join(
333
+ f"{name or 'packet'} {lo:.0f}-{hi:.0f}"
334
+ for lo, hi, name in tempo_ranges
335
+ )
336
+ warnings.append(
337
+ f"Tempo ranges don't overlap ({range_desc}) — defaulting "
338
+ f"to midpoint {midpoint:.0f} BPM. Specify which anchor "
339
+ f"you want or pick a single packet."
340
+ )
341
+
342
+ # ── Output ───────────────────────────────────────────────────────────
343
+ names = [p.get("name") or pid for p, pid in zip(packets, packet_ids)]
344
+ hybrid_name = " × ".join(names)
345
+
346
+ return {
347
+ "type": "hybrid",
348
+ "source_packets": list(packet_ids),
349
+ "weights": list(weights),
350
+ "name": hybrid_name,
351
+ "sonic_identity": sonic_identity,
352
+ "reach_for": reach_for,
353
+ "avoid": avoid,
354
+ # Alias for compatibility with check_brief_compliance, which reads
355
+ # "anti_patterns". The semantics are identical — "avoid" at the
356
+ # packet layer, "anti_patterns" at the brief layer.
357
+ "anti_patterns": list(avoid),
358
+ "rhythm_idioms": rhythm_idioms,
359
+ "harmony_idioms": harmony_idioms,
360
+ "arrangement_idioms": arrangement_idioms,
361
+ "texture_idioms": texture_idioms,
362
+ "sample_roles": sample_roles,
363
+ "evaluation_bias": {
364
+ "target_dimensions": target_dimensions,
365
+ "protect": protect,
366
+ },
367
+ "move_family_bias": {
368
+ "favor": favor,
369
+ "deprioritize": deprioritize,
370
+ },
371
+ "dimensions_in_scope": dimensions_in_scope,
372
+ "dimensions_deprioritized": dimensions_deprioritized,
373
+ # Hybrids do not lock dimensions by default — locking is a per-turn
374
+ # user choice (e.g., "don't touch structure"). Included here for
375
+ # compat with check_brief_compliance which reads this field.
376
+ "locked_dimensions": [],
377
+ "novelty_budget_default": novelty_budget,
378
+ "tempo_hint": tempo_hint,
379
+ "warnings": warnings,
380
+ }
381
+
382
+
383
+ # ── Public API ───────────────────────────────────────────────────────────────
384
+
385
+
386
+ def compile_hybrid_brief(
387
+ packet_ids: list[str],
388
+ weights: Optional[list[float]] = None,
389
+ ) -> dict:
390
+ """Merge N concept packets into a single hybrid brief.
391
+
392
+ packet_ids: filename stems (``'basic-channel'``), aliases
393
+ (``'dilla'``), or packet ``id`` values (``'dub_techno__basic_channel'``).
394
+ At least 2 required.
395
+ weights: optional per-packet weighting for the target_dimensions
396
+ weighted-average step. If None, uniform weights are used.
397
+ Must match ``packet_ids`` length when provided. Normalized to
398
+ sum to 1.0 internally.
399
+
400
+ Raises:
401
+ ValueError: on fewer than 2 packets, an unresolvable packet id,
402
+ or a weights-length mismatch.
403
+
404
+ Returns:
405
+ A dict structurally compatible with the packet schema plus:
406
+ - ``type``: always ``"hybrid"``
407
+ - ``source_packets``: ``packet_ids`` echoed back
408
+ - ``weights``: normalized weights
409
+ - ``name``: ``"Packet A × Packet B"`` for user-facing display
410
+ - ``anti_patterns``: alias of ``avoid`` (compat with
411
+ ``check_brief_compliance``)
412
+ - ``locked_dimensions``: empty by default (hybrids don't lock)
413
+ - ``warnings``: list of human-readable ambiguity notes (tempo
414
+ disjunction, empty favor intersection fallback, etc.). Empty
415
+ when all merge rules resolved cleanly.
416
+ """
417
+ packets: list[dict] = []
418
+ missing: list[str] = []
419
+ for pid in packet_ids:
420
+ p = load_packet(pid)
421
+ if p is None:
422
+ missing.append(pid)
423
+ else:
424
+ packets.append(p)
425
+
426
+ if missing:
427
+ raise ValueError(f"Packets not found: {missing}")
428
+
429
+ return _compile_from_packets(packets, list(packet_ids), weights=weights)
@@ -18,6 +18,7 @@ from fastmcp import Context
18
18
 
19
19
  from ..server import mcp
20
20
  from .compliance import check_brief_compliance as _check_brief_compliance
21
+ from .hybrid import compile_hybrid_brief as _compile_hybrid_brief
21
22
 
22
23
 
23
24
  @mcp.tool()
@@ -70,3 +71,65 @@ def check_brief_compliance(
70
71
  tool_args=tool_args or {},
71
72
  )
72
73
  return result
74
+
75
+
76
+ @mcp.tool()
77
+ def compile_hybrid_brief(
78
+ ctx: Context,
79
+ packet_ids: list,
80
+ weights: Optional[list] = None,
81
+ ) -> dict:
82
+ """Merge 2+ concept packets into a single hybrid brief (v1.19 Item B).
83
+
84
+ When the user says "Basic Channel meets Dilla swing" or
85
+ "Villalobos but sparse like Gas", the director needs an explicit
86
+    merge algorithm — not LLM ad-hoc reasoning. This tool loads the
+    named concept packets from
+    ``livepilot/skills/livepilot-core/references/concepts/`` and merges
+    them per the rules in
+    ``livepilot/skills/livepilot-creative-director/references/hybrid-compilation.md``.
+
+    Merge rule summary:
+    - ``sonic_identity`` / ``avoid`` / ``reach_for.*`` / ``*_idioms``:
+      UNION, deduplicated, first-packet order preserved.
+    - ``dimensions_deprioritized`` and
+      ``move_family_bias.deprioritize``: INTERSECTION — only
+      deprioritize if ALL source packets do. Safer default for
+      hybrids where one packet may want what the other ignores.
+    - ``move_family_bias.favor``: INTERSECTION when non-empty
+      (hybrid focuses where both agree); UNION fallback otherwise
+      with a warning.
+    - ``evaluation_bias.target_dimensions``: WEIGHTED AVERAGE
+      (default uniform weights).
+    - ``evaluation_bias.protect``: MAX per dimension — stricter
+      floor wins.
+    - ``novelty_budget_default``: MAX (hybrids skew exploratory).
+    - ``tempo_hint``: NEAREST-OVERLAP — intersect overlapping
+      ranges, or warn + midpoint on disjoint ranges.
+
+    Args:
+        packet_ids: list of ≥2 packet IDs. Accepts filename stems
+            (``"basic-channel"``), aliases (``"dilla"``), or packet ``id``
+            values (``"dub_techno__basic_channel"``).
+        weights: optional per-packet weights for the
+            ``target_dimensions`` average. Must match ``packet_ids``
+            length. Normalized internally; defaults to uniform.
+
+    Returns:
+        A brief dict structurally compatible with
+        ``check_brief_compliance``. Exposes the merged ``avoid`` list
+        both as ``avoid`` (packet semantic) and ``anti_patterns``
+        (brief semantic). Includes a ``warnings`` list surfacing any
+        ambiguity the merge algorithm couldn't resolve cleanly.
+
+    Raises:
+        ValueError (surfaced as an error-dict response) on fewer than
+        2 packets, an unresolvable packet id, or a weights-length
+        mismatch.
+    """
+    try:
+        pid_list = [str(x) for x in (packet_ids or [])]
+        w_list = [float(x) for x in weights] if weights else None
+        return _compile_hybrid_brief(packet_ids=pid_list, weights=w_list)
+    except ValueError as exc:
+        return {"error": str(exc)}
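The UNION / INTERSECTION / WEIGHTED AVERAGE rules summarized above can be sketched in isolation. This is a hypothetical, simplified sketch — the real merge lives in `_compile_hybrid_brief` and handles many more fields and warnings; the helper names and sample packet dicts here are illustrative only:

```python
def merge_union(lists):
    """UNION, deduplicated, first-packet order preserved."""
    seen, out = set(), []
    for lst in lists:
        for item in lst:
            if item not in seen:
                seen.add(item)
                out.append(item)
    return out


def merge_intersection(lists):
    """INTERSECTION — keep only items present in ALL source packets."""
    if not lists:
        return []
    common = set(lists[0]).intersection(*map(set, lists[1:]))
    return [item for item in lists[0] if item in common]


def merge_weighted_average(dicts, weights=None):
    """WEIGHTED AVERAGE over numeric dimension maps; uniform weights by default."""
    if weights is None:
        weights = [1.0 / len(dicts)] * len(dicts)
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize, as the tool does
    keys = {k for d in dicts for k in d}
    return {k: sum(w * d.get(k, 0.0) for w, d in zip(weights, dicts)) for k in keys}


# Illustrative packet fragments (not real packet files).
packets = [
    {"avoid": ["busy hats", "bright leads"], "deprioritize": ["harmony", "melody"]},
    {"avoid": ["bright leads", "vocal chops"], "deprioritize": ["melody"]},
]
print(merge_union([p["avoid"] for p in packets]))
# → ['busy hats', 'bright leads', 'vocal chops']
print(merge_intersection([p["deprioritize"] for p in packets]))
# → ['melody']
```

The INTERSECTION helper iterates the first packet's list rather than the set, which is what preserves first-packet order in the output.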
@@ -0,0 +1,138 @@
+"""Experiment baseline transport state — capture once, restore between branches.
+
+v1.19 Item A: running N branches sequentially produces inconsistent
+``before_snapshot`` values because playback position, mute/solo/arm, and
+playing-clip state drift across branches. ``undo()`` reverts command
+history but doesn't guarantee transport state is identical at the start
+of each branch's capture window.
+
+Flow in ``run_experiment``:
+
+1. Before the first branch: ``capture_baseline(ableton)`` and stash on
+   the :class:`ExperimentSet`.
+2. Between branches (before capturing the next before_snapshot): call
+   ``restore_baseline(ableton, baseline)`` to stop transport, reset
+   mute/solo/arm, and pause briefly for meters to settle.
+
+The module is deliberately thin — no LOM subscription, no state
+monitoring. Just a snapshot dataclass + two functions.
+"""
+
+from __future__ import annotations
+
+import logging
+import time
+from dataclasses import dataclass, field
+from typing import Optional
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class BaselineTransportState:
+    """Transport + per-track state captured before the first experiment branch.
+
+    Kept deliberately shallow: we don't try to freeze automation or scene
+    state. Those are out of scope (plan §2 "What NOT to do" — automation
+    drift is an accepted limitation).
+    """
+
+    is_playing: bool = False
+    song_time: float = 0.0
+    track_states: list[dict] = field(default_factory=list)
+    captured_at_ms: int = 0
+
+    def to_dict(self) -> dict:
+        return {
+            "is_playing": self.is_playing,
+            "song_time": self.song_time,
+            "track_states": list(self.track_states),
+            "captured_at_ms": self.captured_at_ms,
+        }
+
+
+def capture_baseline(ableton) -> BaselineTransportState:
+    """Capture current transport + per-track state.
+
+    Uses ``get_session_info`` (single round-trip for all fields we need).
+    Returns a frozen-in-time snapshot; subsequent state drift doesn't
+    affect it.
+    """
+    info = ableton.send_command("get_session_info")
+    if not isinstance(info, dict):
+        info = {}
+
+    tracks = info.get("tracks") or []
+    track_states: list[dict] = []
+    for i, t in enumerate(tracks):
+        if not isinstance(t, dict):
+            continue
+        track_states.append({
+            "index": int(t.get("index", i)),
+            "mute": bool(t.get("mute", False)),
+            "solo": bool(t.get("solo", False)),
+            "arm": bool(t.get("arm", False)),
+        })
+
+    return BaselineTransportState(
+        is_playing=bool(info.get("is_playing", False)),
+        song_time=float(info.get("current_song_time", 0.0) or 0.0),
+        track_states=track_states,
+        captured_at_ms=int(time.time() * 1000),
+    )
+
+
+def restore_baseline(
+    ableton,
+    baseline: BaselineTransportState,
+    stabilize_ms: int = 300,
+) -> None:
+    """Restore transport + per-track state to the captured baseline.
+
+    Sequence:
+    1. ``stop_playback`` (halt transport — also stops any live clips)
+    2. For each track: ``set_track_mute`` / ``set_track_solo`` /
+       ``set_track_arm`` (best-effort; per-track failure is logged,
+       not fatal — a single flaky track should never abort restore
+       for the rest).
+    3. Sleep ``stabilize_ms`` milliseconds so meters settle before the
+       next ``before_snapshot`` reads them. Pass ``0`` in tests.
+
+    We deliberately do NOT seek the transport to ``baseline.song_time``.
+    Stopping the transport is enough — re-seeking a stopped transport
+    is equivalent to not moving, and on a playing transport it
+    introduces its own stutter artefact. If a future branch needs timeline
+    position consistency, add a ``jump_to_time`` call here.
+
+    Return-track arms are skipped — ``set_track_arm`` on a negative index
+    raises (return tracks aren't armable in Live).
+    """
+    try:
+        ableton.send_command("stop_playback")
+    except Exception as exc:
+        logger.debug("restore_baseline stop_playback failed: %s", exc)
+
+    for ts in baseline.track_states:
+        idx = ts.get("index", -1)
+        try:
+            ableton.send_command("set_track_mute", {
+                "track_index": idx, "mute": bool(ts.get("mute", False)),
+            })
+        except Exception as exc:
+            logger.debug("restore_baseline set_track_mute(%s) failed: %s", idx, exc)
+        try:
+            ableton.send_command("set_track_solo", {
+                "track_index": idx, "solo": bool(ts.get("solo", False)),
+            })
+        except Exception as exc:
+            logger.debug("restore_baseline set_track_solo(%s) failed: %s", idx, exc)
+        if idx >= 0:
+            try:
+                ableton.send_command("set_track_arm", {
+                    "track_index": idx, "arm": bool(ts.get("arm", False)),
+                })
+            except Exception as exc:
+                logger.debug("restore_baseline set_track_arm(%s) failed: %s", idx, exc)
+
+    if stabilize_ms > 0:
+        time.sleep(stabilize_ms / 1000.0)
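The best-effort per-track loop above is the part worth exercising: one flaky track must not abort restore for the rest. A minimal stand-alone sketch of that pattern against a fake client — `FakeAbleton` and `restore_mutes` are illustrative stand-ins, not package code:

```python
import logging

logger = logging.getLogger("sketch")


class FakeAbleton:
    """Records commands; track index 1 always raises, simulating a flaky track."""

    def __init__(self):
        self.commands = []

    def send_command(self, name, params=None):
        if params and params.get("track_index") == 1:
            raise RuntimeError("flaky track")
        self.commands.append((name, params))
        return {}


def restore_mutes(ableton, track_states):
    # Mirrors restore_baseline's per-track try/except: log the failure,
    # keep going for the remaining tracks.
    for ts in track_states:
        try:
            ableton.send_command("set_track_mute", {
                "track_index": ts["index"], "mute": ts["mute"],
            })
        except Exception as exc:
            logger.debug("set_track_mute(%s) failed: %s", ts["index"], exc)


client = FakeAbleton()
restore_mutes(client, [
    {"index": 0, "mute": False},
    {"index": 1, "mute": True},   # raises — logged, skipped
    {"index": 2, "mute": False},
])
print([p["track_index"] for _, p in client.commands])
# → [0, 2]
```

Tracks 0 and 2 are restored even though track 1 raised, which is exactly the "per-track failure is logged, not fatal" contract in the docstring.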
@@ -445,3 +445,23 @@ def discard_experiment(experiment_id: str) -> dict:
     exp.status = "discarded"
 
     return {"discarded": True, "experiment_id": experiment_id}
+
+
+# ── v1.19 Item A — between-branch baseline restore ───────────────────────────
+
+
+def prepare_for_next_branch(ableton, baseline, stabilize_ms: int = 300) -> None:
+    """Restore baseline transport state before capturing the next branch.
+
+    Called by ``run_experiment`` between branches so each branch's
+    ``before_snapshot`` reads from identical starting conditions. No-op
+    when ``baseline`` is None (first branch — the baseline was just
+    captured, no drift to correct).
+
+    Thin wrapper around ``baseline.restore_baseline``; exists so the
+    MCP tool body stays small and the wiring is testable in isolation.
+    """
+    if baseline is None:
+        return
+    from .baseline import restore_baseline
+    restore_baseline(ableton, baseline, stabilize_ms=stabilize_ms)
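The None-is-a-no-op contract pairs with a counter in the caller: restore before the second and later pending branches, never before the first. A minimal sketch of that wiring with illustrative names (`fake_restore`, the branch dicts) rather than real engine code:

```python
restored_before = []


def fake_restore(branch_name):
    # Stand-in for prepare_for_next_branch(ableton, baseline, ...).
    restored_before.append(branch_name)


branches = [
    {"name": "a", "status": "pending"},
    {"name": "b", "status": "done"},     # skipped — not pending
    {"name": "c", "status": "pending"},
    {"name": "d", "status": "pending"},
]

pending_seen = 0
for branch in branches:
    if branch["status"] != "pending":
        continue
    if pending_seen > 0:  # between branches, not before the first
        fake_restore(branch["name"])
    pending_seen += 1

print(restored_before)
# → ['c', 'd']
```

Note that skipped branches don't advance the counter, so the restore fires exactly between executed branches.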
@@ -19,6 +19,7 @@ from dataclasses import dataclass, field
 from typing import Any, Optional
 
 from ..branches import BranchSeed
+from .baseline import BaselineTransportState
 
 
 @dataclass
@@ -249,6 +250,10 @@ class ExperimentSet:
     status: str = "open"  # open | evaluated | committed | discarded
     winner_branch_id: Optional[str] = None
     created_at_ms: int = 0
+    # v1.19 Item A — transport state captured before the first branch runs
+    # and used to restore identical starting conditions between branches.
+    # See mcp_server.experiment.baseline for the snapshot / restore pair.
+    baseline_transport: Optional[BaselineTransportState] = None
 
     @property
     def branch_count(self) -> int:
@@ -290,7 +295,7 @@ class ExperimentSet:
         return _branch_rank_key(branch)
 
     def to_dict(self) -> dict:
-        return {
+        d = {
             "experiment_id": self.experiment_id,
             "request_text": self.request_text,
             "status": self.status,
@@ -302,3 +307,6 @@ class ExperimentSet:
                 for b in self.ranked_branches()
             ],
         }
+        if self.baseline_transport is not None:
+            d["baseline_transport"] = self.baseline_transport.to_dict()
+        return d
@@ -343,11 +343,33 @@ async def run_experiment(
     # Import compiler
     from ..semantic_moves import registry, compiler
 
+    # v1.19 Item A — capture baseline transport state BEFORE any branch runs.
+    # Each branch's before_snapshot is only comparable if it starts from the
+    # same reference state. Without this, live testing (v1.18.0 Test 8) showed
+    # 3 branches produce wildly inconsistent before_snapshot.track_meters[0].level
+    # values — clip stopped mid-experiment between branches.
+    if experiment.baseline_transport is None:
+        from .baseline import capture_baseline
+        try:
+            experiment.baseline_transport = capture_baseline(ableton)
+        except Exception as exc:
+            logger.debug("baseline capture failed: %s", exc)
+            experiment.baseline_transport = None
+
     results = []
+    pending_seen = 0
     for branch in experiment.branches:
         if branch.status != "pending":
             continue
 
+        # Between branches (not before the first), restore the baseline so
+        # the next before_snapshot reads from the same reference state.
+        if pending_seen > 0:
+            engine.prepare_for_next_branch(
+                ableton, experiment.baseline_transport, stabilize_ms=300,
+            )
+        pending_seen += 1
+
         # PR3: respect a pre-existing compiled_plan on the branch (freeform /
         # synthesis / composer producers bring their own). Only compile from
         # move_id when the branch arrived without a plan — which requires a
package/package.json CHANGED
@@ -1,8 +1,8 @@
 {
   "name": "livepilot",
-  "version": "1.18.3",
+  "version": "1.19.0",
   "mcpName": "io.github.dreamrec/livepilot",
-  "description": "Agentic production system for Ableton Live 12 — 428 tools, 53 domains. Device atlas (1305 devices), sample engine (Splice + browser + filesystem), auto-composition, spectral perception, technique memory, creative intelligence (12 engines)",
+  "description": "Agentic production system for Ableton Live 12 — 429 tools, 53 domains. Device atlas (1305 devices), sample engine (Splice + browser + filesystem), auto-composition, spectral perception, technique memory, creative intelligence (12 engines)",
   "author": "Pilot Studio",
   "license": "BSL-1.1",
   "type": "commonjs",
@@ -5,7 +5,7 @@ Entry point for the ControlSurface. Ableton calls create_instance(c_instance)
 when this script is selected in Preferences > Link, Tempo & MIDI.
 """
 
-__version__ = "1.18.3"
+__version__ = "1.19.0"
 
 from _Framework.ControlSurface import ControlSurface
 from . import router
package/server.json CHANGED
@@ -1,17 +1,17 @@
 {
   "$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
   "name": "io.github.dreamrec/livepilot",
-  "description": "428-tool agentic MCP production system for Ableton Live 12 — device atlas, sample engine, composer",
+  "description": "429-tool agentic MCP production system for Ableton Live 12 — device atlas, sample engine, composer",
   "repository": {
     "url": "https://github.com/dreamrec/LivePilot",
     "source": "github"
   },
-  "version": "1.18.3",
+  "version": "1.19.0",
   "packages": [
     {
       "registryType": "npm",
       "identifier": "livepilot",
-      "version": "1.18.3",
+      "version": "1.19.0",
       "transport": {
         "type": "stdio"
       }