livepilot 1.6.1 → 1.6.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,26 @@
  # Changelog
 
+ ## 1.6.3 — Audit Hardening (March 2026)
+
+ - Fix: cursor aliasing in M4L bridge `walk_device` — nested rack traversal now reads chain/pad counts before recursion clobbers shared cursors
+ - Fix: `clip_automation.py` — use `get_clip()` for bounds-checked access, add negative index guards, proper validation in `clear_clip_automation`
+ - Fix: `set_clip_loop` crash when `enabled` param omitted
+ - Fix: Brownian curve reflection escaping [0,1] for large volatility
+ - Fix: division by zero in M4L bridge when `sample_rate=0`
+ - Fix: `technique_store.get()` shallow copy allows shared mutation — now uses deepcopy
+ - Fix: `asyncio.get_event_loop()` deprecation — use `get_running_loop()` (Python 3.12+)
+ - Fix: dead code in `browser.py`, stale tool counts in docs (107 → 115 core)
+ - Fix: wrong param name in tool-reference docs (`soloed` → `solo`)
+ - Fix: social banner missing "automation" domain (11 → 12)
+ - Fix: tautological spring test, dead automation contract test, misleading clips test
+ - Add: `livepilot-release` skill registered in plugin.json
+ - Add: `__version__` to Remote Script `__init__.py`
+
+ ## 1.6.2 — Automation Params Fix (March 2026)
+
+ - Fix: expose all curve-specific params in `generate_automation_curve` and `apply_automation_shape` MCP tools — `values` (steps), `hits`/`steps` (euclidean), `seed`/`drift`/`volatility` (organic), `damping`/`stiffness` (spring), `control1`/`control2` (bezier), `easing_type`, `narrowing` (stochastic)
+ - Fix: `analyze_for_automation` spectral getter used wrong method (`.get_spectrum()` → `.get("spectrum")`)
+
  ## 1.6.1 — Hotfix (March 2026)
 
  - Fix: `clip_automation.py` imported `register` from `utils` instead of `router`, causing Remote Script to fail to load in Ableton (LivePilot disappeared from Control Surface list)
package/README.md CHANGED
@@ -77,12 +77,64 @@ No other Ableton MCP server does this. Others have tools. LivePilot has tools +
 
  ---
 
+ ## Automation Intelligence
+
+ Most DAW integrations let the AI set a parameter to a value. LivePilot lets the AI write **automation curves** — envelopes that evolve parameters over time inside clips. This is the difference between a static mix and a living one.
+
+ ### The Curve Engine
+
+ 16 mathematically precise curve types, organized in 4 categories:
+
+ | Category | Curves | What they do |
+ |----------|--------|-------------|
+ | **Basic Waveforms** | `linear` · `exponential` · `logarithmic` · `s_curve` · `sine` · `sawtooth` · `spike` · `square` · `steps` | The building blocks. Exponential for filter sweeps (perceptually even). Logarithmic for volume fades (matches the ear). Spike for dub throws. Sawtooth for sidechain pumps. |
+ | **Organic / Natural** | `perlin` · `brownian` · `spring` | What makes automation feel alive. Perlin noise for drifting textures. Brownian for analog-style parameter wander. Spring for realistic knob movements with overshoot and settle. |
+ | **Shape Control** | `bezier` · `easing` | Precision curves for intentional design. Bezier with arbitrary control points. 8 easing types from the animation world: bounce, elastic, back overshoot, ease in/out. |
+ | **Algorithmic** | `euclidean` · `stochastic` | Generative intelligence. Euclidean distributes automation events using the Bjorklund algorithm (the same math behind Euclidean rhythms). Stochastic applies Xenakis-inspired controlled randomness within narrowing bounds. |
+
+ Every curve generates normalized points (0.0–1.0) that map to any parameter in Ableton — volume, pan, sends, device parameters, anything with an envelope.
+
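As a point of reference, the Bjorklund distribution behind the `euclidean` curve can be sketched in a few lines. This is an illustrative standalone version (the common bucket/accumulator formulation), not LivePilot's actual implementation:

```python
def euclidean(hits: int, steps: int) -> list[int]:
    """Spread `hits` events as evenly as possible across `steps` slots."""
    pattern, bucket = [], 0
    for _ in range(steps):
        bucket += hits
        if bucket >= steps:       # an event "overflows" into this slot
            bucket -= steps
            pattern.append(1)
        else:
            pattern.append(0)
    return pattern

print(euclidean(3, 8))  # [0, 0, 1, 0, 0, 1, 0, 1] — a tresillo-style spacing
```

Mapping the 1s to automation events at evenly spaced clip times gives the Euclidean-rhythm feel described above.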
+ ### 15 Production Recipes
+
+ Named presets for common techniques. One tool call instead of manual point calculation:
+
+ | Recipe | Curve | What it does |
+ |--------|-------|-------------|
+ | `filter_sweep_up` | exponential | LP filter opening over 8-32 bars |
+ | `filter_sweep_down` | logarithmic | LP filter closing, mirrors the sweep up |
+ | `dub_throw` | spike | Instant send spike for reverb/delay throws |
+ | `tape_stop` | exponential | Pitch dropping to zero — steep deceleration |
+ | `build_rise` | exponential | Tension build on HP filter + volume + reverb |
+ | `sidechain_pump` | sawtooth | Volume ducking per beat — fast duck, slow recovery |
+ | `fade_in` / `fade_out` | log / exp | Perceptually smooth volume fades |
+ | `tremolo` | sine | Periodic volume oscillation |
+ | `auto_pan` | sine | Stereo movement via pan |
+ | `stutter` | square | Rapid on/off gating |
+ | `breathing` | sine | Subtle filter movement — acoustic instrument feel |
+ | `washout` | exponential | Reverb/delay feedback increasing to wash |
+ | `vinyl_crackle` | sine | Slow bit reduction for lo-fi character |
+ | `stereo_narrow` | exponential | Collapse stereo to mono before drop |
+
+ ### The Feedback Loop
+
+ `analyze_for_automation` reads the spectrum and device chain, then suggests what to automate:
+
+ 1. **Reads the spectrum** — identifies frequency balance, sub content, dynamic range
+ 2. **Scans the device chain** — detects filters, reverbs, synths, distortion
+ 3. **Suggests automation targets** — "Filter detected → automate cutoff for movement", "Heavy sub content → HP filter sweep for builds"
+ 4. **Recommends recipes** — maps each suggestion to the right named recipe
+
+ The AI doesn't just write automation — it knows what to automate based on what it hears.
+
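The rule shape of that loop can be pictured with a small sketch. The field names, device matching, and thresholds below are hypothetical illustrations, not the server's real logic:

```python
def suggest_recipes(spectrum: dict, devices: list[str]) -> list[str]:
    """Map analysis findings to recipe names (illustrative rules only)."""
    suggestions = []
    names = [d.lower() for d in devices]
    if any("filter" in n for n in names):
        suggestions.append("filter_sweep_up")   # filter detected -> add movement
    if any("reverb" in n or "delay" in n for n in names):
        suggestions.append("dub_throw")         # send-style target available
    if spectrum.get("sub", 0.0) > 0.5:          # hypothetical "sub energy" field
        suggestions.append("build_rise")        # heavy sub -> HP sweep build
    return suggestions

print(suggest_recipes({"sub": 0.7}, ["Auto Filter", "Reverb"]))
# ['filter_sweep_up', 'dub_throw', 'build_rise']
```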
129
+ ---
130
+
80
131
  ## What You Can Do
81
132
 
82
133
  - **Produce** — Create tracks, load instruments from the atlas, program drum patterns, bass lines, chord progressions, and melodies — informed by your saved techniques
83
134
  - **Arrange** — Build full song structures in arrangement view with MIDI editing, cue points, automation, and timeline navigation
84
135
  - **Design sounds** — Browse Ableton's library, load presets, tweak every device parameter, chain effects, walk nested racks 6 levels deep
85
136
  - **Mix with ears** — Set levels, panning, sends, and routing. Read the spectrum, check RMS, detect the key. The analyzer tells the AI what changed, not just what was set
137
+ - **Automate intelligently** — Write clip automation with 16 mathematically precise curve types, apply named recipes (dub throws, filter sweeps, sidechain pumps), get spectral-aware suggestions for what to automate next
86
138
  - **Remember and evolve** — Save techniques, build a personal style library, replay past decisions exactly or as variations
87
139
  - **Chop samples** — Load audio into Simpler, slice, reverse, crop, warp, and reprogram — all from conversation
88
140
  - **Iterate fast** — Transpose, humanize, quantize, duplicate, and reshape patterns through conversation
@@ -244,7 +296,7 @@ npx -y github:dreamrec/LivePilot --status
 
  ---
 
- ## 127 Tools Across 11 Domains
+ ## 135 Tools Across 12 Domains
 
  | Domain | Tools | What you can do |
  |--------|:-----:|-----------------|
@@ -257,6 +309,7 @@ npx -y github:dreamrec/LivePilot --status
  | **Mixing** | 11 | Volume, pan, sends, routing, meters, mix snapshot — return tracks and master fully supported |
  | **Browser** | 4 | Search Ableton's library, browse categories, load presets |
  | **Arrangement** | 19 | Create clips, full MIDI note CRUD, cue points, recording, automation |
+ | **Automation** | 8 | Clip envelope CRUD, 16-type curve engine, 15 named recipes, spectral-aware suggestions |
  | **Memory** | 8 | Save, recall, replay, and manage production techniques |
  | **Analyzer** | 20 | Real-time spectral analysis, key detection, sample manipulation, warp markers, device introspection (requires M4L device) |
 
@@ -290,6 +343,9 @@ npx -y github:dreamrec/LivePilot --status
  ### Arrangement (19)
  `get_arrangement_clips` · `create_arrangement_clip` · `add_arrangement_notes` · `get_arrangement_notes` · `remove_arrangement_notes` · `remove_arrangement_notes_by_id` · `modify_arrangement_notes` · `duplicate_arrangement_notes` · `transpose_arrangement_notes` · `set_arrangement_clip_name` · `set_arrangement_automation` · `back_to_arranger` · `jump_to_time` · `capture_midi` · `start_recording` · `stop_recording` · `get_cue_points` · `jump_to_cue` · `toggle_cue_point`
 
+ ### Automation (8)
+ `get_clip_automation` · `set_clip_automation` · `clear_clip_automation` · `apply_automation_shape` · `apply_automation_recipe` · `get_automation_recipes` · `generate_automation_curve` · `analyze_for_automation`
+
  ### Memory (8)
  `memory_learn` · `memory_recall` · `memory_get` · `memory_replay` · `memory_list` · `memory_favorite` · `memory_update` · `memory_delete`
 
@@ -334,7 +390,7 @@ The agent ships with a 2,700-line reference corpus (drum patterns, chord voicing
 
  The LivePilot Analyzer (`LivePilot_Analyzer.amxd`) gives the AI ears. Drop it on the master track and 20 additional tools unlock: 8-band spectral analysis, RMS/peak metering, Krumhansl-Schmuckler key detection, plus deep LOM access for sample manipulation, warp markers, device introspection, and human-readable parameter display values.
 
- All 107 core tools work without it. The analyzer is what turns LivePilot from a remote control into a feedback loop — the AI can set an EQ curve and then read the spectrum to verify the result.
+ All 115 core tools work without it. The analyzer is what turns LivePilot from a remote control into a feedback loop — the AI can set an EQ curve and then read the spectrum to verify the result.
 
  ---
 
@@ -346,7 +402,7 @@ There are **15+ MCP servers for Ableton Live** as of March 2026. Here's how the
 
  | | [LivePilot](https://github.com/dreamrec/LivePilot) | [AbletonMCP](https://github.com/ahujasid/ableton-mcp) | [MCP Extended](https://github.com/uisato/ableton-mcp-extended) | [Ableton Copilot](https://github.com/xiaolaa2/ableton-copilot-mcp) | [AbletonBridge](https://github.com/hidingwill/AbletonBridge) | [Producer Pal](https://github.com/adamjmurray/producer-pal) |
  |---|:-:|:-:|:-:|:-:|:-:|:-:|
- | **Tools** | 127 | ~20 | ~35 | ~45 | 322 | ~25 |
+ | **Tools** | 135 | ~20 | ~35 | ~45 | 322 | ~25 |
  | **Device knowledge** | 280+ devices | -- | -- | -- | -- | -- |
  | **Audio analysis** | Spectrum/RMS/key | -- | -- | -- | Metering | -- |
  | **Technique memory** | Persistent | -- | -- | -- | -- | -- |
@@ -363,6 +419,8 @@ There are **15+ MCP servers for Ableton Live** as of March 2026. Here's how the
  | Session clips | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
  | **Arrangement view** | ✅ | — | — | ✅ | ? | ? |
  | **Arrangement automation** | ✅ | — | — | — | ? | — |
+ | **Clip automation (envelopes)** | ✅ | — | — | — | — | — |
+ | **Automation curve engine** | ✅ | — | — | — | — | — |
  | MIDI notes (add/get) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
  | **MIDI notes (modify/delete by ID)** | ✅ | — | — | ✅ | ? | — |
  | **Per-note probability** | ✅ | — | — | — | — | — |
@@ -401,7 +459,7 @@ Every server on this list gives the AI tools to control Ableton. LivePilot is th
 
  The practical difference: other servers let the AI set a parameter. LivePilot lets the AI choose the right parameter based on what device is loaded (atlas), verify the result by reading the audio output (analyzer), and remember the technique for next time (memory).
 
- AbletonBridge has more raw tools (322 vs 127). Producer Pal has the easiest install (drag a .amxd). The original AbletonMCP has the community (2.3k stars). LivePilot has the deepest integration — tools that execute, knowledge that informs, perception that verifies, and memory that accumulates.
+ AbletonBridge has more raw tools (322 vs 135). Producer Pal has the easiest install (drag a .amxd). The original AbletonMCP has the community (2.3k stars). LivePilot has the deepest integration — tools that execute, knowledge that informs, perception that verifies, and memory that accumulates.
 
  ---
 
@@ -68,7 +68,7 @@ function anything() {
  function dispatch(cmd, args) {
    switch(cmd) {
      case "ping":
-       send_response({"ok": true, "version": "1.6.0"});
+       send_response({"ok": true, "version": "1.6.3"});
        break;
      case "get_params":
        cmd_get_params(args);
@@ -300,6 +300,8 @@ function cmd_walk_rack(args) {
  function walk_device(path, depth) {
    if (depth > 6) return {"error": "max depth reached"};
 
+   // Read all properties from cursor BEFORE recursing — recursion
+   // overwrites both cursors, so we must capture everything first.
    cursor_a.goto(path);
    var result = {
      name: cursor_a.get("name").toString(),
@@ -310,11 +312,15 @@ function walk_device(path, depth) {
      param_count: cursor_a.getcount("parameters")
    };
 
-   if (result.can_have_chains) {
-     var chain_count = cursor_a.getcount("chains");
+   // Capture chain/pad counts BEFORE recursion clobbers cursors
+   var chain_count = result.can_have_chains ? cursor_a.getcount("chains") : 0;
+   var pad_count = result.can_have_drum_pads ? cursor_a.getcount("drum_pads") : 0;
+
+   if (chain_count > 0) {
      result.chains = [];
      for (var c = 0; c < chain_count; c++) {
        var chain_path = path + " chains " + c;
+       // Re-goto cursor_b each iteration (recursion may have moved it)
        cursor_b.goto(chain_path);
        var chain = {
          index: c,
@@ -329,10 +335,8 @@ function walk_device(path, depth) {
      }
    }
 
-   if (result.can_have_drum_pads) {
-     var pad_count = cursor_a.getcount("drum_pads");
+   if (pad_count > 0) {
      result.drum_pads = [];
-     // Only report populated pads (up to 128, but most are empty)
      for (var p = 0; p < Math.min(pad_count, 128); p++) {
        var pad_path = path + " drum_pads " + p;
        cursor_b.goto(pad_path);
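The aliasing bug fixed above is easy to reproduce outside Max. Here is a Python analogy — a toy `Cursor` class standing in for the shared LiveAPI cursor, not the real API — showing why counts must be captured before recursing:

```python
class Cursor:
    """Toy stand-in for a shared LiveAPI-style cursor: one mutable position."""
    def __init__(self, tree):
        self.tree = tree
        self.node = tree

    def goto(self, path):
        node = self.tree
        for idx in path:
            node = node["chains"][idx]
        self.node = node

    def getcount(self, prop):
        return len(self.node.get(prop, []))

def walk(cursor, path=()):
    cursor.goto(path)
    # Capture the count BEFORE recursing: each recursive call re-points the
    # shared cursor, so reading counts afterwards would describe whatever
    # node the recursion visited last, not this one.
    chain_count = cursor.getcount("chains")
    return {"path": list(path),
            "chains": [walk(cursor, path + (c,)) for c in range(chain_count)]}

# A rack with two chains; the first chain nests two more.
rack = {"chains": [{"chains": [{"chains": []}, {"chains": []}]}, {"chains": []}]}
result = walk(Cursor(rack))
print(len(result["chains"]), len(result["chains"][0]["chains"]))  # 2 2
```

Had `walk` read `getcount("chains")` after the loop, the cursor would be parked on the last leaf and every count would come out wrong — the same failure mode as the JS bridge.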
@@ -686,7 +690,7 @@ function cmd_get_simpler_slices(args) {
      "playback_mode_name": ["Classic", "One-Shot", "Slicing"][playback_mode] || "Unknown",
      "sample_rate": sample_rate,
      "sample_length_frames": length,
-     "sample_length_seconds": length / sample_rate,
+     "sample_length_seconds": sample_rate > 0 ? length / sample_rate : 0,
      "slice_count": slices.length,
      "slices": slices
    });
@@ -1,2 +1,2 @@
  """LivePilot MCP Server — bridges MCP protocol to Ableton Live."""
- __version__ = "1.6.1"
+ __version__ = "1.6.3"
@@ -365,11 +365,12 @@ def _brownian(duration: float, density: int, start: float = 0.5,
        points.append({"time": t * duration, "value": value})
        step = drift / density + volatility * _det_random(i, seed)
        value += step
-       # Soft boundary reflection (bounce off 0/1 instead of hard clamp)
-       if value > 1.0:
-           value = 2.0 - value
-       elif value < 0.0:
-           value = -value
+       # Soft boundary reflection (bounce off 0/1 until within range)
+       while value > 1.0 or value < 0.0:
+           if value > 1.0:
+               value = 2.0 - value
+           if value < 0.0:
+               value = -value
    return points
 
 
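Why the loop matters: a single reflection only works when the overshoot is small. A minimal sketch of the reflection logic in isolation (extracted from the pattern above, not the full `_brownian` function):

```python
def reflect(value: float) -> float:
    """Bounce a value off the 0/1 walls until it lands inside [0, 1].

    One reflection is not enough for large steps: 2.3 reflects off the
    top wall to -0.3, which is still out of range and needs a second
    bounce off the bottom wall. The old if/elif did only one pass.
    """
    while value > 1.0 or value < 0.0:
        if value > 1.0:
            value = 2.0 - value
        if value < 0.0:
            value = -value
    return value

print(reflect(2.3))   # ~0.3 after two bounces
print(reflect(-1.5))  # ~0.5 after two bounces
```

Each pair of bounces shrinks the overshoot by 2, so the loop always terminates, and values stay in [0, 1] even for very large `volatility` steps.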
@@ -234,7 +234,7 @@ class M4LBridge:
        return {"error": "LivePilot Analyzer not connected. Drop it on the master track."}
 
        # Create a future for the response
-       loop = asyncio.get_event_loop()
+       loop = asyncio.get_running_loop()
        future = loop.create_future()
        if self.receiver:
            self.receiver.set_response_future(future)
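The `get_event_loop()` → `get_running_loop()` swap is safe here because both call sites run inside coroutines. A small self-contained illustration of the pattern (hypothetical `query` coroutine, not LivePilot code):

```python
import asyncio

async def query(payload: str) -> str:
    # Inside a coroutine there is always a running loop, so
    # get_running_loop() is safe. Unlike the deprecated
    # get_event_loop(), it raises RuntimeError outside async
    # context rather than silently creating a new loop.
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    loop.call_soon(future.set_result, payload.upper())  # simulate a reply
    return await future

print(asyncio.run(query("ping")))  # PING
```

Calling `asyncio.get_running_loop()` from synchronous code raises `RuntimeError` immediately, which surfaces misuse early instead of deferring it to a deprecation warning.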
@@ -1,5 +1,6 @@
  """Persistent JSON store for LivePilot techniques (beat patterns, device chains, etc.)."""
 
+ import copy
  import json
  import os
  import threading
@@ -101,7 +102,7 @@ class TechniqueStore:
        with self._lock:
            for t in self._data["techniques"]:
                if t["id"] == technique_id:
-                   return dict(t)
+                   return copy.deepcopy(t)
        raise ValueError(f"NOT_FOUND: technique '{technique_id}' does not exist")
 
    def search(
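The `dict(t)` → `copy.deepcopy(t)` change matters because techniques hold nested structures. A standalone demonstration of the shared-mutation bug (illustrative data, not the store's real schema):

```python
import copy

technique = {"id": "t1", "pattern": {"steps": [1, 0, 1]}}

shallow = dict(technique)            # copies only the top level...
shallow["pattern"]["steps"][0] = 9   # ...so this mutates the original too
assert technique["pattern"]["steps"][0] == 9

technique["pattern"]["steps"][0] = 1  # reset
deep = copy.deepcopy(technique)       # fully independent copy
deep["pattern"]["steps"][0] = 9
assert technique["pattern"]["steps"][0] == 1  # original untouched
```

With `dict(t)`, a caller editing a returned technique would silently corrupt the store's in-memory data; `deepcopy` isolates callers completely.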
@@ -18,7 +18,7 @@ async def lifespan(server):
    m4l = M4LBridge(spectral, receiver)
 
    # Start UDP listener for incoming M4L spectral data (port 9880)
-   loop = asyncio.get_event_loop()
+   loop = asyncio.get_running_loop()
    try:
        transport, _ = await loop.create_datagram_endpoint(
            lambda: receiver,
@@ -1,7 +1,7 @@
  """Analyzer MCP tools — real-time spectral analysis and deep LOM access.
 
  20 tools requiring the LivePilot Analyzer M4L device on the master track.
- These tools are optional — all 107 core tools work without the device.
+ These tools are optional — all 115 core tools work without the device.
  """
 
  from __future__ import annotations
@@ -137,6 +137,26 @@ def apply_automation_shape(
      factor: float = 3.0,
      invert: bool = False,
      time_offset: float = 0.0,
+     # Steps params
+     values: Optional[list[float]] = None,
+     # Euclidean params
+     hits: int = 5,
+     steps: int = 16,
+     # Organic params
+     seed: float = 0.0,
+     drift: float = 0.0,
+     volatility: float = 0.1,
+     damping: float = 0.15,
+     stiffness: float = 8.0,
+     # Bezier params
+     control1: float = 0.0,
+     control2: float = 1.0,
+     control1_time: float = 0.33,
+     control2_time: float = 0.66,
+     # Easing params
+     easing_type: str = "ease_out",
+     # Stochastic params
+     narrowing: float = 0.5,
  ) -> dict:
      """Generate and apply an automation curve to a session clip.
 
@@ -177,6 +197,14 @@ def apply_automation_shape(
          low=low, high=high,
          factor=factor,
          invert=invert,
+         values=values or [],
+         hits=hits, steps=steps,
+         seed=seed, drift=drift, volatility=volatility,
+         damping=damping, stiffness=stiffness,
+         control1=control1, control2=control2,
+         control1_time=control1_time, control2_time=control2_time,
+         easing_type=easing_type,
+         narrowing=narrowing,
      )
 
      # Apply time offset
@@ -293,6 +321,26 @@ def generate_automation_curve(
      high: float = 1.0,
      factor: float = 3.0,
      invert: bool = False,
+     # Steps params
+     values: Optional[list[float]] = None,
+     # Euclidean params
+     hits: int = 5,
+     steps: int = 16,
+     # Organic params
+     seed: float = 0.0,
+     drift: float = 0.0,
+     volatility: float = 0.1,
+     damping: float = 0.15,
+     stiffness: float = 8.0,
+     # Bezier params
+     control1: float = 0.0,
+     control2: float = 1.0,
+     control1_time: float = 0.33,
+     control2_time: float = 0.66,
+     # Easing params
+     easing_type: str = "ease_out",
+     # Stochastic params
+     narrowing: float = 0.5,
  ) -> dict:
      """Generate automation curve points WITHOUT writing them.
 
@@ -312,6 +360,14 @@ def generate_automation_curve(
          low=low, high=high,
          factor=factor,
          invert=invert,
+         values=values or [],
+         hits=hits, steps=steps,
+         seed=seed, drift=drift, volatility=volatility,
+         damping=damping, stiffness=stiffness,
+         control1=control1, control2=control2,
+         control1_time=control1_time, control2_time=control2_time,
+         easing_type=easing_type,
+         narrowing=narrowing,
      )
      return {
          "curve_type": curve_type,
@@ -349,7 +405,8 @@ def analyze_for_automation(
    spectral = ctx.lifespan_context.get("spectral")
    spectrum = {}
    if spectral and spectral.is_connected:
-       spectrum = spectral.get_spectrum()
+       data = spectral.get("spectrum")
+       spectrum = data["value"] if data else {}
 
    # Get meter level
    meters = ableton.send_command("get_track_meters", {
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "livepilot",
-   "version": "1.6.1",
+   "version": "1.6.3",
    "mcpName": "io.github.dreamrec/livepilot",
    "description": "AI copilot for Ableton Live 12 — 135 tools, device atlas (280+ devices), real-time audio analysis, automation intelligence, and technique memory",
    "author": "Pilot Studio",
@@ -1,10 +1,11 @@
  {
    "name": "livepilot",
-   "version": "1.6.1",
+   "version": "1.6.3",
    "description": "AI copilot for Ableton Live 12 — 135 tools, device atlas (280+ devices), real-time audio analysis, automation intelligence, and technique memory",
    "author": "Pilot Studio",
    "skills": [
-     "skills/livepilot-core"
+     "skills/livepilot-core",
+     "skills/livepilot-release"
    ],
    "commands": [
      "commands/session.md",
@@ -1,4 +1,4 @@
- # LivePilot v1.6.0 — Architecture & Tool Reference
+ # LivePilot v1.6.3 — Architecture & Tool Reference
 
  LivePilot is an agentic production system for Ableton Live 12. It combines 135 MCP tools with a device knowledge corpus, real-time audio analysis, automation intelligence, and persistent technique memory.
 
@@ -38,12 +38,12 @@ Run this checklist EVERY time the user says "update everything", "push", "releas
  - [ ] `mcp_server/tools/analyzer.py` → module docstring
  - [ ] `tests/test_tools_contract.py` → expected count
 
- **How to check:** `grep -rn "127\|104\|113" --include="*.md" --include="*.json" --include="*.py" --include="*.html" --include="*.js" . | grep -v node_modules | grep -v .git | grep -v __pycache__`
+ **How to check:** `grep -rn "127\|128\|129\|130\|131\|132\|133\|134" --include="*.md" --include="*.json" --include="*.py" --include="*.html" --include="*.js" . | grep -v node_modules | grep -v .git | grep -v __pycache__`
 
  ## 3. Domain Count
 
- - [ ] All files above that mention "10 domains" should say "11 domains"
- - [ ] Domain lists should include: transport, tracks, clips, notes, devices, scenes, mixing, browser, arrangement, memory, analyzer
+ - [ ] All files above that mention "11 domains" should say "12 domains"
+ - [ ] Domain lists should include: transport, tracks, clips, notes, devices, scenes, mixing, browser, arrangement, automation, memory, analyzer
 
  ## 4. npm Registry
@@ -5,6 +5,8 @@ Entry point for the ControlSurface. Ableton calls create_instance(c_instance)
  when this script is selected in Preferences > Link, Tempo & MIDI.
  """
 
+ __version__ = "1.6.3"
+
  from _Framework.ControlSurface import ControlSurface
  from .server import LivePilotServer
  from . import transport  # noqa: F401 — registers transport handlers
@@ -32,7 +34,7 @@ class LivePilot(ControlSurface):
      ControlSurface.__init__(self, c_instance)
      self._server = LivePilotServer(self)
      self._server.start()
-     self.log_message("LivePilot v1.6.1 initialized")
+     self.log_message("LivePilot v1.6.3 initialized")
      self.show_message("LivePilot: Listening on port 9878")
 
  def disconnect(self):
@@ -100,7 +100,6 @@ def _search_recursive(item, name_filter, loadable_only, results, depth, max_dept
      entry["uri"] = None
      results.append(entry)
      if child.is_folder:
-         before = len(results)
          _search_recursive(
              child, name_filter, loadable_only, results, depth + 1, max_depth,
              max_results
@@ -6,7 +6,7 @@ but targets session clips via track.clip_slots[i].clip.
  """
 
  from .router import register
- from .utils import get_track
+ from .utils import get_track, get_clip
 
 
  @register("get_clip_automation")
@@ -16,12 +16,7 @@ def get_clip_automation(song, params):
      clip_index = params["clip_index"]
 
      track = get_track(song, track_index)
-     clip_slot = list(track.clip_slots)[clip_index]
-     if not clip_slot.has_clip:
-         return {"error": {"code": "NOT_FOUND",
-                           "message": "No clip at slot %d" % clip_index}}
-
-     clip = clip_slot.clip
+     clip = get_clip(song, track_index, clip_index)
      envelopes = []
 
      # Check mixer parameters: volume, panning, sends
@@ -94,12 +89,7 @@ def set_clip_automation(song, params):
      send_index = params.get("send_index")
 
      track = get_track(song, track_index)
-     clip_slot = list(track.clip_slots)[clip_index]
-     if not clip_slot.has_clip:
-         return {"error": {"code": "NOT_FOUND",
-                           "message": "No clip at slot %d" % clip_index}}
-
-     clip = clip_slot.clip
+     clip = get_clip(song, track_index, clip_index)
 
      # Resolve the target parameter
      if parameter_type == "volume":
@@ -111,7 +101,7 @@ def set_clip_automation(song, params):
              return {"error": {"code": "INVALID_PARAM",
                                "message": "send_index required for send automation"}}
          sends = list(track.mixer_device.sends)
-         if send_index >= len(sends):
+         if send_index < 0 or send_index >= len(sends):
              return {"error": {"code": "INDEX_ERROR",
                                "message": "send_index %d out of range" % send_index}}
          parameter = sends[send_index]
@@ -120,11 +110,11 @@ def set_clip_automation(song, params):
              return {"error": {"code": "INVALID_PARAM",
                                "message": "device_index and parameter_index required"}}
          devices = list(track.devices)
-         if device_index >= len(devices):
+         if device_index < 0 or device_index >= len(devices):
              return {"error": {"code": "INDEX_ERROR",
                                "message": "device_index %d out of range" % device_index}}
          dev_params = list(devices[device_index].parameters)
-         if parameter_index >= len(dev_params):
+         if parameter_index < 0 or parameter_index >= len(dev_params):
              return {"error": {"code": "INDEX_ERROR",
                                "message": "parameter_index %d out of range" % parameter_index}}
          parameter = dev_params[parameter_index]
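The reason `< 0` is checked explicitly, shown in isolation: Python list indexing wraps negative indices, so a bare upper-bound check lets `-1` silently resolve to the last element. A minimal sketch (illustrative `sends` list, not the Live API):

```python
sends = ["Send A", "Send B"]

def get_send(send_index: int):
    # `send_index >= len(sends)` alone misses negatives: sends[-1]
    # would silently return the LAST send instead of failing.
    # Guard both directions explicitly.
    if send_index < 0 or send_index >= len(sends):
        return {"error": {"code": "INDEX_ERROR"}}
    return sends[send_index]

print(get_send(1))    # Send B
print(get_send(-1))   # {'error': {'code': 'INDEX_ERROR'}}
```

Without the guard, a caller passing `-1` would have written automation onto the wrong send with no error at all.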
@@ -173,12 +163,7 @@ def clear_clip_automation(song, params):
      parameter_type = params.get("parameter_type")
 
      track = get_track(song, track_index)
-     clip_slot = list(track.clip_slots)[clip_index]
-     if not clip_slot.has_clip:
-         return {"error": {"code": "NOT_FOUND",
-                           "message": "No clip at slot %d" % clip_index}}
-
-     clip = clip_slot.clip
+     clip = get_clip(song, track_index, clip_index)
 
      song.begin_undo_step()
      try:
@@ -197,13 +182,30 @@ def clear_clip_automation(song, params):
          elif parameter_type == "panning":
              parameter = track.mixer_device.panning
          elif parameter_type == "send":
-             send_index = params.get("send_index", 0)
-             parameter = list(track.mixer_device.sends)[send_index]
+             send_index = params.get("send_index")
+             if send_index is None:
+                 return {"error": {"code": "INVALID_PARAM",
+                                   "message": "send_index required for send automation"}}
+             sends = list(track.mixer_device.sends)
+             if send_index < 0 or send_index >= len(sends):
+                 return {"error": {"code": "INDEX_ERROR",
+                                   "message": "send_index %d out of range" % send_index}}
+             parameter = sends[send_index]
          elif parameter_type == "device":
-             device_index = params.get("device_index", 0)
-             parameter_index = params.get("parameter_index", 0)
-             device = list(track.devices)[device_index]
-             parameter = list(device.parameters)[parameter_index]
+             device_index = params.get("device_index")
+             parameter_index = params.get("parameter_index")
+             if device_index is None or parameter_index is None:
+                 return {"error": {"code": "INVALID_PARAM",
+                                   "message": "device_index and parameter_index required"}}
+             devices = list(track.devices)
+             if device_index < 0 or device_index >= len(devices):
+                 return {"error": {"code": "INDEX_ERROR",
+                                   "message": "device_index %d out of range" % device_index}}
+             dev_params = list(devices[device_index].parameters)
+             if parameter_index < 0 or parameter_index >= len(dev_params):
+                 return {"error": {"code": "INDEX_ERROR",
+                                   "message": "parameter_index %d out of range" % parameter_index}}
+             parameter = dev_params[parameter_index]
          else:
              return {"error": {"code": "INVALID_PARAM",
                                "message": "Unknown parameter_type"}}
@@ -147,7 +147,8 @@ def set_clip_loop(song, params):
      clip_index = int(params["clip_index"])
      clip = get_clip(song, track_index, clip_index)
 
-     clip.looping = bool(params["enabled"])
+     if "enabled" in params:
+         clip.looping = bool(params["enabled"])
      if "start" in params:
          clip.loop_start = float(params["start"])
      if "end" in params:
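The `set_clip_loop` crash mentioned in the changelog was a plain `KeyError`: indexing `params["enabled"]` unconditionally fails whenever the caller only supplies `start` or `end`. A self-contained sketch of the fixed shape (with a minimal `FakeClip` stand-in, since the real Live clip object isn't available outside Ableton):

```python
class FakeClip:
    """Minimal stand-in for a Live clip (illustration only)."""
    def __init__(self):
        self.looping = True
        self.loop_start = 0.0
        self.loop_end = 4.0

def set_clip_loop(clip, params: dict) -> None:
    # Every field is optional: indexing params["enabled"] directly
    # raised KeyError whenever the caller only wanted start/end.
    if "enabled" in params:
        clip.looping = bool(params["enabled"])
    if "start" in params:
        clip.loop_start = float(params["start"])
    if "end" in params:
        clip.loop_end = float(params["end"])

clip = FakeClip()
set_clip_loop(clip, {"start": 1.0})   # no crash; looping left untouched
print(clip.looping, clip.loop_start)  # True 1.0
```

Using `"enabled" in params` rather than `params.get("enabled")` also preserves an explicit `enabled: False`, which a truthiness check would have skipped.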