livepilot 1.8.2 → 1.8.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -10,7 +10,7 @@
  {
  "name": "livepilot",
  "description": "Agentic production system for Ableton Live 12 — 168 tools, 17 domains, device atlas, spectral perception, technique memory, neo-Riemannian harmony, Euclidean rhythm, species counterpoint, MIDI I/O",
- "version": "1.8.1",
+ "version": "1.8.4",
  "author": {
  "name": "Pilot Studio"
  },
package/AGENTS.md ADDED
@@ -0,0 +1,46 @@
+ # LivePilot v1.8.4 — Ableton Live 12
+
+ ## Project
+ - **Repo:** This directory (LivePilot)
+ - **Type:** Agentic MCP production system for Ableton Live 12
+ - **Three layers:** Device Atlas (knowledge) + M4L Analyzer (perception) + Technique Memory (learning)
+ - **Sister projects:** TDPilot (TouchDesigner), ComfyPilot (ComfyUI)
+ - **Design spec:** `docs/specs/2026-03-17-livepilot-design.md`
+
+ ## Architecture
+ - **Remote Script** (`remote_script/LivePilot/`): Runs inside Ableton's Python, ControlSurface base class, TCP socket on port 9878
+ - **MCP Server** (`mcp_server/`): Python FastMCP server, validates inputs, sends commands to Remote Script
+ - **M4L Bridge** (`m4l_device/`): Max for Live Audio Effect on master track, UDP/OSC bridge for deep LOM access
+   - UDP 9880: M4L -> Server (spectral data, responses)
+   - OSC 9881: Server -> M4L (commands)
+   - `livepilot_bridge.js`: 22 bridge commands for LiveAPI access
+   - `SpectralCache`: thread-safe, time-expiring data cache (5s max age)
+   - Bridge is optional — all core tools work without it
+ - **Plugin** (`livepilot/`): Codex plugin (marketplace-compatible: `.Codex-plugin/plugin.json`)
+ - **Installer** (`installer/`): Auto-detects Ableton path, copies Remote Script
+
+ ## Key Rules
+ - ALL Live Object Model (LOM) calls must execute on Ableton's main thread via schedule_message queue
+ - Live 12 minimum — use modern note API (add_new_notes, get_notes_extended, apply_note_modifications)
+ - 168 tools across 17 domains: transport, tracks, clips, notes, devices, scenes, mixing, browser, arrangement, memory, analyzer, automation, theory, generative, harmony, midi_io, perception
+ - JSON over TCP, newline-delimited, port 9878
+ - Structured errors with codes: INDEX_ERROR, NOT_FOUND, INVALID_PARAM, STATE_ERROR, TIMEOUT, INTERNAL
+
+ ## M4L Bridge Notes
+ - OSC addresses must be sent WITHOUT leading `/` — Max `udpreceive` passes `/` as part of the messagename
+ - `str_for_value` requires `call()` not `get()` (it's a function)
+ - `get()` in Max JS LiveAPI always returns arrays
+ - `warp_markers` is a dict property returning JSON string — use `JSON.parse()`
+ - `SimplerDevice.slices` lives on the `sample` child, not the device
+ - `replace_sample` only works on Simplers with existing samples
+ - Max freezes JS from search path cache, not source directory — copy to `~/Documents/Max 8/`
+
+ ## Binary Patching Workflow (.amxd)
+ When modifying .amxd attributes that Max editor won't persist (e.g., `openinpresentation`):
+ 1. Find the byte sequence in the .amxd binary
+ 2. Replace with same-byte-count alternative (file size must not change)
+ 3. Test by loading in Ableton
+ 4. Structure: 24-byte `ampf` header + `ptch` chunk + `mx@c` header + JSON patcher + frozen deps
+
+ ## Tool Count
+ Currently 168 tools. If adding/removing tools, update: README.md, package.json description, livepilot/.Codex-plugin/plugin.json, server.json, livepilot/skills/livepilot-core/SKILL.md, livepilot/skills/livepilot-core/references/overview.md, AGENTS.md, CHANGELOG.md, tests/test_tools_contract.py, docs/manual/index.md, docs/manual/tool-reference.md
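The "JSON over TCP, newline-delimited, port 9878" contract in the Key Rules above can be sketched as a minimal client. This is a sketch under stated assumptions: the `ping` command name and the `{"ok": ..., "result": {"pong": ...}}` envelope are inferred from the `--status` handler elsewhere in this diff, and the request shape (`command`/`params` keys) is a hypothetical illustration, not a documented wire format.

```python
import json
import socket

def send_command(command: str, params: dict,
                 host: str = "127.0.0.1", port: int = 9878) -> dict:
    """Send one newline-delimited JSON request and read one newline-delimited reply.

    The request/response key names here are assumptions for illustration.
    """
    with socket.create_connection((host, port), timeout=5.0) as sock:
        payload = json.dumps({"command": command, "params": params}) + "\n"
        sock.sendall(payload.encode())
        buf = b""
        while not buf.endswith(b"\n"):  # protocol is newline-delimited
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
        return json.loads(buf.decode())
```

A healthy server would answer a `ping` with an `ok`/`pong` envelope, which is exactly what the CLI `--status` check later in this diff validates.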
package/CHANGELOG.md CHANGED
@@ -1,11 +1,30 @@
  # Changelog

- ## 1.8.2 — FluCoMa Wiring + Analyzer Fix (March 2026)
+ ## 1.8.4 — Bug Fix Audit (March 2026)
+
+ **5 bugs fixed (2 P1, 3 P2), verified live in Ableton.**
+
+ ### P1 — Safety-Critical
+ - Fix: `create_arrangement_clip` no longer hangs Ableton when `loop_length` is zero or negative — validation at MCP + Remote Script layers
+ - Fix: `import_midi_to_clip` now preserves the MIDI file's beat grid instead of scaling by session tempo — a 60 BPM MIDI imported at 120 BPM no longer doubles note positions
+
+ ### P2 — Correctness
+ - Fix: `create_arrangement_clip` now sets `loop_end` on duplicated clips when `loop_length < source_length`, with documented LOM limitation for arrangement clip resizing
+ - Fix: `--status` / `--doctor` CLI commands no longer report success for error responses — resolve true only on a valid pong
+ - Fix: `import_midi_to_clip` with `create_clip=True` now checks for existing clips before creating — clears notes if occupied, creates if empty
+
+ ### Tests
+ - 2 new tests for MIDI tempo independence (`test_midi_io.py::TestImportTempoIndependence`)
+ - 255 total tests passing
+
+ ## 1.8.3 — FluCoMa Wiring + Analyzer Fix (March 2026)

  - Fix: wire 6 FluCoMa DSP objects into LivePilot_Analyzer.maxpat (spectral shape, mel bands, chroma, loudness, onset, novelty)
  - Fix: onset/novelty Python handlers now accept 1 arg (fluid.onsetfeature~/noveltyfeature~ output single float)
- - Fix: rebuild .amxd with FluCoMa objects + binary patch openinpresentation
+ - Fix: restore .amxd after binary corruption — .amxd must be rebuilt via Max editor, not programmatic JSON editing
+ - Fix: panel z-order in .maxpat — move background panel first in boxes array so multislider renders on top
  - FluCoMa perception tools now fully functional when FluCoMa package is installed
+ - Note: after installing, rebuild .amxd from .maxpat via Max editor (see BUILD_GUIDE.md)

  ## 1.8.1 — Patch (March 2026)

package/README.md CHANGED
@@ -12,7 +12,6 @@ An agentic production system for Ableton Live 12.
  168 tools. Device atlas. Spectral perception. Technique memory.
  Neo-Riemannian harmony. Euclidean rhythm. Species counterpoint.

- It doesn't assist — it produces.

  <br>

@@ -96,7 +95,7 @@ Every tool maps directly to an LOM call — no abstraction, no guessing.
  The M4L Analyzer sits on the master track. UDP 9880 carries spectral data
  from Max to the server. OSC 9881 sends commands back.

- All 135 core tools work without it — the analyzer adds 20 more
+ All 139 core tools work without it — the analyzer adds 29 more
  and closes the feedback loop.

  <br>

@@ -673,7 +672,7 @@ Check memory before creative decisions. Verify every mutation.

  <br>

- ### Analyzer (20) `[M4L]`
+ ### Analyzer (29) `[M4L]`

  | Tool | Description |
  |------|-------------|

@@ -697,6 +696,26 @@ Check memory before creative decisions. Verify every mutation.
  | `remove_warp_marker` | Remove warp marker |
  | `scrub_clip` | Preview at beat position |
  | `stop_scrub` | Stop preview |
+ | `get_spectral_shape` | 7 spectral descriptors via FluCoMa |
+ | `get_mel_spectrum` | 40-band mel spectrum |
+ | `get_chroma` | 12 pitch class energies |
+ | `get_onsets` | Real-time onset detection |
+ | `get_novelty` | Spectral novelty for section boundaries |
+ | `get_momentary_loudness` | EBU R128 momentary LUFS + peak |
+ | `check_flucoma` | Verify FluCoMa installation |
+ | `capture_audio` | Record master output to WAV |
+ | `capture_stop` | Cancel in-progress capture |
+
+ <br>
+
+ ### Perception (4)
+
+ | Tool | Description |
+ |------|-------------|
+ | `analyze_loudness` | Integrated LUFS, true peak, LRA, streaming compliance |
+ | `analyze_spectrum_offline` | Spectral centroid, rolloff, flatness, 5-band balance |
+ | `compare_to_reference` | Mix vs reference: loudness + spectral delta |
+ | `read_audio_metadata` | Format, duration, sample rate, tags |

  <br>

@@ -753,11 +772,10 @@ Check memory before creative decisions. Verify every mutation.
  ## Coming

  ```
- □ Real-time DSP analysis via LOM meters
- □ M4L bridge expansion — deeper LiveAPI access
- □ Arrangement view — clip placement, tempo automation
- □ Audio clip manipulation — stretch, slice, resample
  □ Plugin parameter mapping — VST/AU deep control
+ □ Audio track freeze/flatten automation
+ □ Clip launch scene matrix operations
+ □ Multi-track arrangement templates
  ```

  <br>
package/bin/livepilot.js CHANGED
@@ -70,7 +70,7 @@ function ensureVenv(systemPython, prefixArgs) {
  // Check if venv already exists and has our deps
  if (fs.existsSync(venvPy)) {
  try {
- execFileSync(venvPy, ["-c", "import fastmcp; import midiutil; import pretty_midi"], {
+ execFileSync(venvPy, ["-c", "import fastmcp; import midiutil; import pretty_midi; import numpy; import pyloudnorm; import soundfile; import scipy"], {
  encoding: "utf-8",
  timeout: 10000,
  stdio: "pipe",

@@ -127,10 +127,12 @@ function checkStatus() {
  sock.on("data", (chunk) => {
  buf += chunk.toString();
  if (buf.includes("\n")) {
+ let ok = false;
  try {
  const resp = JSON.parse(buf.split("\n")[0]);
  if (resp.ok === true && resp.result && resp.result.pong) {
  console.log(" Ableton Live: connected on %s:%d", HOST, PORT);
+ ok = true;
  } else {
  console.log(" Ableton Live: unexpected response:", JSON.stringify(resp));
  }

@@ -138,7 +140,7 @@ function checkStatus() {
  console.log(" Ableton Live: invalid response");
  }
  sock.destroy();
- resolve(true);
+ resolve(ok);
  }
  });

@@ -1,6 +1,6 @@
  {
  "name": "livepilot",
- "version": "1.8.2",
+ "version": "1.8.4",
  "description": "Agentic production system for Ableton Live 12 — 168 tools, 17 domains, device atlas, spectral perception, technique memory, neo-Riemannian harmony, Euclidean rhythm, species counterpoint, MIDI I/O",
  "author": {
  "name": "Pilot Studio"
@@ -149,8 +149,8 @@ Never skip levels. The user's question determines the entry point, but always st
  ### Memory (8)
  `memory_learn` · `memory_recall` · `memory_get` · `memory_replay` · `memory_list` · `memory_favorite` · `memory_update` · `memory_delete`

- ### Analyzer (20) — requires LivePilot Analyzer M4L device on master track
- `get_master_spectrum` · `get_master_rms` · `get_detected_key` · `get_hidden_parameters` · `get_automation_state` · `walk_device_tree` · `get_clip_file_path` · `replace_simpler_sample` · `load_sample_to_simpler` · `get_simpler_slices` · `crop_simpler` · `reverse_simpler` · `warp_simpler` · `get_warp_markers` · `add_warp_marker` · `move_warp_marker` · `remove_warp_marker` · `scrub_clip` · `stop_scrub` · `get_display_values`
+ ### Analyzer (29) — requires LivePilot Analyzer M4L device on master track
+ `get_master_spectrum` · `get_master_rms` · `get_detected_key` · `get_hidden_parameters` · `get_automation_state` · `walk_device_tree` · `get_clip_file_path` · `replace_simpler_sample` · `load_sample_to_simpler` · `get_simpler_slices` · `crop_simpler` · `reverse_simpler` · `warp_simpler` · `get_warp_markers` · `add_warp_marker` · `move_warp_marker` · `remove_warp_marker` · `scrub_clip` · `stop_scrub` · `get_display_values` · `get_spectral_shape` · `get_mel_spectrum` · `get_chroma` · `get_onsets` · `get_novelty` · `get_momentary_loudness` · `check_flucoma` · `capture_audio` · `capture_stop`

  ### Automation (8)
  Clip automation CRUD + intelligent curve generation with 15 built-in recipes.

@@ -175,6 +175,14 @@ Clip automation CRUD + intelligent curve generation with 15 built-in recipes.
  - Clear existing automation before rewriting: `clear_clip_automation` first
  - Load `references/automation-atlas.md` for curve theory, genre recipes, diagnostic technique, and cross-track spectral mapping

+ ### Perception (4) — offline audio analysis, no Ableton connection required
+ `analyze_loudness` · `analyze_spectrum_offline` · `compare_to_reference` · `read_audio_metadata`
+
+ **Key discipline:**
+ - These work on any local audio file — no Ableton connection needed
+ - Use `compare_to_reference` for A/B mix comparisons against reference tracks
+ - Use `analyze_loudness` to check streaming compliance (Spotify, Apple Music, YouTube targets)
+
  ### Theory (7)
  Music theory analysis — built-in pure Python engine, zero external dependencies.
@@ -1,4 +1,4 @@
- # LivePilot v1.8.0 — Architecture & Tool Reference
+ # LivePilot v1.8.4 — Architecture & Tool Reference

  Agentic production system for Ableton Live 12. 168 tools across 17 domains. Device atlas (280+ devices), spectral perception (M4L analyzer), technique memory, automation intelligence (16 curve types, 15 recipes), music theory (Krumhansl-Schmuckler, species counterpoint), generative algorithms (Euclidean rhythm, tintinnabuli, phase shift, additive process), neo-Riemannian harmony (PRL transforms, Tonnetz), MIDI file I/O.

@@ -202,7 +202,7 @@ This turns "set EQ band 3 to -4 dB" into "cut 400 Hz by 4 dB, then read the spec
  | `memory_update` | Updates name, tags, or qualities | `technique_id`, `name`, `tags`, `qualities` |
  | `memory_delete` | Removes technique (backs up first) | `technique_id` |

- ### Analyzer (20) — Real-time DSP analysis (requires LivePilot Analyzer M4L device on master track)
+ ### Analyzer (29) — Real-time DSP analysis (requires LivePilot Analyzer M4L device on master track)

  | Tool | What it does | Key params |
  |------|-------------|------------|
@@ -13,43 +13,56 @@ Run this checklist EVERY time the user says "update everything", "push", "releas
  - [ ] `package-lock.json` → `"version"` (run `npm install --package-lock-only` if stale)
  - [ ] `server.json` → `"version"` (TWO locations: top-level and package)
  - [ ] `livepilot/.claude-plugin/plugin.json` → `"version"`
+ - [ ] `.claude-plugin/marketplace.json` → `"version"` in plugins array
  - [ ] `mcp_server/__init__.py` → `__version__`
- - [ ] `remote_script/LivePilot/__init__.py` → version in log message
+ - [ ] `remote_script/LivePilot/__init__.py` → `__version__` (log message auto-uses it)
  - [ ] `m4l_device/livepilot_bridge.js` → version in ping response
+ - [ ] `CLAUDE.md` → header line
+ - [ ] `livepilot/skills/livepilot-core/references/overview.md` → header line
  - [ ] `CHANGELOG.md` → latest version header
  - [ ] `docs/social-banner.html` → version display
+ - [ ] `docs/M4L_BRIDGE.md` → ping response example

- **How to check:** `grep -rn "1\.[0-9]\.[0-9]" package.json server.json livepilot/.claude-plugin/plugin.json mcp_server/__init__.py remote_script/LivePilot/__init__.py m4l_device/livepilot_bridge.js CHANGELOG.md docs/social-banner.html`
+ **How to check:** `grep -rn "1\.[0-9]\.[0-9]" package.json server.json livepilot/.claude-plugin/plugin.json .claude-plugin/marketplace.json mcp_server/__init__.py remote_script/LivePilot/__init__.py m4l_device/livepilot_bridge.js CHANGELOG.md CLAUDE.md livepilot/skills/livepilot-core/references/overview.md docs/social-banner.html docs/M4L_BRIDGE.md`

  ## 2. Tool Count (must ALL match)

- - [ ] `README.md` every occurrence
- - [ ] `package.json` `"description"`
+ Current: **168 tools across 17 domains**.
+ Core (no M4L): **139**. Analyzer (M4L): **29**. Perception (offline): **4**.
+
+ Verify: `grep -rc "@mcp.tool" mcp_server/tools/ | grep -v ":0" | awk -F: '{sum+=$2} END{print sum}'`
+
+ Files that reference tool count:
+ - [ ] `README.md` — header, PERCEPTION section ("139 core...29 analyzer"), Analyzer table header "(29)", Perception table header "(4)"
+ - [ ] `package.json` → `"description"` (168 tools, 17 domains)
  - [ ] `server.json` → `"description"`
  - [ ] `livepilot/.claude-plugin/plugin.json` → `"description"`
- - [ ] `CLAUDE.md`
- - [ ] `livepilot/skills/livepilot-core/SKILL.md` header and domain breakdown
- - [ ] `livepilot/skills/livepilot-core/references/overview.md`
- - [ ] `docs/manual/index.md`
- - [ ] `docs/manual/tool-reference.md`
- - [ ] `docs/TOOL_REFERENCE.md`
- - [ ] `docs/M4L_BRIDGE.md`
+ - [ ] `.claude-plugin/marketplace.json` → `"description"`
+ - [ ] `CLAUDE.md` "168 tools across 17 domains"
+ - [ ] `livepilot/skills/livepilot-core/SKILL.md` — "168 tools across 17 domains", Analyzer (29), Perception (4)
+ - [ ] `livepilot/skills/livepilot-core/references/overview.md` — "168 tools across 17 domains"
+ - [ ] `docs/manual/index.md` — domain table: Analyzer (29), Perception (4)
+ - [ ] `docs/manual/getting-started.md` — "139 core tools...29 analyzer"
+ - [ ] `docs/manual/tool-reference.md` — all domains present with correct counts
+ - [ ] `docs/TOOL_REFERENCE.md` — all domains present
+ - [ ] `docs/M4L_BRIDGE.md` — "139 core tools...29 analyzer"
  - [ ] `docs/social-banner.html`
  - [ ] `mcp_server/tools/analyzer.py` → module docstring
- - [ ] `tests/test_tools_contract.py` → expected count
+ - [ ] `tests/test_tools_contract.py` → expected total count

- **How to check:** `grep -rn "127\|128\|129\|130\|131\|132\|133\|134" --include="*.md" --include="*.json" --include="*.py" --include="*.html" --include="*.js" . | grep -v node_modules | grep -v .git | grep -v __pycache__`
+ **How to check:** `grep -rn "168\|139\|135\|127\|115\|107" --include="*.md" --include="*.json" --include="*.py" --include="*.html" . | grep -v node_modules | grep -v .git | grep -v __pycache__ | grep -v CHANGELOG`

  ## 3. Domain Count

- - [ ] All files above that mention "11 domains" should say "12 domains"
- - [ ] Domain lists should include: transport, tracks, clips, notes, devices, scenes, mixing, browser, arrangement, automation, memory, analyzer
+ Current: **17 domains**: transport, tracks, clips, notes, devices, scenes, mixing, browser, arrangement, memory, analyzer, automation, theory, generative, harmony, midi_io, perception.
+
+ - [ ] All files that mention domain count say "17 domains"
+ - [ ] Domain lists include ALL 17 (especially perception — it's the newest and most often omitted)

  ## 4. npm Registry

  - [ ] `npm view livepilot version` matches local version
  - [ ] If not: `npm publish`
- - [ ] Verify badge will update: badge URL in README.md points to shields.io/npm/v/livepilot

  ## 5. GitHub

@@ -57,9 +70,6 @@ Run this checklist EVERY time the user says "update everything", "push", "releas
  - [ ] Topics are current (should include: ai, mcp, ableton, livepilot, max-for-live, audio-analysis)
  - [ ] Latest release matches current version (`gh release list`)
  - [ ] Release notes are current
- - [ ] Old stale releases cleaned up
- - [ ] Git tags: only relevant versions exist (`git tag -l`)
- - [ ] No co-author or unwanted metadata in commit messages

  ## 6. Plugin Cache

@@ -74,28 +84,26 @@ Run this checklist EVERY time the user says "update everything", "push", "releas

  ## 8. Documentation Content

- - [ ] `README.md` — features match current capabilities
- - [ ] `docs/manual/getting-started.md` — install instructions current, mentions M4L Analyzer
- - [ ] `docs/manual/tool-reference.md` — all tools listed
- - [ ] `docs/M4L_BRIDGE.md` — architecture accurate
- - [ ] `docs/TOOL_REFERENCE.md` — all tools listed with correct params
+ - [ ] `README.md` — features match current capabilities, "Coming" section is accurate
+ - [ ] `docs/manual/getting-started.md` — install instructions current
+ - [ ] `docs/manual/tool-reference.md` — all 17 domains listed, all 168 tools present
+ - [ ] `docs/TOOL_REFERENCE.md` — all 17 domains present
+ - [ ] `docs/M4L_BRIDGE.md` — architecture accurate, core tool count correct

  ## 9. Derived Artifacts

  - [ ] `m4l_device/LivePilot_Analyzer.amxd` — frozen JS matches source? All commands present?
- - [ ] Distributable zip on Desktop rebuilt with latest?
- - [ ] Private backup repo — synced and pushed?
- - [ ] `LivePilot-v*.INSTALL.txt` — updated?
+ - [ ] If `livepilot_bridge.js` changed, the .amxd needs rebuilding in the Max editor

  ## 10. Code Consistency

  - [ ] `@mcp.tool()` count matches documented tool count: `grep -r "@mcp.tool" mcp_server/tools/ | wc -l`
  - [ ] No dead imports or unused code in recently changed files
  - [ ] Remote script version matches MCP server version
+ - [ ] All tests pass: `python3 -m pytest tests/ -v`

  ## Quick Verify Command

- Run this one-liner to catch most issues:
  ```bash
- echo "=== Versions ===" && grep -h '"version"' package.json server.json livepilot/.claude-plugin/plugin.json | head -5 && grep __version__ mcp_server/__init__.py && echo "=== Tool count ===" && grep -rc "@mcp.tool" mcp_server/tools/ | tail -1 && echo "=== npm ===" && npm view livepilot version 2>/dev/null && echo "=== GitHub release ===" && gh release list --limit 1 && echo "=== Tags ===" && git tag -l
+ echo "=== Versions ===" && grep -h '"version"' package.json server.json livepilot/.claude-plugin/plugin.json .claude-plugin/marketplace.json | head -6 && grep __version__ mcp_server/__init__.py remote_script/LivePilot/__init__.py && echo "=== Tool count ===" && grep -rc "@mcp.tool" mcp_server/tools/ | grep -v ":0" | awk -F: '{sum+=$2} END{print "Total:", sum}' && echo "=== Tests ===" && python3 -m pytest tests/ -q 2>&1 | tail -1
  ```
Binary file
@@ -83,7 +83,7 @@ function anything() {
  function dispatch(cmd, args) {
  switch(cmd) {
  case "ping":
- send_response({"ok": true, "version": "1.8.2"});
+ send_response({"ok": true, "version": "1.8.4"});
  break;
  case "get_params":
  cmd_get_params(args);
@@ -1,2 +1,2 @@
  """LivePilot MCP Server — bridges MCP protocol to Ableton Live."""
- __version__ = "1.8.2"
+ __version__ = "1.8.4"
@@ -97,6 +97,7 @@ class SpectralReceiver(asyncio.DatagramProtocol):
  def __init__(self, cache: SpectralCache):
  self.cache = cache
  self._chunks: dict[str, dict] = {} # Reassembly buffer for chunked responses
+ self._chunk_id = 0
  self._response_callback: Optional[asyncio.Future] = None
  self._capture_future: Optional[asyncio.Future] = None

@@ -106,8 +107,9 @@ class SpectralReceiver(asyncio.DatagramProtocol):
  def datagram_received(self, data: bytes, addr: tuple) -> None:
  try:
  self._parse_osc(data)
- except Exception:
- pass # Malformed packet, ignore
+ except Exception as exc:
+ import sys
+ print(f"LivePilot: malformed OSC packet from {addr}: {exc}", file=sys.stderr)

  def _parse_osc(self, data: bytes) -> None:
  """Parse a minimal OSC message (address + typed args)."""

@@ -227,12 +229,15 @@ class SpectralReceiver(asyncio.DatagramProtocol):
  result = json.loads(decoded)
  if self._response_callback and not self._response_callback.done():
  self._response_callback.set_result(result)
- except Exception:
- pass
+ except Exception as exc:
+ import sys
+ print(f"LivePilot: failed to decode bridge response: {exc}", file=sys.stderr)

  def _handle_chunk(self, index: int, total: int, encoded: str) -> None:
  """Reassemble chunked responses."""
- key = str(total) # Simple key — assumes one response at a time
+ if index == 0:
+ self._chunk_id += 1
+ key = str(self._chunk_id)
  if key not in self._chunks:
  self._chunks[key] = {"parts": {}, "total": total}

@@ -254,8 +259,9 @@ class SpectralReceiver(asyncio.DatagramProtocol):
  result = json.loads(decoded)
  if self._capture_future and not self._capture_future.done():
  self._capture_future.set_result(result)
- except Exception:
- pass
+ except Exception as exc:
+ import sys
+ print(f"LivePilot: failed to decode capture response: {exc}", file=sys.stderr)

  def set_response_future(self, future: asyncio.Future) -> None:
  """Set a future to be resolved with the next response."""

@@ -278,29 +284,31 @@ class M4LBridge:
  self.receiver = receiver
  self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  self._m4l_addr = ("127.0.0.1", 9881)
+ self._cmd_lock = asyncio.Lock()

  async def send_command(self, command: str, *args: Any, timeout: float = 5.0) -> dict:
  """Send an OSC command to the M4L device and wait for the response."""
  if not self.cache.is_connected:
  return {"error": "LivePilot Analyzer not connected. Drop it on the master track."}

- # Create a future for the response
- loop = asyncio.get_running_loop()
- future = loop.create_future()
- if self.receiver:
- self.receiver.set_response_future(future)
-
- # Build and send OSC message (no leading / — Max udpreceive
- # passes messagename with / intact to JS, breaking dispatch)
- osc_data = self._build_osc(command, args)
- self._sock.sendto(osc_data, self._m4l_addr)
-
- # Wait for response with timeout
- try:
- result = await asyncio.wait_for(future, timeout=timeout)
- return result
- except asyncio.TimeoutError:
- return {"error": "M4L bridge timeout — device may be busy or removed"}
+ async with self._cmd_lock:
+ # Create a future for the response
+ loop = asyncio.get_running_loop()
+ future = loop.create_future()
+ if self.receiver:
+ self.receiver.set_response_future(future)
+
+ # Build and send OSC message (no leading / — Max udpreceive
+ # passes messagename with / intact to JS, breaking dispatch)
+ osc_data = self._build_osc(command, args)
+ self._sock.sendto(osc_data, self._m4l_addr)
+
+ # Wait for response with timeout
+ try:
+ result = await asyncio.wait_for(future, timeout=timeout)
+ return result
+ except asyncio.TimeoutError:
+ return {"error": "M4L bridge timeout — device may be busy or removed"}

  async def send_capture(self, command: str, *args: Any, timeout: float = 35.0) -> dict:
  """Send a capture command to the M4L device and wait for /capture_complete."""
@@ -114,7 +114,7 @@ class TechniqueStore:
  ) -> list[dict]:
  """Search techniques. Returns summaries (no payload)."""
  with self._lock:
- results = list(self._data["techniques"])
+ results = copy.deepcopy(self._data["techniques"])

  # filter by type
  if type_filter:

@@ -156,7 +156,7 @@ class TechniqueStore:
  )

  with self._lock:
- results = list(self._data["techniques"])
+ results = copy.deepcopy(self._data["techniques"])

  if type_filter:
  results = [t for t in results if t["type"] == type_filter]
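The `list(...)` → `copy.deepcopy(...)` change matters because a shallow copy shares the inner dicts with the store: any caller that mutates a returned technique would silently corrupt the stored data. A minimal illustration (sample technique data is made up):

```python
import copy

# Stand-in for TechniqueStore._data["techniques"]
techniques = [{"name": "sidechain pump", "tags": ["bass"]}]

shallow = list(techniques)           # new list, but the SAME dict objects
shallow[0]["tags"].append("kick")    # mutates the stored technique too
assert techniques[0]["tags"] == ["bass", "kick"]

deep = copy.deepcopy(techniques)     # fully independent dicts
deep[0]["tags"].append("808")
assert techniques[0]["tags"] == ["bass", "kick"]  # store untouched
```

The deep copy costs more per query, but for a technique store read far more often than it is written, isolation is the safer default.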
@@ -29,7 +29,16 @@ def _kill_port_holder(port: int) -> None:
  for pid_str in out.splitlines():
  pid = int(pid_str)
  if pid != my_pid:
- os.kill(pid, signal.SIGTERM)
+ # Only kill if it looks like a Python/LivePilot process
+ try:
+ cmdline = subprocess.check_output(
+ ["ps", "-p", str(pid), "-o", "command="],
+ text=True, timeout=2,
+ ).strip()
+ if "mcp_server" in cmdline or "livepilot" in cmdline.lower():
+ os.kill(pid, signal.SIGTERM)
+ except (subprocess.CalledProcessError, FileNotFoundError):
+ pass # Can't verify — don't kill
  except (subprocess.CalledProcessError, FileNotFoundError, ValueError):
  pass # lsof not found or no process — nothing to kill
 
@@ -139,6 +148,12 @@ def _get_all_tools():
139
148
  # FastMCP 3.x: mcp._local_provider._components (dict of key -> Tool)
140
149
  if hasattr(mcp, "_local_provider") and hasattr(mcp._local_provider, "_components"):
141
150
  return list(mcp._local_provider._components.values())
151
+ import sys
152
+ print(
153
+ "LivePilot: WARNING — could not access FastMCP tool registry, "
154
+ "string-to-number schema coercion will not work",
155
+ file=sys.stderr,
156
+ )
142
157
  return []
143
158
 
144
159
 
@@ -63,7 +63,8 @@ def _normalize_to_lufs(
  gain_linear = 10 ** (gain_db / 20.0)
  data, sr = _load_audio(file_path)
  normalized = np.clip(data * gain_linear, -1.0, 1.0)
- tmp_path = tempfile.mktemp(suffix=".wav")
+ tmp_fd, tmp_path = tempfile.mkstemp(suffix=".wav")
+ os.close(tmp_fd)
  try:
  sf.write(tmp_path, normalized, sr)
  except Exception:

@@ -155,7 +156,7 @@ def compute_loudness(file_path: str, detail: str = "summary") -> dict[str, Any]:

  # Streaming compliance
  meets_streaming = {
- name: integrated_lufs >= (target - 1.0) # ±1 LU tolerance
+ name: abs(integrated_lufs - target) <= 1.0 # ±1 LU tolerance
  for name, target in STREAMING_TARGETS.items()
  }
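The streaming-compliance fix above is worth spelling out: the old predicate was one-sided, so any master *louder* than the target minus 1 LU "passed" — including a crushed -6 LUFS master. The new predicate requires the measurement to fall inside a ±1 LU band around the target. The target values below are illustrative assumptions based on commonly cited platform norms, not the package's actual `STREAMING_TARGETS` dict:

```python
# Illustrative targets only — the real STREAMING_TARGETS lives in the package.
STREAMING_TARGETS = {"spotify": -14.0, "youtube": -14.0, "apple_music": -16.0}

def meets_target_old(lufs: float, target: float) -> bool:
    return lufs >= (target - 1.0)      # one-sided: anything loud enough "passes"

def meets_target_new(lufs: float, target: float) -> bool:
    return abs(lufs - target) <= 1.0   # true ±1 LU tolerance band

hot_master = -6.0  # heavily limited master, 8 LU above the Spotify target
assert meets_target_old(hot_master, STREAMING_TARGETS["spotify"])      # false positive
assert not meets_target_new(hot_master, STREAMING_TARGETS["spotify"])  # correctly fails
assert meets_target_new(-14.5, STREAMING_TARGETS["spotify"])           # within tolerance
```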
@@ -284,9 +284,15 @@ async def load_sample_to_simpler(
  "uri": uri,
  })

- # Step 2: Replace with the desired sample via M4L bridge
+ # Step 2: Find the newly created device (it's at the end of the chain)
+ track_info = ableton.send_command("get_track_info", {"track_index": track_index})
+ actual_device_index = len(track_info.get("devices", [])) - 1
+ if actual_device_index < 0:
+ actual_device_index = 0
+
+ # Step 3: Replace with the desired sample via M4L bridge
  result = await bridge.send_command(
- "replace_simpler_sample", track_index, device_index, file_path
+ "replace_simpler_sample", track_index, actual_device_index, file_path
  )
  if "error" in result:
  return result
@@ -11,6 +11,7 @@ from typing import Any, Optional
11
11
  from fastmcp import Context
12
12
 
13
13
  from ..server import mcp
14
+ from .notes import _validate_note
14
15
 
15
16
 
16
17
  def _get_ableton(ctx: Context):
@@ -104,7 +105,13 @@ def create_arrangement_clip(
104
105
  length: total clip length in beats on the timeline
105
106
  loop_length: pattern length to loop within the clip (e.g. 8.0 for an
106
107
  8-beat pattern inside a 128-beat section). Defaults to
107
- the source clip's length.
108
+ the source clip's length. Must be > 0.
109
+
110
+ When loop_length < source clip length, overlapping copies are placed
111
+ every loop_length beats. Ableton's "later clip takes priority" rule
112
+ ensures correct playback. Each copy's internal loop region is set to
113
+ loop_length beats. For best results, use loop_length >= source length.
114
+
108
115
  name: optional clip display name
109
116
  color_index: optional 0-69 Ableton color
110
117
 
@@ -124,6 +131,8 @@ def create_arrangement_clip(
124
131
  "length": length,
125
132
  }
126
133
  if loop_length is not None:
134
+ if loop_length <= 0:
135
+ raise ValueError("loop_length must be > 0")
127
136
  params["loop_length"] = loop_length
128
137
  if name:
129
138
  params["name"] = name
@@ -154,6 +163,8 @@ def add_arrangement_notes(
     _validate_clip_index(clip_index)
     if isinstance(notes, str):
         notes = json.loads(notes)
+    for note in notes:
+        _validate_note(note)
     return _get_ableton(ctx).send_command("add_arrangement_notes", {
         "track_index": track_index,
         "clip_index": clip_index,
@@ -23,7 +23,9 @@ def _ensure_list(v: Any) -> list:
     if isinstance(v, str):
         import json
         return json.loads(v)
-    return list(v)
+    if isinstance(v, list):
+        return v
+    return [v]


 @mcp.tool()
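The `_ensure_list` change above fixes scalar handling: the old `list(v)` would iterate a tuple or raise on an int, while the new version wraps non-list values. A standalone sketch of the same behavior (name chosen here for illustration):

```python
import json
from typing import Any


def ensure_list(v: Any) -> list:
    """Normalize tool input to a list.

    JSON strings are parsed, real lists pass through unchanged, and
    anything else is wrapped in a single-element list rather than
    iterated (the old list(v) would split strings-in-tuples or fail
    on scalars such as ints).
    """
    if isinstance(v, str):
        return json.loads(v)
    if isinstance(v, list):
        return v
    return [v]
```

Note that a tuple is now wrapped whole rather than converted element-by-element, which is the intended behavior change.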
@@ -482,7 +484,7 @@ def analyze_for_automation(
         "track_index": track_index,
         "track_name": track_info.get("name", ""),
         "device_count": len(devices),
-        "current_level": meters.get("tracks", [{}])[0].get("level", 0),
+        "current_level": (meters.get("tracks") or [{}])[0].get("level", 0) if meters.get("tracks") else 0,
         "spectrum": spectrum,
         "suggestions": suggestions,
     }
@@ -226,10 +226,10 @@ def suggest_chromatic_mediants(

     mediants = harmony.get_chromatic_mediants(root_pc, quality)

-    chord_pcs = set(harmony.chord_to_midi(root_pc, quality))
+    chord_pcs = {p % 12 for p in harmony.chord_to_midi(root_pc, quality)}
     formatted = {}
     for key, (r, q) in mediants.items():
-        mediant_pcs = set(harmony.chord_to_midi(r, q))
+        mediant_pcs = {p % 12 for p in harmony.chord_to_midi(r, q)}
         common = len(chord_pcs & mediant_pcs)
         formatted[key] = {
             "chord": harmony.chord_to_str(r, q),
@@ -59,6 +59,27 @@ def _validate_midi_path(file_path: str) -> Path:
     return p


+def _midi_notes_to_beats(pm) -> list[dict]:
+    """Convert pretty_midi notes to beat-position dicts using the file's own tempo map.
+
+    Uses time_to_tick/resolution to preserve the MIDI file's beat grid,
+    regardless of the current Ableton session tempo.
+    """
+    notes_raw = []
+    for inst in pm.instruments:
+        for n in inst.notes:
+            start_beat = round(pm.time_to_tick(n.start) / pm.resolution, 3)
+            end_beat = round(pm.time_to_tick(n.end) / pm.resolution, 3)
+            dur_beat = round(end_beat - start_beat, 3)
+            notes_raw.append({
+                "pitch": n.pitch,
+                "start_time": start_beat,
+                "duration": max(dur_beat, 0.001),
+                "velocity": n.velocity,
+            })
+    return notes_raw
+
+
 # -- Tool 1: export_clip_midi ------------------------------------------------

 @mcp.tool()
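The new helper above leans on pretty_midi's tick conversion, which internally walks the file's tempo map. To see why that beats the old single-tempo multiply, here is an independent sketch of the same idea without pretty_midi (the `tempo_map` shape here is an assumption for illustration, not pretty_midi's API):

```python
def seconds_to_beats(t: float, tempo_map: list[tuple[float, float]]) -> float:
    """Convert a time in seconds to a beat position under a tempo map.

    tempo_map is [(start_seconds, bpm), ...] sorted by start time with
    the first entry at 0.0. Each segment contributes
    seconds_in_segment * bpm / 60 beats, so beat positions stay correct
    across tempo changes, which a single session-tempo multiply
    (the old code's n.start * tempo / 60) cannot do.
    """
    beats = 0.0
    for i, (start, bpm) in enumerate(tempo_map):
        end = tempo_map[i + 1][0] if i + 1 < len(tempo_map) else float("inf")
        if t <= start:
            break
        span = min(t, end) - start
        beats += span * bpm / 60.0
    return round(beats, 3)
```

With a constant 120 BPM map both approaches agree; once the file changes tempo mid-stream, only the segment walk lands notes on the right beats.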
@@ -127,8 +148,10 @@ def import_midi_to_clip(
 ) -> dict:
     """Load a .mid file into a session clip.

-    Reads MIDI, converts timing to beats using session tempo, and writes
-    notes into the target clip slot. Creates the clip if needed.
+    Reads MIDI, converts timing to beats using the file's own tempo map,
+    and writes notes into the target clip slot. When create_clip=True
+    (default), creates a new clip if the slot is empty; if a clip already
+    exists, clears its notes before importing.
     """
     pretty_midi = _require_pretty_midi()
     ableton = _get_ableton(ctx)
@@ -136,20 +159,8 @@ def import_midi_to_clip(
     path = _validate_midi_path(file_path)
     pm = pretty_midi.PrettyMIDI(str(path))

-    session = ableton.send_command("get_session_info", {})
-    tempo = float(session.get("tempo", 120.0))
-
-    notes_raw = []
-    for inst in pm.instruments:
-        for n in inst.notes:
-            start_beat = round(n.start * (tempo / 60.0), 3)
-            dur_beat = round((n.end - n.start) * (tempo / 60.0), 3)
-            notes_raw.append({
-                "pitch": n.pitch,
-                "start_time": start_beat,
-                "duration": max(dur_beat, 0.001),
-                "velocity": n.velocity,
-            })
+    # Convert using the MIDI file's own tempo map (not session tempo)
+    notes_raw = _midi_notes_to_beats(pm)

     seen = set()
     notes = []
@@ -165,11 +176,24 @@ def import_midi_to_clip(
                         default=4.0)

     if create_clip:
-        ableton.send_command("create_clip", {
-            "track_index": track_index,
-            "clip_index": clip_index,
-            "length": round(duration_beats, 2),
-        })
+        # Check if clip already exists — only create if the slot is empty
+        try:
+            ableton.send_command("get_clip_info", {
+                "track_index": track_index,
+                "clip_index": clip_index,
+            })
+            # Clip exists — clear its notes before importing
+            ableton.send_command("remove_notes", {
+                "track_index": track_index,
+                "clip_index": clip_index,
+            })
+        except Exception:
+            # No clip in slot — create one
+            ableton.send_command("create_clip", {
+                "track_index": track_index,
+                "clip_index": clip_index,
+                "length": round(duration_beats, 2),
+            })

     if notes:
         ableton.send_command("add_notes", {
@@ -178,10 +202,13 @@ def import_midi_to_clip(
             "notes": notes,
         })

+    tempo_changes = pm.get_tempo_changes()
+    midi_tempo = float(tempo_changes[1][0]) if len(tempo_changes[1]) > 0 else 120.0
+
     return {
         "note_count": len(notes),
         "duration_beats": round(duration_beats, 4),
-        "tempo_source": tempo,
+        "midi_tempo": midi_tempo,
     }

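The `midi_tempo` extraction above unpacks pretty_midi's `get_tempo_changes()`, which returns a pair of parallel arrays (change times, tempi); the length guard covers files with no tempo events. A tiny standalone sketch of that guard (the helper name is illustrative):

```python
def first_tempo(tempo_changes: tuple) -> float:
    """First tempo from a (times, tempi) pair, defaulting to 120 BPM.

    Mirrors the diff's tempo_changes[1][0] lookup: index 1 is the tempi
    array, and an empty array (a file with no tempo events) falls back
    to the MIDI default of 120 BPM instead of raising IndexError.
    """
    _times, tempi = tempo_changes
    return float(tempi[0]) if len(tempi) > 0 else 120.0
```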
@@ -135,6 +135,10 @@ def remove_notes(
     """Remove all MIDI notes in a pitch/time region. Use undo to revert. Defaults remove ALL notes in the clip."""
     _validate_track_index(track_index)
     _validate_clip_index(clip_index)
+    if not 0 <= from_pitch <= 127:
+        raise ValueError("from_pitch must be 0-127")
+    if pitch_span < 1 or pitch_span > 128:
+        raise ValueError("pitch_span must be 1-128")
     params = {
         "track_index": track_index,
         "clip_index": clip_index,
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "livepilot",
-  "version": "1.8.2",
+  "version": "1.8.4",
   "mcpName": "io.github.dreamrec/livepilot",
   "description": "Agentic production system for Ableton Live 12 — 168 tools, 17 domains, device atlas, spectral perception, technique memory, neo-Riemannian harmony, Euclidean rhythm, species counterpoint, MIDI I/O",
   "author": "Pilot Studio",
@@ -5,7 +5,7 @@ Entry point for the ControlSurface. Ableton calls create_instance(c_instance)
 when this script is selected in Preferences > Link, Tempo & MIDI.
 """

-__version__ = "1.8.2"
+__version__ = "1.8.4"

 from _Framework.ControlSurface import ControlSurface
 from .server import LivePilotServer
@@ -34,7 +34,7 @@ class LivePilot(ControlSurface):
         ControlSurface.__init__(self, c_instance)
         self._server = LivePilotServer(self)
         self._server.start()
-        self.log_message("LivePilot v1.8.1 initialized")
+        self.log_message("LivePilot v%s initialized" % __version__)
         self.show_message("LivePilot: Listening on port 9878")

     def disconnect(self):
@@ -61,6 +61,8 @@ def create_arrangement_clip(song, params):

     # Use loop_length as the repeat unit (defaults to source clip length)
     loop_length = float(params.get("loop_length", source_length))
+    if loop_length <= 0:
+        raise ValueError("loop_length must be > 0")

     name = str(params.get("name", ""))
     color_index = params.get("color_index")
@@ -86,6 +88,21 @@ def create_arrangement_clip(song, params):
                 c.name = name
             if color_index is not None:
                 c.color_index = int(color_index)
+
+            # When loop_length < source_length, set the internal
+            # loop region so only loop_length beats of content play.
+            # Arrangement clip timeline length is read-only in the
+            # LOM, but overlapping clips are handled by Ableton
+            # (later clips take priority), so playback is correct.
+            remaining = end_pos - pos
+            target_len = min(loop_length, remaining)
+            if target_len < source_length:
+                try:
+                    c.looping = True
+                    c.loop_start = 0.0
+                    c.loop_end = target_len
+                except (AttributeError, RuntimeError):
+                    pass
             break

     clip_count += 1
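The placement logic the two `create_arrangement_clip` hunks describe (one copy every `loop_length` beats until the section ends, with overlaps resolved by Ableton's later-clip-wins rule) reduces to simple arithmetic. A minimal sketch, assuming beat positions as floats (function name is illustrative):

```python
def copy_positions(start: float, end: float, loop_length: float) -> list[float]:
    """Timeline positions where clip copies are placed.

    One copy lands every loop_length beats from `start` until the
    section end. When loop_length is shorter than the source clip,
    successive copies overlap; per the diff's comment, Ableton gives
    the later clip priority, so each copy effectively plays only its
    first loop_length beats.
    """
    if loop_length <= 0:
        raise ValueError("loop_length must be > 0")
    positions = []
    pos = start
    while pos < end:
        positions.append(pos)
        pos += loop_length
    return positions
```

For a 32-beat section with an 8-beat loop this yields four copies at 0, 8, 16, and 24 beats.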
@@ -12,8 +12,8 @@ from .utils import get_track, get_clip
 @register("get_clip_automation")
 def get_clip_automation(song, params):
     """List automation envelopes on a session clip."""
-    track_index = params["track_index"]
-    clip_index = params["clip_index"]
+    track_index = int(params["track_index"])
+    clip_index = int(params["clip_index"])

     track = get_track(song, track_index)
     clip = get_clip(song, track_index, clip_index)
@@ -80,8 +80,8 @@ def set_clip_automation(song, params):
     parameter_type: "device", "volume", "panning", "send"
     points: [{time, value, duration?}] — time relative to clip start
     """
-    track_index = params["track_index"]
-    clip_index = params["clip_index"]
+    track_index = int(params["track_index"])
+    clip_index = int(params["clip_index"])
     parameter_type = params["parameter_type"]
     points = params["points"]
     device_index = params.get("device_index")
@@ -98,29 +98,23 @@ def set_clip_automation(song, params):
         parameter = track.mixer_device.panning
     elif parameter_type == "send":
         if send_index is None:
-            return {"error": {"code": "INVALID_PARAM",
-                              "message": "send_index required for send automation"}}
+            raise ValueError("send_index required for send automation")
         sends = list(track.mixer_device.sends)
         if send_index < 0 or send_index >= len(sends):
-            return {"error": {"code": "INDEX_ERROR",
-                              "message": "send_index %d out of range" % send_index}}
+            raise IndexError("send_index %d out of range" % send_index)
         parameter = sends[send_index]
     elif parameter_type == "device":
         if device_index is None or parameter_index is None:
-            return {"error": {"code": "INVALID_PARAM",
-                              "message": "device_index and parameter_index required"}}
+            raise ValueError("device_index and parameter_index required")
         devices = list(track.devices)
         if device_index < 0 or device_index >= len(devices):
-            return {"error": {"code": "INDEX_ERROR",
-                              "message": "device_index %d out of range" % device_index}}
+            raise IndexError("device_index %d out of range" % device_index)
         dev_params = list(devices[device_index].parameters)
         if parameter_index < 0 or parameter_index >= len(dev_params):
-            return {"error": {"code": "INDEX_ERROR",
-                              "message": "parameter_index %d out of range" % parameter_index}}
+            raise IndexError("parameter_index %d out of range" % parameter_index)
         parameter = dev_params[parameter_index]
     else:
-        return {"error": {"code": "INVALID_PARAM",
-                          "message": "parameter_type must be device/volume/panning/send"}}
+        raise ValueError("parameter_type must be device/volume/panning/send")

     # Get or create envelope
     song.begin_undo_step()
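The hunk above swaps inline error dicts for raised exceptions, so validation errors short-circuit instead of threading error payloads through every branch; presumably the command dispatcher maps exceptions back to error codes in one place. A standalone sketch of one branch in the new style (helper name and inputs are illustrative):

```python
def resolve_send(sends: list, send_index):
    """Look up a send parameter, raise-based as in the diff.

    ValueError for a missing index, IndexError for an out-of-range one,
    matching the INVALID_PARAM / INDEX_ERROR split the old dict-style
    returns encoded by hand.
    """
    if send_index is None:
        raise ValueError("send_index required for send automation")
    if send_index < 0 or send_index >= len(sends):
        raise IndexError("send_index %d out of range" % send_index)
    return sends[send_index]
```

The design win is that callers inside a `begin_undo_step`/`finally` block can rely on exceptions propagating past the cleanup, rather than having to check every return value for an `"error"` key.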
@@ -158,8 +152,8 @@ def clear_clip_automation(song, params):
     If parameter_type is provided, clears only that parameter's envelope.
     If omitted, clears ALL envelopes on the clip.
     """
-    track_index = params["track_index"]
-    clip_index = params["clip_index"]
+    track_index = int(params["track_index"])
+    clip_index = int(params["clip_index"])
     parameter_type = params.get("parameter_type")

     track = get_track(song, track_index)
@@ -184,31 +178,25 @@
         elif parameter_type == "send":
             send_index = params.get("send_index")
             if send_index is None:
-                return {"error": {"code": "INVALID_PARAM",
-                                  "message": "send_index required for send automation"}}
+                raise ValueError("send_index required for send automation")
             sends = list(track.mixer_device.sends)
             if send_index < 0 or send_index >= len(sends):
-                return {"error": {"code": "INDEX_ERROR",
-                                  "message": "send_index %d out of range" % send_index}}
+                raise IndexError("send_index %d out of range" % send_index)
             parameter = sends[send_index]
         elif parameter_type == "device":
             device_index = params.get("device_index")
             parameter_index = params.get("parameter_index")
             if device_index is None or parameter_index is None:
-                return {"error": {"code": "INVALID_PARAM",
-                                  "message": "device_index and parameter_index required"}}
+                raise ValueError("device_index and parameter_index required")
             devices = list(track.devices)
             if device_index < 0 or device_index >= len(devices):
-                return {"error": {"code": "INDEX_ERROR",
-                                  "message": "device_index %d out of range" % device_index}}
+                raise IndexError("device_index %d out of range" % device_index)
             dev_params = list(devices[device_index].parameters)
             if parameter_index < 0 or parameter_index >= len(dev_params):
-                return {"error": {"code": "INDEX_ERROR",
-                                  "message": "parameter_index %d out of range" % parameter_index}}
+                raise IndexError("parameter_index %d out of range" % parameter_index)
             parameter = dev_params[parameter_index]
         else:
-            return {"error": {"code": "INVALID_PARAM",
-                              "message": "Unknown parameter_type"}}
+            raise ValueError("Unknown parameter_type")

         clip.clear_envelope(parameter)
     finally:
@@ -101,7 +101,8 @@ def get_track_info(song, params):
         result["arm"] = None
         result["has_midi_input"] = None
         result["has_audio_input"] = None
-        result["is_return_track"] = True
+        result["is_return_track"] = track_index != -1000
+        result["is_master_track"] = track_index == -1000

     return result
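The final hunk fixes a misreport: the old code flagged every non-regular track, including the master, as a return track. Reading the new code, -1000 appears to be the sentinel index this codebase uses for the master track, so both flags key off it. A standalone sketch of that classification (the sentinel interpretation is inferred from the diff, and the helper name is illustrative):

```python
MASTER_TRACK_INDEX = -1000  # sentinel this codebase appears to use for the master track


def track_flags(track_index: int) -> dict:
    """Classify a non-regular track as return or master.

    The old code hardcoded is_return_track=True, so get_track_info on
    the master track claimed it was a return track; keying both flags
    off the sentinel makes the two cases mutually exclusive.
    """
    return {
        "is_return_track": track_index != MASTER_TRACK_INDEX,
        "is_master_track": track_index == MASTER_TRACK_INDEX,
    }
```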