livepilot 1.9.20 → 1.9.22

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -10,7 +10,7 @@
  {
  "name": "livepilot",
  "description": "Agentic production system for Ableton Live 12 — 237 tools, 32 domains, device atlas, spectral perception, technique memory, neo-Riemannian harmony, Euclidean rhythm, species counterpoint, MIDI I/O",
- "version": "1.9.20",
+ "version": "1.9.22",
  "author": {
  "name": "Pilot Studio"
  },
package/CHANGELOG.md CHANGED
@@ -1,5 +1,22 @@
  # Changelog

+ ## 1.9.22 — Skill & Command Overhaul (April 2026)
+
+ ### Skill Updates
+ - **feat(beat):** Added Step 0 "Session Prep" — for fresh projects, deletes all tracks and loads the M4L Analyzer on master before starting. Includes perception check (Step 11) with spectral balance verification.
+ - **feat(mix):** Added analyzer auto-load (Step 2), spectral targets by genre (Step 6), mandatory meter verification after every change (Step 8), capture+analyze loop (Step 11)
+ - **feat(sounddesign):** Added mandatory `value_string` verification (Step 5), perception check (Step 11), organic movement with perlin curves (Step 8)
+ - **feat(arrange):** Added motif detection (Step 3), gesture template execution (Step 7), perlin organic movement (Step 8), emotional arc verification (Step 9), LRA check for dynamic range (Step 10)
+ - **feat(evaluate):** Added analyzer auto-load (Step 2), full perception snapshot with track meters (Step 6), offline capture+analyze as a ground-truth option
+
+ ## 1.9.21 — Verification Discipline Pass (April 2026)
+
+ ### Systemic Fixes
+ - **fix(devices):** `set_device_parameter` and `batch_set_parameters` now return `value_string`, `min`, and `max` in the response — the agent immediately sees "26.0 Hz" instead of just "75" and can catch nonsensical values
+ - **fix(automation):** `apply_automation_recipe` now auto-scales 0-1 recipe curves to the target parameter's actual native range (e.g., Auto Filter 20-135, Bit Depth 1-16). Previously, a "0.3 center" vinyl_crackle on a 20-135 range wrote 0.3 literally, killing the audio
+ - **fix(automation):** `auto_pan` recipe pan values are now clamped to ±0.6 to prevent a full L/R swing that makes tracks disappear from one channel
+ - **docs(skill):** Added Golden Rules 15-16 — mandatory post-write verification protocol: always read `value_string`, always check track meters after filter/effect changes, never apply automation recipes without understanding the target parameter's range
+
  ## 1.9.20 — Deep Production Test Pass (April 2026)

  ### New Tool
@@ -1,6 +1,6 @@
  {
  "name": "livepilot",
- "version": "1.9.20",
+ "version": "1.9.22",
  "description": "Agentic production system for Ableton Live 12 — 237 tools, 32 domains, device atlas, spectral perception, technique memory, neo-Riemannian harmony, Euclidean rhythm, species counterpoint, MIDI I/O",
  "author": {
  "name": "Pilot Studio"
@@ -1,6 +1,6 @@
  {
  "name": "livepilot",
- "version": "1.9.20",
+ "version": "1.9.22",
  "description": "Agentic production system for Ableton Live 12 — 237 tools, 32 domains, device atlas, spectral perception, technique memory, neo-Riemannian harmony, Euclidean rhythm, species counterpoint, MIDI I/O",
  "author": {
  "name": "Pilot Studio"
@@ -5,15 +5,24 @@ description: Guided arrangement — build song structure with sections, transiti

  Guide the user through arranging their session into a full song structure.

- 1. **Read the session** — `get_session_info` to see all tracks and clips
- 2. **Analyze current structure** — `get_section_graph` to see inferred sections from scene names. If no sections, `get_scenes_info` for raw scene data.
- 3. **Ask about the target structure** — what form? (verse-chorus, AABA, build-drop, through-composed). What energy arc? (gradual build, peaks and valleys, flat)
- 4. **Plan the arrangement** — `plan_arrangement` with the target structure. Review the proposed section order and transitions.
- 5. **Build sections** — for each section, create or duplicate scenes, set scene names and colors. Use `duplicate_scene` for variations.
- 6. **Check transitions** — `analyze_transition(from_section, to_section)` for each adjacent pair. `score_transition` to evaluate quality.
- 7. **Plan weak transitions** — `plan_transition` for any scored below 0.6. Execute the suggested gestures (filter sweeps, energy ramps, rhythmic fills).
- 8. **Check emotional arc** — `get_emotional_arc` to verify the energy flow matches the target.
- 9. **Translation check** — `check_translation` to verify mono compatibility and spectral consistency across sections.
- 10. **Record to arrangement** — when the user is happy, guide them through `back_to_arranger` and recording the session performance to the timeline.
+ 1. **Read the session** — `get_session_info` to see all tracks and clips. `get_scene_matrix` for the full clip grid.
+ 2. **Analyze current structure** — `get_section_graph` to see inferred sections from scene names. `get_emotional_arc` to understand current energy flow.
+ 3. **Detect motifs** — `get_motif_graph` to find recurring patterns, fatigue risk, and suggested developments (inversion, augmentation, fragmentation).
+ 4. **Ask about the target structure** — what form? (verse-chorus, AABA, build-drop, through-composed). What energy arc? (gradual build, peaks and valleys, flat)
+ 5. **Plan the arrangement** — `plan_arrangement` with the target bar count and style. Review the proposed section order, transitions, and gesture templates.
+ 6. **Build sections** — for each section, create or duplicate scenes, set scene names and colors. Use `duplicate_scene` for variations. Use `transform_motif` for melodic development (inversion, retrograde, diminution, augmentation).
+ 7. **Apply gesture templates** — for each transition, use `apply_gesture_template` with the planned template:
+ - `pre_arrival_vacuum` — pull energy back before a drop
+ - `harmonic_tint_rise` — gradually open filters for intro/build sections
+ - `re_entry_spotlight` — darken then snap back to highlight returning elements
+ - `tension_ratchet` — stepped build in 4-bar stages
+ - `outro_decay_dissolve` — gradual dissolution for endings
+ - `phrase_end_throw` — reverb/delay throw at section boundaries
+ Then execute each gesture plan using `apply_automation_shape` with the suggested curve_family.
+ 8. **Add organic movement** — use `apply_automation_shape` with `curve_type="perlin"` on filter/send/effect parameters so nothing repeats exactly the same each cycle.
+ 9. **Check emotional arc** — `get_emotional_arc` to verify the energy flow matches the target. Address any issues flagged (no_clear_build, peak_too_early, no_resolution).
+ 10. **Verify with perception** — `get_master_spectrum` during each section to confirm spectral differentiation. `capture_audio` + `analyze_loudness` to check LRA (should be >2 LU for dynamic arrangements).
+ 11. **Translation check** — `check_translation` to verify mono compatibility and spectral consistency across sections.
+ 12. **Record to arrangement** — when satisfied, use `create_arrangement_clip` to lay out clips on the timeline, or guide the user through `back_to_arranger` and recording the session performance.

  Use the livepilot-composition-engine skill for section analysis and transition planning. Present the arrangement as a visual timeline to the user.
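Step 8's "perlin" organic movement can be sketched without the real curve engine. Below is a minimal value-noise stand-in — seeded random anchors with cosine interpolation, which captures the "smooth but never-repeating" property the skill describes. The function name, point format (`{"time", "value"}`), and parameters are illustrative, not LivePilot APIs.

```python
import math
import random

def value_noise_curve(duration_beats, n_points=64, n_anchors=9, seed=7):
    """Smoothly interpolated random curve — a simplified stand-in for
    Perlin-style 'organic movement' automation."""
    rng = random.Random(seed)
    # Random anchor values in [0, 1); the curve glides between them.
    anchors = [rng.random() for _ in range(n_anchors)]
    points = []
    for i in range(n_points):
        t = i / (n_points - 1)                       # 0..1 along the curve
        pos = t * (n_anchors - 1)
        lo = int(pos)
        hi = min(lo + 1, n_anchors - 1)
        frac = pos - lo
        smooth = (1 - math.cos(frac * math.pi)) / 2  # cosine ease between anchors
        value = anchors[lo] * (1 - smooth) + anchors[hi] * smooth
        points.append({"time": t * duration_beats, "value": value})
    return points

curve = value_noise_curve(8.0, seed=42)  # one 8-beat cycle of movement
```

A different seed per cycle (or per parameter) is what keeps each repeat subtly different.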
@@ -5,26 +5,73 @@ description: Guided beat creation — create a beat from scratch with genre, tem

  Guide the user through creating a beat from scratch. Follow these steps:

- 1. **Ask about the vibe** genre, tempo range, mood, reference tracks
- 2. **Set up the session** — `set_tempo`, create tracks for drums/bass/harmony/melody with `create_midi_track`, name and color them
- 3. **Load instruments** use `find_and_load_device` for appropriate instruments per track
- 4. **Verify device health** — after loading, run `get_device_info` on each loaded device. A `parameter_count` of 0 or 1 on AU/VST plugins means the plugin failed to initialize (dead plugin). If detected:
- - Delete the dead plugin with `delete_device`
- - Replace with a native Ableton alternative (e.g., Saturator instead of tape plugins, Operator instead of failed FM synths)
- - Run `get_session_diagnostics` for any other issues (armed tracks, missing instruments)
- - Inform the user which plugin failed and what replacement was used
- 5. **Program drums first** — create a clip, add kick/snare/hat patterns with `add_notes`
- 6. **Add bass** — create clip, program a bassline that locks with the kick
- 7. **Add harmony** — chords or pads that set the mood
- 8. **Add melody** — top-line or lead element
- 9. **Mix** balance levels with `set_track_volume` and `set_track_pan`
- 10. **Pitch & tuning audit** — MANDATORY before firing. Run on every melodic track (skip drums):
- - `identify_scale` on each track — verify all tracks agree on the same tonal center
- - `analyze_harmony` on chordal tracks (pads, keys) — verify chord quality (no accidental augmented/diminished chords unless intentional)
- - `detect_theory_issues` with `strict=true` on each track check for out-of-key notes, parallel fifths, voice crossing
- - **Interpret results against the intended scale**, not just C major. The analyzer only knows 7 standard modes — exotic scales (Hijaz, Hungarian minor, whole tone, etc.) will produce false "out of key" warnings. Cross-reference flagged notes against the intended scale manually.
- - Report a clear tuning table to the user: which tracks are clean, which have issues, what the issues are
- - Fix wrong notes with `modify_notes` before proceeding
- 11. **Fire the scene** to listen, iterate based on feedback
+ ## Step 0: Session Prep (fresh projects only)
+
+ If the user asks for a **fresh start** (new track, clean slate, start from scratch):
+
+ 1. **Read the session** — `get_session_info` to see what exists
+ 2. **Delete all existing tracks** — loop through all tracks with `delete_track`, starting from the highest index down to 0 (deleting from the top prevents index shifts)
+ 3. **Load the M4L Analyzer on master** — `find_and_load_device(track_index=-1000, device_name="LivePilot_Analyzer")`. This enables real-time spectral analysis, RMS metering, and key detection for the entire session. If it fails, try `search_browser(path="max_for_live", name_filter="LivePilot")` to find the URI and load manually.
+ 4. **Set up master chain** — load Glue Compressor + EQ Eight + Utility on master for bus processing
+ 5. **Verify analyzer** — wait 2 seconds, then call `get_master_spectrum`. If it returns data, the bridge is connected. If it errors with "UDP bridge not connected", call `reconnect_bridge` to rebind.
+
+ If the user is adding to an **existing project**, skip Step 0 — just call `get_session_info` and work with what's there.
+
+ ## Step 1: Ask about the vibe
+ Genre, tempo range, mood, reference tracks.
+
+ ## Step 2: Set up the session
+ `set_tempo`, create tracks for drums/bass/harmony/melody with `create_midi_track`, name and color them.
+
+ ## Step 3: Load instruments
+ Use `search_browser` to find devices by name, `load_browser_item` or `find_and_load_device` to load them.
+
+ ## Step 4: Verify device health
+ After loading, run `get_device_info` on each loaded device. A `parameter_count` of 0 or 1 on AU/VST plugins means the plugin failed to initialize (dead plugin). If detected:
+ - Delete the dead plugin with `delete_device`
+ - Replace with a native Ableton alternative (e.g., Saturator instead of tape plugins, Operator instead of failed FM synths)
+ - Run `get_session_diagnostics` for any other issues (armed tracks, missing instruments)
+ - Inform the user which plugin failed and what replacement was used
+
+ ## Step 5: Program drums first
+ Create a clip, add kick/snare/hat patterns with `add_notes`.
+
+ ## Step 6: Add bass
+ Create a clip, program a bassline that locks with the kick.
+
+ ## Step 7: Add harmony
+ Chords or pads that set the mood.
+
+ ## Step 8: Add melody
+ Top-line or lead element.
+
+ ## Step 9: Mix + VERIFY
+ Balance levels with `set_track_volume` and `set_track_pan`.
+
+ **MANDATORY after every parameter change:**
+ - Read `value_string` in the response to confirm the actual Hz/dB/% value makes sense
+ - Call `get_track_meters(include_stereo=true)` and verify each track has non-zero output
+ - If a track's stereo output drops to 0, the effect is killing the signal — check `get_device_parameters` for `value_string`, fix, re-verify
+ - Parameter ranges are NOT always 0-1. Always read `value_string`.
+
+ ## Step 10: Pitch & tuning audit
+ MANDATORY before firing. Run on every melodic track (skip drums):
+ - `identify_scale` on each track — verify all tracks agree on the same tonal center
+ - `analyze_harmony` on chordal tracks — verify chord quality
+ - `detect_theory_issues` with `strict=true` on each track — check for out-of-key notes, parallel fifths, voice crossing
+ - **Interpret results against the intended scale**, not just C major
+ - Report a clear tuning table to the user: which tracks are clean, which have issues
+ - Fix wrong notes with `modify_notes` before proceeding
+
+ ## Step 11: Perception check
+ If the M4L Analyzer is connected:
+ - `get_master_spectrum` — check spectral balance (sub should be present but not >60% for most genres)
+ - `get_master_rms` — check levels aren't clipping
+ - `get_detected_key` — verify the analyzer agrees with the intended key
+
+ If not connected, use `capture_audio` + `analyze_loudness` + `analyze_spectrum_offline` for offline analysis.
+
+ ## Step 12: Fire the scene
+ Fire to listen, iterate based on feedback.

  Use the livepilot-core skill for all tool calls. Verify after each step. Keep the user informed of what you're doing and why.
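Step 0's "delete from the highest index down" advice can be sketched in a few lines. The helper below is hypothetical (it takes any `delete_track` callable rather than the LivePilot tool), but the index arithmetic is the point: deleting top-down means each removal only shifts indices above the one just deleted, which have already been processed.

```python
def clear_session(tracks, delete_track):
    """Delete every track from the highest index down to 0.

    Going top-down keeps the remaining (lower) indices valid after each
    deletion. `delete_track` is a stand-in for the tool of the same name.
    """
    for index in range(len(tracks) - 1, -1, -1):
        delete_track(index)

# Simulate against a plain list to show the index safety:
session = ["drums", "bass", "pads", "lead"]
clear_session(session, lambda i: session.pop(i))
```

Deleting bottom-up with the same indices would raise an IndexError halfway through, because each `pop(0)`-style removal renumbers everything above it.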
@@ -11,28 +11,38 @@ Run the universal evaluation loop on recent production changes.
  - `judgment_only` — no analyzer, parameter-level heuristics only
  - `read_only` — session disconnected

- 2. **Get the last move** — `get_last_move` to understand what was changed. If no recent move, `get_recent_actions` for history.
+ 2. **Ensure analyzer** — if mode is `judgment_only`, try to get full perception:
+ - `find_and_load_device(track_index=-1000, device_name="LivePilot_Analyzer")`
+ - Wait 2s, then `get_master_spectrum` to test the bridge
+ - If bridge disconnected: `reconnect_bridge`
+ - If still unavailable: proceed with `judgment_only` but tell the user

- 3. **Ask what the goal was** — what were they trying to achieve? More clarity? Wider stereo? Punchier drums?
+ 3. **Get the last move** — `get_last_move` to understand what was changed. If no recent move, `get_recent_actions` for history.

- 4. **Compile the goal** — `compile_goal_vector(goal_description, mode="improve")`
+ 4. **Ask what the goal was** — what were they trying to achieve? More clarity? Wider stereo? Punchier drums?

- 5. **Capture current state** — `get_master_spectrum` + `get_master_rms` + `get_mix_snapshot`
+ 5. **Compile the goal** — `compile_goal_vector(goal_description, mode="improve")`

- 6. **Undo the change** — `undo()` to restore the before state
+ 6. **Capture current state** — full perception snapshot:
+ - `get_master_spectrum` + `get_master_rms` (if analyzer available)
+ - `get_track_meters(include_stereo=true)` — verify all tracks producing audio
+ - `get_mix_snapshot` — full volume/pan/send state
+ - Optionally: `capture_audio` + `analyze_loudness` + `analyze_spectrum_offline` for ground truth

- 7. **Capture before state** — same reads as step 5
+ 7. **Undo the change** — `undo()` to restore the before state

- 8. **Redo the change** — `redo()` to restore the after state
+ 8. **Capture before state** — same reads as step 6

- 9. **Evaluate** — `evaluate_move(before_snapshot, after_snapshot, goal)` or use engine-specific:
- - Mix changes: `evaluate_mix_move`
- - Composition changes: `evaluate_composition_move`
- - Multi-dimensional: `evaluate_with_fabric`
+ 9. **Redo the change** — `redo()` to restore the after state

- 10. **Report results** — show: score (0-1), keep_change recommendation, goal_progress, collateral_damage, dimension changes
+ 10. **Evaluate** — `evaluate_move(before_snapshot, after_snapshot, goal)` or use engine-specific:
+ - Mix changes: `evaluate_mix_move`
+ - Composition changes: `evaluate_composition_move`
+ - Multi-dimensional: `evaluate_with_fabric`

- 11. **Act on recommendation:**
+ 11. **Report results** — show: score (0-1), keep_change recommendation, goal_progress, collateral_damage, dimension changes
+
+ 12. **Act on recommendation:**
  - If `keep_change=true` — keep, suggest `memory_learn` if score > 0.7
  - If `keep_change=false` — `undo()`, explain why (collateral damage, goal regression)

@@ -6,14 +6,23 @@ description: Mixing assistant — analyze and balance track levels, panning, and
  Help the user mix their session. Follow these steps:

  1. **Read the session** — `get_session_info` to see all tracks
- 2. **Analyze each track** — `get_track_info` for clip and device details, check current volume/pan
- 3. **Quick mix status** — `get_mix_summary` for a fast overview (track count, dynamics, stereo, issues)
- 4. **Run critics** — `get_mix_issues` to detect problems (masking, dynamics, stereo width, headroom)
- 5. **Suggest a mix** — propose volume levels, panning positions, and send amounts based on the track types, instruments, and detected issues
- 6. **Apply with confirmation** — only change levels after the user approves each suggestion
- 7. **Check return tracks** `get_return_tracks` to see shared effects
- 8. **Master chain** `get_master_track` to review the master
- 9. **Evaluate** after changes, `evaluate_mix_move` with before/after snapshots to verify improvement
+ 2. **Ensure analyzer** — check if the M4L Analyzer is on master. If `get_master_spectrum` errors, load it: `find_and_load_device(track_index=-1000, device_name="LivePilot_Analyzer")`. If the bridge is disconnected, try `reconnect_bridge`.
+ 3. **Analyze each track** — `get_track_info` for clip and device details, `get_track_meters(include_stereo=true)` for actual output levels
+ 4. **Quick mix status** — `get_mix_summary` for a fast overview (track count, dynamics, stereo, issues)
+ 5. **Run critics** — `get_mix_issues` to detect problems (masking, dynamics, stereo width, headroom). For frequency collisions: `get_masking_report`
+ 6. **Spectral check** — `get_master_spectrum` for 8-band frequency balance. Typical targets:
+ - Hip-hop: sub dominant, centroid 400-800 Hz
+ - Electronic: balanced, centroid 800-1500 Hz
+ - Ambient: mid-focused, low sub, centroid 500-1000 Hz
+ 7. **Suggest a mix** — propose volume levels, panning positions, and send amounts based on the track types, instruments, and detected issues
+ 8. **Apply with verification** — after EVERY volume/pan/send change:
+ - Read `value_string` in the response
+ - Call `get_track_meters(include_stereo=true)` to verify no track went silent
+ - Check the master spectrum to verify the balance improved
+ 9. **Check return tracks** — `get_return_tracks` to see shared effects
+ 10. **Master chain** — `get_master_track` to review the master. Typical chain: Glue Compressor → EQ Eight → Utility → LivePilot_Analyzer
+ 11. **Capture + analyze** — `capture_audio` then `analyze_loudness` for LUFS/peak/LRA, `analyze_spectrum_offline` for centroid/rolloff/balance
+ 12. **Evaluate** — `evaluate_mix_move` with before/after snapshots to verify improvement. If `keep_change` is false, `undo` immediately.

  Present suggestions in a clear table format. Always explain the reasoning (e.g., "panning the hi-hats slightly right to create stereo width"). Use `undo` if the user doesn't like a change.

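The genre targets in the spectral-check step can be turned into a simple gate. The helper below is hypothetical (LivePilot exposes no such function, and the band-dict shape plus the ambient sub cap of 20% are illustrative assumptions); the centroid windows and the 60% sub ceiling mirror the skill text.

```python
def check_spectral_targets(bands, centroid_hz, genre):
    """Compare a spectrum snapshot against rough per-genre targets.

    bands: dict of band name -> relative energy (fractions summing to 1.0)
    centroid_hz: spectral centroid in Hz
    Returns a list of human-readable issues (empty = within targets).
    """
    targets = {
        "hip-hop":    {"centroid": (400, 800),  "sub_max": None},  # sub-dominant is fine
        "electronic": {"centroid": (800, 1500), "sub_max": 0.60},
        "ambient":    {"centroid": (500, 1000), "sub_max": 0.20},  # assumed cap for "low sub"
    }
    t = targets[genre]
    issues = []
    lo, hi = t["centroid"]
    if not lo <= centroid_hz <= hi:
        issues.append(f"centroid {centroid_hz:.0f} Hz outside {lo}-{hi} Hz")
    if t["sub_max"] is not None and bands.get("sub", 0.0) > t["sub_max"]:
        issues.append(f"sub energy {bands['sub']:.0%} above {t['sub_max']:.0%}")
    return issues
```

These are starting points for a conversation with the user, not hard rules — a deliberately dark ambient piece can sit outside every window.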
@@ -8,14 +8,22 @@ Guide the user through designing a sound. Follow these steps:
  1. **Ask about the target sound** — what character? (warm pad, aggressive bass, shimmering lead, atmospheric texture, etc.)
  2. **Choose an instrument** — pick the right synth for the job, load it with `search_browser` → `load_browser_item`
  3. **Verify device loaded** — `get_device_info` to confirm plugin initialized (AU/VST with `parameter_count` <= 1 = dead plugin — delete and replace with native alternative)
- 4. **Get parameters** — `get_device_parameters` to see what's available
+ 4. **Get parameters** — `get_device_parameters` to see what's available. **Read `value_string`** to understand actual units (Hz, dB, %, ms) — raw values are often NOT in human units.
  5. **Shape the sound** — `set_device_parameter` or `batch_set_parameters` to dial in the character
+ - **ALWAYS read `value_string` in the response** to confirm the actual value makes sense
+ - **ALWAYS call `get_track_meters(include_stereo=true)` after filter/effect changes** — verify the track still produces audio
+ - If stereo output drops to 0, the effect is killing the signal. Check `value_string`, fix, re-verify.
  6. **Run critics** — `analyze_sound_design(track_index)` to check for static timbre, missing modulation, spectral imbalance
- 7. **Address issues** — `plan_sound_design_move(track_index)` for suggested improvements
- 8. **Add effects** — load effects (reverb, delay, chorus, distortion, etc.) and tweak their parameters
+ 7. **Address issues** — `plan_sound_design_move(track_index)` for suggested improvements, `get_patch_model` to see the chain structure
+ 8. **Add effects** — load effects and tweak. For automation, use `apply_automation_shape` with musically appropriate curve types:
+ - Filter sweeps → exponential (perceptually even)
+ - Volume fades → logarithmic (matches the ear's response)
+ - Organic movement → perlin (never repeats exactly)
+ - If using `apply_automation_recipe`, verify the recipe didn't push parameters to extremes
  9. **Create a test pattern** — `create_clip` + `add_notes` with a simple pattern to audition
  10. **Fire the clip** to listen, iterate based on feedback
- 11. **Evaluate** — `evaluate_move(engine="sound_design")` with before/after to verify improvement
+ 11. **Perception check** — `get_master_spectrum` or `capture_audio` + `analyze_spectrum_offline` to verify the sound sits correctly in the frequency spectrum
+ 12. **Evaluate** — `evaluate_move` with before/after snapshots to verify improvement

  Explain what each parameter does as you adjust it. Use `undo` liberally if something sounds wrong.

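Why "filter sweeps → exponential": hearing is logarithmic in frequency, so equal *ratios* per step sound even while equal *differences* do not. A quick sketch contrasting the two (plain helper functions, not LivePilot curve types):

```python
def exponential_sweep(f_start, f_end, n):
    """Equal-ratio steps from f_start to f_end in Hz. Each step multiplies
    by the same factor, so each covers the same musical interval —
    a perceptually even filter sweep."""
    ratio = (f_end / f_start) ** (1 / (n - 1))
    return [f_start * ratio ** i for i in range(n)]

def linear_sweep(f_start, f_end, n):
    """Equal-difference steps — rushes through the musically wide low
    octaves and crawls through the narrow top ones."""
    step = (f_end - f_start) / (n - 1)
    return [f_start + step * i for i in range(n)]

# 200 Hz -> 3200 Hz in 5 steps: the exponential sweep moves exactly one
# octave (x2) per step; the linear sweep covers >2 octaves in its first
# two steps and well under one in its last.
exp_steps = exponential_sweep(200, 3200, 5)
lin_steps = linear_sweep(200, 3200, 5)
```

The same reasoning argues for logarithmic volume fades: loudness perception tracks dB, not linear gain.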
@@ -27,6 +27,13 @@ Agentic production system for Ableton Live 12. 237 tools across 32 domains, thre
  12. **ALWAYS report tool errors** — never silently swallow errors. Include: tool name, error message, fallback plan
  13. **Verify plugin health after loading** — check `health_flags`, `mcp_sound_design_ready`, `plugin_host_status`. If `parameter_count` <= 1 on AU/VST → dead plugin, delete and replace
  14. **Use `C hijaz` for Hijaz/Phrygian Dominant keys** — avoids false out-of-key warnings
+ 15. **VERIFY AFTER EVERY WRITE** — mandatory, non-negotiable:
+ - After `set_device_parameter` or `batch_set_parameters`: read `value_string` in the response to confirm the actual Hz/dB/% value makes sense
+ - After any filter, EQ, or effect parameter change: call `get_track_meters(include_stereo=true)` and verify the target track has non-zero left AND right levels
+ - After `apply_automation_recipe`: check that the recipe didn't push the parameter to an extreme that kills audio
+ - If a track's stereo output drops to 0: the effect is killing the signal — check `get_device_parameters` for `value_string`, fix, re-verify
+ - **Parameter ranges are NOT always 0-1.** Auto Filter Frequency is 20-135. Bit Depth is 1-16. Always read `value_string` to see actual units.
+ 16. **NEVER apply automation recipes without understanding the target parameter's range** — recipes generate 0-1 curves that are auto-scaled for device parameters, but always verify the result

  ## Tool Speed Tiers

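Golden Rule 15 is essentially a two-point checklist, which can be expressed as a function. This is a hypothetical helper for illustration — `response` is the dict the patched `set_device_parameter` returns, and the flat `{"left", "right"}` meter shape is an assumption about `get_track_meters(include_stereo=true)` output, not a documented schema.

```python
def verify_write(response, meters, track_index):
    """Post-write checks per Golden Rule 15. Returns a list of problems
    (empty list = the write verified cleanly)."""
    problems = []
    # 1. The write must echo a human-readable value we can sanity-check
    #    ("26.0 Hz" rather than a bare raw number).
    if "value_string" not in response:
        problems.append("no value_string in response — cannot verify units")
    # 2. The track must still produce audio on BOTH channels.
    left = meters.get("left", 0.0)
    right = meters.get("right", 0.0)
    if left == 0.0 or right == 0.0:
        problems.append(
            f"track {track_index} output dropped to zero (L={left}, R={right}) "
            "— the effect may be killing the signal"
        )
    return problems
```

Anything this returns should trigger the fix/re-verify loop the rule describes rather than moving on to the next step.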
@@ -1,4 +1,4 @@
- # LivePilot v1.9.20 — Architecture & Tool Reference
+ # LivePilot v1.9.22 — Architecture & Tool Reference

  Agentic production system for Ableton Live 12. 237 tools across 32 domains. Device atlas (280+ devices), spectral perception (M4L analyzer), technique memory, automation intelligence (16 curve types, 15 recipes), music theory (Krumhansl-Schmuckler, species counterpoint), generative algorithms (Euclidean rhythm, tintinnabuli, phase shift, additive process), neo-Riemannian harmony (PRL transforms, Tonnetz), MIDI file I/O.

@@ -84,7 +84,7 @@ function anything() {
  function dispatch(cmd, args) {
  switch(cmd) {
  case "ping":
- send_response({"ok": true, "version": "1.9.20"});
+ send_response({"ok": true, "version": "1.9.22"});
  break;
  case "get_params":
  cmd_get_params(args);
@@ -1,2 +1,2 @@
  """LivePilot MCP Server — bridges MCP protocol to Ableton Live."""
- __version__ = "1.9.20"
+ __version__ = "1.9.22"
@@ -341,6 +341,32 @@ def apply_automation_recipe(

      points = generate_from_recipe(recipe, duration=duration, density=density)

+     # Scale the recipe's 0.0-1.0 curves to the parameter's actual native range.
+     # Without this, a "0.3" center on a 20-135 range parameter writes 0.3
+     # literally instead of scaling to the 20-135 range — killing the signal.
+     if parameter_type == "device" and device_index is not None and parameter_index is not None:
+         try:
+             dev_info = _get_ableton(ctx).send_command("get_device_parameters", {
+                 "track_index": track_index,
+                 "device_index": device_index,
+             })
+             params_list = dev_info.get("parameters", [])
+             if parameter_index < len(params_list):
+                 p_info = params_list[parameter_index]
+                 p_min = float(p_info.get("min", 0))
+                 p_max = float(p_info.get("max", 1))
+                 # Only scale if the range is NOT already 0-1
+                 if abs(p_max - p_min) > 1.5 or p_min < -0.5:
+                     for pt in points:
+                         pt["value"] = p_min + pt["value"] * (p_max - p_min)
+         except Exception:
+             pass  # Fail open — write values as-is if we can't read the range
+
+     # Safety clamp: auto_pan amplitude is limited to avoid a full L/R swing
+     if recipe == "auto_pan" and parameter_type == "panning":
+         for pt in points:
+             pt["value"] = max(-0.6, min(0.6, pt["value"]))
+
      if time_offset > 0:
          for p in points:
              p["time"] += time_offset
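The scaling arithmetic in the patch above is worth seeing with concrete numbers. A standalone mirror of it (the function name is illustrative; the formula and the `> 1.5` / `< -0.5` range heuristic come straight from the diff):

```python
def scale_recipe_value(value, p_min, p_max):
    """Map a recipe's normalized 0.0-1.0 value onto the parameter's
    native [p_min, p_max] — skipped when the range already looks like 0-1."""
    if abs(p_max - p_min) > 1.5 or p_min < -0.5:
        return p_min + value * (p_max - p_min)
    return value

# The bug the 1.9.21 fix addresses: a "0.3 center" on Auto Filter's
# 20-135 range used to be written as 0.3 literally (sub-audio — signal
# effectively gone). Scaled, it lands at a sensible 54.5.
center = scale_recipe_value(0.3, 20, 135)
```

Note the fail-open behavior in the real patch: if the parameter range can't be read, values are written as-is, which is why Golden Rule 16 still tells the agent to verify the result.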
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "livepilot",
- "version": "1.9.20",
+ "version": "1.9.22",
  "mcpName": "io.github.dreamrec/livepilot",
  "description": "Agentic production system for Ableton Live 12 — 237 tools, 32 domains, device atlas, spectral perception, technique memory, neo-Riemannian harmony, Euclidean rhythm, species counterpoint, MIDI I/O",
  "author": "Pilot Studio",
@@ -5,7 +5,7 @@ Entry point for the ControlSurface. Ableton calls create_instance(c_instance)
  when this script is selected in Preferences > Link, Tempo & MIDI.
  """

- __version__ = "1.9.20"
+ __version__ = "1.9.22"

  from _Framework.ControlSurface import ControlSurface
  from .server import LivePilotServer
@@ -104,7 +104,13 @@ def set_device_parameter(song, params):
          raise ValueError("Must provide parameter_name or parameter_index")

      param.value = value
-     return {"name": param.name, "value": param.value}
+     return {
+         "name": param.name,
+         "value": param.value,
+         "value_string": param.str_for_value(param.value),
+         "min": param.min,
+         "max": param.max,
+     }


  @register("batch_set_parameters")
@@ -157,7 +163,11 @@ def batch_set_parameters(song, params):
          )

          param.value = value
-         results.append({"name": param.name, "value": param.value})
+         results.append({
+             "name": param.name,
+             "value": param.value,
+             "value_string": param.str_for_value(param.value),
+         })

      return {"parameters": results}