livepilot 1.6.3 → 1.6.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,26 @@
  # Changelog
 
+ ## 1.6.4 — Music Theory (March 2026)
+
+ **7 new tools (135 → 142): music21-powered theory analysis on live MIDI clips.**
+
+ ### Theory Domain (7 tools)
+ - `analyze_harmony` — chord-by-chord Roman numeral analysis of session clips
+ - `suggest_next_chord` — theory-valid chord continuations with style presets (common_practice, jazz, modal, pop)
+ - `detect_theory_issues` — parallel fifths/octaves, out-of-key notes, voice crossing, unresolved dominants
+ - `identify_scale` — Krumhansl-Schmuckler key/mode detection with confidence-ranked alternatives
+ - `harmonize_melody` — 2-voice (melody + bass) or 4-voice SATB harmonization with smooth voice leading
+ - `generate_countermelody` — species counterpoint (1st or 2nd species) against a melody
+ - `transpose_smart` — diatonic or chromatic transposition preserving scale relationships
+
+ ### Architecture
+ - Notes-to-Stream bridge: converts LivePilot note dicts → music21 Part objects with quantization (1/32-note grid)
+ - Lazy imports: music21 is optional — all 135 core tools work without it installed
+ - Key hint parsing: handles "A minor" → lowercase tonic for music21 compatibility
+
+ ### Dependencies
+ - Optional: `pip install 'music21>=9.3'` (not auto-installed — ~50 MB with numpy/matplotlib)
+
  ## 1.6.3 — Audit Hardening (March 2026)
 
  - Fix: cursor aliasing in M4L bridge `walk_device` — nested rack traversal now reads chain/pad counts before recursion clobbers shared cursors
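The "Key hint parsing" bullet above can be illustrated with a standalone sketch (hypothetical helper name `normalize_key_hint`; the shipped version lives in the theory module and returns a music21 `Key` object rather than a string):

```python
def normalize_key_hint(hint: str) -> str:
    """Normalize "A minor" -> "a", "F# major" -> "F#".

    music21's key.Key() reads an uppercase tonic as major and a
    lowercase tonic as minor, so only the tonic's case is adjusted.
    """
    parts = hint.strip().split()
    tonic = parts[0]
    mode = parts[1].lower() if len(parts) > 1 else "major"
    if mode == "minor":
        tonic = tonic[0].lower() + tonic[1:]
    return tonic
```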
package/README.md CHANGED
@@ -12,7 +12,7 @@
  [![GitHub stars](https://img.shields.io/github/stars/dreamrec/LivePilot)](https://github.com/dreamrec/LivePilot/stargazers)
  [![npm](https://img.shields.io/npm/v/livepilot)](https://www.npmjs.com/package/livepilot)
 
- **AI copilot for Ableton Live 12** — 135 MCP tools, a deep device knowledge corpus, real-time audio analysis, and persistent technique memory.
+ **AI copilot for Ableton Live 12** — 142 MCP tools, a deep device knowledge corpus, real-time audio analysis, and persistent technique memory.
 
  Most Ableton MCP servers give the AI tools to push buttons. LivePilot gives it three things on top of that:
 
@@ -20,7 +20,7 @@ Most Ableton MCP servers give the AI tools to push buttons. LivePilot gives it t
  - **Perception** — An M4L analyzer that reads the master bus in real-time: 8-band spectrum, RMS/peak metering, pitch tracking, key detection. The AI makes decisions based on what it hears, not just what's configured.
  - **Memory** — A technique library that persists across sessions. The AI remembers how you built that bass sound, what swing you like on hi-hats, which reverb chain worked on vocals. It learns your taste over time.
 
- These three layers sit on top of 135 deterministic MCP tools that cover transport, tracks, clips, MIDI, devices, scenes, mixing, browser, arrangement, and sample manipulation. Every command goes through Ableton's official Live Object Model API — the same interface Ableton's own control surfaces use. Everything is reversible with undo.
+ These three layers sit on top of 142 deterministic MCP tools that cover transport, tracks, clips, MIDI, devices, scenes, mixing, browser, arrangement, and sample manipulation. Every command goes through Ableton's official Live Object Model API — the same interface Ableton's own control surfaces use. Everything is reversible with undo.
 
  ---
 
@@ -296,7 +296,7 @@ npx -y github:dreamrec/LivePilot --status
 
  ---
 
- ## 135 Tools Across 12 Domains
+ ## 142 Tools Across 13 Domains
 
  | Domain | Tools | What you can do |
  |--------|:-----:|-----------------|
@@ -312,6 +312,7 @@ npx -y github:dreamrec/LivePilot --status
  | **Automation** | 8 | Clip envelope CRUD, 16-type curve engine, 15 named recipes, spectral-aware suggestions |
  | **Memory** | 8 | Save, recall, replay, and manage production techniques |
  | **Analyzer** | 20 | Real-time spectral analysis, key detection, sample manipulation, warp markers, device introspection (requires M4L device) |
+ | **Theory** | 7 | Harmony analysis, Roman numerals, scale identification, chord suggestions, countermelody, SATB harmonization, smart transposition (requires music21) |
 
  <details>
  <summary><strong>Full tool list</strong></summary>
@@ -402,7 +403,7 @@ There are **15+ MCP servers for Ableton Live** as of March 2026. Here's how the
 
  | | [LivePilot](https://github.com/dreamrec/LivePilot) | [AbletonMCP](https://github.com/ahujasid/ableton-mcp) | [MCP Extended](https://github.com/uisato/ableton-mcp-extended) | [Ableton Copilot](https://github.com/xiaolaa2/ableton-copilot-mcp) | [AbletonBridge](https://github.com/hidingwill/AbletonBridge) | [Producer Pal](https://github.com/adamjmurray/producer-pal) |
  |---|:-:|:-:|:-:|:-:|:-:|:-:|
- | **Tools** | 135 | ~20 | ~35 | ~45 | 322 | ~25 |
+ | **Tools** | 142 | ~20 | ~35 | ~45 | 322 | ~25 |
  | **Device knowledge** | 280+ devices | -- | -- | -- | -- | -- |
  | **Audio analysis** | Spectrum/RMS/key | -- | -- | -- | Metering | -- |
  | **Technique memory** | Persistent | -- | -- | -- | -- | -- |
@@ -459,7 +460,7 @@ Every server on this list gives the AI tools to control Ableton. LivePilot is th
 
  The practical difference: other servers let the AI set a parameter. LivePilot lets the AI choose the right parameter based on what device is loaded (atlas), verify the result by reading the audio output (analyzer), and remember the technique for next time (memory).
 
- AbletonBridge has more raw tools (322 vs 135). Producer Pal has the easiest install (drag a .amxd). The original AbletonMCP has the community (2.3k stars). LivePilot has the deepest integration — tools that execute, knowledge that informs, perception that verifies, and memory that accumulates.
+ AbletonBridge has more raw tools (322 vs 142). Producer Pal has the easiest install (drag a .amxd). The original AbletonMCP has the community (2.3k stars). LivePilot has the deepest integration — tools that execute, knowledge that informs, perception that verifies, and memory that accumulates.
 
  ---
 
@@ -68,7 +68,7 @@ function anything() {
  function dispatch(cmd, args) {
      switch(cmd) {
          case "ping":
-             send_response({"ok": true, "version": "1.6.3"});
+             send_response({"ok": true, "version": "1.6.4"});
              break;
          case "get_params":
              cmd_get_params(args);
@@ -1,2 +1,2 @@
  """LivePilot MCP Server — bridges MCP protocol to Ableton Live."""
- __version__ = "1.6.3"
+ __version__ = "1.6.4"
@@ -56,6 +56,7 @@ from .tools import arrangement # noqa: F401, E402
  from .tools import memory  # noqa: F401, E402
  from .tools import analyzer  # noqa: F401, E402
  from .tools import automation  # noqa: F401, E402
+ from .tools import theory  # noqa: F401, E402
 
 
  # ---------------------------------------------------------------------------
@@ -0,0 +1,872 @@
+ """Music theory tools powered by music21.
+
+ 7 tools for harmonic analysis, chord suggestion, voice leading detection,
+ counterpoint generation, scale identification, harmonization, and intelligent
+ transposition — all working directly on live session clip data via get_notes.
+
+ Design principle: tools compute from data, the LLM interprets and explains.
+ Returns precise musical data (Roman numerals, pitch names, intervals), never
+ explanations the LLM already knows from training.
+
+ Requires: pip install music21 (lazy-imported, never at module level)
+ """
+
+ from __future__ import annotations
+
+ from collections import defaultdict
+ from typing import Optional
+
+ from fastmcp import Context
+
+ from ..server import mcp
+
+
+ # -- Shared utilities --------------------------------------------------------
+
+ def _get_ableton(ctx: Context):
+     return ctx.lifespan_context["ableton"]
+
+
+ def _get_clip_notes(ctx: Context, track_index: int, clip_index: int) -> list[dict]:
+     """Fetch notes from a session clip via the remote script."""
+     result = _get_ableton(ctx).send_command("get_notes", {
+         "track_index": track_index,
+         "clip_index": clip_index,
+     })
+     return result.get("notes", [])
+
+
+ def _parse_key_string(key_str: str):
+     """Parse a human-friendly key string into a music21 Key object.
+
+     Accepts: "C", "c", "C major", "A minor", "g minor", "F# major", etc.
+     music21's Key() wants: uppercase tonic = major, lowercase = minor.
+     """
+     from music21 import key
+     hint = key_str.strip()
+     if ' ' in hint:
+         parts = hint.split()
+         tonic = parts[0]
+         mode = parts[1].lower() if len(parts) > 1 else 'major'
+         if mode == 'minor':
+             tonic = tonic[0].lower() + tonic[1:]
+         return key.Key(tonic)
+     return key.Key(hint)
+
+
+ def _notes_to_stream(notes: list[dict], key_hint: str | None = None):
+     """Convert LivePilot note dicts to a music21 Stream.
+
+     This is the bridge between Ableton's note format and music21's
+     analysis engine. Groups simultaneous notes into Chord objects.
+     Quantizes start_times to 1/32 note resolution to avoid chordify
+     fragmentation from micro-timing variations.
+     """
+     from music21 import stream, note, chord, meter
+
+     s = stream.Part()
+     s.append(meter.TimeSignature('4/4'))
+
+     if key_hint:
+         try:
+             k = _parse_key_string(key_hint)
+             s.insert(0, k)
+         except Exception:
+             pass
+
+     # Quantize to 1/32 note (0.125 beats) to group near-simultaneous notes
+     QUANT = 0.125
+
+     time_groups: dict[float, list[dict]] = defaultdict(list)
+     for n in notes:
+         if n.get("mute", False):
+             continue
+         q_time = round(n["start_time"] / QUANT) * QUANT
+         time_groups[q_time].append(n)
+
+     for t in sorted(time_groups.keys()):
+         group = time_groups[t]
+         if len(group) == 1:
+             n = group[0]
+             m21_note = note.Note(n["pitch"])
+             m21_note.quarterLength = max(QUANT, n["duration"])
+             m21_note.volume.velocity = n.get("velocity", 100)
+             s.insert(t, m21_note)
+         else:
+             pitches = sorted(set(n["pitch"] for n in group))
+             dur = max(n["duration"] for n in group)
+             m21_chord = chord.Chord(pitches)
+             m21_chord.quarterLength = max(QUANT, dur)
+             s.insert(t, m21_chord)
+
+     return s
+
+
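The quantization step described in `_notes_to_stream`'s docstring can be exercised without music21 — a minimal sketch of the onset-grouping logic on the same 1/32-note grid (hypothetical helper `group_by_onset`):

```python
from collections import defaultdict

QUANT = 0.125  # 1/32 note, measured in quarter-note beats

def group_by_onset(notes: list[dict]) -> dict[float, list[int]]:
    """Snap start_times to the grid so near-simultaneous notes form one chord group."""
    groups: dict[float, list[int]] = defaultdict(list)
    for n in notes:
        q_time = round(n["start_time"] / QUANT) * QUANT
        groups[q_time].append(n["pitch"])
    return dict(groups)

# 0.01 and 0.02 both snap to 0.0, so C4 and E4 become a single chord group
notes = [
    {"start_time": 0.01, "pitch": 60},
    {"start_time": 0.02, "pitch": 64},
    {"start_time": 1.00, "pitch": 67},
]
```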
+ def _detect_key(s):
+     """Detect key from a music21 stream. Uses Krumhansl-Schmuckler algorithm."""
+     from music21 import key as m21key
+
+     # Check if key was already set by the user
+     existing = list(s.recurse().getElementsByClass(m21key.Key))
+     if existing:
+         return existing[0]
+
+     return s.analyze('key')
+
+
+ def _pitch_name(midi_num: int) -> str:
+     """MIDI number to note name (e.g., 60 → 'C4')."""
+     from music21 import pitch
+     return str(pitch.Pitch(midi_num))
+
+
+ def _require_music21():
+     """Verify music21 is installed, raise clear error if not."""
+     try:
+         import music21  # noqa: F401
+     except ImportError:
+         raise ImportError(
+             "music21 is required for theory tools. "
+             "Install with: pip install 'music21>=9.3'"
+         )
+
+
+ # -- Tool 1: analyze_harmony ------------------------------------------------
+
+ @mcp.tool()
+ def analyze_harmony(
+     ctx: Context,
+     track_index: int,
+     clip_index: int,
+     key: Optional[str] = None,
+ ) -> dict:
+     """Analyze harmony of a MIDI clip: chords, Roman numerals, progression.
+
+     Reads notes directly from a session clip — no bouncing needed.
+     Auto-detects key if not provided.
+
+     Returns chord progression with Roman numeral analysis. The tool computes
+     the data; interpret the musical meaning yourself.
+     """
+     _require_music21()
+     from music21 import roman
+
+     notes = _get_clip_notes(ctx, track_index, clip_index)
+     if not notes:
+         return {"error": "No notes in clip", "suggestion": "Add notes first"}
+
+     s = _notes_to_stream(notes, key_hint=key)
+     detected_key = _detect_key(s)
+
+     chordified = s.chordify()
+     chords = []
+
+     for c in chordified.recurse().getElementsByClass('Chord'):
+         entry = {
+             "beat": round(float(c.offset), 3),
+             "duration": round(float(c.quarterLength), 3),
+             "pitches": [str(p) for p in c.pitches],
+             "midi_pitches": [p.midi for p in c.pitches],
+             "chord_name": c.pitchedCommonName,
+         }
+         try:
+             rn = roman.romanNumeralFromChord(c, detected_key)
+             entry["roman_numeral"] = rn.romanNumeral
+             entry["figure"] = rn.figure
+             entry["quality"] = rn.quality
+             entry["inversion"] = rn.inversion()
+             entry["scale_degree"] = rn.scaleDegree
+         except Exception:
+             entry["roman_numeral"] = "?"
+             entry["figure"] = "?"
+
+         chords.append(entry)
+
+     progression = " - ".join(c.get("figure", "?") for c in chords[:24])
+
+     # Key confidence
+     key_info = {"key": str(detected_key)}
+     if hasattr(detected_key, 'correlationCoefficient'):
+         key_info["confidence"] = round(detected_key.correlationCoefficient, 3)
+     if hasattr(detected_key, 'alternateInterpretations'):
+         alts = detected_key.alternateInterpretations[:3]
+         key_info["alternatives"] = [str(k) for k in alts]
+
+     return {
+         "track_index": track_index,
+         "clip_index": clip_index,
+         **key_info,
+         "chord_count": len(chords),
+         "progression": progression,
+         "chords": chords[:32],
+     }
+
+
+ # -- Tool 2: suggest_next_chord ---------------------------------------------
+
+ @mcp.tool()
+ def suggest_next_chord(
+     ctx: Context,
+     track_index: int,
+     clip_index: int,
+     key: Optional[str] = None,
+     style: str = "common_practice",
+ ) -> dict:
+     """Suggest the next chord based on the current progression.
+
+     Analyzes existing chords and suggests theory-valid continuations.
+     style: common_practice, jazz, modal, pop — affects which progressions
+     are preferred.
+
+     Returns concrete chord suggestions with pitches ready for add_notes.
+     """
+     _require_music21()
+     from music21 import roman
+
+     notes = _get_clip_notes(ctx, track_index, clip_index)
+     if not notes:
+         return {"error": "No notes in clip"}
+
+     s = _notes_to_stream(notes, key_hint=key)
+     detected_key = _detect_key(s)
+
+     # Find the last chord
+     chordified = s.chordify()
+     chord_list = list(chordified.recurse().getElementsByClass('Chord'))
+     if not chord_list:
+         return {"error": "No chords detected in clip"}
+
+     last_chord = chord_list[-1]
+     last_figure = "I"
+     try:
+         last_rn = roman.romanNumeralFromChord(last_chord, detected_key)
+         last_figure = last_rn.romanNumeral
+     except Exception:
+         last_rn = None
+
+     # Progression maps by style
+     _progressions = {
+         "common_practice": {
+             "I": ["IV", "V", "vi", "ii"],
+             "ii": ["V", "viio", "IV"],
+             "iii": ["vi", "IV", "ii"],
+             "IV": ["V", "I", "ii"],
+             "V": ["I", "vi", "IV"],
+             "vi": ["ii", "IV", "V", "I"],
+             "viio": ["I", "iii"],
+         },
+         "jazz": {
+             "I": ["IV7", "ii7", "vi7", "bVII7"],
+             "ii7": ["V7", "bII7"],
+             "IV7": ["V7", "#ivo7", "bVII7"],
+             "V7": ["I", "vi", "bVI"],
+             "vi7": ["ii7", "IV7"],
+         },
+         "modal": {
+             "I": ["bVII", "IV", "v", "bIII"],
+             "IV": ["I", "bVII", "v"],
+             "v": ["bVII", "IV", "I"],
+             "bVII": ["I", "IV", "v"],
+             "bIII": ["IV", "bVII"],
+         },
+         "pop": {
+             "I": ["V", "vi", "IV"],
+             "ii": ["V", "IV"],
+             "IV": ["I", "V", "vi"],
+             "V": ["I", "vi", "IV"],
+             "vi": ["IV", "V", "I"],
+         },
+     }
+
+     style_map = _progressions.get(style, _progressions["common_practice"])
+
+     # Match the last chord to the closest key in the map
+     candidates = style_map.get(last_figure)
+     if not candidates:
+         # Try uppercase/lowercase variants
+         for k in style_map:
+             if k.upper() == last_figure.upper():
+                 candidates = style_map[k]
+                 break
+     if not candidates:
+         candidates = style_map.get("I", ["IV", "V"])
+
+     # Build concrete suggestions with MIDI pitches
+     suggestions = []
+     for fig in candidates:
+         try:
+             rn = roman.RomanNumeral(fig, detected_key)
+             suggestions.append({
+                 "figure": fig,
+                 "chord_name": rn.pitchedCommonName,
+                 "pitches": [str(p) for p in rn.pitches],
+                 "midi_pitches": [p.midi for p in rn.pitches],
+                 "quality": rn.quality,
+             })
+         except Exception:
+             suggestions.append({"figure": fig, "chord_name": fig})
+
+     return {
+         "key": str(detected_key),
+         "last_chord": last_rn.figure if last_rn else "unknown",
+         "style": style,
+         "suggestions": suggestions,
+     }
+
+
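The candidate-lookup fallback in `suggest_next_chord` (exact figure, then case-insensitive match, then the tonic entry) can be sketched on its own, without music21 (hypothetical helper `lookup_candidates`):

```python
def lookup_candidates(style_map: dict[str, list[str]], figure: str) -> list[str]:
    """Exact figure match first, then case-insensitive, then fall back to the tonic."""
    candidates = style_map.get(figure)
    if not candidates:
        for k in style_map:
            if k.upper() == figure.upper():
                candidates = style_map[k]
                break
    return candidates or style_map.get("I", ["IV", "V"])

# A trimmed "pop" map, as in the tool's _progressions table
pop = {"I": ["V", "vi", "IV"], "vi": ["IV", "V", "I"]}
```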
+ # -- Tool 3: detect_theory_issues -------------------------------------------
+
+ @mcp.tool()
+ def detect_theory_issues(
+     ctx: Context,
+     track_index: int,
+     clip_index: int,
+     key: Optional[str] = None,
+     strict: bool = False,
+ ) -> dict:
+     """Detect music theory issues: parallel fifths/octaves, out-of-key notes,
+     voice crossing, unresolved dominants.
+
+     strict=False: Only clear errors (parallels, out-of-key).
+     strict=True: Also flag style issues (large leaps, missing resolution).
+
+     Uses music21's VoiceLeadingQuartet for accurate parallel detection.
+     Returns ranked issues with beat positions.
+     """
+     _require_music21()
+     from music21 import roman, voiceLeading
+
+     notes = _get_clip_notes(ctx, track_index, clip_index)
+     if not notes:
+         return {"error": "No notes in clip"}
+
+     s = _notes_to_stream(notes, key_hint=key)
+     detected_key = _detect_key(s)
+     scale_pitch_classes = set(
+         p.midi % 12 for p in detected_key.getScale().getPitches()
+     )
+
+     issues = []
+
+     # 1. Out-of-key notes
+     for n in notes:
+         if n.get("mute", False):
+             continue
+         if n["pitch"] % 12 not in scale_pitch_classes:
+             issues.append({
+                 "type": "out_of_key",
+                 "severity": "warning",
+                 "beat": round(n["start_time"], 3),
+                 "detail": f"{_pitch_name(n['pitch'])} not in {detected_key}",
+             })
+
+     # 2. Parallel fifths/octaves using VoiceLeadingQuartet
+     chordified = s.chordify()
+     chord_list = list(chordified.recurse().getElementsByClass('Chord'))
+
+     for i in range(1, len(chord_list)):
+         prev_c = chord_list[i - 1]
+         curr_c = chord_list[i]
+         prev_pitches = sorted(prev_c.pitches, key=lambda p: p.midi)
+         curr_pitches = sorted(curr_c.pitches, key=lambda p: p.midi)
+
+         if len(prev_pitches) < 2 or len(curr_pitches) < 2:
+             continue
+
+         # Check outer voices (bass and soprano)
+         try:
+             vlq = voiceLeading.VoiceLeadingQuartet(
+                 prev_pitches[-1], curr_pitches[-1],  # soprano
+                 prev_pitches[0], curr_pitches[0],  # bass
+             )
+             if vlq.parallelFifth():
+                 issues.append({
+                     "type": "parallel_fifths",
+                     "severity": "error",
+                     "beat": round(float(curr_c.offset), 3),
+                     "detail": "Parallel fifths in outer voices",
+                 })
+             if vlq.parallelOctave():
+                 issues.append({
+                     "type": "parallel_octaves",
+                     "severity": "error",
+                     "beat": round(float(curr_c.offset), 3),
+                     "detail": "Parallel octaves in outer voices",
+                 })
+             if vlq.voiceCrossing():
+                 issues.append({
+                     "type": "voice_crossing",
+                     "severity": "warning",
+                     "beat": round(float(curr_c.offset), 3),
+                     "detail": "Voice crossing detected",
+                 })
+             if strict and vlq.hiddenFifth():
+                 issues.append({
+                     "type": "hidden_fifth",
+                     "severity": "info",
+                     "beat": round(float(curr_c.offset), 3),
+                     "detail": "Hidden fifth in outer voices",
+                 })
+         except Exception:
+             pass
+
+     # 3. Unresolved dominant (strict mode)
+     if strict:
+         for i in range(len(chord_list) - 1):
+             try:
+                 rn = roman.romanNumeralFromChord(chord_list[i], detected_key)
+                 next_rn = roman.romanNumeralFromChord(
+                     chord_list[i + 1], detected_key
+                 )
+                 if rn.romanNumeral in ('V', 'V7') and next_rn.romanNumeral not in (
+                     'I', 'i', 'vi', 'VI'
+                 ):
+                     issues.append({
+                         "type": "unresolved_dominant",
+                         "severity": "info",
+                         "beat": round(float(chord_list[i].offset), 3),
+                         "detail": (
+                             f"{rn.figure} resolves to {next_rn.figure} "
+                             f"instead of tonic"
+                         ),
+                     })
+             except Exception:
+                 pass
+
+     # 4. Large leaps without resolution (strict mode)
+     if strict:
+         sorted_notes = sorted(
+             [n for n in notes if not n.get("mute", False)],
+             key=lambda n: n["start_time"],
+         )
+         for i in range(1, len(sorted_notes)):
+             leap = abs(sorted_notes[i]["pitch"] - sorted_notes[i - 1]["pitch"])
+             if leap > 7:
+                 issues.append({
+                     "type": "large_leap",
+                     "severity": "info",
+                     "beat": round(sorted_notes[i]["start_time"], 3),
+                     "detail": f"{leap} semitone leap",
+                 })
+
+     severity_order = {"error": 0, "warning": 1, "info": 2}
+     issues.sort(key=lambda x: (severity_order.get(x["severity"], 3), x.get("beat", 0)))
+
+     return {
+         "key": str(detected_key),
+         "strict_mode": strict,
+         "issue_count": len(issues),
+         "errors": sum(1 for i in issues if i["severity"] == "error"),
+         "warnings": sum(1 for i in issues if i["severity"] == "warning"),
+         "issues": issues[:30],
+     }
+
+
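The out-of-key check reduces to pitch-class set membership. A minimal sketch with a hard-coded C-major scale (hypothetical helper `out_of_key`; the tool derives the set from the detected key instead):

```python
C_MAJOR_PCS = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of C D E F G A B

def out_of_key(pitches: list[int], scale_pcs: set[int] = C_MAJOR_PCS) -> list[int]:
    """Return the MIDI pitches whose pitch class falls outside the scale."""
    return [p for p in pitches if p % 12 not in scale_pcs]
```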
+ # -- Tool 4: identify_scale -------------------------------------------------
+
+ @mcp.tool()
+ def identify_scale(
+     ctx: Context,
+     track_index: int,
+     clip_index: int,
+ ) -> dict:
+     """Identify the scale/mode of a MIDI clip beyond basic major/minor.
+
+     Goes deeper than get_detected_key — uses music21's Krumhansl-Schmuckler
+     algorithm with alternateInterpretations for modes (Dorian, Phrygian,
+     Lydian, Mixolydian) and exotic scales.
+
+     Returns ranked key matches with confidence scores.
+     """
+     _require_music21()
+
+     notes = _get_clip_notes(ctx, track_index, clip_index)
+     if not notes:
+         return {"error": "No notes in clip"}
+
+     s = _notes_to_stream(notes)
+
+     # music21's key analysis returns the best match and alternatives
+     detected = s.analyze('key')
+
+     results = [{
+         "key": str(detected),
+         "confidence": round(detected.correlationCoefficient, 3)
+         if hasattr(detected, 'correlationCoefficient') else None,
+         "mode": detected.mode,
+         "tonic": str(detected.tonic),
+     }]
+
+     # Add alternatives
+     if hasattr(detected, 'alternateInterpretations'):
+         for alt in detected.alternateInterpretations[:7]:
+             results.append({
+                 "key": str(alt),
+                 "confidence": round(alt.correlationCoefficient, 3)
+                 if hasattr(alt, 'correlationCoefficient') else None,
+                 "mode": alt.mode,
+                 "tonic": str(alt.tonic),
+             })
+
+     # Pitch class usage for context
+     pitch_classes = defaultdict(float)
+     for n in notes:
+         if not n.get("mute", False):
+             pitch_classes[n["pitch"] % 12] += n["duration"]
+
+     note_names = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
+     pc_usage = {
+         note_names[pc]: round(dur, 3)
+         for pc, dur in sorted(pitch_classes.items())
+     }
+
+     return {
+         "top_match": results[0] if results else None,
+         "alternatives": results[1:],
+         "pitch_classes_used": len(pitch_classes),
+         "pitch_class_weights": pc_usage,
+     }
+
+
+ # -- Tool 5: harmonize_melody -----------------------------------------------
+
+ @mcp.tool()
+ def harmonize_melody(
+     ctx: Context,
+     track_index: int,
+     clip_index: int,
+     key: Optional[str] = None,
+     voices: int = 4,
+ ) -> dict:
+     """Generate a multi-voice harmonization of a melody from a MIDI clip.
+
+     Finds diatonic chords containing each melody note and voices them
+     following basic voice leading rules (smooth bass motion, no crossing).
+
+     voices: 2 (melody + bass) or 4 (SATB). Default 4.
+     Returns note data ready for add_notes on new tracks.
+
+     Processing time: 2-5s.
+     """
+     _require_music21()
+     from music21 import roman
+
+     notes = _get_clip_notes(ctx, track_index, clip_index)
+     if not notes:
+         return {"error": "No notes in clip"}
+
+     # Use only non-muted, sorted by time
+     melody = sorted(
+         [n for n in notes if not n.get("mute", False)],
+         key=lambda n: n["start_time"],
+     )
+
+     s = _notes_to_stream(melody, key_hint=key)
+     detected_key = _detect_key(s)
+
+     result_voices = {"soprano": [], "bass": []}
+     if voices == 4:
+         result_voices["alto"] = []
+         result_voices["tenor"] = []
+
+     prev_bass_midi = None
+
+     for n in melody:
+         melody_pitch = n["pitch"]
+         beat = n["start_time"]
+         dur = n["duration"]
+         mel_pc = melody_pitch % 12
+
+         # Find the best diatonic chord containing this pitch
+         best_rn = None
+         for degree in [1, 4, 5, 6, 2, 3, 7]:
+             try:
+                 rn = roman.RomanNumeral(degree, detected_key)
+                 chord_pcs = [p.midi % 12 for p in rn.pitches]
+                 if mel_pc in chord_pcs:
+                     best_rn = rn
+                     break
+             except Exception:
+                 continue
+
+         if best_rn is None:
+             # Fallback: use tonic triad
+             best_rn = roman.RomanNumeral(1, detected_key)
+
+         chord_midis = sorted([p.midi for p in best_rn.pitches])
+
+         # Bass: root in low octave, smooth motion preferred
+         bass = chord_midis[0]
+         while bass > 52:
+             bass -= 12
+         while bass < 36:
+             bass += 12
+         if prev_bass_midi is not None:
+             # Try octave that's closest to previous bass
+             options = [bass, bass - 12, bass + 12]
+             options = [b for b in options if 33 <= b <= 55]
+             if options:
+                 bass = min(options, key=lambda b: abs(b - prev_bass_midi))
+         prev_bass_midi = bass
+
+         vel = n.get("velocity", 100)
+
+         result_voices["soprano"].append({
+             "pitch": melody_pitch, "start_time": beat,
+             "duration": dur, "velocity": vel,
+         })
+         result_voices["bass"].append({
+             "pitch": bass, "start_time": beat,
+             "duration": dur, "velocity": int(vel * 0.8),
+         })
+
+         if voices == 4 and len(chord_midis) >= 2:
+             # Alto: chord tone near soprano
+             alto = chord_midis[1] if len(chord_midis) > 1 else chord_midis[0]
+             while alto < melody_pitch - 14:
+                 alto += 12
+             while alto >= melody_pitch:
+                 alto -= 12
+             if alto < bass:
+                 alto += 12
+
+             # Tenor: chord tone between bass and alto
+             tenor = chord_midis[2] if len(chord_midis) > 2 else chord_midis[0]
+             while tenor < bass:
+                 tenor += 12
+             while tenor >= alto:
+                 tenor -= 12
+             if tenor < bass:
+                 tenor = bass + (alto - bass) // 2
+
+             result_voices["alto"].append({
+                 "pitch": max(36, min(96, alto)), "start_time": beat,
+                 "duration": dur, "velocity": int(vel * 0.7),
+             })
+             result_voices["tenor"].append({
+                 "pitch": max(36, min(96, tenor)), "start_time": beat,
+                 "duration": dur, "velocity": int(vel * 0.7),
+             })
+
+     result = {
+         "key": str(detected_key),
+         "voices": voices,
+         "melody_notes": len(melody),
+     }
+     for voice_name, voice_notes in result_voices.items():
+         if voice_notes:
+             result[voice_name] = voice_notes
+
+     return result
+
+
664
+ # -- Tool 6: generate_countermelody -----------------------------------------
665
+
666
+ @mcp.tool()
667
+ def generate_countermelody(
668
+ ctx: Context,
669
+ track_index: int,
670
+ clip_index: int,
671
+ key: Optional[str] = None,
672
+ species: int = 1,
673
+ range_low: int = 48,
674
+ range_high: int = 72,
675
+ seed: int = 0,
676
+ ) -> dict:
677
+ """Generate a countermelody using species counterpoint rules.
678
+
679
+ species: 1 (note-against-note), 2 (2 notes per melody note).
680
+ Follows strict rules: no parallel fifths/octaves, contrary motion
681
+ preferred, consonant intervals on strong beats.
682
+
683
+ Returns note data ready for add_notes on a new track.
684
+ Processing time: 2-5s.
685
+ """
686
+ _require_music21()
687
+ import random
688
+ random.seed(seed)
689
+
690
+ notes = _get_clip_notes(ctx, track_index, clip_index)
691
+ if not notes:
692
+ return {"error": "No notes in clip"}
693
+
694
+ melody = sorted(
695
+ [n for n in notes if not n.get("mute", False)],
696
+ key=lambda n: n["start_time"],
697
+ )
698
+
699
+ s = _notes_to_stream(melody, key_hint=key)
700
+ detected_key = _detect_key(s)
701
+ scale_pcs = [p.midi % 12 for p in detected_key.getScale().getPitches()]
702
+
703
+ # Build pool of scale pitches in range
704
+ pool = []
705
+ for p in range(range_low, range_high + 1):
706
+ if p % 12 in scale_pcs:
707
+ pool.append(p)
708
+ if not pool:
709
+ return {"error": "No scale pitches in given range"}
710
+
711
+ # Consonant intervals (semitones mod 12): P1, m3, M3, P4, P5, m6, M6, P8
712
+ consonant = {0, 3, 4, 5, 7, 8, 9}
713
+
714
+ counter_notes = []
715
+ prev_cp = None
716
+
717
+ for i, n in enumerate(melody):
718
+ mel_pitch = n["pitch"]
719
+ beat = n["start_time"]
720
+ dur = n["duration"] / species
721
+
722
+ for s_idx in range(species):
723
+ # Score candidates
724
+ scored = []
725
+ for cp in pool:
726
+ iv = abs(cp - mel_pitch) % 12
727
+ if iv not in consonant:
728
+ continue
729
+
730
+ score = 0.0
731
+
732
+ # Contrary motion bonus
733
+ if prev_cp is not None and i > 0:
734
+ mel_dir = mel_pitch - melody[i - 1]["pitch"]
735
+ cp_dir = cp - prev_cp
736
+ if (mel_dir > 0 and cp_dir < 0) or (mel_dir < 0 and cp_dir > 0):
737
+ score += 10
738
+ # Penalize parallel perfect intervals
739
+ if prev_cp is not None and i > 0:
740
+ prev_iv = abs(prev_cp - melody[i - 1]["pitch"]) % 12
741
+ if prev_iv == iv and iv in (0, 7):
742
+ score -= 50 # Hard penalty for parallel P5/P8
743
+
+                 # Stepwise motion bonus
+                 if prev_cp is not None:
+                     step = abs(cp - prev_cp)
+                     if step <= 2:
+                         score += 5
+                     elif step <= 4:
+                         score += 2
+                     elif step > 7:
+                         score -= 3
+                     else:
+                         score += 3
+
+                 # Small random variation for musicality
+                 score += random.uniform(0, 2)
+                 scored.append((cp, score))
+
+             if not scored:
+                 # Fallback: pick any pool note
+                 scored = [(random.choice(pool), 0)]
+
+             scored.sort(key=lambda x: -x[1])
+             chosen = scored[0][0]
+
+             counter_notes.append({
+                 "pitch": chosen,
+                 "start_time": round(beat + s_idx * dur, 4),
+                 "duration": round(dur, 4),
+                 "velocity": 80 if s_idx == 0 else 65,
+             })
+             prev_cp = chosen
+
+     return {
+         "key": str(detected_key),
+         "species": species,
+         "melody_notes": len(melody),
+         "counter_notes": counter_notes,
+         "counter_note_count": len(counter_notes),
+         "range": f"{_pitch_name(range_low)}-{_pitch_name(range_high)}",
+         "seed": seed,
+     }
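The parallel-perfect penalty in the scoring loop above is the same voice-leading rule that `detect_theory_issues` flags. Isolated as a standalone sketch (the helper name `parallel_perfects` is illustrative, not part of the package):

```python
def parallel_perfects(upper, lower):
    """Return beat indices where two equal-length voices move in
    parallel perfect unisons/octaves (0 mod 12) or fifths (7)."""
    hits = []
    for i in range(1, len(upper)):
        prev_iv = abs(upper[i - 1] - lower[i - 1]) % 12
        cur_iv = abs(upper[i] - lower[i]) % 12
        # Both voices must actually move for the motion to count as parallel
        moved = upper[i] != upper[i - 1] and lower[i] != lower[i - 1]
        if moved and prev_iv == cur_iv and cur_iv in (0, 7):
            hits.append(i)
    return hits

print(parallel_perfects([67, 69], [60, 62]))  # G->A over C->D: parallel fifths -> [1]
print(parallel_perfects([67, 64], [60, 62]))  # contrary motion into a third -> []
```

In the scoring loop the same comparison (`prev_iv == iv and iv in (0, 7)`) is applied incrementally per candidate and punished with `-50` instead of being reported.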
+
+
+ # -- Tool 7: transpose_smart ------------------------------------------------
+
+ @mcp.tool()
+ def transpose_smart(
+     ctx: Context,
+     track_index: int,
+     clip_index: int,
+     target_key: str,
+     mode: str = "diatonic",
+ ) -> dict:
+     """Transpose a MIDI clip to a new key with musical intelligence.
+
+     mode:
+     - diatonic: Maps scale degrees (C major -> G major keeps intervals
+       relative to the scale). Chromatic notes shift by tonic distance.
+     - chromatic: Simple semitone shift (preserves exact intervals).
+
+     Returns transposed note data ready for add_notes or modify_notes.
+     """
+     _require_music21()
+     from music21 import pitch as m21pitch
+
+     notes = _get_clip_notes(ctx, track_index, clip_index)
+     if not notes:
+         return {"error": "No notes in clip"}
+
+     s = _notes_to_stream(notes)
+     source_key = _detect_key(s)
+
+     try:
+         target = _parse_key_string(target_key)
+     except Exception:
+         return {"error": f"Invalid target key: {target_key}"}
+
+     source_tonic = m21pitch.Pitch(str(source_key.tonic))
+     target_tonic = m21pitch.Pitch(str(target.tonic))
+     semitone_shift = target_tonic.midi - source_tonic.midi
+
+     if mode == "chromatic":
+         transposed = []
+         for n in notes:
+             tn = dict(n)
+             new_pitch = n["pitch"] + semitone_shift
+             tn["pitch"] = max(0, min(127, new_pitch))
+             transposed.append(tn)
+     else:
+         # Diatonic: map scale degrees
+         source_scale = source_key.getScale().getPitches()
+         target_scale = target.getScale().getPitches()
+         source_pcs = [p.midi % 12 for p in source_scale]
+         target_pcs = [p.midi % 12 for p in target_scale]
+
+         degree_map = {}
+         for i in range(min(len(source_pcs), len(target_pcs))):
+             degree_map[source_pcs[i]] = target_pcs[i]
+
+         transposed = []
+         for n in notes:
+             tn = dict(n)
+             pc = n["pitch"] % 12
+             octave = n["pitch"] // 12
+
+             if pc in degree_map:
+                 new_pc = degree_map[pc]
+                 # Calculate octave adjustment from tonic shift
+                 new_pitch = octave * 12 + new_pc
+                 # Adjust if the shift crossed an octave boundary
+                 if abs(new_pitch - (n["pitch"] + semitone_shift)) > 6:
+                     if new_pitch < n["pitch"] + semitone_shift:
+                         new_pitch += 12
+                     else:
+                         new_pitch -= 12
+             else:
+                 # Chromatic note: shift by tonic distance
+                 new_pitch = n["pitch"] + semitone_shift
+
+             tn["pitch"] = max(0, min(127, new_pitch))
+             transposed.append(tn)
+
+     return {
+         "source_key": str(source_key),
+         "target_key": str(target),
+         "mode": mode,
+         "semitone_shift": semitone_shift,
+         "note_count": len(transposed),
+         "notes": transposed,
+     }
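The diatonic branch above reduces to a pitch-class lookup between the two scales. A minimal sketch of the same idea without music21, hard-coding the major-scale step pattern (names here are illustrative, not package API):

```python
MAJOR_STEPS = (0, 2, 4, 5, 7, 9, 11)  # semitone offsets of the major-scale degrees

def diatonic_pc_map(src_tonic_pc, dst_tonic_pc):
    """Map each scale-degree pitch class of the source major key onto the
    same degree of the target major key (pitch classes 0-11, C = 0)."""
    return {
        (src_tonic_pc + step) % 12: (dst_tonic_pc + step) % 12
        for step in MAJOR_STEPS
    }

# C major -> G major: the 3rd degree E (pc 4) becomes B (pc 11),
# the 7th degree B (pc 11) becomes F# (pc 6)
mapping = diatonic_pc_map(0, 7)
print(mapping[4], mapping[11])  # -> 11 6
```

Chromatic pitch classes (absent from the map) fall back to the plain semitone shift, and the octave check in the function keeps each mapped note within a tritone of that shift.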
package/package.json CHANGED
@@ -1,8 +1,8 @@
  {
    "name": "livepilot",
-   "version": "1.6.3",
+   "version": "1.6.4",
    "mcpName": "io.github.dreamrec/livepilot",
-   "description": "AI copilot for Ableton Live 12 — 135 tools, device atlas (280+ devices), real-time audio analysis, automation intelligence, and technique memory",
+   "description": "AI copilot for Ableton Live 12 — 142 tools, device atlas (280+ devices), real-time audio analysis, automation intelligence, and technique memory",
    "author": "Pilot Studio",
    "license": "MIT",
    "type": "commonjs",
@@ -1,7 +1,7 @@
  {
    "name": "livepilot",
-   "version": "1.6.3",
-   "description": "AI copilot for Ableton Live 12 — 135 tools, device atlas (280+ devices), real-time audio analysis, automation intelligence, and technique memory",
+   "version": "1.6.4",
+   "description": "AI copilot for Ableton Live 12 — 142 tools, device atlas (280+ devices), real-time audio analysis, automation intelligence, and technique memory",
    "author": "Pilot Studio",
    "skills": [
      "skills/livepilot-core",
@@ -1,17 +1,17 @@
  ---
  name: livepilot-core
- description: Core discipline for controlling Ableton Live 12 through LivePilot's 135 MCP tools, device atlas (280+ devices), M4L analyzer (spectrum/RMS/key detection), automation intelligence (16 curve types, 15 recipes), and technique memory. Use whenever working with Ableton Live through MCP tools.
+ description: Core discipline for controlling Ableton Live 12 through LivePilot's 142 MCP tools, device atlas (280+ devices), M4L analyzer (spectrum/RMS/key detection), automation intelligence (16 curve types, 15 recipes), music theory analysis (music21), and technique memory. Use whenever working with Ableton Live through MCP tools.
  ---

  # LivePilot Core — Ableton Live 12 AI Copilot

- LivePilot is an agentic production system for Ableton Live 12. It combines 135 MCP tools with three layers of intelligence:
+ LivePilot is an agentic production system for Ableton Live 12. It combines 142 MCP tools with three layers of intelligence:

  - **Device Atlas** — A structured knowledge corpus of 280+ instruments, 139 drum kits, and 350+ impulse responses. Consult the atlas before loading any device. It contains real browser URIs, preset names, and sonic descriptions. Never guess a device name — look it up.
  - **M4L Analyzer** — Real-time audio analysis on the master bus (8-band spectrum, RMS/peak, key detection). Use it to verify mixing decisions, detect frequency problems, and find the key before writing harmonic content.
  - **Technique Memory** — Persistent storage for production decisions. Consult `memory_recall` before creative tasks to understand the user's taste. Save techniques when the user likes something. The memory shapes future decisions without constraining them.

- These layers sit on top of 135 deterministic tools across 12 domains: transport, tracks, clips, MIDI notes, devices, scenes, mixing, browser, arrangement, technique memory, real-time DSP analysis, and automation.
+ These layers sit on top of 142 deterministic tools across 13 domains: transport, tracks, clips, MIDI notes, devices, scenes, mixing, browser, arrangement, technique memory, real-time DSP analysis, automation, and music theory.

  ## Golden Rules

@@ -25,6 +25,59 @@ These layers sit on top of 135 deterministic tools across 12 domains: transport,
  8. **Volume is 0.0-1.0, pan is -1.0 to 1.0** — these are normalized, not dB
  9. **Tempo range 20-999 BPM** — validated before sending to Ableton
  10. **Always name your tracks and clips** — organization is part of the creative process
+ 11. **Respect tool speed tiers** — see below. Never call heavy tools without user consent.
+
+ ## Tool Speed Tiers
+
+ Not all tools respond instantly. Know the tiers and act accordingly.
+
+ ### Instant (<1s) — Use freely, no warning needed
+ All 142 core tools (transport, tracks, clips, notes, devices, scenes, mixing, browser, arrangement, memory, automation, theory) plus Layer A perception tools (spectral shape, timbral profile, mel spectrum, chroma, onsets, harmonic/percussive, novelty, momentary loudness). These are the reflex tools — call them anytime without hesitation.
+
+ ### Fast (1-5s) — Use freely, barely noticeable
+ `analyze_loudness` · `analyze_dynamic_range` · `compare_loudness`
+
+ File-based analysis that reads audio from disk. Fast enough to use mid-conversation. No warning needed for files under 2 minutes.
+
+ ### Slow (5-15s) — Tell the user before calling
+ `analyze_spectral_evolution` · `compare_to_reference` · `transcribe_to_midi`
+
+ These run multi-pass analysis or load AI models. Always tell the user what you're about to do and roughly how long it takes. Never chain multiple slow tools back-to-back without checking in.
+
+ ### Heavy (30-120s) — ALWAYS ask the user first
+ `separate_stems` · `diagnose_mix`
+
+ These run GPU-intensive processes (Demucs stem separation). Processing time: 15-25s on GPU, 60-90s on CPU/MPS. `diagnose_mix` chains stem separation with per-stem analysis and can take 2+ minutes.
+
+ **CRITICAL: Heavy Tool Protocol**
+ - NEVER call `separate_stems` or `diagnose_mix` unless the user explicitly requests it
+ - NEVER call them speculatively or "just to check"
+ - NEVER call them during creative flow (beat-making, sound design, mixing) — they break momentum
+ - ALWAYS warn the user with an estimated time before calling
+ - ALWAYS prefer fast tools first: if the user says "check my mix", use `analyze_loudness` + `analyze_dynamic_range` (2 seconds total), report findings, THEN offer to escalate: "I could separate stems to investigate further, but that takes about a minute. Want me to?"
+
+ **Wrong:** User says "how does my track sound?" → call `diagnose_mix` (120s surprise)
+ **Right:** User says "how does my track sound?" → call `analyze_loudness` + `get_master_spectrum` (1s) → report findings → offer heavy analysis only if needed
+
+ ### Escalation Pattern for Analysis
+
+ Always follow this ladder — start fast, escalate only with consent:
+
+ ```
+ Level 1 (instant): get_master_spectrum + get_track_meters
+   → frequency balance + levels. Enough for 80% of questions.
+
+ Level 2 (fast): analyze_loudness + analyze_dynamic_range
+   → LUFS, true peak, LRA, crest factor. For mastering prep.
+
+ Level 3 (slow): analyze_spectral_evolution + compare_to_reference
+   → timbral trends, reference matching. Ask first.
+
+ Level 4 (heavy): separate_stems → per-stem analysis → diagnose_mix
+   → full diagnostic. Explicit user consent required.
+ ```
+
+ Never skip levels. The user's question determines the entry point, but always start at the lowest appropriate level and offer to go deeper.

  ## Track Health Checks — MANDATORY

@@ -64,7 +117,7 @@ These layers sit on top of 135 deterministic tools across 12 domains: transport,
  - MIDI track with no instrument loaded
  - Notes programmed but clip not fired

- ## Tool Domains (135 total)
+ ## Tool Domains (142 total)

  ### Transport (12)
  `get_session_info` · `set_tempo` · `set_time_signature` · `start_playback` · `stop_playback` · `continue_playback` · `toggle_metronome` · `set_session_loop` · `undo` · `redo` · `get_recent_actions` · `get_session_diagnostics`
@@ -122,6 +175,18 @@ Clip automation CRUD + intelligent curve generation with 15 built-in recipes.
  - Clear existing automation before rewriting: `clear_clip_automation` first
  - Load `references/automation-atlas.md` for curve theory, genre recipes, diagnostic technique, and cross-track spectral mapping

+ ### Theory (7)
+ Music theory analysis powered by music21. Optional dependency — install with `pip install 'music21>=9.3'`.
+
+ **Tools:** `analyze_harmony` · `suggest_next_chord` · `detect_theory_issues` · `identify_scale` · `harmonize_melody` · `generate_countermelody` · `transpose_smart`
+
+ **Key discipline:**
+ - These tools read MIDI notes directly from session clips — no file export needed
+ - Auto-detects key via Krumhansl-Schmuckler if not provided; pass a `key` hint for better accuracy
+ - `analyze_harmony` and `detect_theory_issues` are analysis-only; `harmonize_melody`, `generate_countermelody`, and `transpose_smart` return note data ready for `add_notes`
+ - Use your own musical knowledge alongside these tools — music21 provides data, you provide interpretation
+ - Processing time: 2-5s for generative tools (harmonize, countermelody)
+
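The Krumhansl-Schmuckler detection these tools lean on is easy to sketch: correlate the clip's duration-weighted pitch-class histogram against the 24 rotated key profiles. A simplified pure-Python illustration (not the package's implementation — the actual tools delegate to music21):

```python
# Krumhansl's probe-tone profiles for major and minor keys
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def _corr(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def detect_key(notes):
    """notes: iterable of (midi_pitch, duration_beats) pairs."""
    hist = [0.0] * 12
    for pitch, dur in notes:
        hist[pitch % 12] += dur
    candidates = []
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            rotated = profile[-tonic:] + profile[:-tonic]  # align profile to tonic
            candidates.append((_corr(hist, rotated), f"{NAMES[tonic]} {mode}"))
    candidates.sort(reverse=True)
    return candidates[0][1]  # the sorted tail is the confidence-ranked alternatives

# C major scale with the tonic emphasized
notes = [(60, 2.0)] + [(p, 1.0) for p in (62, 64, 65, 67, 69, 71)]
print(detect_key(notes))  # -> "C major"
```

The confidence-ranked alternatives that `identify_scale` reports correspond to the rest of the sorted candidate list.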
 
  1. `get_session_info` — check current state
@@ -263,7 +328,7 @@ Deep production knowledge lives in `references/`. Consult these when making crea

  | File | What's inside | When to consult |
  |------|--------------|-----------------|
- | `references/overview.md` | All 135 tools mapped with params, units, ranges | Quick lookup for any tool |
+ | `references/overview.md` | All 142 tools mapped with params, units, ranges | Quick lookup for any tool |
  | `references/midi-recipes.md` | Drum patterns by genre, chord voicings, scales, hi-hat techniques, humanization, polymetrics | Programming MIDI notes, building beats |
  | `references/sound-design.md` | Stock instruments/effects, parameter recipes for bass/pad/lead/pluck, device chain patterns | Loading and configuring devices |
  | `references/mixing-patterns.md` | Gain staging, parallel compression, sidechain, EQ by instrument, bus processing, stereo width | Setting volumes, panning, adding effects |
@@ -1,6 +1,6 @@
- # LivePilot v1.6.3 — Architecture & Tool Reference
+ # LivePilot v1.6.4 — Architecture & Tool Reference

- LivePilot is an agentic production system for Ableton Live 12. It combines 135 MCP tools with a device knowledge corpus, real-time audio analysis, automation intelligence, and persistent technique memory.
+ LivePilot is an agentic production system for Ableton Live 12. It combines 142 MCP tools with a device knowledge corpus, real-time audio analysis, automation intelligence, and persistent technique memory.

  ## Architecture

@@ -32,7 +32,7 @@ A flat tool list lets the AI press buttons. LivePilot's three layers give it con

  This turns "set EQ band 3 to -4 dB" into "cut 400 Hz by 4 dB, then read the spectrum to confirm the mud is actually reduced."

- ## The 135 Tools — What Each One Does
+ ## The 142 Tools — What Each One Does

  ### Transport (12) — Playback, tempo, global state, diagnostics

@@ -244,6 +244,20 @@ This turns "set EQ band 3 to -4 dB" into "cut 400 Hz by 4 dB, then read the spec

  **15 recipes:** filter_sweep_up, filter_sweep_down, dub_throw, tape_stop, build_rise, sidechain_pump, fade_in, fade_out, tremolo, auto_pan, stutter, breathing, washout, vinyl_crackle, stereo_narrow

+ ### Theory (7) — Music theory analysis powered by music21 (optional dependency)
+
+ | Tool | What it does | Key params |
+ |------|-------------|------------|
+ | `analyze_harmony` | Chord-by-chord Roman numeral analysis of a clip | `track_index`, `clip_index`, `key` (optional) |
+ | `suggest_next_chord` | Suggests theory-valid chord continuations | `track_index`, `clip_index`, `style` (common_practice/jazz/modal/pop) |
+ | `detect_theory_issues` | Finds parallel fifths/octaves, out-of-key notes, voice crossing | `track_index`, `clip_index`, `strict` (bool) |
+ | `identify_scale` | Deep scale/mode identification with confidence ranking | `track_index`, `clip_index` |
+ | `harmonize_melody` | Generates 2- or 4-voice (SATB) harmonization | `track_index`, `clip_index`, `voices` (2 or 4) |
+ | `generate_countermelody` | Species counterpoint against a melody | `track_index`, `clip_index`, `species` (1 or 2) |
+ | `transpose_smart` | Diatonic or chromatic transposition to a new key | `track_index`, `clip_index`, `target_key`, `mode` (diatonic/chromatic) |
+
+ **Requires:** `pip install 'music21>=9.3'` — an optional dependency; all other 135 tools work without it.
+
  ## Units & Ranges Quick Reference

  | Concept | Unit/Range | Notes |
@@ -5,7 +5,7 @@ Entry point for the ControlSurface. Ableton calls create_instance(c_instance)
  when this script is selected in Preferences > Link, Tempo & MIDI.
  """

- __version__ = "1.6.3"
+ __version__ = "1.6.4"

  from _Framework.ControlSurface import ControlSurface
  from .server import LivePilotServer
@@ -34,7 +34,7 @@ class LivePilot(ControlSurface):
          ControlSurface.__init__(self, c_instance)
          self._server = LivePilotServer(self)
          self._server.start()
-         self.log_message("LivePilot v1.6.3 initialized")
+         self.log_message("LivePilot v1.6.4 initialized")
          self.show_message("LivePilot: Listening on port 9878")

      def disconnect(self):
package/requirements.txt CHANGED
@@ -1,2 +1,5 @@
  # LivePilot MCP Server dependencies
  fastmcp>=3.0.0,<4.0.0
+
+ # Optional: Music theory tools (lazy-imported, not required for core tools)
+ # pip install 'music21>=9.3'