livepilot 1.6.4 → 1.6.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,8 +1,18 @@
1
1
  # Changelog
2
2
 
3
+ ## 1.6.5 — Drop music21 (March 2026)
4
+
5
+ **Theory tools rewritten with zero-dependency pure Python engine.**
6
+
7
+ - Replace music21 (~100MB) with built-in `_theory_engine.py` (~350 lines)
8
+ - Krumhansl-Schmuckler key detection with 7 mode profiles (major, minor, dorian, phrygian, lydian, mixolydian, locrian)
9
+ - All 7 theory tools keep identical APIs and return formats
10
+ - Zero external dependencies — theory tools work on every install
11
+ - 2-3s faster cold start (no music21 import overhead)
12
+
3
13
  ## 1.6.4 — Music Theory (March 2026)
4
14
 
5
- **7 new tools (135 -> 142), music21-powered theory analysis on live MIDI clips.**
15
+ **7 new tools (135 -> 142), theory analysis on live MIDI clips.**
6
16
 
7
17
  ### Theory Domain (7 tools)
8
18
  - `analyze_harmony` — chord-by-chord Roman numeral analysis of session clips
@@ -14,12 +24,9 @@
14
24
  - `transpose_smart` — diatonic or chromatic transposition preserving scale relationships
15
25
 
16
26
  ### Architecture
17
- - Notes-to-Stream bridge: converts LivePilot note dicts music21 Part objects with quantization (1/32 note grid)
18
- - Lazy imports: music21 is optional all 135 core tools work without it installed
19
- - Key hint parsing: handles "A minor" lowercase tonic for music21 compatibility
20
-
21
- ### Dependencies
22
- - Optional: `pip install 'music21>=9.3'` (not auto-installed — ~50MB with numpy/matplotlib)
27
+ - Pure Python `_theory_engine.py`: Krumhansl-Schmuckler key detection, Roman numeral analysis, voice leading checks
28
+ - Chordify bridge: groups notes by quantized beat position (1/32 note grid)
29
+ - Key hint parsing: handles "A minor", "F# major", enharmonic normalization
23
30
 
24
31
  ## 1.6.3 — Audit Hardening (March 2026)
25
32
 
package/README.md CHANGED
@@ -312,7 +312,7 @@ npx -y github:dreamrec/LivePilot --status
312
312
  | **Automation** | 8 | Clip envelope CRUD, 16-type curve engine, 15 named recipes, spectral-aware suggestions |
313
313
  | **Memory** | 8 | Save, recall, replay, and manage production techniques |
314
314
  | **Analyzer** | 20 | Real-time spectral analysis, key detection, sample manipulation, warp markers, device introspection (requires M4L device) |
315
- | **Theory** | 7 | Harmony analysis, Roman numerals, scale identification, chord suggestions, countermelody, SATB harmonization, smart transposition (requires music21) |
315
+ | **Theory** | 7 | Harmony analysis, Roman numerals, scale identification, chord suggestions, countermelody, SATB harmonization, smart transposition |
316
316
 
317
317
  <details>
318
318
  <summary><strong>Full tool list</strong></summary>
@@ -68,7 +68,7 @@ function anything() {
68
68
  function dispatch(cmd, args) {
69
69
  switch(cmd) {
70
70
  case "ping":
71
- send_response({"ok": true, "version": "1.6.4"});
71
+ send_response({"ok": true, "version": "1.6.5"});
72
72
  break;
73
73
  case "get_params":
74
74
  cmd_get_params(args);
@@ -1,2 +1,2 @@
1
1
  """LivePilot MCP Server — bridges MCP protocol to Ableton Live."""
2
- __version__ = "1.6.4"
2
+ __version__ = "1.6.5"
@@ -0,0 +1,366 @@
1
+ """Pure Python music theory engine — zero dependencies.
2
+
3
+ Replaces music21 for LivePilot's 7 theory tools. Implements:
4
+ Krumhansl-Schmuckler key detection, Roman numeral analysis,
5
+ voice leading checks, chord naming, and scale construction.
6
+ """
7
+
8
+ from __future__ import annotations
9
+
10
+ import math
11
+ import re
12
+ from collections import defaultdict
13
+
14
+ # ---------------------------------------------------------------------------
15
+ # Constants
16
+ # ---------------------------------------------------------------------------
17
+
18
+ NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
19
+
20
+ ENHARMONIC = {
21
+ 'Cb': 'B', 'Db': 'C#', 'Eb': 'D#', 'Fb': 'E', 'Gb': 'F#',
22
+ 'Ab': 'G#', 'Bb': 'A#',
23
+ 'B#': 'C', 'E#': 'F',
24
+ 'Cbb': 'A#', 'Dbb': 'C', 'Ebb': 'D', 'Fbb': 'D#', 'Gbb': 'F',
25
+ 'Abb': 'G', 'Bbb': 'A',
26
+ 'C##': 'D', 'D##': 'E', 'E##': 'F#', 'F##': 'G', 'G##': 'A',
27
+ 'A##': 'B', 'B##': 'C#',
28
+ }
29
+
30
+ MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
31
+ MINOR_PROFILE = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
32
+
33
+ DORIAN_PROFILE = MAJOR_PROFILE[2:] + MAJOR_PROFILE[:2]
34
+ PHRYGIAN_PROFILE = MAJOR_PROFILE[4:] + MAJOR_PROFILE[:4]
35
+ LYDIAN_PROFILE = MAJOR_PROFILE[5:] + MAJOR_PROFILE[:5]
36
+ MIXOLYDIAN_PROFILE = MAJOR_PROFILE[7:] + MAJOR_PROFILE[:7]
37
+ LOCRIAN_PROFILE = MAJOR_PROFILE[11:] + MAJOR_PROFILE[:11]
38
+
39
+ MODE_PROFILES = {
40
+ 'major': MAJOR_PROFILE, 'minor': MINOR_PROFILE,
41
+ 'dorian': DORIAN_PROFILE, 'phrygian': PHRYGIAN_PROFILE,
42
+ 'lydian': LYDIAN_PROFILE, 'mixolydian': MIXOLYDIAN_PROFILE,
43
+ 'locrian': LOCRIAN_PROFILE,
44
+ }
45
+
46
+ SCALES = {
47
+ 'major': [0, 2, 4, 5, 7, 9, 11], 'minor': [0, 2, 3, 5, 7, 8, 10],
48
+ 'dorian': [0, 2, 3, 5, 7, 9, 10], 'phrygian': [0, 1, 3, 5, 7, 8, 10],
49
+ 'lydian': [0, 2, 4, 6, 7, 9, 11], 'mixolydian': [0, 2, 4, 5, 7, 9, 10],
50
+ 'locrian': [0, 1, 3, 5, 6, 8, 10],
51
+ }
52
+
53
+ TRIAD_QUALITIES = {
54
+ 'major': ['major', 'minor', 'minor', 'major', 'major', 'minor', 'diminished'],
55
+ 'minor': ['minor', 'diminished', 'major', 'minor', 'minor', 'major', 'major'],
56
+ 'dorian': ['minor', 'minor', 'major', 'major', 'minor', 'diminished', 'major'],
57
+ 'phrygian': ['minor', 'major', 'major', 'minor', 'diminished', 'major', 'minor'],
58
+ 'lydian': ['major', 'major', 'minor', 'diminished', 'major', 'minor', 'minor'],
59
+ 'mixolydian': ['major', 'minor', 'diminished', 'major', 'minor', 'minor', 'major'],
60
+ 'locrian': ['diminished', 'major', 'minor', 'minor', 'major', 'major', 'minor'],
61
+ }
62
+
63
+ CHORD_PATTERNS = {
64
+ (0, 4, 7): 'major triad', (0, 3, 7): 'minor triad',
65
+ (0, 3, 6): 'diminished triad', (0, 4, 8): 'augmented triad',
66
+ (0, 2, 7): 'sus2', (0, 5, 7): 'sus4',
67
+ (0, 4, 7, 11): 'major seventh', (0, 3, 7, 10): 'minor seventh',
68
+ (0, 4, 7, 10): 'dominant seventh', (0, 3, 6, 9): 'diminished seventh',
69
+ (0, 3, 6, 10): 'half-diminished seventh',
70
+ }
71
+
72
+ ROMAN_LABELS = ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII']
73
+
74
+ # ---------------------------------------------------------------------------
75
+ # Functions
76
+ # ---------------------------------------------------------------------------
77
+
78
+
79
+ def pitch_name(midi: int) -> str:
80
+ """MIDI number -> note name. Always sharp spelling."""
81
+ return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
82
+
83
+
84
+ def parse_key(key_str: str) -> dict:
85
+ """Parse key string -> {tonic: 0-11, mode: str}."""
86
+ parts = key_str.strip().split()
87
+ raw_tonic = parts[0]
88
+ mode = parts[1].lower() if len(parts) > 1 else 'major'
89
+
90
+ # Normalize tonic: capitalize first letter
91
+ tonic_name = raw_tonic[0].upper() + raw_tonic[1:]
92
+
93
+ # Resolve enharmonics
94
+ if tonic_name in ENHARMONIC:
95
+ tonic_name = ENHARMONIC[tonic_name]
96
+
97
+ if tonic_name not in NOTE_NAMES:
98
+ raise ValueError(f"Unknown tonic: {tonic_name} (from '{key_str}')")
99
+
100
+ return {"tonic": NOTE_NAMES.index(tonic_name), "mode": mode}
101
+
102
+
103
+ def get_scale_pitches(tonic: int, mode: str) -> list[int]:
104
+ """Return pitch classes (0-11) for the scale."""
105
+ intervals = SCALES.get(mode, SCALES['major'])
106
+ return [(tonic + iv) % 12 for iv in intervals]
107
+
108
+
109
+ def build_chord(degree: int, tonic: int, mode: str) -> dict:
110
+ """Build triad from scale degree (0-indexed)."""
111
+ scale = get_scale_pitches(tonic, mode)
112
+ root = scale[degree % 7]
113
+ third = scale[(degree + 2) % 7]
114
+ fifth = scale[(degree + 4) % 7]
115
+ quality = TRIAD_QUALITIES.get(mode, TRIAD_QUALITIES['major'])[degree % 7]
116
+ return {
117
+ "root_pc": root,
118
+ "pitch_classes": [root, third, fifth],
119
+ "quality": quality,
120
+ "root_name": NOTE_NAMES[root],
121
+ }
122
+
123
+
124
+ def _pearson(x: list[float], y: list[float]) -> float:
125
+ """Pearson correlation coefficient."""
126
+ n = len(x)
127
+ mx = sum(x) / n
128
+ my = sum(y) / n
129
+ num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
130
+ dx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
131
+ dy = math.sqrt(sum((yi - my) ** 2 for yi in y))
132
+ if dx == 0 or dy == 0:
133
+ return 0.0
134
+ return num / (dx * dy)
135
+
136
+
137
+ def detect_key(notes: list[dict], mode_detection: bool = True) -> dict:
138
+ """Krumhansl-Schmuckler key detection."""
139
+ # Build pitch-class histogram weighted by duration
140
+ histogram = [0.0] * 12
141
+ for n in notes:
142
+ if n.get("mute", False):
143
+ continue
144
+ pc = n["pitch"] % 12
145
+ histogram[pc] += n.get("duration", 1.0)
146
+
147
+ profiles = MODE_PROFILES if mode_detection else {
148
+ 'major': MAJOR_PROFILE, 'minor': MINOR_PROFILE,
149
+ }
150
+
151
+ candidates = []
152
+ for mode_name, profile in profiles.items():
153
+ for tonic in range(12):
154
+ rotated = [histogram[(tonic + i) % 12] for i in range(12)]
155
+ r = _pearson(rotated, profile)
156
+ candidates.append({
157
+ "tonic": tonic,
158
+ "tonic_name": NOTE_NAMES[tonic],
159
+ "mode": mode_name,
160
+ "confidence": round(r, 3),
161
+ })
162
+
163
+ candidates.sort(key=lambda c: -c["confidence"])
164
+ best = candidates[0]
165
+ return {
166
+ "tonic": best["tonic"],
167
+ "tonic_name": best["tonic_name"],
168
+ "mode": best["mode"],
169
+ "confidence": best["confidence"],
170
+ "alternatives": candidates[1:9],
171
+ }
172
+
173
+
174
+ def chord_name(midi_pitches: list[int]) -> str:
175
+ """Identify chord from MIDI pitches -> 'C-major triad'."""
176
+ pcs = sorted(set(p % 12 for p in midi_pitches))
177
+ if not pcs:
178
+ return "unknown"
179
+ # Try each pitch class as potential root
180
+ for root in pcs:
181
+ intervals = tuple(sorted((pc - root) % 12 for pc in pcs))
182
+ if intervals in CHORD_PATTERNS:
183
+ return f"{NOTE_NAMES[root]}-{CHORD_PATTERNS[intervals]}"
184
+ return f"{NOTE_NAMES[pcs[0]]} chord"
185
+
186
+
187
+ def roman_numeral(chord_pcs: list[int], tonic: int, mode: str) -> dict:
188
+ """Match chord pitch classes -> Roman numeral figure."""
189
+ pcs_set = set(pc % 12 for pc in chord_pcs)
190
+ bass_pc = chord_pcs[0] % 12 if chord_pcs else 0
191
+
192
+ best = {"figure": "?", "quality": "unknown", "degree": 0,
193
+ "inversion": 0, "root_name": NOTE_NAMES[tonic]}
194
+
195
+ for degree in range(7):
196
+ triad = build_chord(degree, tonic, mode)
197
+ triad_set = set(triad["pitch_classes"])
198
+ if pcs_set == triad_set or pcs_set.issubset(triad_set):
199
+ quality = triad["quality"]
200
+ label = ROMAN_LABELS[degree]
201
+ if quality in ("minor", "diminished"):
202
+ label = label.lower()
203
+ if quality == "diminished":
204
+ label += "\u00b0"
205
+ # Detect inversion
206
+ inv = 0
207
+ if bass_pc != triad["root_pc"]:
208
+ if bass_pc == triad["pitch_classes"][1]:
209
+ inv = 1
210
+ elif bass_pc == triad["pitch_classes"][2]:
211
+ inv = 2
212
+ best = {"figure": label, "quality": quality, "degree": degree,
213
+ "inversion": inv, "root_name": triad["root_name"]}
214
+ break
215
+
216
+ return best
217
+
218
+
219
+ def roman_figure_to_pitches(figure: str, tonic: int, mode: str) -> dict:
220
+ """Parse Roman numeral string -> concrete MIDI pitches.
221
+
222
+ Handles: 'IV', 'bVII7', '#ivo7', 'ii7', etc.
223
+ """
224
+ remaining = figure
225
+ chromatic_shift = 0
226
+ acc_len = 0
227
+
228
+ # Parse leading accidentals
229
+ while remaining and remaining[0] in ('b', '#'):
230
+ if remaining[0] == 'b':
231
+ chromatic_shift -= 1
232
+ else:
233
+ chromatic_shift += 1
234
+ remaining = remaining[1:]
235
+ acc_len += 1
236
+
237
+ # Parse Roman numeral
238
+ upper_remaining = remaining.upper()
239
+ numeral = ""
240
+ for rn in ['VII', 'VI', 'IV', 'III', 'II', 'V', 'I']:
241
+ if upper_remaining.startswith(rn):
242
+ numeral = rn
243
+ break
244
+
245
+ if not numeral:
246
+ return {"figure": figure, "error": f"Cannot parse: {figure}"}
247
+
248
+ # Detect case of the numeral in original figure
249
+ numeral_in_orig = remaining[:len(numeral)]
250
+ is_minor_quality = numeral_in_orig == numeral_in_orig.lower()
251
+ remaining = remaining[len(numeral):]
252
+
253
+ degree = ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII'].index(numeral)
254
+
255
+ # Build base triad from scale
256
+ chord = build_chord(degree, tonic, mode)
257
+ root_pc = (chord["root_pc"] + chromatic_shift) % 12
258
+
259
+ # Build pitch classes based on quality
260
+ if is_minor_quality:
261
+ pcs = [root_pc, (root_pc + 3) % 12, (root_pc + 7) % 12]
262
+ quality = "minor"
263
+ else:
264
+ # Use scale-derived quality
265
+ quality = chord["quality"]
266
+ if quality == "minor":
267
+ pcs = [root_pc, (root_pc + 3) % 12, (root_pc + 7) % 12]
268
+ elif quality == "diminished":
269
+ pcs = [root_pc, (root_pc + 3) % 12, (root_pc + 6) % 12]
270
+ elif quality == "augmented":
271
+ pcs = [root_pc, (root_pc + 4) % 12, (root_pc + 8) % 12]
272
+ else:
273
+ pcs = [root_pc, (root_pc + 4) % 12, (root_pc + 7) % 12]
274
+
275
+ # Handle suffix
276
+ suffix = remaining.lower()
277
+ if suffix == "7":
278
+ seventh = (root_pc + 10) % 12 # dominant/minor 7th
279
+ pcs.append(seventh)
280
+ if quality == "minor":
281
+ quality = "minor seventh"
282
+ else:
283
+ quality = "dominant seventh"
284
+ elif suffix == "o7":
285
+ seventh = (root_pc + 9) % 12 # diminished 7th
286
+ pcs.append(seventh)
287
+ quality = "diminished seventh"
288
+ elif suffix == "\u00b0":
289
+ quality = "diminished"
290
+ pcs = [root_pc, (root_pc + 3) % 12, (root_pc + 6) % 12]
291
+
292
+ # Convert to MIDI pitches in octave 4 (root at its natural octave-4 pitch)
293
+ base_midi = 60 + root_pc
294
+ midi = []
295
+ for pc in pcs:
296
+ p = base_midi + ((pc - root_pc) % 12)
297
+ midi.append(p)
298
+
299
+ return {
300
+ "figure": figure,
301
+ "root_pc": root_pc,
302
+ "pitches": [pitch_name(m) for m in midi],
303
+ "midi_pitches": midi,
304
+ "quality": quality,
305
+ }
306
+
307
+
308
+ def check_voice_leading(prev_pitches: list[int], curr_pitches: list[int]) -> list[dict]:
309
+ """Check voice leading issues between two chords."""
310
+ issues = []
311
+ if len(prev_pitches) < 2 or len(curr_pitches) < 2:
312
+ if len(curr_pitches) >= 2 and curr_pitches[-1] < curr_pitches[0]:
313
+ issues.append({"type": "voice_crossing"})
314
+ return issues
315
+
316
+ prev_bass, prev_sop = prev_pitches[0], prev_pitches[-1]
317
+ curr_bass, curr_sop = curr_pitches[0], curr_pitches[-1]
318
+
319
+ prev_iv = (prev_sop - prev_bass) % 12
320
+ curr_iv = (curr_sop - curr_bass) % 12
321
+
322
+ bass_moved = prev_bass != curr_bass
323
+ sop_moved = prev_sop != curr_sop
324
+ both_moved = bass_moved and sop_moved
325
+
326
+ if both_moved and prev_iv == 7 and curr_iv == 7:
327
+ issues.append({"type": "parallel_fifths"})
328
+
329
+ if both_moved and prev_iv % 12 == 0 and curr_iv % 12 == 0:
330
+ issues.append({"type": "parallel_octaves"})
331
+
332
+ if curr_sop < curr_bass:
333
+ issues.append({"type": "voice_crossing"})
334
+
335
+ # Hidden fifth: same direction motion landing on P5
336
+ if both_moved:
337
+ bass_dir = curr_bass - prev_bass
338
+ sop_dir = curr_sop - prev_sop
339
+ same_dir = (bass_dir > 0 and sop_dir > 0) or (bass_dir < 0 and sop_dir < 0)
340
+ if same_dir and curr_iv == 7:
341
+ issues.append({"type": "hidden_fifth"})
342
+
343
+ return issues
344
+
345
+
346
+ def chordify(notes: list[dict], quant: float = 0.125) -> list[dict]:
347
+ """Group notes by quantized beat position."""
348
+ groups: dict[float, list[dict]] = defaultdict(list)
349
+ for n in notes:
350
+ if n.get("mute", False):
351
+ continue
352
+ q_time = round(n["start_time"] / quant) * quant
353
+ groups[q_time].append(n)
354
+
355
+ result = []
356
+ for beat in sorted(groups.keys()):
357
+ group = groups[beat]
358
+ pitches = sorted(n["pitch"] for n in group)
359
+ duration = max(max(n["duration"] for n in group), quant)
360
+ result.append({
361
+ "beat": round(beat, 4),
362
+ "duration": round(duration, 4),
363
+ "pitches": pitches,
364
+ "pitch_classes": sorted(set(p % 12 for p in pitches)),
365
+ })
366
+ return result
@@ -1,4 +1,4 @@
1
- """Music theory tools powered by music21.
1
+ """Music theory tools pure Python, zero dependencies.
2
2
 
3
3
  7 tools for harmonic analysis, chord suggestion, voice leading detection,
4
4
  counterpoint generation, scale identification, harmonization, and intelligent
@@ -7,18 +7,18 @@ transposition — all working directly on live session clip data via get_notes.
7
7
  Design principle: tools compute from data, the LLM interprets and explains.
8
8
  Returns precise musical data (Roman numerals, pitch names, intervals), never
9
9
  explanations the LLM already knows from training.
10
-
11
- Requires: pip install music21 (lazy-imported, never at module level)
12
10
  """
13
11
 
14
12
  from __future__ import annotations
15
13
 
14
+ import random
16
15
  from collections import defaultdict
17
16
  from typing import Optional
18
17
 
19
18
  from fastmcp import Context
20
19
 
21
20
  from ..server import mcp
21
+ from . import _theory_engine as engine
22
22
 
23
23
 
24
24
  # -- Shared utilities --------------------------------------------------------
@@ -36,99 +36,19 @@ def _get_clip_notes(ctx: Context, track_index: int, clip_index: int) -> list[dic
36
36
  return result.get("notes", [])
37
37
 
38
38
 
39
- def _parse_key_string(key_str: str):
40
- """Parse a human-friendly key string into a music21 Key object.
41
-
42
- Accepts: "C", "c", "C major", "A minor", "g minor", "F# major", etc.
43
- music21's Key() wants: uppercase tonic = major, lowercase = minor.
44
- """
45
- from music21 import key
46
- hint = key_str.strip()
47
- if ' ' in hint:
48
- parts = hint.split()
49
- tonic = parts[0]
50
- mode = parts[1].lower() if len(parts) > 1 else 'major'
51
- if mode == 'minor':
52
- tonic = tonic[0].lower() + tonic[1:]
53
- return key.Key(tonic)
54
- return key.Key(hint)
55
-
56
-
57
- def _notes_to_stream(notes: list[dict], key_hint: str | None = None):
58
- """Convert LivePilot note dicts to a music21 Stream.
59
-
60
- This is the bridge between Ableton's note format and music21's
61
- analysis engine. Groups simultaneous notes into Chord objects.
62
- Quantizes start_times to 1/32 note resolution to avoid chordify
63
- fragmentation from micro-timing variations.
64
- """
65
- from music21 import stream, note, chord, meter
66
-
67
- s = stream.Part()
68
- s.append(meter.TimeSignature('4/4'))
69
-
39
+ def _detect_or_parse_key(notes: list[dict], key_hint: str | None = None) -> dict:
40
+ """Detect key from notes, or parse the user's hint."""
70
41
  if key_hint:
71
42
  try:
72
- k = _parse_key_string(key_hint)
73
- s.insert(0, k)
74
- except Exception:
43
+ return engine.parse_key(key_hint)
44
+ except ValueError:
75
45
  pass
46
+ return engine.detect_key(notes)
76
47
 
77
- # Quantize to 1/32 note (0.125 beats) to group near-simultaneous notes
78
- QUANT = 0.125
79
-
80
- time_groups: dict[float, list[dict]] = defaultdict(list)
81
- for n in notes:
82
- if n.get("mute", False):
83
- continue
84
- q_time = round(n["start_time"] / QUANT) * QUANT
85
- time_groups[q_time].append(n)
86
-
87
- for t in sorted(time_groups.keys()):
88
- group = time_groups[t]
89
- if len(group) == 1:
90
- n = group[0]
91
- m21_note = note.Note(n["pitch"])
92
- m21_note.quarterLength = max(QUANT, n["duration"])
93
- m21_note.volume.velocity = n.get("velocity", 100)
94
- s.insert(t, m21_note)
95
- else:
96
- pitches = sorted(set(n["pitch"] for n in group))
97
- dur = max(n["duration"] for n in group)
98
- m21_chord = chord.Chord(pitches)
99
- m21_chord.quarterLength = max(QUANT, dur)
100
- s.insert(t, m21_chord)
101
-
102
- return s
103
-
104
-
105
- def _detect_key(s):
106
- """Detect key from a music21 stream. Uses Krumhansl-Schmuckler algorithm."""
107
- from music21 import key as m21key
108
-
109
- # Check if key was already set by the user
110
- existing = list(s.recurse().getElementsByClass(m21key.Key))
111
- if existing:
112
- return existing[0]
113
-
114
- return s.analyze('key')
115
48
 
116
-
117
- def _pitch_name(midi_num: int) -> str:
118
- """MIDI number to note name (e.g., 60 → 'C4')."""
119
- from music21 import pitch
120
- return str(pitch.Pitch(midi_num))
121
-
122
-
123
- def _require_music21():
124
- """Verify music21 is installed, raise clear error if not."""
125
- try:
126
- import music21 # noqa: F401
127
- except ImportError:
128
- raise ImportError(
129
- "music21 is required for theory tools. "
130
- "Install with: pip install 'music21>=9.3'"
131
- )
49
+ def _key_display(key_info: dict) -> str:
50
+ """Format key info as 'C major' string."""
51
+ return f"{key_info['tonic_name']} {key_info['mode']}"
132
52
 
133
53
 
134
54
  # -- Tool 1: analyze_harmony ------------------------------------------------
@@ -148,54 +68,53 @@ def analyze_harmony(
148
68
  Returns chord progression with Roman numeral analysis. The tool computes
149
69
  the data; interpret the musical meaning yourself.
150
70
  """
151
- _require_music21()
152
- from music21 import roman
153
-
154
71
  notes = _get_clip_notes(ctx, track_index, clip_index)
155
72
  if not notes:
156
73
  return {"error": "No notes in clip", "suggestion": "Add notes first"}
157
74
 
158
- s = _notes_to_stream(notes, key_hint=key)
159
- detected_key = _detect_key(s)
75
+ key_info = _detect_or_parse_key(notes, key_hint=key)
76
+ tonic = key_info["tonic"]
77
+ mode = key_info["mode"]
160
78
 
161
- chordified = s.chordify()
79
+ chord_groups = engine.chordify(notes)
162
80
  chords = []
163
81
 
164
- for c in chordified.recurse().getElementsByClass('Chord'):
82
+ for group in chord_groups:
83
+ pitches = group["pitches"]
84
+ pcs = group["pitch_classes"]
85
+
86
+ rn = engine.roman_numeral(pcs, tonic, mode)
87
+ cn = engine.chord_name(pitches)
88
+
165
89
  entry = {
166
- "beat": round(float(c.offset), 3),
167
- "duration": round(float(c.quarterLength), 3),
168
- "pitches": [str(p) for p in c.pitches],
169
- "midi_pitches": [p.midi for p in c.pitches],
170
- "chord_name": c.pitchedCommonName,
90
+ "beat": group["beat"],
91
+ "duration": group["duration"],
92
+ "pitches": [engine.pitch_name(p) for p in pitches],
93
+ "midi_pitches": pitches,
94
+ "chord_name": cn,
95
+ "roman_numeral": rn["figure"],
96
+ "figure": rn["figure"],
97
+ "quality": rn["quality"],
98
+ "inversion": rn["inversion"],
99
+ "scale_degree": rn["degree"] + 1,
171
100
  }
172
- try:
173
- rn = roman.romanNumeralFromChord(c, detected_key)
174
- entry["roman_numeral"] = rn.romanNumeral
175
- entry["figure"] = rn.figure
176
- entry["quality"] = rn.quality
177
- entry["inversion"] = rn.inversion()
178
- entry["scale_degree"] = rn.scaleDegree
179
- except Exception:
180
- entry["roman_numeral"] = "?"
181
- entry["figure"] = "?"
182
-
183
101
  chords.append(entry)
184
102
 
185
103
  progression = " - ".join(c.get("figure", "?") for c in chords[:24])
186
104
 
187
- # Key confidence
188
- key_info = {"key": str(detected_key)}
189
- if hasattr(detected_key, 'correlationCoefficient'):
190
- key_info["confidence"] = round(detected_key.correlationCoefficient, 3)
191
- if hasattr(detected_key, 'alternateInterpretations'):
192
- alts = detected_key.alternateInterpretations[:3]
193
- key_info["alternatives"] = [str(k) for k in alts]
105
+ key_result = {
106
+ "key": _key_display(key_info),
107
+ "confidence": key_info.get("confidence"),
108
+ }
109
+ if "alternatives" in key_info:
110
+ key_result["alternatives"] = [
111
+ f"{a['tonic_name']} {a['mode']}" for a in key_info["alternatives"][:3]
112
+ ]
194
113
 
195
114
  return {
196
115
  "track_index": track_index,
197
116
  "clip_index": clip_index,
198
- **key_info,
117
+ **key_result,
199
118
  "chord_count": len(chords),
200
119
  "progression": progression,
201
120
  "chords": chords[:32],
@@ -220,40 +139,33 @@ def suggest_next_chord(
220
139
 
221
140
  Returns concrete chord suggestions with pitches ready for add_notes.
222
141
  """
223
- _require_music21()
224
- from music21 import roman
225
-
226
142
  notes = _get_clip_notes(ctx, track_index, clip_index)
227
143
  if not notes:
228
144
  return {"error": "No notes in clip"}
229
145
 
230
- s = _notes_to_stream(notes, key_hint=key)
231
- detected_key = _detect_key(s)
146
+ key_info = _detect_or_parse_key(notes, key_hint=key)
147
+ tonic = key_info["tonic"]
148
+ mode = key_info["mode"]
232
149
 
233
- # Find the last chord
234
- chordified = s.chordify()
235
- chord_list = list(chordified.recurse().getElementsByClass('Chord'))
236
- if not chord_list:
150
+ chord_groups = engine.chordify(notes)
151
+ if not chord_groups:
237
152
  return {"error": "No chords detected in clip"}
238
153
 
239
- last_chord = chord_list[-1]
240
- last_figure = "I"
241
- try:
242
- last_rn = roman.romanNumeralFromChord(last_chord, detected_key)
243
- last_figure = last_rn.romanNumeral
244
- except Exception:
245
- last_rn = None
154
+ # Analyze last chord
155
+ last_group = chord_groups[-1]
156
+ last_rn = engine.roman_numeral(last_group["pitch_classes"], tonic, mode)
157
+ last_figure = last_rn["figure"]
246
158
 
247
159
  # Progression maps by style
248
160
  _progressions = {
249
161
  "common_practice": {
250
162
  "I": ["IV", "V", "vi", "ii"],
251
- "ii": ["V", "viio", "IV"],
163
+ "ii": ["V", "vii\u00b0", "IV"],
252
164
  "iii": ["vi", "IV", "ii"],
253
165
  "IV": ["V", "I", "ii"],
254
166
  "V": ["I", "vi", "IV"],
255
167
  "vi": ["ii", "IV", "V", "I"],
256
- "viio": ["I", "iii"],
168
+ "vii\u00b0": ["I", "iii"],
257
169
  },
258
170
  "jazz": {
259
171
  "I": ["IV7", "ii7", "vi7", "bVII7"],
@@ -283,7 +195,6 @@ def suggest_next_chord(
283
195
  # Match the last chord to the closest key in the map
284
196
  candidates = style_map.get(last_figure)
285
197
  if not candidates:
286
- # Try uppercase/lowercase variants
287
198
  for k in style_map:
288
199
  if k.upper() == last_figure.upper():
289
200
  candidates = style_map[k]
@@ -294,21 +205,21 @@ def suggest_next_chord(
294
205
  # Build concrete suggestions with MIDI pitches
295
206
  suggestions = []
296
207
  for fig in candidates:
297
- try:
298
- rn = roman.RomanNumeral(fig, detected_key)
208
+ result = engine.roman_figure_to_pitches(fig, tonic, mode)
209
+ if "error" not in result:
299
210
  suggestions.append({
300
211
  "figure": fig,
301
- "chord_name": rn.pitchedCommonName,
302
- "pitches": [str(p) for p in rn.pitches],
303
- "midi_pitches": [p.midi for p in rn.pitches],
304
- "quality": rn.quality,
212
+ "chord_name": engine.chord_name(result["midi_pitches"]),
213
+ "pitches": result["pitches"],
214
+ "midi_pitches": result["midi_pitches"],
215
+ "quality": result["quality"],
305
216
  })
306
- except Exception:
217
+ else:
307
218
  suggestions.append({"figure": fig, "chord_name": fig})
308
219
 
309
220
  return {
310
- "key": str(detected_key),
311
- "last_chord": last_rn.figure if last_rn else "unknown",
221
+ "key": _key_display(key_info),
222
+ "last_chord": last_figure,
312
223
  "style": style,
313
224
  "suggestions": suggestions,
314
225
  }
@@ -330,21 +241,16 @@ def detect_theory_issues(
330
241
  strict=False: Only clear errors (parallels, out-of-key).
331
242
  strict=True: Also flag style issues (large leaps, missing resolution).
332
243
 
333
- Uses music21's VoiceLeadingQuartet for accurate parallel detection.
334
244
  Returns ranked issues with beat positions.
335
245
  """
336
- _require_music21()
337
- from music21 import roman, voiceLeading, note as m21note
338
-
339
246
  notes = _get_clip_notes(ctx, track_index, clip_index)
340
247
  if not notes:
341
248
  return {"error": "No notes in clip"}
342
249
 
343
- s = _notes_to_stream(notes, key_hint=key)
344
- detected_key = _detect_key(s)
345
- scale_pitch_classes = set(
346
- p.midi % 12 for p in detected_key.getScale().getPitches()
347
- )
250
+ key_info = _detect_or_parse_key(notes, key_hint=key)
251
+ tonic = key_info["tonic"]
252
+ mode = key_info["mode"]
253
+ scale_pcs = set(engine.get_scale_pitches(tonic, mode))
348
254
 
349
255
  issues = []
350
256
 
@@ -352,86 +258,53 @@ def detect_theory_issues(
352
258
  for n in notes:
353
259
  if n.get("mute", False):
354
260
  continue
355
- if n["pitch"] % 12 not in scale_pitch_classes:
261
+ if n["pitch"] % 12 not in scale_pcs:
356
262
  issues.append({
357
263
  "type": "out_of_key",
358
264
  "severity": "warning",
359
265
  "beat": round(n["start_time"], 3),
360
- "detail": f"{_pitch_name(n['pitch'])} not in {detected_key}",
266
+ "detail": f"{engine.pitch_name(n['pitch'])} not in {_key_display(key_info)}",
361
267
  })
362
268
 
363
- # 2. Parallel fifths/octaves using VoiceLeadingQuartet
364
- chordified = s.chordify()
365
- chord_list = list(chordified.recurse().getElementsByClass('Chord'))
366
-
367
- for i in range(1, len(chord_list)):
368
- prev_c = chord_list[i - 1]
369
- curr_c = chord_list[i]
370
- prev_pitches = sorted(prev_c.pitches, key=lambda p: p.midi)
371
- curr_pitches = sorted(curr_c.pitches, key=lambda p: p.midi)
372
-
373
- if len(prev_pitches) < 2 or len(curr_pitches) < 2:
374
- continue
269
+ # 2. Parallel fifths/octaves and voice crossing
270
+ chord_groups = engine.chordify(notes)
271
+ for i in range(1, len(chord_groups)):
272
+ prev_pitches = chord_groups[i - 1]["pitches"]
273
+ curr_pitches = chord_groups[i]["pitches"]
274
+ beat = chord_groups[i]["beat"]
275
+
276
+ vl_issues = engine.check_voice_leading(prev_pitches, curr_pitches)
277
+ for vl in vl_issues:
278
+ severity = "error" if vl["type"] in ("parallel_fifths", "parallel_octaves") else "warning"
279
+ if vl["type"] == "hidden_fifth":
280
+ severity = "info"
281
+ if not strict:
282
+ continue
283
+ detail_map = {
284
+ "parallel_fifths": "Parallel fifths in outer voices",
285
+ "parallel_octaves": "Parallel octaves in outer voices",
286
+ "voice_crossing": "Voice crossing detected",
287
+ "hidden_fifth": "Hidden fifth in outer voices",
288
+ }
289
+ issues.append({
290
+ "type": vl["type"],
291
+ "severity": severity,
292
+ "beat": round(beat, 3),
293
+ "detail": detail_map.get(vl["type"], vl["type"]),
294
+ })
375
295
 
376
- # Check outer voices (bass and soprano)
377
- try:
378
- vlq = voiceLeading.VoiceLeadingQuartet(
379
- prev_pitches[-1], curr_pitches[-1], # soprano
380
- prev_pitches[0], curr_pitches[0], # bass
381
- )
382
- if vlq.parallelFifth():
383
- issues.append({
384
- "type": "parallel_fifths",
385
- "severity": "error",
386
- "beat": round(float(curr_c.offset), 3),
387
- "detail": "Parallel fifths in outer voices",
388
- })
389
- if vlq.parallelOctave():
390
- issues.append({
391
- "type": "parallel_octaves",
392
- "severity": "error",
393
- "beat": round(float(curr_c.offset), 3),
394
- "detail": "Parallel octaves in outer voices",
395
- })
396
- if vlq.voiceCrossing():
397
- issues.append({
398
- "type": "voice_crossing",
399
- "severity": "warning",
400
- "beat": round(float(curr_c.offset), 3),
401
- "detail": "Voice crossing detected",
402
- })
403
- if strict and vlq.hiddenFifth():
296
+ # 3. Unresolved dominant (strict mode)
297
+ if strict:
298
+ for i in range(len(chord_groups) - 1):
299
+ rn = engine.roman_numeral(chord_groups[i]["pitch_classes"], tonic, mode)
300
+ next_rn = engine.roman_numeral(chord_groups[i + 1]["pitch_classes"], tonic, mode)
301
+ if rn["figure"] in ('V', 'V7') and next_rn["figure"] not in ('I', 'i', 'vi', 'VI'):
404
302
  issues.append({
405
- "type": "hidden_fifth",
303
+ "type": "unresolved_dominant",
406
304
  "severity": "info",
407
- "beat": round(float(curr_c.offset), 3),
408
- "detail": "Hidden fifth in outer voices",
305
+ "beat": round(chord_groups[i]["beat"], 3),
306
+ "detail": f"{rn['figure']} resolves to {next_rn['figure']} instead of tonic",
409
307
  })
410
- except Exception:
411
- pass
412
-
413
- # 3. Unresolved dominant (strict mode)
414
- if strict:
415
- for i in range(len(chord_list) - 1):
416
- try:
417
- rn = roman.romanNumeralFromChord(chord_list[i], detected_key)
418
- next_rn = roman.romanNumeralFromChord(
419
- chord_list[i + 1], detected_key
420
- )
421
- if rn.romanNumeral in ('V', 'V7') and next_rn.romanNumeral not in (
422
- 'I', 'i', 'vi', 'VI'
423
- ):
424
- issues.append({
425
- "type": "unresolved_dominant",
426
- "severity": "info",
427
- "beat": round(float(chord_list[i].offset), 3),
428
- "detail": (
429
- f"{rn.figure} resolves to {next_rn.figure} "
430
- f"instead of tonic"
431
- ),
432
- })
433
- except Exception:
434
- pass
435
308
 
436
309
  # 4. Large leaps without resolution (strict mode)
437
310
  if strict:
@@ -453,7 +326,7 @@ def detect_theory_issues(
453
326
  issues.sort(key=lambda x: (severity_order.get(x["severity"], 3), x.get("beat", 0)))
454
327
 
455
328
  return {
456
- "key": str(detected_key),
329
+ "key": _key_display(key_info),
457
330
  "strict_mode": strict,
458
331
  "issue_count": len(issues),
459
332
  "errors": sum(1 for i in issues if i["severity"] == "error"),
@@ -472,41 +345,31 @@ def identify_scale(
472
345
  ) -> dict:
473
346
  """Identify the scale/mode of a MIDI clip beyond basic major/minor.
474
347
 
475
- Goes deeper than get_detected_key uses music21's Krumhansl-Schmuckler
476
- algorithm with alternateInterpretations for modes (Dorian, Phrygian,
477
- Lydian, Mixolydian) and exotic scales.
348
+ Uses Krumhansl-Schmuckler algorithm with 7 mode profiles (major, minor,
349
+ dorian, phrygian, lydian, mixolydian, locrian).
478
350
 
479
351
  Returns ranked key matches with confidence scores.
480
352
  """
481
- _require_music21()
482
-
483
353
  notes = _get_clip_notes(ctx, track_index, clip_index)
484
354
  if not notes:
485
355
  return {"error": "No notes in clip"}
486
356
 
487
- s = _notes_to_stream(notes)
488
-
489
- # music21's key analysis returns the best match and alternatives
490
- detected = s.analyze('key')
357
+ detected = engine.detect_key(notes, mode_detection=True)
491
358
 
492
359
  results = [{
493
- "key": str(detected),
494
- "confidence": round(detected.correlationCoefficient, 3)
495
- if hasattr(detected, 'correlationCoefficient') else None,
496
- "mode": detected.mode,
497
- "tonic": str(detected.tonic),
360
+ "key": f"{detected['tonic_name']} {detected['mode']}",
361
+ "confidence": detected["confidence"],
362
+ "mode": detected["mode"],
363
+ "tonic": detected["tonic_name"],
498
364
  }]
499
365
 
500
- # Add alternatives
501
- if hasattr(detected, 'alternateInterpretations'):
502
- for alt in detected.alternateInterpretations[:7]:
503
- results.append({
504
- "key": str(alt),
505
- "confidence": round(alt.correlationCoefficient, 3)
506
- if hasattr(alt, 'correlationCoefficient') else None,
507
- "mode": alt.mode,
508
- "tonic": str(alt.tonic),
509
- })
366
+ for alt in detected.get("alternatives", [])[:7]:
367
+ results.append({
368
+ "key": f"{alt['tonic_name']} {alt['mode']}",
369
+ "confidence": alt["confidence"],
370
+ "mode": alt["mode"],
371
+ "tonic": alt["tonic_name"],
372
+ })
510
373
 
511
374
  # Pitch class usage for context
512
375
  pitch_classes = defaultdict(float)
@@ -514,9 +377,8 @@ def identify_scale(
514
377
  if not n.get("mute", False):
515
378
  pitch_classes[n["pitch"] % 12] += n["duration"]
516
379
 
517
- note_names = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
518
380
  pc_usage = {
519
- note_names[pc]: round(dur, 3)
381
+ engine.NOTE_NAMES[pc]: round(dur, 3)
520
382
  for pc, dur in sorted(pitch_classes.items())
521
383
  }
522
384
 
@@ -548,21 +410,18 @@ def harmonize_melody(
548
410
 
549
411
  Processing time: 2-5s.
550
412
  """
551
- _require_music21()
552
- from music21 import roman
553
-
554
413
  notes = _get_clip_notes(ctx, track_index, clip_index)
555
414
  if not notes:
556
415
  return {"error": "No notes in clip"}
557
416
 
558
- # Use only non-muted, sorted by time
559
417
  melody = sorted(
560
418
  [n for n in notes if not n.get("mute", False)],
561
419
  key=lambda n: n["start_time"],
562
420
  )
563
421
 
564
- s = _notes_to_stream(melody, key_hint=key)
565
- detected_key = _detect_key(s)
422
+ key_info = _detect_or_parse_key(melody, key_hint=key)
423
+ tonic = key_info["tonic"]
424
+ mode = key_info["mode"]
566
425
 
567
426
  result_voices = {"soprano": [], "bass": []}
568
427
  if voices == 4:
@@ -572,38 +431,35 @@ def harmonize_melody(
572
431
  prev_bass_midi = None
573
432
 
574
433
  for n in melody:
575
- from music21 import pitch as m21pitch
576
434
  melody_pitch = n["pitch"]
577
435
  beat = n["start_time"]
578
436
  dur = n["duration"]
579
437
  mel_pc = melody_pitch % 12
580
438
 
581
439
  # Find the best diatonic chord containing this pitch
582
- best_rn = None
583
- for degree in [1, 4, 5, 6, 2, 3, 7]:
584
- try:
585
- rn = roman.RomanNumeral(degree, detected_key)
586
- chord_pcs = [p.midi % 12 for p in rn.pitches]
587
- if mel_pc in chord_pcs:
588
- best_rn = rn
589
- break
590
- except Exception:
591
- continue
592
-
593
- if best_rn is None:
594
- # Fallback: use tonic triad
595
- best_rn = roman.RomanNumeral(1, detected_key)
596
-
597
- chord_midis = sorted([p.midi for p in best_rn.pitches])
440
+ best_chord = None
441
+ for degree in [0, 3, 4, 5, 1, 2, 6]: # I, IV, V, vi, ii, iii, vii
442
+ chord = engine.build_chord(degree, tonic, mode)
443
+ if mel_pc in chord["pitch_classes"]:
444
+ best_chord = chord
445
+ break
446
+
447
+ if best_chord is None:
448
+ best_chord = engine.build_chord(0, tonic, mode)
449
+
450
+ # Build MIDI pitches for the chord
451
+ chord_midis = sorted([
452
+ 60 + ((pc - best_chord["root_pc"]) % 12) + best_chord["root_pc"]
453
+ for pc in best_chord["pitch_classes"]
454
+ ])
598
455
 
599
456
  # Bass: root in low octave, smooth motion preferred
600
- bass = chord_midis[0]
601
- while bass > 52:
457
+ bass = 36 + best_chord["root_pc"]
458
+ if bass > 52:
602
459
  bass -= 12
603
- while bass < 36:
460
+ if bass < 36:
604
461
  bass += 12
605
462
  if prev_bass_midi is not None:
606
- # Try octave that's closest to previous bass
607
463
  options = [bass, bass - 12, bass + 12]
608
464
  options = [b for b in options if 33 <= b <= 55]
609
465
  if options:
@@ -650,7 +506,7 @@ def harmonize_melody(
650
506
  })
651
507
 
652
508
  result = {
653
- "key": str(detected_key),
509
+ "key": _key_display(key_info),
654
510
  "voices": voices,
655
511
  "melody_notes": len(melody),
656
512
  }
@@ -683,8 +539,6 @@ def generate_countermelody(
683
539
  Returns note data ready for add_notes on a new track.
684
540
  Processing time: 2-5s.
685
541
  """
686
- _require_music21()
687
- import random
688
542
  random.seed(seed)
689
543
 
690
544
  notes = _get_clip_notes(ctx, track_index, clip_index)
@@ -696,15 +550,11 @@ def generate_countermelody(
696
550
  key=lambda n: n["start_time"],
697
551
  )
698
552
 
699
- s = _notes_to_stream(melody, key_hint=key)
700
- detected_key = _detect_key(s)
701
- scale_pcs = [p.midi % 12 for p in detected_key.getScale().getPitches()]
553
+ key_info = _detect_or_parse_key(melody, key_hint=key)
554
+ scale_pcs = set(engine.get_scale_pitches(key_info["tonic"], key_info["mode"]))
702
555
 
703
556
  # Build pool of scale pitches in range
704
- pool = []
705
- for p in range(range_low, range_high + 1):
706
- if p % 12 in scale_pcs:
707
- pool.append(p)
557
+ pool = [p for p in range(range_low, range_high + 1) if p % 12 in scale_pcs]
708
558
  if not pool:
709
559
  return {"error": "No scale pitches in given range"}
710
560
 
@@ -720,7 +570,6 @@ def generate_countermelody(
720
570
  dur = n["duration"] / species
721
571
 
722
572
  for s_idx in range(species):
723
- # Score candidates
724
573
  scored = []
725
574
  for cp in pool:
726
575
  iv = abs(cp - mel_pitch) % 12
@@ -736,10 +585,9 @@ def generate_countermelody(
736
585
  if (mel_dir > 0 and cp_dir < 0) or (mel_dir < 0 and cp_dir > 0):
737
586
  score += 10
738
587
  # Penalize parallel perfect intervals
739
- if prev_cp is not None and i > 0:
740
- prev_iv = abs(prev_cp - melody[i - 1]["pitch"]) % 12
741
- if prev_iv == iv and iv in (0, 7):
742
- score -= 50 # Hard penalty for parallel P5/P8
588
+ prev_iv = abs(prev_cp - melody[i - 1]["pitch"]) % 12
589
+ if prev_iv == iv and iv in (0, 7):
590
+ score -= 50
743
591
 
744
592
  # Stepwise motion bonus
745
593
  if prev_cp is not None:
@@ -753,12 +601,10 @@ def generate_countermelody(
753
601
  else:
754
602
  score += 3
755
603
 
756
- # Small random variation for musicality
757
604
  score += random.uniform(0, 2)
758
605
  scored.append((cp, score))
759
606
 
760
607
  if not scored:
761
- # Fallback: pick any pool note
762
608
  scored = [(random.choice(pool), 0)]
763
609
 
764
610
  scored.sort(key=lambda x: -x[1])
@@ -773,12 +619,12 @@ def generate_countermelody(
773
619
  prev_cp = chosen
774
620
 
775
621
  return {
776
- "key": str(detected_key),
622
+ "key": _key_display(key_info),
777
623
  "species": species,
778
624
  "melody_notes": len(melody),
779
625
  "counter_notes": counter_notes,
780
626
  "counter_note_count": len(counter_notes),
781
- "range": f"{_pitch_name(range_low)}-{_pitch_name(range_high)}",
627
+ "range": f"{engine.pitch_name(range_low)}-{engine.pitch_name(range_high)}",
782
628
  "seed": seed,
783
629
  }
784
630
 
@@ -802,24 +648,20 @@ def transpose_smart(
802
648
 
803
649
  Returns transposed note data ready for add_notes or modify_notes.
804
650
  """
805
- _require_music21()
806
- from music21 import pitch as m21pitch
807
-
808
651
  notes = _get_clip_notes(ctx, track_index, clip_index)
809
652
  if not notes:
810
653
  return {"error": "No notes in clip"}
811
654
 
812
- s = _notes_to_stream(notes)
813
- source_key = _detect_key(s)
655
+ source_key = engine.detect_key(notes)
814
656
 
815
657
  try:
816
- target = _parse_key_string(target_key)
817
- except Exception:
658
+ target = engine.parse_key(target_key)
659
+ except ValueError:
818
660
  return {"error": f"Invalid target key: {target_key}"}
819
661
 
820
- source_tonic = m21pitch.Pitch(str(source_key.tonic))
821
- target_tonic = m21pitch.Pitch(str(target.tonic))
822
- semitone_shift = target_tonic.midi - source_tonic.midi
662
+ source_tonic = source_key["tonic"]
663
+ target_tonic = target["tonic"]
664
+ semitone_shift = target_tonic - source_tonic
823
665
 
824
666
  if mode == "chromatic":
825
667
  transposed = []
@@ -830,10 +672,10 @@ def transpose_smart(
830
672
  transposed.append(tn)
831
673
  else:
832
674
  # Diatonic: map scale degrees
833
- source_scale = source_key.getScale().getPitches()
834
- target_scale = target.getScale().getPitches()
835
- source_pcs = [p.midi % 12 for p in source_scale]
836
- target_pcs = [p.midi % 12 for p in target_scale]
675
+ source_mode = source_key["mode"]
676
+ target_mode = target.get("mode", source_mode)
677
+ source_pcs = engine.get_scale_pitches(source_tonic, source_mode)
678
+ target_pcs = engine.get_scale_pitches(target_tonic, target_mode)
837
679
 
838
680
  degree_map = {}
839
681
  for i in range(min(len(source_pcs), len(target_pcs))):
@@ -847,7 +689,6 @@ def transpose_smart(
847
689
 
848
690
  if pc in degree_map:
849
691
  new_pc = degree_map[pc]
850
- # Calculate octave adjustment from tonic shift
851
692
  new_pitch = octave * 12 + new_pc
852
693
  # Adjust if the shift crossed an octave boundary
853
694
  if abs(new_pitch - (n["pitch"] + semitone_shift)) > 6:
@@ -856,15 +697,14 @@ def transpose_smart(
856
697
  else:
857
698
  new_pitch -= 12
858
699
  else:
859
- # Chromatic note: shift by tonic distance
860
700
  new_pitch = n["pitch"] + semitone_shift
861
701
 
862
702
  tn["pitch"] = max(0, min(127, new_pitch))
863
703
  transposed.append(tn)
864
704
 
865
705
  return {
866
- "source_key": str(source_key),
867
- "target_key": str(target),
706
+ "source_key": _key_display(source_key),
707
+ "target_key": f"{engine.NOTE_NAMES[target_tonic]} {target.get('mode', 'major')}",
868
708
  "mode": mode,
869
709
  "semitone_shift": semitone_shift,
870
710
  "note_count": len(transposed),
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "livepilot",
3
- "version": "1.6.4",
3
+ "version": "1.6.5",
4
4
  "mcpName": "io.github.dreamrec/livepilot",
5
5
  "description": "AI copilot for Ableton Live 12 — 142 tools, device atlas (280+ devices), real-time audio analysis, automation intelligence, and technique memory",
6
6
  "author": "Pilot Studio",
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "livepilot",
3
- "version": "1.6.4",
3
+ "version": "1.6.5",
4
4
  "description": "AI copilot for Ableton Live 12 — 142 tools, device atlas (280+ devices), real-time audio analysis, automation intelligence, and technique memory",
5
5
  "author": "Pilot Studio",
6
6
  "skills": [
@@ -1,6 +1,6 @@
1
1
  ---
2
2
  name: livepilot-core
3
- description: Core discipline for controlling Ableton Live 12 through LivePilot's 142 MCP tools, device atlas (280+ devices), M4L analyzer (spectrum/RMS/key detection), automation intelligence (16 curve types, 15 recipes), music theory analysis (music21), and technique memory. Use whenever working with Ableton Live through MCP tools.
3
+ description: Core discipline for controlling Ableton Live 12 through LivePilot's 142 MCP tools, device atlas (280+ devices), M4L analyzer (spectrum/RMS/key detection), automation intelligence (16 curve types, 15 recipes), music theory analysis, and technique memory. Use whenever working with Ableton Live through MCP tools.
4
4
  ---
5
5
 
6
6
  # LivePilot Core — Ableton Live 12 AI Copilot
@@ -176,7 +176,7 @@ Clip automation CRUD + intelligent curve generation with 15 built-in recipes.
176
176
  - Load `references/automation-atlas.md` for curve theory, genre recipes, diagnostic technique, and cross-track spectral mapping
177
177
 
178
178
  ### Theory (7)
179
- Music theory analysis powered by music21. Optional dependency install with `pip install 'music21>=9.3'`.
179
+ Music theory analysis built-in pure Python engine, zero external dependencies.
180
180
 
181
181
  **Tools:** `analyze_harmony` · `suggest_next_chord` · `detect_theory_issues` · `identify_scale` · `harmonize_melody` · `generate_countermelody` · `transpose_smart`
182
182
 
@@ -184,7 +184,7 @@ Music theory analysis powered by music21. Optional dependency — install with `
184
184
  - These tools read MIDI notes directly from session clips — no file export needed
185
185
  - Auto-detects key via Krumhansl-Schmuckler if not provided; pass `key` hint for better accuracy
186
186
  - `analyze_harmony` and `detect_theory_issues` are analysis-only; `harmonize_melody`, `generate_countermelody`, and `transpose_smart` return note data ready for `add_notes`
187
- - Use your own musical knowledge alongside these tools — music21 provides data, you provide interpretation
187
+ - Use your own musical knowledge alongside these tools — the engine provides data, you provide interpretation
188
188
  - Processing time: 2-5s for generative tools (harmonize, countermelody)
189
189
 
190
190
  ## Workflow: Building a Beat
@@ -1,4 +1,4 @@
1
- # LivePilot v1.6.4 — Architecture & Tool Reference
1
+ # LivePilot v1.6.5 — Architecture & Tool Reference
2
2
 
3
3
  LivePilot is an agentic production system for Ableton Live 12. It combines 142 MCP tools with a device knowledge corpus, real-time audio analysis, automation intelligence, and persistent technique memory.
4
4
 
@@ -244,7 +244,7 @@ This turns "set EQ band 3 to -4 dB" into "cut 400 Hz by 4 dB, then read the spec
244
244
 
245
245
  **15 recipes:** filter_sweep_up, filter_sweep_down, dub_throw, tape_stop, build_rise, sidechain_pump, fade_in, fade_out, tremolo, auto_pan, stutter, breathing, washout, vinyl_crackle, stereo_narrow
246
246
 
247
- ### Theory (7) — Music theory analysis powered by music21 (optional dependency)
247
+ ### Theory (7) — Built-in music theory analysis (zero dependencies)
248
248
 
249
249
  | Tool | What it does | Key params |
250
250
  |------|-------------|------------|
@@ -256,7 +256,7 @@ This turns "set EQ band 3 to -4 dB" into "cut 400 Hz by 4 dB, then read the spec
256
256
  | `generate_countermelody` | Species counterpoint against a melody | `track_index`, `clip_index`, `species` (1 or 2) |
257
257
  | `transpose_smart` | Diatonic or chromatic transposition to a new key | `track_index`, `clip_index`, `target_key`, `mode` (diatonic/chromatic) |
258
258
 
259
- **Requires:** `pip install 'music21>=9.3'` — optional dependency, all other 135 tools work without it.
259
+ **Built-in** zero external dependencies, works on every LivePilot install.
260
260
 
261
261
  ## Units & Ranges Quick Reference
262
262
 
@@ -5,7 +5,7 @@ Entry point for the ControlSurface. Ableton calls create_instance(c_instance)
5
5
  when this script is selected in Preferences > Link, Tempo & MIDI.
6
6
  """
7
7
 
8
- __version__ = "1.6.4"
8
+ __version__ = "1.6.5"
9
9
 
10
10
  from _Framework.ControlSurface import ControlSurface
11
11
  from .server import LivePilotServer
@@ -34,7 +34,7 @@ class LivePilot(ControlSurface):
34
34
  ControlSurface.__init__(self, c_instance)
35
35
  self._server = LivePilotServer(self)
36
36
  self._server.start()
37
- self.log_message("LivePilot v1.6.4 initialized")
37
+ self.log_message("LivePilot v1.6.5 initialized")
38
38
  self.show_message("LivePilot: Listening on port 9878")
39
39
 
40
40
  def disconnect(self):
package/requirements.txt CHANGED
@@ -1,5 +1,2 @@
1
1
  # LivePilot MCP Server dependencies
2
2
  fastmcp>=3.0.0,<4.0.0
3
-
4
- # Optional: Music theory tools (lazy-imported, not required for core tools)
5
- # pip install music21>=9.3