livepilot 1.6.3 → 1.6.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,33 @@
  # Changelog
 
+ ## 1.6.5 — Drop music21 (March 2026)
+
+ **Theory tools rewritten with zero-dependency pure Python engine.**
+
+ - Replace music21 (~100MB) with built-in `_theory_engine.py` (~350 lines)
+ - Krumhansl-Schmuckler key detection with 7 mode profiles (major, minor, dorian, phrygian, lydian, mixolydian, locrian)
+ - All 7 theory tools keep identical APIs and return formats
+ - Zero external dependencies — theory tools work on every install
+ - 2-3s faster cold start (no music21 import overhead)
+
+ ## 1.6.4 — Music Theory (March 2026)
+
+ **7 new tools (135 → 142), theory analysis on live MIDI clips.**
+
+ ### Theory Domain (7 tools)
+ - `analyze_harmony` — chord-by-chord Roman numeral analysis of session clips
+ - `suggest_next_chord` — theory-valid chord continuations with style presets (common_practice, jazz, modal, pop)
+ - `detect_theory_issues` — parallel fifths/octaves, out-of-key notes, voice crossing, unresolved dominants
+ - `identify_scale` — Krumhansl-Schmuckler key/mode detection with confidence-ranked alternatives
+ - `harmonize_melody` — 2- or 4-voice SATB harmonization with smooth voice leading
+ - `generate_countermelody` — species counterpoint (1st or 2nd species) against a melody
+ - `transpose_smart` — diatonic or chromatic transposition preserving scale relationships
+
+ ### Architecture
+ - Pure Python `_theory_engine.py`: Krumhansl-Schmuckler key detection, Roman numeral analysis, voice leading checks
+ - Chordify bridge: groups notes by quantized beat position (1/32 note grid)
+ - Key hint parsing: handles "A minor", "F# major", enharmonic normalization
+
  ## 1.6.3 — Audit Hardening (March 2026)
 
  - Fix: cursor aliasing in M4L bridge `walk_device` — nested rack traversal now reads chain/pad counts before recursion clobbers shared cursors
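The Krumhansl-Schmuckler detection the changelog describes can be sketched standalone: correlate a pitch-class histogram against a key profile at each of the 12 rotations and keep the best match. A minimal illustration (major profile only, unweighted histogram, hypothetical `best_major_key` helper — not the package's actual API):

```python
import math

# Krumhansl's major-key profile, same values as _theory_engine's MAJOR_PROFILE
MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def _pearson(x, y):
    # Pearson correlation; 0.0 when either series is flat
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (math.sqrt(sum((a - mx) ** 2 for a in x))
           * math.sqrt(sum((b - my) ** 2 for b in y)))
    return num / den if den else 0.0

def best_major_key(midi_pitches):
    # Histogram of pitch classes (duration weighting omitted for brevity)
    hist = [0.0] * 12
    for p in midi_pitches:
        hist[p % 12] += 1.0
    # Rotate the histogram so each candidate tonic sits at index 0
    scores = [(_pearson([hist[(t + i) % 12] for i in range(12)], MAJOR_PROFILE), t)
              for t in range(12)]
    return NOTE_NAMES[max(scores)[1]]

print(best_major_key([60, 62, 64, 65, 67, 69, 71]))  # C major scale → C
```

The real engine extends this with duration weighting and five more rotated mode profiles, but the scoring loop is the same shape.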
package/README.md CHANGED
@@ -12,7 +12,7 @@
  [![GitHub stars](https://img.shields.io/github/stars/dreamrec/LivePilot)](https://github.com/dreamrec/LivePilot/stargazers)
  [![npm](https://img.shields.io/npm/v/livepilot)](https://www.npmjs.com/package/livepilot)
 
- **AI copilot for Ableton Live 12** — 135 MCP tools, a deep device knowledge corpus, real-time audio analysis, and persistent technique memory.
+ **AI copilot for Ableton Live 12** — 142 MCP tools, a deep device knowledge corpus, real-time audio analysis, and persistent technique memory.
 
  Most Ableton MCP servers give the AI tools to push buttons. LivePilot gives it three things on top of that:
 
@@ -20,7 +20,7 @@ Most Ableton MCP servers give the AI tools to push buttons. LivePilot gives it t
  - **Perception** — An M4L analyzer that reads the master bus in real-time: 8-band spectrum, RMS/peak metering, pitch tracking, key detection. The AI makes decisions based on what it hears, not just what's configured.
  - **Memory** — A technique library that persists across sessions. The AI remembers how you built that bass sound, what swing you like on hi-hats, which reverb chain worked on vocals. It learns your taste over time.
 
- These three layers sit on top of 135 deterministic MCP tools that cover transport, tracks, clips, MIDI, devices, scenes, mixing, browser, arrangement, and sample manipulation. Every command goes through Ableton's official Live Object Model API — the same interface Ableton's own control surfaces use. Everything is reversible with undo.
+ These three layers sit on top of 142 deterministic MCP tools that cover transport, tracks, clips, MIDI, devices, scenes, mixing, browser, arrangement, and sample manipulation. Every command goes through Ableton's official Live Object Model API — the same interface Ableton's own control surfaces use. Everything is reversible with undo.
 
  ---
 
@@ -296,7 +296,7 @@ npx -y github:dreamrec/LivePilot --status
 
  ---
 
- ## 135 Tools Across 12 Domains
+ ## 142 Tools Across 13 Domains
 
  | Domain | Tools | What you can do |
  |--------|:-----:|-----------------|
@@ -312,6 +312,7 @@ npx -y github:dreamrec/LivePilot --status
  | **Automation** | 8 | Clip envelope CRUD, 16-type curve engine, 15 named recipes, spectral-aware suggestions |
  | **Memory** | 8 | Save, recall, replay, and manage production techniques |
  | **Analyzer** | 20 | Real-time spectral analysis, key detection, sample manipulation, warp markers, device introspection (requires M4L device) |
+ | **Theory** | 7 | Harmony analysis, Roman numerals, scale identification, chord suggestions, countermelody, SATB harmonization, smart transposition |
 
  <details>
  <summary><strong>Full tool list</strong></summary>
@@ -402,7 +403,7 @@ There are **15+ MCP servers for Ableton Live** as of March 2026. Here's how the
 
  | | [LivePilot](https://github.com/dreamrec/LivePilot) | [AbletonMCP](https://github.com/ahujasid/ableton-mcp) | [MCP Extended](https://github.com/uisato/ableton-mcp-extended) | [Ableton Copilot](https://github.com/xiaolaa2/ableton-copilot-mcp) | [AbletonBridge](https://github.com/hidingwill/AbletonBridge) | [Producer Pal](https://github.com/adamjmurray/producer-pal) |
  |---|:-:|:-:|:-:|:-:|:-:|:-:|
- | **Tools** | 135 | ~20 | ~35 | ~45 | 322 | ~25 |
+ | **Tools** | 142 | ~20 | ~35 | ~45 | 322 | ~25 |
  | **Device knowledge** | 280+ devices | -- | -- | -- | -- | -- |
  | **Audio analysis** | Spectrum/RMS/key | -- | -- | -- | Metering | -- |
  | **Technique memory** | Persistent | -- | -- | -- | -- | -- |
@@ -459,7 +460,7 @@ Every server on this list gives the AI tools to control Ableton. LivePilot is th
 
  The practical difference: other servers let the AI set a parameter. LivePilot lets the AI choose the right parameter based on what device is loaded (atlas), verify the result by reading the audio output (analyzer), and remember the technique for next time (memory).
 
- AbletonBridge has more raw tools (322 vs 135). Producer Pal has the easiest install (drag a .amxd). The original AbletonMCP has the community (2.3k stars). LivePilot has the deepest integration — tools that execute, knowledge that informs, perception that verifies, and memory that accumulates.
+ AbletonBridge has more raw tools (322 vs 142). Producer Pal has the easiest install (drag a .amxd). The original AbletonMCP has the community (2.3k stars). LivePilot has the deepest integration — tools that execute, knowledge that informs, perception that verifies, and memory that accumulates.
 
  ---
 
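The parallel fifths/octaves detection mentioned among the theory checks reduces, for the outer voices, to comparing bass-to-soprano intervals across consecutive chords. A minimal sketch (hypothetical `parallel_motion_issues` helper, outer voices only, simplified from the engine's `check_voice_leading`):

```python
def parallel_motion_issues(prev_chord, curr_chord):
    """Flag parallel fifths/octaves between the outer voices of two chords."""
    issues = []
    # Outer voices: lowest and highest sounding pitch
    prev_bass, prev_sop = min(prev_chord), max(prev_chord)
    curr_bass, curr_sop = min(curr_chord), max(curr_chord)
    # Interval between outer voices, reduced to a pitch class
    prev_iv = (prev_sop - prev_bass) % 12
    curr_iv = (curr_sop - curr_bass) % 12
    # Parallels require both voices to actually move
    both_moved = prev_bass != curr_bass and prev_sop != curr_sop
    if both_moved and prev_iv == 7 and curr_iv == 7:
        issues.append("parallel_fifths")
    if both_moved and prev_iv == 0 and curr_iv == 0:
        issues.append("parallel_octaves")
    return issues

# C-G moving up a whole step to D-A: classic parallel fifths
print(parallel_motion_issues([48, 55], [50, 57]))  # → ['parallel_fifths']
```

The engine additionally checks inner-voice crossing and hidden fifths, but the interval-class comparison above is the core of the rule.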
@@ -68,7 +68,7 @@ function anything() {
  function dispatch(cmd, args) {
      switch(cmd) {
      case "ping":
-         send_response({"ok": true, "version": "1.6.3"});
+         send_response({"ok": true, "version": "1.6.5"});
          break;
      case "get_params":
          cmd_get_params(args);
@@ -1,2 +1,2 @@
  """LivePilot MCP Server — bridges MCP protocol to Ableton Live."""
- __version__ = "1.6.3"
+ __version__ = "1.6.5"
@@ -56,6 +56,7 @@ from .tools import arrangement # noqa: F401, E402
  from .tools import memory      # noqa: F401, E402
  from .tools import analyzer   # noqa: F401, E402
  from .tools import automation  # noqa: F401, E402
+ from .tools import theory     # noqa: F401, E402
 
 
  # ---------------------------------------------------------------------------
@@ -0,0 +1,366 @@
+ """Pure Python music theory engine — zero dependencies.
+
+ Replaces music21 for LivePilot's 7 theory tools. Implements:
+ Krumhansl-Schmuckler key detection, Roman numeral analysis,
+ voice leading checks, chord naming, and scale construction.
+ """
+
+ from __future__ import annotations
+
+ import math
+ import re
+ from collections import defaultdict
+
+ # ---------------------------------------------------------------------------
+ # Constants
+ # ---------------------------------------------------------------------------
+
+ NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
+
+ ENHARMONIC = {
+     'Cb': 'B', 'Db': 'C#', 'Eb': 'D#', 'Fb': 'E', 'Gb': 'F#',
+     'Ab': 'G#', 'Bb': 'A#',
+     'B#': 'C', 'E#': 'F',
+     'Cbb': 'A#', 'Dbb': 'C', 'Ebb': 'D', 'Fbb': 'D#', 'Gbb': 'F',
+     'Abb': 'G', 'Bbb': 'A',
+     'C##': 'D', 'D##': 'E', 'E##': 'F#', 'F##': 'G', 'G##': 'A',
+     'A##': 'B', 'B##': 'C#',
+ }
+
+ MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
+ MINOR_PROFILE = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
+
+ DORIAN_PROFILE = MAJOR_PROFILE[2:] + MAJOR_PROFILE[:2]
+ PHRYGIAN_PROFILE = MAJOR_PROFILE[4:] + MAJOR_PROFILE[:4]
+ LYDIAN_PROFILE = MAJOR_PROFILE[5:] + MAJOR_PROFILE[:5]
+ MIXOLYDIAN_PROFILE = MAJOR_PROFILE[7:] + MAJOR_PROFILE[:7]
+ LOCRIAN_PROFILE = MAJOR_PROFILE[11:] + MAJOR_PROFILE[:11]
+
+ MODE_PROFILES = {
+     'major': MAJOR_PROFILE, 'minor': MINOR_PROFILE,
+     'dorian': DORIAN_PROFILE, 'phrygian': PHRYGIAN_PROFILE,
+     'lydian': LYDIAN_PROFILE, 'mixolydian': MIXOLYDIAN_PROFILE,
+     'locrian': LOCRIAN_PROFILE,
+ }
+
+ SCALES = {
+     'major': [0, 2, 4, 5, 7, 9, 11], 'minor': [0, 2, 3, 5, 7, 8, 10],
+     'dorian': [0, 2, 3, 5, 7, 9, 10], 'phrygian': [0, 1, 3, 5, 7, 8, 10],
+     'lydian': [0, 2, 4, 6, 7, 9, 11], 'mixolydian': [0, 2, 4, 5, 7, 9, 10],
+     'locrian': [0, 1, 3, 5, 6, 8, 10],
+ }
+
+ TRIAD_QUALITIES = {
+     'major': ['major', 'minor', 'minor', 'major', 'major', 'minor', 'diminished'],
+     'minor': ['minor', 'diminished', 'major', 'minor', 'minor', 'major', 'major'],
+     'dorian': ['minor', 'minor', 'major', 'major', 'minor', 'diminished', 'major'],
+     'phrygian': ['minor', 'major', 'major', 'minor', 'diminished', 'major', 'minor'],
+     'lydian': ['major', 'major', 'minor', 'diminished', 'major', 'minor', 'minor'],
+     'mixolydian': ['major', 'minor', 'diminished', 'major', 'minor', 'minor', 'major'],
+     'locrian': ['diminished', 'major', 'minor', 'minor', 'major', 'major', 'minor'],
+ }
+
+ CHORD_PATTERNS = {
+     (0, 4, 7): 'major triad', (0, 3, 7): 'minor triad',
+     (0, 3, 6): 'diminished triad', (0, 4, 8): 'augmented triad',
+     (0, 2, 7): 'sus2', (0, 5, 7): 'sus4',
+     (0, 4, 7, 11): 'major seventh', (0, 3, 7, 10): 'minor seventh',
+     (0, 4, 7, 10): 'dominant seventh', (0, 3, 6, 9): 'diminished seventh',
+     (0, 3, 6, 10): 'half-diminished seventh',
+ }
+
+ ROMAN_LABELS = ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII']
+
+ # ---------------------------------------------------------------------------
+ # Functions
+ # ---------------------------------------------------------------------------
+
+
+ def pitch_name(midi: int) -> str:
+     """MIDI number -> note name. Always sharp spelling."""
+     return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
+
+
+ def parse_key(key_str: str) -> dict:
+     """Parse key string -> {tonic: 0-11, mode: str}."""
+     parts = key_str.strip().split()
+     raw_tonic = parts[0]
+     mode = parts[1].lower() if len(parts) > 1 else 'major'
+
+     # Normalize tonic: capitalize first letter
+     tonic_name = raw_tonic[0].upper() + raw_tonic[1:]
+
+     # Resolve enharmonics
+     if tonic_name in ENHARMONIC:
+         tonic_name = ENHARMONIC[tonic_name]
+
+     if tonic_name not in NOTE_NAMES:
+         raise ValueError(f"Unknown tonic: {tonic_name} (from '{key_str}')")
+
+     return {"tonic": NOTE_NAMES.index(tonic_name), "mode": mode}
+
+
+ def get_scale_pitches(tonic: int, mode: str) -> list[int]:
+     """Return pitch classes (0-11) for the scale."""
+     intervals = SCALES.get(mode, SCALES['major'])
+     return [(tonic + iv) % 12 for iv in intervals]
+
+
+ def build_chord(degree: int, tonic: int, mode: str) -> dict:
+     """Build triad from scale degree (0-indexed)."""
+     scale = get_scale_pitches(tonic, mode)
+     root = scale[degree % 7]
+     third = scale[(degree + 2) % 7]
+     fifth = scale[(degree + 4) % 7]
+     quality = TRIAD_QUALITIES.get(mode, TRIAD_QUALITIES['major'])[degree % 7]
+     return {
+         "root_pc": root,
+         "pitch_classes": [root, third, fifth],
+         "quality": quality,
+         "root_name": NOTE_NAMES[root],
+     }
+
+
+ def _pearson(x: list[float], y: list[float]) -> float:
+     """Pearson correlation coefficient."""
+     n = len(x)
+     mx = sum(x) / n
+     my = sum(y) / n
+     num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
+     dx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
+     dy = math.sqrt(sum((yi - my) ** 2 for yi in y))
+     if dx == 0 or dy == 0:
+         return 0.0
+     return num / (dx * dy)
+
+
+ def detect_key(notes: list[dict], mode_detection: bool = True) -> dict:
+     """Krumhansl-Schmuckler key detection."""
+     # Build pitch-class histogram weighted by duration
+     histogram = [0.0] * 12
+     for n in notes:
+         if n.get("mute", False):
+             continue
+         pc = n["pitch"] % 12
+         histogram[pc] += n.get("duration", 1.0)
+
+     profiles = MODE_PROFILES if mode_detection else {
+         'major': MAJOR_PROFILE, 'minor': MINOR_PROFILE,
+     }
+
+     candidates = []
+     for mode_name, profile in profiles.items():
+         for tonic in range(12):
+             rotated = [histogram[(tonic + i) % 12] for i in range(12)]
+             r = _pearson(rotated, profile)
+             candidates.append({
+                 "tonic": tonic,
+                 "tonic_name": NOTE_NAMES[tonic],
+                 "mode": mode_name,
+                 "confidence": round(r, 3),
+             })
+
+     candidates.sort(key=lambda c: -c["confidence"])
+     best = candidates[0]
+     return {
+         "tonic": best["tonic"],
+         "tonic_name": best["tonic_name"],
+         "mode": best["mode"],
+         "confidence": best["confidence"],
+         "alternatives": candidates[1:9],
+     }
+
+
+ def chord_name(midi_pitches: list[int]) -> str:
+     """Identify chord from MIDI pitches -> 'C-major triad'."""
+     pcs = sorted(set(p % 12 for p in midi_pitches))
+     if not pcs:
+         return "unknown"
+     # Try each pitch class as potential root
+     for root in pcs:
+         intervals = tuple(sorted((pc - root) % 12 for pc in pcs))
+         if intervals in CHORD_PATTERNS:
+             return f"{NOTE_NAMES[root]}-{CHORD_PATTERNS[intervals]}"
+     return f"{NOTE_NAMES[pcs[0]]} chord"
+
+
+ def roman_numeral(chord_pcs: list[int], tonic: int, mode: str) -> dict:
+     """Match chord pitch classes -> Roman numeral figure."""
+     pcs_set = set(pc % 12 for pc in chord_pcs)
+     bass_pc = chord_pcs[0] % 12 if chord_pcs else 0
+
+     best = {"figure": "?", "quality": "unknown", "degree": 0,
+             "inversion": 0, "root_name": NOTE_NAMES[tonic]}
+
+     for degree in range(7):
+         triad = build_chord(degree, tonic, mode)
+         triad_set = set(triad["pitch_classes"])
+         if pcs_set == triad_set or pcs_set.issubset(triad_set):
+             quality = triad["quality"]
+             label = ROMAN_LABELS[degree]
+             if quality in ("minor", "diminished"):
+                 label = label.lower()
+             if quality == "diminished":
+                 label += "\u00b0"
+             # Detect inversion
+             inv = 0
+             if bass_pc != triad["root_pc"]:
+                 if bass_pc == triad["pitch_classes"][1]:
+                     inv = 1
+                 elif bass_pc == triad["pitch_classes"][2]:
+                     inv = 2
+             best = {"figure": label, "quality": quality, "degree": degree,
+                     "inversion": inv, "root_name": triad["root_name"]}
+             break
+
+     return best
+
+
+ def roman_figure_to_pitches(figure: str, tonic: int, mode: str) -> dict:
+     """Parse Roman numeral string -> concrete MIDI pitches.
+
+     Handles: 'IV', 'bVII7', '#ivo7', 'ii7', etc.
+     """
+     remaining = figure
+     chromatic_shift = 0
+     acc_len = 0
+
+     # Parse leading accidentals
+     while remaining and remaining[0] in ('b', '#'):
+         if remaining[0] == 'b':
+             chromatic_shift -= 1
+         else:
+             chromatic_shift += 1
+         remaining = remaining[1:]
+         acc_len += 1
+
+     # Parse Roman numeral
+     upper_remaining = remaining.upper()
+     numeral = ""
+     for rn in ['VII', 'VI', 'IV', 'III', 'II', 'V', 'I']:
+         if upper_remaining.startswith(rn):
+             numeral = rn
+             break
+
+     if not numeral:
+         return {"figure": figure, "error": f"Cannot parse: {figure}"}
+
+     # Detect case of the numeral in original figure
+     numeral_in_orig = remaining[:len(numeral)]
+     is_minor_quality = numeral_in_orig == numeral_in_orig.lower()
+     remaining = remaining[len(numeral):]
+
+     degree = ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII'].index(numeral)
+
+     # Build base triad from scale
+     chord = build_chord(degree, tonic, mode)
+     root_pc = (chord["root_pc"] + chromatic_shift) % 12
+
+     # Build pitch classes based on quality
+     if is_minor_quality:
+         pcs = [root_pc, (root_pc + 3) % 12, (root_pc + 7) % 12]
+         quality = "minor"
+     elif chromatic_shift:
+         # Chromatically altered roots (bVII, bII, ...) default to major quality
+         pcs = [root_pc, (root_pc + 4) % 12, (root_pc + 7) % 12]
+         quality = "major"
+     else:
+         # Use scale-derived quality
+         quality = chord["quality"]
+         if quality == "minor":
+             pcs = [root_pc, (root_pc + 3) % 12, (root_pc + 7) % 12]
+         elif quality == "diminished":
+             pcs = [root_pc, (root_pc + 3) % 12, (root_pc + 6) % 12]
+         elif quality == "augmented":
+             pcs = [root_pc, (root_pc + 4) % 12, (root_pc + 8) % 12]
+         else:
+             pcs = [root_pc, (root_pc + 4) % 12, (root_pc + 7) % 12]
+
+     # Handle suffix
+     suffix = remaining.lower()
+     if suffix == "7":
+         seventh = (root_pc + 10) % 12  # dominant/minor 7th
+         pcs.append(seventh)
+         if quality == "minor":
+             quality = "minor seventh"
+         else:
+             quality = "dominant seventh"
+     elif suffix == "o7":
+         # Fully diminished seventh: lowered fifth plus diminished 7th
+         pcs = [root_pc, (root_pc + 3) % 12, (root_pc + 6) % 12, (root_pc + 9) % 12]
+         quality = "diminished seventh"
+     elif suffix == "\u00b0":
+         quality = "diminished"
+         pcs = [root_pc, (root_pc + 3) % 12, (root_pc + 6) % 12]
+
+     # Convert to MIDI pitches in octave 4 (root at its natural octave-4 pitch)
+     base_midi = 60 + root_pc
+     midi = []
+     for pc in pcs:
+         p = base_midi + ((pc - root_pc) % 12)
+         midi.append(p)
+
+     return {
+         "figure": figure,
+         "root_pc": root_pc,
+         "pitches": [pitch_name(m) for m in midi],
+         "midi_pitches": midi,
+         "quality": quality,
+     }
+
+
+ def check_voice_leading(prev_pitches: list[int], curr_pitches: list[int]) -> list[dict]:
+     """Check voice leading issues between two chords."""
+     issues = []
+     if len(prev_pitches) < 2 or len(curr_pitches) < 2:
+         if len(curr_pitches) >= 2 and curr_pitches[-1] < curr_pitches[0]:
+             issues.append({"type": "voice_crossing"})
+         return issues
+
+     prev_bass, prev_sop = prev_pitches[0], prev_pitches[-1]
+     curr_bass, curr_sop = curr_pitches[0], curr_pitches[-1]
+
+     prev_iv = (prev_sop - prev_bass) % 12
+     curr_iv = (curr_sop - curr_bass) % 12
+
+     bass_moved = prev_bass != curr_bass
+     sop_moved = prev_sop != curr_sop
+     both_moved = bass_moved and sop_moved
+
+     if both_moved and prev_iv == 7 and curr_iv == 7:
+         issues.append({"type": "parallel_fifths"})
+
+     if both_moved and prev_iv % 12 == 0 and curr_iv % 12 == 0:
+         issues.append({"type": "parallel_octaves"})
+
+     if curr_sop < curr_bass:
+         issues.append({"type": "voice_crossing"})
+
+     # Hidden fifth: same direction motion landing on P5
+     if both_moved:
+         bass_dir = curr_bass - prev_bass
+         sop_dir = curr_sop - prev_sop
+         same_dir = (bass_dir > 0 and sop_dir > 0) or (bass_dir < 0 and sop_dir < 0)
+         if same_dir and curr_iv == 7:
+             issues.append({"type": "hidden_fifth"})
+
+     return issues
+
+
+ def chordify(notes: list[dict], quant: float = 0.125) -> list[dict]:
+     """Group notes by quantized beat position."""
+     groups: dict[float, list[dict]] = defaultdict(list)
+     for n in notes:
+         if n.get("mute", False):
+             continue
+         q_time = round(n["start_time"] / quant) * quant
+         groups[q_time].append(n)
+
+     result = []
+     for beat in sorted(groups.keys()):
+         group = groups[beat]
+         pitches = sorted(n["pitch"] for n in group)
+         duration = max(max(n["duration"] for n in group), quant)
+         result.append({
+             "beat": round(beat, 4),
+             "duration": round(duration, 4),
+             "pitches": pitches,
+             "pitch_classes": sorted(set(p % 12 for p in pitches)),
+         })
+     return result
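The interval-pattern matching that `chord_name` performs in the engine above can be exercised in isolation: reduce pitches to pitch classes, then try each as a candidate root until the resulting interval set matches a known template. A trimmed sketch with only three templates (the engine ships eleven):

```python
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
CHORD_PATTERNS = {
    (0, 4, 7): 'major triad',
    (0, 3, 7): 'minor triad',
    (0, 4, 7, 10): 'dominant seventh',
}

def chord_name(midi_pitches):
    """Name a chord by testing each pitch class as the root."""
    pcs = sorted(set(p % 12 for p in midi_pitches))
    for root in pcs:
        # Intervals above the candidate root, reduced mod 12
        intervals = tuple(sorted((pc - root) % 12 for pc in pcs))
        if intervals in CHORD_PATTERNS:
            return f"{NOTE_NAMES[root]}-{CHORD_PATTERNS[intervals]}"
    return "unknown"

print(chord_name([64, 67, 72]))      # C major, first inversion → C-major triad
print(chord_name([55, 59, 62, 65]))  # G B D F → G-dominant seventh
```

Because intervals are computed relative to each candidate root, inversions collapse to the same template, which is why the first-inversion voicing above still names C.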