livepilot 1.10.5 → 1.10.7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +3 -3
- package/.mcp.json.disabled +9 -0
- package/.mcpbignore +3 -0
- package/AGENTS.md +3 -3
- package/BUGS.md +1570 -0
- package/CHANGELOG.md +92 -0
- package/CONTRIBUTING.md +1 -1
- package/README.md +7 -7
- package/bin/livepilot.js +28 -8
- package/livepilot/.Codex-plugin/plugin.json +2 -2
- package/livepilot/.claude-plugin/plugin.json +2 -2
- package/livepilot/skills/livepilot-core/SKILL.md +4 -4
- package/livepilot/skills/livepilot-core/references/overview.md +2 -2
- package/livepilot/skills/livepilot-evaluation/references/capability-modes.md +1 -1
- package/livepilot/skills/livepilot-release/SKILL.md +8 -8
- package/m4l_device/LivePilot_Analyzer.amxd +0 -0
- package/m4l_device/LivePilot_Analyzer.amxd.pre-presentation-backup +0 -0
- package/m4l_device/LivePilot_Analyzer.maxproj +53 -0
- package/m4l_device/livepilot_bridge.js +226 -3
- package/manifest.json +3 -3
- package/mcp_server/__init__.py +1 -1
- package/mcp_server/atlas/__init__.py +93 -26
- package/mcp_server/composer/sample_resolver.py +10 -6
- package/mcp_server/composer/tools.py +10 -6
- package/mcp_server/connection.py +6 -1
- package/mcp_server/creative_constraints/tools.py +214 -40
- package/mcp_server/experiment/engine.py +16 -14
- package/mcp_server/experiment/tools.py +9 -9
- package/mcp_server/hook_hunter/analyzer.py +62 -9
- package/mcp_server/hook_hunter/tools.py +74 -18
- package/mcp_server/m4l_bridge.py +32 -6
- package/mcp_server/memory/taste_graph.py +7 -2
- package/mcp_server/mix_engine/tools.py +8 -3
- package/mcp_server/musical_intelligence/detectors.py +32 -0
- package/mcp_server/musical_intelligence/tools.py +15 -10
- package/mcp_server/performance_engine/tools.py +117 -30
- package/mcp_server/preview_studio/engine.py +89 -8
- package/mcp_server/preview_studio/tools.py +43 -21
- package/mcp_server/project_brain/automation_graph.py +71 -19
- package/mcp_server/project_brain/builder.py +2 -0
- package/mcp_server/project_brain/tools.py +73 -15
- package/mcp_server/reference_engine/profile_builder.py +129 -3
- package/mcp_server/reference_engine/tools.py +54 -11
- package/mcp_server/runtime/capability_probe.py +10 -4
- package/mcp_server/runtime/execution_router.py +50 -0
- package/mcp_server/runtime/mcp_dispatch.py +75 -3
- package/mcp_server/runtime/remote_commands.py +4 -2
- package/mcp_server/runtime/tools.py +8 -2
- package/mcp_server/sample_engine/analyzer.py +131 -4
- package/mcp_server/sample_engine/critics.py +29 -8
- package/mcp_server/sample_engine/models.py +20 -1
- package/mcp_server/sample_engine/tools.py +74 -31
- package/mcp_server/semantic_moves/sound_design_compilers.py +22 -59
- package/mcp_server/semantic_moves/tools.py +5 -1
- package/mcp_server/semantic_moves/transition_compilers.py +12 -19
- package/mcp_server/server.py +78 -11
- package/mcp_server/services/motif_service.py +9 -3
- package/mcp_server/session_continuity/models.py +4 -0
- package/mcp_server/session_continuity/tools.py +7 -3
- package/mcp_server/session_continuity/tracker.py +23 -9
- package/mcp_server/song_brain/builder.py +110 -12
- package/mcp_server/song_brain/tools.py +94 -25
- package/mcp_server/sound_design/tools.py +112 -1
- package/mcp_server/splice_client/client.py +19 -6
- package/mcp_server/stuckness_detector/detector.py +90 -0
- package/mcp_server/stuckness_detector/tools.py +49 -5
- package/mcp_server/tools/_agent_os_engine/__init__.py +52 -0
- package/mcp_server/tools/_agent_os_engine/critics.py +158 -0
- package/mcp_server/tools/_agent_os_engine/evaluation.py +206 -0
- package/mcp_server/tools/_agent_os_engine/models.py +132 -0
- package/mcp_server/tools/_agent_os_engine/taste.py +192 -0
- package/mcp_server/tools/_agent_os_engine/techniques.py +161 -0
- package/mcp_server/tools/_agent_os_engine/world_model.py +170 -0
- package/mcp_server/tools/_composition_engine/__init__.py +67 -0
- package/mcp_server/tools/_composition_engine/analysis.py +174 -0
- package/mcp_server/tools/_composition_engine/critics.py +522 -0
- package/mcp_server/tools/_composition_engine/gestures.py +230 -0
- package/mcp_server/tools/_composition_engine/harmony.py +160 -0
- package/mcp_server/tools/_composition_engine/models.py +193 -0
- package/mcp_server/tools/_composition_engine/sections.py +414 -0
- package/mcp_server/tools/_harmony_engine.py +52 -8
- package/mcp_server/tools/_perception_engine.py +18 -11
- package/mcp_server/tools/_research_engine.py +98 -19
- package/mcp_server/tools/_theory_engine.py +138 -9
- package/mcp_server/tools/agent_os.py +43 -18
- package/mcp_server/tools/analyzer.py +105 -8
- package/mcp_server/tools/automation.py +6 -1
- package/mcp_server/tools/clips.py +45 -0
- package/mcp_server/tools/composition.py +90 -38
- package/mcp_server/tools/devices.py +32 -7
- package/mcp_server/tools/harmony.py +115 -14
- package/mcp_server/tools/midi_io.py +13 -1
- package/mcp_server/tools/mixing.py +35 -1
- package/mcp_server/tools/motif.py +56 -5
- package/mcp_server/tools/planner.py +6 -2
- package/mcp_server/tools/research.py +37 -10
- package/mcp_server/tools/theory.py +108 -16
- package/mcp_server/transition_engine/critics.py +18 -11
- package/mcp_server/transition_engine/tools.py +6 -1
- package/mcp_server/translation_engine/tools.py +8 -6
- package/mcp_server/wonder_mode/engine.py +8 -3
- package/mcp_server/wonder_mode/tools.py +29 -21
- package/package.json +2 -2
- package/remote_script/LivePilot/__init__.py +57 -2
- package/remote_script/LivePilot/clips.py +69 -0
- package/remote_script/LivePilot/mixing.py +117 -0
- package/remote_script/LivePilot/router.py +13 -1
- package/scripts/generate_tool_catalog.py +13 -38
- package/scripts/sync_metadata.py +231 -14
- package/mcp_server/tools/_agent_os_engine.py +0 -947
- package/mcp_server/tools/_composition_engine.py +0 -1530
package/BUGS.md
ADDED
@@ -0,0 +1,1570 @@
# LivePilot Bug Tracker

Living list of bugs + follow-ups captured during the deep audit + Dabrye-Core creative session (2026-04-17, session HEAD `16f3bfc` / release v1.10.6).

Bugs are categorized by surface:

- **A** = LivePilot server / Remote Script / LOM gaps
- **B** = Analyzers / critics (false positives, misattribution)
- **C** = Audit follow-ups from the fresh-audit pass
- **D** = Session-specific / creative tracking

Status flags: `🔴 open` · `🟡 in-progress` · `🟢 fixed` · `⚪️ wontfix / by-design`

---

## A. Server / LOM gaps

### BUG-A1 · `🟢 fixed (Batch 2)` · insert_device returned "Unknown command type"

**Reproducer:** `insert_device(track_index=3, device_name="Auto Filter", position=-1)` returned:
```
[NOT_FOUND] Unknown command type: insert_device (while running 'insert_device')
```

**Root cause (diagnosed):** NOT a missing handler — it already existed in `remote_script/LivePilot/devices.py` at `@register("insert_device")`. The bug was **install drift**: the installed Remote Script at `~/Music/Ableton/User Library/Remote Scripts/LivePilot/` was dated Apr 11 (before the handler was added Apr 14). Ableton loads Remote Scripts once at process start and caches them in `sys.modules`, so source-tree edits never reached the running Live process.

**Fix (landed):**
1. `remote_script/LivePilot/router.py` — `ping` response now embeds `remote_script_version` and the full `commands` list so stale installs are detectable.
2. `mcp_server/server.py::_check_remote_script_version()` — called in the lifespan context after connect; logs a loud warning if the installed version doesn't match the MCP server version ("Run 'npx livepilot --install' and restart Ableton Live").
3. Reinstalled the Remote Script at `~/Music/Ableton/User Library/Remote Scripts/LivePilot/` (devices.py 22KB→30KB, `version_detect.py` added, `clips.py` now contains `set_clip_pitch`). User must restart Ableton Live for the new code to take effect.
4. Regression test `test_bug_a1_ping_embeds_remote_script_version_and_commands`.

**Impact:** Future drift surfaces as a clear on-connect warning instead of mysterious "Unknown command type" errors mid-session.
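The drift check described in the fix can be sketched roughly like this. This is an illustrative shape, not the shipped `_check_remote_script_version()`; the ping payload fields follow fix step 1, and the function/logger names are assumptions.

```python
import logging

log = logging.getLogger("livepilot")

def check_remote_script_version(ping_response: dict, server_version: str) -> bool:
    """Return True when the installed Remote Script matches the MCP server."""
    installed = ping_response.get("remote_script_version")
    commands = set(ping_response.get("commands", []))
    if installed != server_version:
        log.warning(
            "Remote Script %s != MCP server %s. "
            "Run 'npx livepilot --install' and restart Ableton Live.",
            installed, server_version,
        )
        return False
    # A stale install can also be spotted by missing handlers:
    if "insert_device" not in commands:
        log.warning("Installed Remote Script is missing 'insert_device'.")
        return False
    return True
```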

---

### BUG-A2 · `🟢 bridge-side fixed — awaiting .amxd re-freeze (Batch 18)` · Simpler Warp mode not exposed via Python LOM

**Reproducer:** `get_hidden_parameters(track=5, device=0)` returns all 83 Simpler params — no "Warp" / "Warp Mode". Python's Remote Script ControlSurface API doesn't surface it.

**Key insight:** Ableton's LOM has two tiers. Python-Remote-Script sees only the automatable parameter surface. **Max for Live's JavaScript LiveAPI can reach deeper model objects** (e.g. `SimplerDevice.sample.*`) where Warp actually lives. The existing `livepilot_bridge.js` already uses this pattern for `get_simpler_slices` and `replace_simpler_sample` — so the infrastructure is proven.

**Impact:** Today, the user must click Warp in Simpler's Sample tab manually. Blocks automatic tempo-sync when loading samples.

**Fix path (recommended — Option 1: extend the M4L bridge):**

Add a new bridge command `simpler_set_warp` to `m4l_device/livepilot_bridge.js`:
```javascript
function cmd_simpler_set_warp(args) {
    // args: [track_index, device_index, warp_on (0/1), warp_mode (0..6)]
    var track_idx = parseInt(args[0]);
    var device_idx = parseInt(args[1]);
    var warp_on = parseInt(args[2]);
    var warp_mode = parseInt(args[3]);
    var path = "live_set tracks " + track_idx +
               " devices " + device_idx + " sample";
    cursor_a.goto(path);
    cursor_a.set("warping", warp_on);
    if (warp_on && warp_mode >= 0) {
        cursor_a.set("warp_mode", warp_mode);
    }
    send_response({ok: true, warping: warp_on, warp_mode: warp_mode});
}
```
Plus register it in the `dispatch()` switch. Then a Python wrapper in `mcp_server/m4l_bridge.py` (following the `replace_simpler_sample` pattern) and a `@mcp.tool` in `mcp_server/sample_engine/tools.py`. Estimated: ~30 minutes work + .amxd re-freeze (per the `feedback_amxd_freeze_drift` memory).

**Fallback paths if `SimplerDevice.sample.warping` isn't accessible via Max JS:**

- **Option 2: Resample-and-replace pipeline** — create temp audio track → load sample as audio clip → enable warp via `set_clip_warp_mode` → consolidate to pre-warped .wav → `replace_simpler_sample` with new path → delete temp track. Automatable today with existing tools, but creates disk artifacts and is slower.
- **Option 3: Drum Rack wrapper** — wrap the sample in a Drum Rack chain (chain clips respect warp). Loses Simpler-specific ADSR/glide surface.
- **Option 4: Use Sampler instead of Simpler** — Live's bigger sampler may expose more params to LOM. Worth probing in a test session.
- **Option 5: Status quo** — 1 click in Simpler's Sample tab. Reliable, 2 seconds of user action.

**Dependency:** Bridge-side bump to `livepilot_bridge.js` requires .amxd re-freeze + version-string sync (per `feedback_amxd_freeze_drift`).

**Batch 18 landed (2026-04-17):** Bridge command `cmd_simpler_set_warp` added to `livepilot_bridge.js` (dispatches via OSC to `SimplerDevice.sample.warping` and `warp_mode`, class-name-guarded with verification read-back). Python wrapper `simpler_set_warp(track_index, device_index, warping: bool, warp_mode: 0|1|2|3|4|6)` registered as `@mcp.tool()` in `mcp_server/tools/analyzer.py`. Tool count 321→323. `test_tools_contract` green. **Remaining:** user must re-freeze `LivePilot_Analyzer.amxd` in Max 9 so the frozen JS matches source (see `feedback_amxd_freeze_drift`).

**UX polish (independent of the fix):** When we detect a tempo mismatch between the Simpler sample and the session — filename `<BPM>bpm` vs `tempo` — emit a friendly warning: "Simpler has an 86 BPM sample in a 90 BPM session. Either run `simpler_set_warp(warp_on=1, warp_mode=6)` or click Warp in the Sample tab."
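The filename-vs-session tempo check behind that warning could look like this minimal sketch. Assumptions: the sample filename embeds tempo as `<BPM>bpm` (e.g. `dusty_keys_86bpm.wav`); the function name and tolerance are illustrative, not the shipped detector.

```python
import re

def tempo_mismatch_warning(sample_name: str, session_bpm: float):
    """Return a warning string when the filename tempo disagrees with the session."""
    m = re.search(r"(\d{2,3})\s*bpm", sample_name, re.IGNORECASE)
    if not m:
        return None  # no tempo hint in the filename
    sample_bpm = int(m.group(1))
    if abs(sample_bpm - session_bpm) < 0.5:
        return None  # close enough — no warning needed
    return (
        f"Simpler has a {sample_bpm} BPM sample in a {session_bpm:g} BPM session. "
        "Either run simpler_set_warp(warp_on=1, warp_mode=6) "
        "or click Warp in the Sample tab."
    )
```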

---

### BUG-A3 · `🟢 fixed Python-side (Batch 19, reload-plumbed in Batch 20) — awaiting one Ableton restart` · Compressor sidechain INPUT ROUTING not programmable

**Reproducer:** `get_device_parameters(track=1, device=1)` on a Compressor returns the `S/C On` parameter but no "Audio From" / input-routing source parameter.

**Impact:** Can't set up a sidechain duck fully programmatically. We can enable `S/C On` and the EQ, but the source track must be selected manually in the Compressor's routing dropdown.

**Same LOM-layer logic as BUG-A2:** Python's Remote Script only sees automatable parameters. Sidechain routing in Live 12 is typically exposed as a LiveAPI property (not an automatable parameter) on a device's routing descriptor. Max JS LiveAPI should reach it.

**Probe path to add in `livepilot_bridge.js`:**
```javascript
function cmd_compressor_set_sidechain(args) {
    // args: [track_index, device_index, source_type, source_channel]
    // source_type example: "Audio In", "Ext. In", "No Input", or another track's output
    // source_channel: "Post FX", "Pre FX", "Post Mixer" typically
    var path = "live_set tracks " + args[0] + " devices " + args[1];
    cursor_a.goto(path);

    // Modern Live Compressor exposes routing via these properties:
    // - sidechain_input_routing_type
    // - sidechain_input_routing_channel
    // Check availability first:
    try {
        cursor_a.set("sidechain_input_routing_type", args[2]);
        cursor_a.set("sidechain_input_routing_channel", args[3]);
        send_response({ok: true, sidechain: {type: args[2], channel: args[3]}});
    } catch(e) {
        send_response({error: "sidechain routing not accessible: " + e.message});
    }
}
```

**Fallback if not accessible:** Use the existing `set_track_routing` pattern (which DOES work for tracks) as a model and see if a `set_device_sidechain_routing` command can be generalized.

**Dependency:** Same `.amxd` re-freeze + version-string sync as BUG-A2.

**Batch 18 (2026-04-17) — attempted via M4L bridge, superseded by Batch 19:** Originally added `cmd_compressor_set_sidechain` to `livepilot_bridge.js` setting `sidechain_input_routing_type` directly. Two blockers emerged in live test: (1) Max JS LiveAPI's `get("available_sidechain_input_routing_types")` returned nothing in Live 12.3.6 / Max 9 — couldn't enumerate routing targets to match by display_name; (2) `set()` on RoutingType properties needs a structured `{identifier:N}` dict, not a raw string. The route was abandoned.

**Batch 19 landed (2026-04-17) — Python Remote Script path:** Added `@register("set_compressor_sidechain")` handler to `remote_script/LivePilot/mixing.py` using the same LOM pattern as `set_track_routing`: `list(device.available_sidechain_input_routing_types)` → match by `display_name` → assign directly (`device.sidechain_input_routing_type = matched`). Enables sidechain via `device.sidechain_enabled = True` with a `"S/C On"` parameter fallback for legacy builds. Raises `ValueError` with the full options list when the display_name doesn't match. MCP tool `compressor_set_sidechain(track_index, device_index, source_type, source_channel)` in `analyzer.py` now routes via `ableton.send_command("set_compressor_sidechain", ...)` on TCP instead of the M4L bridge. Added to the `REMOTE_COMMANDS` allowlist in `mcp_server/runtime/remote_commands.py` (mixing section 11 → 12). No longer requires the M4L Analyzer. **Remaining:** user must reload the Remote Script in Ableton Prefs (Link, Tempo & MIDI → Control Surface → LivePilot → None → LivePilot) to pick up the new handler, and restart Claude Code so the MCP server re-imports the updated `analyzer.py`.
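The Batch 19 display_name-matching pattern can be sketched like this. `device` stands in for a live Compressor LOM object; the attribute names follow the description above, but this is a hedged sketch, not the shipped handler.

```python
def set_compressor_sidechain(device, source_type: str) -> None:
    """Match a routing option by display_name and enable sidechain on it."""
    options = list(device.available_sidechain_input_routing_types)
    for routing in options:
        if routing.display_name == source_type:
            device.sidechain_input_routing_type = routing
            device.sidechain_enabled = True
            return
    # Mirror the real handler's behavior: fail loudly with the options list.
    names = [r.display_name for r in options]
    raise ValueError(f"No sidechain source named {source_type!r}; available: {names}")
```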

**Key insight (corrects Batch 18's premise):** The old claim "Python Remote Script ControlSurface API can only see automatable parameters" was an oversimplification. Python LOM exposes full device properties — the same `available_*_routing_types` family works on Compressor's sidechain as on Track's input. The M4L bridge is only needed for properties genuinely hidden from Python's LOM (like `SimplerDevice.sample.warping`, which BUG-A2 still uses). Trying to use the bridge for properties Python can reach adds serialization complexity, Live-version fragility, and depends on a frozen `.amxd` — for no benefit. Default to Python first, bridge second.

**Batch 20 landed (2026-04-17) — module-reload plumbing fix:** During Batch 19's functional test, discovered that the `set_compressor_sidechain` handler wasn't reachable even after a Control Surface toggle with `__pycache__/` cleared. Root cause: Ableton's embedded Python retains `sys.modules["LivePilot.mixing"]` across Control Surface toggles, so `__init__.py`'s module-body `from . import mixing` returns the CACHED module object — not a fresh re-import — meaning new `@register("set_compressor_sidechain")` decorators never fire. Toggling unloads the `ControlSurface` instance but doesn't clear `sys.modules`. This is the Remote Script analog of `feedback_amxd_freeze_drift`. Fix: `remote_script/LivePilot/__init__.py` now tracks a `_FIRST_CREATE_INSTANCE` flag; on every non-first `create_instance()` call, it calls `importlib.reload(router)` (to clear `_handlers`) and then `importlib.reload()` on each handler module (to re-fire `@register` decorators). **One-time cost:** user must fully quit+relaunch Ableton ONCE to bootstrap the new `__init__.py`. After that, Control Surface toggle behaves like a true reload for all future handler edits. Also cleaned up a stale nested `~/Music/Ableton/User Library/Remote Scripts/LivePilot/LivePilot/` v1.9.3 subdirectory that was harmless (Ableton loads the top-level) but cluttered the install.
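The reload plumbing reduces to a small pattern, sketched here with illustrative names (`reload_handlers`, `on_create_instance`); the real code lives in `remote_script/LivePilot/__init__.py` and reloads the actual handler modules.

```python
import importlib

_first_create_instance = True

def reload_handlers(router_module, handler_modules):
    """Reload the router first (clears its _handlers registry), then each
    handler module so its @register decorators fire again."""
    importlib.reload(router_module)
    for mod in handler_modules:
        importlib.reload(mod)

def on_create_instance(router_module, handler_modules):
    """Call from create_instance(); skips the reload only on the first load,
    because a Control Surface toggle does not clear sys.modules."""
    global _first_create_instance
    if not _first_create_instance:
        reload_handlers(router_module, handler_modules)
    _first_create_instance = False
```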

---

### BUG-A4 · `🟢 fixed (Batch 2)` · get_clip_info missing audio-clip pitch offset

**Reproducer:** `get_clip_info(track=6, clip=0)` on the Splice audio clip returned:
```json
{"warping": true, "warp_mode": 4, "name": "...D#min", ...}
```
No `pitch_coarse` / `pitch_fine` / `gain` fields.

**Fix (landed) — `remote_script/LivePilot/clips.py::get_clip_info`:**
```python
if clip.is_audio_clip:
    result["warping"] = clip.warping
    result["warp_mode"] = clip.warp_mode
    for attr in ("pitch_coarse", "pitch_fine", "gain"):
        try:
            result[attr] = getattr(clip, attr)
        except AttributeError:
            pass  # some Live builds omit these on fresh clips
```
Regression tests: `test_bug_a4_get_clip_info_exposes_audio_pitch_and_gain` and `test_bug_a4_midi_clips_do_not_report_pitch_fields` (in `tests/test_remote_script_contracts.py`).

---

### BUG-A5 · `🟢 fixed (Batch 2)` · No programmatic way to set audio-clip pitch offset

**Fix (landed):** New `set_clip_pitch(ctx, track_index, clip_index, coarse=None, fine=None, gain=None)` MCP tool in `mcp_server/tools/clips.py` plus matching `@register("set_clip_pitch")` handler in `remote_script/LivePilot/clips.py`. Audio-only; MIDI clips raise ValueError. Ranges enforced: coarse −48..+48 semitones, fine −50..+50 cents, gain 0..1.

**Registry/docs synced:** tool count 320→321, `remote_commands.py` allowlist, `tool-catalog.md`, `test_tools_contract.py`, and the full release-checklist doc sweep.

Regression tests: `test_bug_a5_set_clip_pitch_writes_coarse_and_fine`, `test_bug_a5_set_clip_pitch_rejects_midi_clips`, `test_bug_a5_set_clip_pitch_requires_at_least_one_param`, `test_bug_a5_set_clip_pitch_rejects_out_of_range_coarse`.

**Unblocks:** BUG-D1 — automatic "transpose −1 semi to fix D#min sample in Dm session" correction now possible. Re-run the Dabrye D#min Splice clip experiment once Ableton is restarted to pick up the new Remote Script.
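The argument validation described above (at least one parameter, coarse −48..+48, fine −50..+50, gain 0..1) can be sketched as follows; the helper name and error text are illustrative, not the shipped tool.

```python
def validate_clip_pitch(coarse=None, fine=None, gain=None):
    """Raise ValueError on out-of-range or missing set_clip_pitch arguments."""
    if coarse is None and fine is None and gain is None:
        raise ValueError("set_clip_pitch needs at least one of coarse/fine/gain")
    bounds = {
        "coarse": (coarse, -48, 48),  # semitones
        "fine": (fine, -50, 50),      # cents
        "gain": (gain, 0, 1),         # linear clip gain
    }
    for name, (value, lo, hi) in bounds.items():
        if value is not None and not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
```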

---

## B. Analyzers / critics

### BUG-B1 · `🟢 fixed (Batch 10)` · detect_role_conflicts false positive on DRUMS + PERC

**Reproducer:** Session has tracks 0 "DRUMS" (Boom Bap Kit) and 4 "PERC" (Percussion Core Kit). `detect_role_conflicts` returns:
```json
{"role": "drums", "tracks": [0, 4], "severity": 0.5,
 "recommendation": "Layer drum parts into one Drum Rack or pan them apart"}
```

**Why it's wrong:** In hip-hop / Dabrye-core / Dilla / lo-fi, **intentional drum + perc layering** is the core aesthetic — not a conflict. The critic's heuristic treats any two drum-role tracks as competing, regardless of genre context.

**Fix direction:** In `mcp_server/tools/_composition_engine/critics.py` (or the pure engine module), gate drum-conflict severity by:
- Genre inference (if style tactics include "hip-hop", reduce severity)
- Pan separation (already done? If DRUMS center + PERC pan 0.25, severity should drop)
- Frequency separation check (kick-heavy vs hi-hat-heavy? check Drum Rack chain distributions)

**Impact:** Low (annoyance, not broken). But degrades trust in the critic for hip-hop users.
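The severity gating could look like this minimal sketch. The multipliers, genre tags, and pan-spread threshold are assumptions for illustration, not the Batch 10 code.

```python
def gate_drum_conflict_severity(base: float, genre_tags: set, pans: list) -> float:
    """Reduce drum-conflict severity when layering is stylistically intended
    or the tracks are already separated in the stereo field."""
    severity = base
    if genre_tags & {"hip-hop", "lo-fi", "boom-bap"}:
        severity *= 0.4  # layering is the aesthetic, not a conflict
    spread = max(pans) - min(pans) if pans else 0.0
    if spread >= 0.2:
        severity *= 0.5  # already panned apart
    return round(severity, 2)
```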

---

### BUG-B2 · `🟢 fixed (Batch 4)` · analyze_harmony mislabeled iv turnaround chord

**Reproducer:** RHODES clip beat 13.5 pitches `[G3, A#3, D4, F4, A4]` returned `{"chord_name": "D chord", ...}` instead of `Gm7` (= iv7 in Dm).

**Root cause:** `chord_name()` in `_theory_engine.py` only matched EXACT interval tuples in `CHORD_PATTERNS`. On miss, it returned `NOTE_NAMES[pcs[0]]` — the *numerically lowest pitch class*, not the bass note. Since `pcs_sorted = [2, 5, 7, 9, 10]`, `pcs[0] = 2` = D, so the chord was labeled "D chord".

**Fix (landed in `mcp_server/tools/_theory_engine.py::chord_name`):**
Four-pass chord identification:
1. Exact `CHORD_PATTERNS` match with bass-note-preferred root selection
2. Subset match → partial chord labeled with `(no X)` annotation
3. Superset match → extended chord labeled with `(add X)` annotation
4. Final fallback names the bass pitch (not the numerically lowest pc)

BUG-B2 input `[G3, Bb3, D4, F4, A4]` now returns **"G-minor seventh (add 9)"** (pass 3: G-Bb-D-F = minor seventh pattern + A as added 9/11 tension).

**Impact:** Medium — now closed.
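The superset pass (pass 3) can be sketched as: compute intervals from the bass note, then look for a known pattern contained in them. `PATTERNS` is a tiny illustrative table, not the real `CHORD_PATTERNS`, and the toy labels tensions by semitone rather than scale degree.

```python
PATTERNS = {
    (0, 3, 7, 10): "minor seventh",
    (0, 4, 7): "major",
    (0, 3, 7): "minor",
}
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def chord_name_sketch(midi_pitches):
    bass = min(midi_pitches)
    intervals = tuple(sorted({(p - bass) % 12 for p in midi_pitches}))
    root = NOTE_NAMES[bass % 12]
    if intervals in PATTERNS:                       # pass 1: exact match
        return f"{root} {PATTERNS[intervals]}"
    iv = set(intervals)
    # pass 3: superset — largest known pattern contained in the notes wins
    for pattern, name in sorted(PATTERNS.items(), key=lambda kv: -len(kv[0])):
        extra = iv - set(pattern)
        if set(pattern) <= iv and extra:
            added = ",".join(str(i) for i in sorted(extra))
            return f"{root} {name} (add {added})"
    return f"{root} chord"                          # pass 4: bass-note fallback
```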

---

### BUG-B3 · `🟢 fixed (Batch 10)` · get_track_meters level vs left/right desync

**Reproducer:**
1. Stop playback
2. Call `get_track_meters(include_stereo=true)`
3. Some tracks return `{level: 0.81, left: 0, right: 0}`

**Why it's confusing:** `level` reports peak-hold (last loud moment), while `left`/`right` report instantaneous post-fader channel levels. On stopped playback they decouple.

**Fix direction:** One of:
- Document the semantic (cheap)
- Return `peak_hold` and `current_left` / `current_right` as distinct fields
- Suppress `left`/`right` when `is_playing` is false
- Or sync all readings to one sampling moment

**Impact:** Low. Creates diagnostic false alarms when debugging "is my filter killing the signal?" during stopped playback.
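The second and third options combined would reshape the payload roughly like this; the field names are the proposals above, not the shipped schema.

```python
def shape_meter_reading(raw: dict, is_playing: bool) -> dict:
    """Rename ambiguous fields and drop instantaneous channels while stopped."""
    shaped = {"peak_hold": raw["level"]}
    if is_playing:
        shaped["current_left"] = raw["left"]
        shaped["current_right"] = raw["right"]
    return shaped
```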

---

### BUG-B4 · `🟢 documented (Batch 17)` · Auto Filter LFO Amount display scale mismatch

**Reproducer:** `batch_set_parameters` on Auto Filter with `{"name_or_index": "LFO Amount", "value": 0.25}` returns:
```json
{"name": "LFO Amount", "value": 0.25, "value_string": "6.2 %"}
```

**Why it's confusing:** The parameter VALUE is 0.25 (normalized 0-1) but the VALUE_STRING says "6.2 %". Existing Auto Filter instances (pre-session) had e.g. `LFO Amount: 0.42` with no `%` in the readout via the earlier form of `get_device_parameters`. Unclear if:
- the actual value 0.25 corresponds to a 6.2% depth in the display (a scaling factor is present)
- the value_string display is buggy
- the parameter interpretation changed between Auto Filter v1 → AutoFilter2

**Fix direction:** Document the mapping between normalized parameter values and their human-readable displays. The value_string IS the source of truth for the user — make sure docs reflect that "LFO Amount 0.25 = 6.2% depth".

**Impact:** Low (display-only), but makes automation recipes hard to reason about without testing.

---

### BUG-B5 · `🟢 fixed (Batch 4)` · analyze_harmony chord naming on incomplete chords

**Reproducer:** Pad Lush clip 0 "Intro Wash" pitches `[D3, F3, C4]` returned `{"chord_name": "C chord", ...}` instead of `Dm7(no5)`.

**Root cause:** Same `chord_name()` fallback bug as BUG-B2. Pitch classes `{0, 2, 5}` (C, D, F) sorted numerically puts C first (pc 0), so the fallback returned "C chord".

**Fix (landed with BUG-B2 in Batch 4):** Subset-match pass now catches partial chords. D (bass, pc 2) → intervals `{0, 3, 10}` → subset of minor-seventh pattern `{0, 3, 7, 10}` → returns **"D-minor seventh (no 5)"**.

Regression tests (all in `tests/test_theory_engine.py::TestChordName`):
- `test_bug_b2_gm7_with_added_tension_rooted_on_bass`
- `test_bug_b5_dm7_no5_rooted_on_bass_not_c`
- `test_partial_minor_triad_still_rooted_on_bass`
- `test_major_triad_with_added_ninth`
- `test_exact_match_still_wins_over_subset_guess`
- `test_empty_pitches_returns_unknown`

**Impact:** Medium — now closed. Composition critics get correct chord names on pad/sustain clips that drop the fifth.

---

### BUG-B6 · `🟢 fixed (Batch 11)` · detect_stuckness ignored current session state

**Reproducer:** `detect_stuckness()` returns `{"confidence": 0, "level": "flowing", "signals": [], "diagnosis": ""}` even though:
- `detect_repetition_fatigue` reports `fatigue_level: 0.93` with 8 motif overuse issues
- `analyze_mix` flags `support_too_loud` (Texture track)
- No clip automation in any section (arrangement flatness signal)

**Why it's limiting:** `detect_stuckness` only analyzes the action ledger (the user's recent clicks/undos), not current session-state critic output. When a user has just opened a project or made no recent changes, stuckness will always report "flowing" regardless of actual session health.

**Fix direction:** Extend `detect_stuckness` to merge action-ledger signals with current-state critic signals. Weight them: ledger-based signals (active user-is-stuck behavior) count heavier than state-based signals (project-is-stuck shape). Add `state_fatigue_score` to the output.

**Impact:** Medium. Rescue / Wonder Mode routing depends on stuckness detection. When fatigue is 0.93 but stuckness is 0, Wonder Mode would never auto-trigger.
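The weighted merge could look like this sketch. The 0.6/0.4 weights, level thresholds, and field names are assumptions for illustration; only `state_fatigue_score` comes from the fix direction above.

```python
def merged_stuckness(ledger_score: float, state_score: float) -> dict:
    """Blend action-ledger and session-state signals, ledger weighted heavier."""
    confidence = 0.6 * ledger_score + 0.4 * state_score
    if confidence < 0.3:
        level = "flowing"
    elif confidence < 0.6:
        level = "drifting"
    else:
        level = "stuck"
    return {
        "confidence": round(confidence, 2),
        "level": level,
        "state_fatigue_score": state_score,
    }
```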

---

### BUG-B7 · `🟢 fixed (Batch 7)` · get_motif_graph returned 90KB payload — exceeded inline limits

**Reproducer:** `get_motif_graph()` on a 10-track session with 49 clips returns a 90,430-char JSON (the Handler system wrote it to disk because it exceeded token limits).

**Why it's a bug:** No pagination, no limit parameter. Every motif with its occurrence details is included — for larger sessions this blows through context and tool-result limits.

**Fix direction:** Add `limit` and `offset` parameters (default limit = 50 motifs). Add a `summary_only` mode that returns motif IDs + scores without occurrence arrays.

**Impact:** Medium. Makes the tool unusable on real production sessions.
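The proposed response shape can be sketched like this; the parameter names come from the fix direction, while the motif fields are illustrative.

```python
def paginate_motifs(motifs: list, limit: int = 50, offset: int = 0,
                    summary_only: bool = False) -> dict:
    """Slice the motif list and optionally strip occurrence arrays."""
    page = motifs[offset:offset + limit]
    if summary_only:
        page = [{"motif_id": m["motif_id"], "score": m["score"]} for m in page]
    return {"total": len(motifs), "offset": offset, "motifs": page}
```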

---

### BUG-B8 · `🟢 fixed (Batch 7)` · rank_hook_candidates returned duplicate "motif_unknown" hooks

**Reproducer:** `rank_hook_candidates(limit=5)` returns entries like:
```json
[
  {"hook_id": "track_10-vox_lch_...", "location": "10-VOX_LCH_..."},
  {"hook_id": "motif_unknown", "location": ""},
  {"hook_id": "motif_unknown", "location": ""},
  {"hook_id": "motif_unknown", "location": ""},
  {"hook_id": "motif_unknown", "location": ""}
]
```

**Why it's wrong:** Four motif-based hooks all have the same `hook_id` ("motif_unknown") and empty `location`. The motif IDs from `get_motif_graph` (motif_000, motif_001, etc.) aren't propagating to hook candidates — they're being collapsed to a generic "unknown" label.

**Fix direction:** In the hook-ranking engine (likely in `mcp_server/hook_hunter/`), when iterating motif hook candidates, preserve `motif_id` from the source motif and populate `location` with the track/clip origin.

**Impact:** Medium. Hook development workflows can't address specific motifs when all are labeled "unknown".
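The fix direction amounts to carrying the ID and origin through, sketched here with assumed motif/occurrence field names:

```python
def motif_to_hook_candidate(motif: dict) -> dict:
    """Keep motif_id and track/clip origin instead of collapsing to 'unknown'."""
    origin = (motif.get("occurrences") or [{}])[0]
    location = f"{origin.get('track', '')}-{origin.get('clip', '')}".strip("-")
    return {
        "hook_id": motif.get("motif_id", "motif_unknown"),
        "location": location,
        "score": motif.get("score", 0.0),
    }
```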

---

### BUG-B9 · `🟢 documented (Batch 17)` · Auto Filter vs Auto Filter Legacy parameter scale mismatch
|
|
310
|
+
|
|
311
|
+
**Reproducer:** Bass track (track 6) has device "Auto Filter Legacy" (class `AutoFilter`) with parameters:
|
|
312
|
+
- `Frequency`: min 20, max 135 (Ableton's internal 20-135 index, NOT normalized)
|
|
313
|
+
- `LFO Amount`: min 0, max 30 (NOT normalized 0-1)
|
|
314
|
+
- `Env. Modulation`: min -127, max 127
|
|
315
|
+
- `Resonance`: min 0, max 1.25 (NOT 0-1)
|
|
316
|
+
|
|
317
|
+
Compare to the newer "Auto Filter" (class `AutoFilter2`) which uses 0-1 normalized everywhere.
|
|
318
|
+
|
|
319
|
+
**Why it's a bug:** Tools that assume 0-1 parameter ranges (automation recipes, LFO recipes, filter sweeps) will drastically misconfigure Auto Filter Legacy. Setting `Frequency = 0.75` on legacy gets clamped to 20 (the minimum of 20-135 range) → filter closes completely → track goes silent.
|
|
320
|
+
|
|
321
|
+
**Fix direction:**
|
|
322
|
+
1. In `atlas_search` / `atlas_device_info`, tag `class_name == "AutoFilter"` (legacy) as having non-normalized params
|
|
323
|
+
2. Automation-recipe compiler should read `min`/`max` from `get_device_parameters` and scale curves accordingly, not assume 0-1
|
|
324
|
+
3. Also applies to Ableton's older **Dynamic Tube, Vocoder, Compressor I, Gate** — all pre-2010 devices with absolute units
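Step 2 reduces to one range-aware mapping. A minimal sketch, assuming `get_device_parameters` returns dicts with `min`/`max` fields (the exact return shape is an assumption):

```python
def denormalize(value_0_1, param):
    """Map a 0-1 curve value onto the parameter's real min/max range."""
    lo, hi = param["min"], param["max"]
    return lo + value_0_1 * (hi - lo)

# Frequency on Auto Filter Legacy spans 20-135, so 0.75 should land near
# the top of the range instead of being clamped to the minimum.
freq = denormalize(0.75, {"min": 20, "max": 135})  # 106.25
```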

**Impact:** High on mixed-vintage sessions. Silent in most new projects that use modern devices, but existing templates / older projects will misbehave.

---

### BUG-B10 · `🟢 fixed (Batch 11)` · build_song_brain identity_core was a lazy fallback

**Reproducer:** `build_song_brain()` on a session with 10 named tracks ("Pad Lush", "Glitch Chops", "Atmo FX", etc.), a clear D minor key, 119 BPM, and named scenes ("Intro Dust" → "Sun Peak") returns:

```json
{"identity_core": "Dominant texture: drums", "identity_confidence": 0.47}
```

**Why it's weak:** The engine defaults to "Dominant texture: drums" because drum tracks have the most notes. But the user's intent is clearly melodic/harmonic (Pad Lush is the most-featured track with 43 arrangement clips, and the vocal hook is the Splice feature). The low confidence (0.47) suggests the engine knows it's unsure.

**Fix direction:** When confidence < 0.6, the identity engine should fall back to:
1. Most-featured track by clip count OR arrangement presence
2. Most-named section / most-repeated motif
3. Explicit name in scene 0 ("Intro Dust" → likely "dust-toned")
4. Combine: tempo + key + primary-role description ("D minor 119 BPM electronic with vocal hook lead")
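The first and last fallbacks can be sketched together; the song-brain field names here are assumptions:

```python
# Hedged sketch of the low-confidence fallback chain.
def identity_fallback(brain):
    if brain.get("identity_confidence", 0) >= 0.6:
        return brain["identity_core"]
    # Fallback 1: most-featured track by clip count.
    tracks = brain.get("tracks", [])
    if tracks:
        lead = max(tracks, key=lambda t: t.get("clip_count", 0))
        # Fallback 4: tempo + key + primary-role description.
        return f"{brain.get('key', '?')} {brain.get('bpm', '?')} BPM, led by {lead['name']}"
    return brain.get("identity_core", "unknown")
```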

**Impact:** Medium. Song identity feeds downstream engines; weak identity = weak reasoning.

---

### BUG-B11 · `🟢 fixed (commit 7142319)` · SongBrain section_purposes internal inconsistency

**Reproducer:** `build_song_brain()` returns section "Deep Flow" with:

```json
{"emotional_intent": "payoff", "is_payoff": false}
```

**Why it's wrong:** A section labeled `emotional_intent: "payoff"` should have `is_payoff: true` by definition — that's what the label *means*. `is_payoff: false` when the intent IS payoff is a clear internal contradiction.

**Fix direction:** After labeling `emotional_intent`, derive `is_payoff` as `emotional_intent == "payoff"`. Single source of truth.
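The derivation is a one-liner; the section shape is an assumption:

```python
# Derive the flag from the label instead of storing it separately.
def finalize_section(section):
    section["is_payoff"] = section.get("emotional_intent") == "payoff"
    return section
```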

**Impact:** Medium. `payoff_targets` returns `[]` in the same response while Deep Flow is labeled payoff — suggesting downstream logic uses `is_payoff`, not `emotional_intent`, creating silent disagreement.

---

### BUG-B12 · `🟢 fixed (commit 7142319)` · build_song_brain includes empty 8th section

**Reproducer:** `build_song_brain()` section_purposes includes:

```json
{"section_id": "", "label": "", "emotional_intent": "contrast", "energy_level": 0, "is_payoff": false}
```

**Why it's wrong:** The session has a trailing empty scene 7 (no name, no clips). The song brain builds a "section" for it with an empty-string ID/label and energy 0, which pollutes the energy_arc and skews section_purpose counts.

**Fix direction:** Filter out sections where `name == ""` AND there are no clips across tracks. Empty scenes aren't sections.
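A minimal sketch of the filter, assuming scenes expose `name` and a clip count:

```python
# Keep a scene only if it is named or actually contains clips.
def real_sections(scenes):
    return [s for s in scenes if s.get("name") or s.get("clip_count", 0) > 0]
```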

**Impact:** Low-medium. Pollutes the energy_arc `[0.7, 0.9, 0.9, 0.5, 0.6, 0.9, 0.4, 0]` — that trailing 0 throws off "front-loaded / back-loaded" heuristics.

---

### BUG-B13 · `🟢 fixed (Batch 11)` · energy_shape description mismatched arc

**Reproducer:** `explain_song_identity()` returns:

```json
{"energy_shape": "front-loaded — peaks early"}
```

But the actual `energy_arc` is `[0.7, 0.9, 0.9, 0.5, 0.6, 0.9, 0.4, 0]` — peaks occur at positions 1, 2, AND 5. Position 5 ("Sun Peak") is 62% through the arrangement, not "early."

**Why it's wrong:** The classifier likely checks "is the first third above average?" → yes, because positions 0-2 are all ≥ 0.7. But it misses that position 5 is also a peak. "Peaks early" obscures the real shape (dual-peak with a valley at positions 3-4).

**Fix direction:** Instead of checking just "where is the peak", look at the count and distribution of peaks (> 0.8) and valleys. Label shapes as: "rising", "falling", "arch (single peak)", "dual-peak", "plateau", "front-loaded".
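A peak-count-aware classifier can be sketched like this; the 0.8 threshold and the subset of labels covered are assumptions:

```python
def classify_energy_shape(arc, peak_threshold=0.8):
    peaks = [i for i, e in enumerate(arc) if e > peak_threshold]
    if not peaks:
        return "plateau"
    # Collapse adjacent indices into peak regions (0.9, 0.9 is one peak).
    regions = 1 + sum(1 for a, b in zip(peaks, peaks[1:]) if b - a > 1)
    if regions >= 2:
        return "dual-peak"
    return "front-loaded" if peaks[0] < len(arc) / 3 else "arch (single peak)"
```

On the arc above this yields "dual-peak" instead of "front-loaded".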

**Impact:** Medium. The label feeds the user-facing explanation and could mislead creative decisions.

---

### BUG-B14 · `🟢 fixed (commit 7142319)` · open_questions false positive — "No intro section"

**Reproducer:** `build_song_brain()` returns:

```json
"open_questions": [
  {"question": "No intro section — does the track need an opening?", "priority": 0.4}
]
```

But the session HAS "Intro Dust" as scene 0. The engine found it and even labeled it `emotional_intent: "tension"` — but not `"intro"`. So the open-question check asks "is any section labeled intro?" → no → flags it as missing.

**Why it's wrong:** The check should consider the scene NAME (containing "intro") OR the emotional_intent (being "intro"). Intro-by-name is a stronger signal than intro-by-function.

**Fix direction:** Check for "intro" in section names OR emotional_intent, OR a section at index 0 with lower energy than position 1. Any of those = has intro.
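The widened check can be sketched as follows; the section field names are assumptions:

```python
def has_intro(sections):
    for s in sections:
        if "intro" in s.get("name", "").lower():
            return True
        if s.get("emotional_intent") == "intro":
            return True
    # A quieter opening section also reads as an intro.
    if len(sections) >= 2 and sections[0].get("energy", 0) < sections[1].get("energy", 0):
        return True
    return False
```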

**Impact:** Low-medium. Wastes a slot in open_questions on a non-issue.

---

### BUG-B15 · `🟢 fixed (commit 7142319)` · analyze_transition archetype_section_mismatch ignores "any_section_change" wildcard

**Reproducer:** `analyze_transition(from="Intro Dust", to="Groove Build")` returns:

```json
{
  "archetype": {
    "name": "fill_and_reset",
    "use_cases": ["verse_to_chorus", "chorus_to_verse", "any_section_change"]
  },
  "issues": [{
    "issue_type": "archetype_section_mismatch",
    "severity": 0.5,
    "evidence": "Archetype 'fill_and_reset' (use_cases=[...]) doesn't match section pair intro -> build"
  }]
}
```

**Why it's wrong:** The archetype's use_cases explicitly includes **"any_section_change"** — a wildcard that matches any pair. The critic ignores that wildcard and checks only exact pair matches, firing a false positive.

**Fix direction:** In the mismatch critic, check:

```python
if "any_section_change" in archetype.use_cases:
    return  # wildcard matches, no issue
```

**Impact:** Medium. Creates false transition issues on perfectly sensible archetype selections.

---

### BUG-B17 · `🟢 fixed (Batch 13)` · distill_reference_principles returned empty output

**Reproducer:** `distill_reference_principles(reference_description="cold 90s hip-hop with ghostly vocal chops and dusty drums", style_name="dabrye")` returns:

```json
{"reference_id": "2910e05eca", "principles": [], "emotional_posture": "",
 "density_motion": "", "arrangement_patience": "", "texture_treatment": "",
 "foreground_background": "", "width_strategy": "", "payoff_architecture": "",
 "principle_count": 0}
```

A reference_id is generated, but every principle field is empty.

**Why it's a bug:** The tool accepts input and generates an ID but produces nothing. Two probable causes:
1. The "dabrye" style has no entry in the style_tactics corpus (confirmed — `get_style_tactics` only knows burial/daft_punk/techno/ambient/trap/lo-fi)
2. The `reference_description` text parser doesn't actually distill from free text — it only looks up style names

**Fix direction:**
- Either implement text-based distillation (use the description's semantic keywords: "cold", "ghostly", "dusty" → texture_treatment/emotional_posture), OR
- Return a clear error like "No principles found for style 'dabrye'; supported styles: [...]" instead of an empty-field success response

**Impact:** Medium. The tool is silently useless for any style not in the 6-entry corpus.

---

### BUG-B18 · `🟢 fixed (Batch 13)` · get_style_tactics corpus disconnected from memory

**Reproducer:** `get_style_tactics(artist_or_genre="prefuse73")` returns:

```json
{"tactics": [], "note": "No tactics found for 'prefuse73'. Available built-in styles: burial, daft punk, techno, ambient, trap, lo-fi"}
```

But `memory_list()` shows the user has **3 saved Prefuse73 techniques** from April 2026:
- "Prefuse73 Complete Session — Full Production Workflow"
- "Prefuse73 Glitch-Hop Beat"
- "Prefuse73 Advanced — Phase Shift + Polyrhythm + Effect Chains"

**Why it's a bug:** Saved memories should feed back into style tactics. Currently memory and style_tactics are separate stores with no cross-pollination. Users who build up style libraries via `memory_learn` get nothing back from `get_style_tactics`.

**Fix direction:** Extend `get_style_tactics` to also query the memory store for entries tagged with the artist/genre name. Merge the results, labeling each source ("built-in" vs "user-saved").
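The merge can be sketched as below; the corpus and memory-entry shapes are assumptions, not the real stores:

```python
def get_style_tactics_merged(name, builtin_corpus, memory_entries):
    needle = name.lower()
    # Built-in tactics first, each labeled with its source.
    tactics = [{"source": "built-in", **t} for t in builtin_corpus.get(needle, [])]
    # Then user-saved memories whose titles mention the artist/genre.
    tactics += [
        {"source": "user-saved", "title": entry["title"]}
        for entry in memory_entries
        if needle in entry["title"].lower()
    ]
    return tactics
```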

**Impact:** Medium-High. Undercuts the value proposition of the memory system — users can save techniques but can't surface them as style tactics.

---

### BUG-B19 · `🟢 fixed (Batch 13)` · build_reference_profile + analyze_reference_gaps limited to 6 built-in styles

**Reproducer:** `build_reference_profile(style="prefuse73")` returns `NOT_FOUND`. Same for `analyze_reference_gaps(style="prefuse73")`.

**Why it's limiting:** The reference engine ONLY works with the 6 built-in styles. Custom styles, user-provided descriptions, and memory-saved templates are not usable sources. That's a huge gap — the reference-based workflow is one of LivePilot's headline features.

**Fix direction:**
- Same as BUG-B18 — hydrate reference profiles from the memory store
- The audio-file-based workflow (`reference_path=<file>`) works independently of the style corpus and should be exercised to confirm

**Impact:** High. The whole reference engine is locked to 6 styles for the non-audio workflow.

---

### BUG-B20 · `🟢 fixed (Batch 11)` · suggest_momentum_rescue wrapped BUG-B6 (same fix)

**Reproducer:** `suggest_momentum_rescue(mode="direct")` returns:

```json
{"stuckness": {"confidence": 0, "level": "flowing"}, "suggestions": [],
 "note": "Session is flowing well — no rescue needed"}
```

Despite the session having: 0.93 repetition fatigue + a `peak_too_early` emotional-arc issue + 6 transition issues.

**Why it's a bug:** `suggest_momentum_rescue` is a thin wrapper over `detect_stuckness`. Same blindness as BUG-B6 — it only reads the action ledger, not the current session state.

**Fix direction:** Same as BUG-B6 — extend stuckness detection to include state-critic signals. Fix it in one place and both tools benefit.

**Impact:** Medium. Rescue suggestion is a core safety net. When fatigue is high but the ledger is empty, users get zero help.

---

### BUG-B21 · `🟢 closed (Batch 17)` · Three different energy metrics across engines — two unified, third is by-design

**Reproducer:** Same session, three different "energy" readings for the 7 sections:

| Section | `get_section_graph.energy` | `get_emotional_arc.tension` | `get_performance_state.energy_level` |
|---|---|---|---|
| Intro Dust | 0.7 | 0.56 | **0.2** |
| Groove Build | 0.9 | 0.72 | **0.6** |
| Deep Flow | 0.9 | 0.72 | **0.4** |
| Breakdown | 0.5 | 0.4 | **0.3** |
| Re-Entry | 0.6 | 0.48 | **0.7** |
| Sun Peak | 0.9 | 0.72 | **0.7** |
| Outro Dust | 0.4 | 0.32 | **0.2** |

**Why it's a bug:** Three engines compute "energy" independently. They're not just scaled differently — the *ordering* differs (e.g. "Deep Flow" is a peak in composition but mid-tier in performance). Downstream engines that mix these signals (e.g. "energy-aware scene handoff") get contradictory inputs.

**Fix direction:**
1. Unify on one canonical energy model in a shared module (`mcp_server/tools/_composition_engine/sections.py` has the base). Other engines should derive from it.
2. OR document the three metrics as distinct (density-energy, tension-energy, performance-energy) and rename them so their differences are visible in the field names.

**Impact:** High. Root cause of multiple downstream inconsistencies (BUG-E4/E5 below are instances of this).

---

### BUG-B22 · `🟢 fixed (Batch 11)` · get_phrase_grid phrase note_density 0 for active section

**Reproducer:** Section 1 ("Groove Build") has `tracks_active: [0,1,2,3,5,6,7,8,9]` (9 tracks playing, density 0.9). `get_phrase_grid(section_index=1)` returns:

```json
{"phrases": [
  {"phrase_id": "sec_01_phr_00", "start_bar": 8, "end_bar": 12, "note_density": 36.5},
  {"phrase_id": "sec_01_phr_01", "start_bar": 12, "end_bar": 16, "note_density": 0, "has_variation": true}
]}
```

The second phrase (bars 12-16) has note_density = 0 despite being in the highest-density section.

**Why it's a bug:** Likely reading notes from the wrong window (an off-by-one error in the bar→time conversion) or from the wrong track. The session has 49 clips total — bars 12-16 inside "Groove Build" should have plenty of notes.

**Fix direction:** Audit the phrase-note-counting logic in `mcp_server/tools/_composition_engine/sections.py::detect_phrases`. Confirm it enumerates ALL active tracks in the section, not just one.
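The intended counting can be sketched like this; the clip/note shapes (beat-based note starts) and 4/4 meter are assumptions:

```python
def phrase_note_density(clips, start_bar, end_bar, beats_per_bar=4):
    start_beat = start_bar * beats_per_bar
    end_beat = end_bar * beats_per_bar          # half-open window [start, end)
    count = 0
    for clip in clips:                           # clips from ALL active tracks
        for note in clip["notes"]:
            if start_beat <= note["start"] < end_beat:
                count += 1
    return count / (end_bar - start_bar)         # notes per bar
```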

**Impact:** Medium. `phrase` objects with note_density 0 falsely signal "phrase is empty" to downstream critics.

---

### BUG-B24 · `🟢 fixed (Batch 8)` · classify_progression returned "?" for valid transform

**Reproducer:** `classify_progression(chords=["Dm", "Gm", "Am", "Dm"])` returns:

```json
{"transforms": ["LR", "?", "LR"], "pattern": "LR?LR", "classification": "diatonic cycle fragment"}
```

The middle transform (Gm → Am) returns "?" — the neo-Riemannian transform engine couldn't classify it. Yet the overall classification is "diatonic cycle fragment", ignoring the unresolved middle.

**Why it's a bug:** Gm → Am is a whole-step root shift that IS classifiable (chromatic mediant by doubled L/P). Returning "?" means the transform vocabulary is incomplete. The classification then lies — "diatonic cycle fragment" with a "?" in the middle is contradictory.

**Fix direction:** Extend the transform set in `_composition_engine/harmony.py` (or wherever the transform alphabet lives) to cover whole-step root shifts. Add "step" or "cycle" transforms.

**Impact:** Medium. Progression classification is used by downstream creative reasoning.

---

### BUG-B25 · `🟢 fixed (Batch 8)` · find_voice_leading_path returned non-smooth leading

**Reproducer:** `find_voice_leading_path(from="Dm", to="Bb", max_steps=4)` returns:

```json
{"path": ["D minor", "Bb major"], "steps": 1,
 "voice_leading": [{"movement": "D4→A#4, F4→D5, A4→F5"}]}
```

D4→A#4 is a **minor 6th jump upward** — not smooth voice leading. For Dm→Bb, the smooth path would be: D→D (common tone, stay), F→F (common tone), A→Bb (semitone) — keeping two voices and moving one voice a single semitone.

**Why it's questionable:** "Shortest" in the tool's sense means "fewest transforms" (a single L transform), but voice leading should prefer smooth voicings. The output shows unnecessarily large leaps.

**Fix direction:** Add a post-processing step that optimizes voice assignments for minimum total interval movement. Or document that "shortest" means transform count, not voice movement.
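For triads, the post-processing step can brute-force target-pitch assignments; pitches are MIDI numbers, and the function name is an assumption:

```python
from itertools import permutations

def smooth_assignment(source, target_classes):
    """Assign target pitch classes to voices, minimizing total movement."""
    best, best_cost = None, None
    for perm in permutations(target_classes):
        assigned, cost = [], 0
        for src, pc in zip(source, perm):
            # Pick the octave of this pitch class nearest the source note.
            candidates = [pc + 12 * o for o in range(11)]
            tgt = min(candidates, key=lambda p: abs(p - src))
            assigned.append(tgt)
            cost += abs(tgt - src)
        if best_cost is None or cost < best_cost:
            best, best_cost = assigned, cost
    return best, best_cost

# Dm (D4, F4, A4) → Bb major pitch classes {Bb, D, F}:
# D stays on 62, F stays on 65, A moves a semitone to Bb (70).
voices, total = smooth_assignment([62, 65, 69], [10, 2, 5])
```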

**Impact:** Low-Medium. The path is correct; the voicing isn't pianist-friendly.

---

### BUG-B26 · `🟢 fixed (Batch 9)` · harmonize_melody bass stuck on tonic pedal

**Reproducer:** `harmonize_melody(track=3, clip=0, voices=4)` on Pad Lush's Intro Wash returns:

```json
{"bass": [
  {"pitch": 38, ...}, {"pitch": 38}, {"pitch": 38}, {"pitch": 38}, {"pitch": 38}, {"pitch": 33}
]}
```

5 of 6 bass notes are **D2 (38)** — the tonic. One is A1 (33). The bass line has no motion.

**Why it's a bug:** 4-voice harmonization should produce a bass line that follows the chord roots. The melody notes shift across Dm and D-F-C chords, so the bass should walk: D for Dm, G for iv (Gm), or at least the chord root at each harmonization point. Stuck on the tonic = not harmonizing, just pedaling.

**Fix direction:** In the harmonization engine, after selecting a chord per melody note, assign the bass to the chord's root (or its 3rd for inversions) rather than always the scale tonic.
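A sketch of the root-following bass, assuming a C2-C3 bass octave policy (the chord representation is simplified to a root letter):

```python
NOTE_TO_PC = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def bass_for_chord(chord_root, low=36, high=48):
    """Place the chord root in the bass octave instead of pedaling the tonic."""
    pc = NOTE_TO_PC[chord_root]
    pitch = low + ((pc - low) % 12)
    return pitch if pitch <= high else pitch - 12

# A Dm → Gm → Dm harmonization now yields a moving bass: D2, G2, D2.
line = [bass_for_chord(c) for c in ["D", "G", "D"]]
```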

**Impact:** High. harmonize_melody is broken as a creative tool — it produces unusable output.

---

### BUG-B27 · `🟢 fixed (Batch 9)` · harmonize_melody soprano duplicates original melody

**Reproducer:** Same call as B26. Input melody (from the Pad Lush clip): `[D3, F3, A3, D3, F3, C4]` = `[50, 53, 57, 50, 53, 60]`. Output soprano:

```json
[{"pitch": 50}, {"pitch": 53}, {"pitch": 57}, {"pitch": 50}, {"pitch": 53}, {"pitch": 60}]
```

**Exactly the input melody.**

**Why it's a bug:** In 4-voice (SATB) harmonization, the soprano should be a distinct voice — typically the MELODY in hymn-style harmonization, OR a harmonization above the melody when the melody is placed elsewhere. Returning the exact input as soprano means the "harmonization" is just the 3 added lower voices (alto, tenor, bass) — which could be correct IF the melody is copied to soprano deliberately. But the field is labeled "soprano", distinct from "melody_notes", suggesting they should differ.

**Fix direction:** Either (a) document that soprano is always the melody line, or (b) generate an actual upper-voice harmonization above the melody when the melody sits in an inner voice.

**Impact:** Medium. Confusing output — the user has to guess whether soprano is a melody duplicate or a separate voice.

---

### BUG-B28 · `🟢 fixed (Batch 9)` · generate_countermelody returned near-static pedal

**Reproducer:** `generate_countermelody(track=3, clip=0, species=1)` returns counter_notes with pitches `[50, 48, 50, 53, 50, 48]` — 3 distinct values across 6 positions, mostly D and C in the same octave as the bass.

**Why it's weak:** First-species counterpoint should explore contrary motion and use the full range. A counter line that sits on the tonic/7th with only 3 pitches is closer to a pedal ostinato than an actual contrapuntal line.

**Fix direction:** The species-counterpoint algorithm should enforce:
1. Contrary motion on strong beats
2. Pitch-range exploration (at least 5 distinct pitches for 6 melody notes)
3. Variety in motion types (steps, skips)

**Impact:** Medium. Makes the generative tool less useful for composition.

---

### BUG-B31 · `🟢 fixed (commit 7142319)` · develop_hook ignores discovered primary hook when hook_id is default

**Reproducer:** `develop_hook(mode="chorus")` (no hook_id provided) returns:

```json
{"hook_id": "", "hook_description": "the hook", "tactics": [
  "Double the hook with octave or harmony",
  "Add supporting harmonic movement underneath the melodic contour and pitch",
  "Increase rhythmic density around the hook",
  "Layer complementary textures that frame the melodic contour and pitch"
]}
```

Generic advice with an empty hook_id.

**Why it's a bug:** `find_primary_hook()` DOES return a primary hook for this session (`hook_id: "track_10-vox_lch_..."`). `develop_hook` with no explicit hook_id should default to that primary hook, not to "the hook" (generic). The session state has everything it needs — the engine just doesn't connect the dots.

**Fix direction:** In `develop_hook`, when `hook_id` is empty, call `find_primary_hook()` internally and use that ID.
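The defaulting logic is small; `find_primary_hook` is the tool named above, but its return shape here is an assumption:

```python
def resolve_hook_id(hook_id, find_primary_hook):
    """Fall back to the session's primary hook when no ID is given."""
    if hook_id:
        return hook_id
    primary = find_primary_hook()
    return primary.get("hook_id", "") if primary else ""
```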

**Impact:** Medium. Users have to manually chain find_primary_hook → develop_hook instead of making a single call.

---

### BUG-B35 · `🟢 fixed (Batch 10)` · analyze_sound_design flagged simple Kick as "too_few_blocks"

**Reproducer:** `analyze_sound_design(track=0)` on Kick (DS Kick + Saturator) returns:

```json
{"issues": [
  {"issue_type": "no_modulation_sources", "severity": 0.3},
  {"issue_type": "too_few_blocks", "severity": 0.5, "evidence": "Only 1 controllable block(s) — patch lacks timbral sculpting potential"}
]}
```

**Why it's misleading:** Kicks are SUPPOSED to be simple. A DS Kick + Saturator chain is textbook electronic kick design. Flagging it as "weak identity" treats a kick like a pad and ignores the instrument-type context.

**Fix direction:** Weight the "too_few_blocks" and "no_modulation_sources" critics by track role. For drums, kicks, and bass, simple is correct. For pads, leads, and textures, complexity is expected.
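Role-aware weighting can be sketched as a severity multiplier; the role names and factors are assumptions:

```python
# Roles where simplicity is expected get their severity scaled down.
SIMPLICITY_OK = {"kick": 0.0, "drums": 0.2, "bass": 0.4, "pad": 1.0, "lead": 1.0}

def weight_issue(issue, role):
    factor = SIMPLICITY_OK.get(role, 1.0)
    return {**issue, "severity": round(issue["severity"] * factor, 2)}
```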

**Impact:** Medium. Same family as BUG-B1 — role/context-unaware critics produce false positives.

---

### BUG-B36 · `🟢 fixed (Batch 17)` · plan_sound_design_move now cross-references mix issues

**Reproducer:** `analyze_mix` flagged Texture (track 7) for `support_too_loud` at severity 0.57. But `plan_sound_design_move(track=7)` returns:

```json
{"moves": [], "move_count": 0, "issue_count": 0}
```

**Why it's a bug:** The track has a KNOWN issue in a sibling engine (mix), but the sound-design plan just reports empty. A user running `plan_sound_design_move` on a problematic track gets no guidance, even though there IS a documented fix.

**Fix direction:** When `plan_sound_design_move` finds zero sound-design issues but there ARE mix issues on the track, return a pointer:

```json
{"moves": [], "issue_count": 0,
 "hint": "No sound-design issues, but mix critic flagged 'support_too_loud'. Try plan_mix_move."}
```

**Impact:** Low-Medium. A discoverability bug — the tool silently misses cross-engine issues.

---

### BUG-B37 · `🟢 fixed (Batch 14)` · evaluate_sample_fit couldn't find session key — wrong field-name typo

**Reproducer:** The session is in **D minor** (confirmed by `analyze_harmony`, `identify_scale`, and `suggest_next_chord` — all return Dm with high confidence). `evaluate_sample_fit(file_path=..., intent="vocal")` returns:

```json
{"critics": {
  "key_fit": {"score": 0.5, "recommendation": "Song key unknown — cannot evaluate fit", "rating": "fair"}
}}
```

"Song key unknown" despite the whole session being in Dm.

**Why it's critical:** Sample-fit evaluation is core workflow. If the tool can't determine the song key, it can't recommend key-compatible samples. This is a disconnected engine — sample_engine runs its own song-key inference that doesn't use the harmonic engines' data.

**Fix direction:** In `mcp_server/sample_engine/critics.py` (or wherever `key_fit` lives), replace the in-house key inference with a call to `identify_scale` or `analyze_harmony` on a primary harmonic track. OR: accept an optional `song_key` param and let the caller pass it in.
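The optional-parameter route can be sketched like this; the critic signature, scoring values, and injected `infer_song_key` callable are all assumptions:

```python
def key_fit(sample_key, song_key=None, infer_song_key=None):
    # Prefer an explicit key; otherwise delegate to the harmonic engines
    # (e.g. a wrapper around identify_scale) instead of in-house inference.
    if song_key is None and infer_song_key is not None:
        song_key = infer_song_key()
    if not song_key:
        return {"score": 0.5, "rating": "fair",
                "recommendation": "Song key unknown — cannot evaluate fit"}
    score = 1.0 if sample_key == song_key else 0.3
    return {"score": score, "rating": "good" if score > 0.7 else "poor"}
```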

**Impact:** **High.** Breaks a flagship workflow. The tool's own output even suggests the sample IS in Dm — that should trivially match the session's Dm.

---

### BUG-B39 · `🟢 fixed (Batch 15)` · atlas_chain_suggest returned empty chain for standard role

**Reproducer:** `atlas_chain_suggest(role="bass", genre="electronic")` returns:

```json
{"role": "bass", "genre": "electronic", "chain": []}
```

**Why it's a bug:** A query for a core role ("bass") plus a common genre ("electronic") should return a recommended device chain (synth → compressor → saturation → EQ, etc.). It returns empty — the tool can't suggest chains even for its most basic use case.

**Fix direction:** The chain-suggestion logic in `mcp_server/atlas/` is probably missing a data source or falls through to an empty default. Verify that the atlas enrichment data includes role→chain templates, OR have the tool fall back to `atlas_suggest(intent=role)` and build a basic chain from the top instrument + standard FX.
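The fallback option can be sketched as below; `atlas_suggest` is the tool named above, but its return shape and the FX table are assumptions:

```python
STANDARD_FX = {"bass": ["Compressor", "Saturator", "EQ Eight"]}

def chain_fallback(role, atlas_suggest):
    """Build a basic chain from the top suggested instrument plus standard FX."""
    hits = atlas_suggest(intent=role)
    if not hits:
        return []
    instrument = hits[0]["name"]
    return [instrument] + STANDARD_FX.get(role, ["Compressor", "EQ Eight"])
```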

**Impact:** High. The tool is documented as "Suggest a full device chain for a track role" — it doesn't deliver.

---

### BUG-B40 · `🟢 fixed (Batch 15)` · atlas_compare returned sparse data — wrong field names

**Reproducer:** `atlas_compare(device_a="Wavetable", device_b="Drift", role="pad")` returns:

```json
{
  "device_a": {"name": "Wavetable", "tags": [], "genres": {}, "description": "", "cpu_weight": "unknown", "sweet_spot": "", "use_cases": [...]},
  "device_b": {...similar sparsity...},
  "recommendation": "Both devices are equally suited for pad"
}
```

But `atlas_device_info("Wavetable")` returns rich character_tags, a detailed sonic_description, genre_affinity, starter_recipes, etc.

**Why it's a bug:** `atlas_compare` isn't reading from the same enriched atlas source that `atlas_device_info` uses. The comparison can't do its job with empty tags/descriptions — it just defaults to "Both devices are equally suited."

**Fix direction:** Have `atlas_compare` call the same enrichment-aware lookup that `atlas_device_info` uses, then compute real strengths/weaknesses from the comparable fields (character_tags overlap, genre_affinity overlap, cpu_weight diff).

**Impact:** Medium. Makes atlas_compare unhelpful for decision-making.

---

### BUG-B41 · `🟢 fixed (Batch 15)` · atlas_search ranked "Bass" device highest for "warm analog bass"

**Reproducer:** `atlas_search(query="warm analog bass")` returns:

```json
{"results": [
  {"name": "Bass", "score": 100, "character_tags": ["deep","powerful","focused","punchy","low_end"]},
  {"name": "Dynamic Tube", "score": 50, "character_tags": ["warm","dynamic","tube","responsive","musical"]},
  {"name": "Overdrive", "score": 50, "character_tags": ["warm","crunchy","amp_like"]},
  ...
]}
```

**Why it's odd:** For the query "warm analog bass":
- The "Bass" device has NONE of the query words in its character tags, yet scores 100
- Analog (the synth the user is actually using on the Bass track!) doesn't appear in the top 5
- Drift (another analog-emulating synth) doesn't appear either

The scoring clearly weights the device NAME "Bass" as a perfect match for the word "bass" in the query, ignoring that "warm" and "analog" don't match at all.

**Fix direction:** Weight tag matches and description matches higher relative to name matches. For "warm analog bass", the Analog synth device should rank top because its tags include warmth AND it's an analog-emulating instrument AND it's useful for bass.
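A rebalanced scorer can be sketched as below; the weights and device-record shape are assumptions:

```python
def score_device(query, device, name_w=20, tag_w=30):
    words = set(query.lower().split())
    score = 0
    if device["name"].lower() in words:
        score += name_w                      # a name match is only a hint
    tag_hits = words & {t.lower() for t in device.get("character_tags", [])}
    score += tag_w * len(tag_hits)           # sonic-character matches dominate
    return score
```

With these weights, one tag match already outranks a bare name match, so "Dynamic Tube" (tag "warm") beats "Bass" (name only).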

**Impact:** Medium. Users asking for sonic characteristics get name-match results instead.

---

### BUG-B43 · `🟢 fixed (Batch 16)` · research_technique returned phantom "Unknown Device" findings

**Reproducer:** `research_technique(query="sidechain bass to kick for tight low end", scope="targeted")` returns:

```json
{"findings": [
  {"source_type": "device_atlas", "relevance": 0, "content": "Device: Unknown",
   "metadata": {"device_name": "", "category": ""}}
],
"technique_card": {"method": "Research findings for: sidechain bass to kick for tight low end",
  "verification": ["Check sidechain results with analyzer", "Check bass results with analyzer"]},
"confidence": 0}
```

**Why it's broken:**
1. `findings` contains one phantom entry with `relevance: 0`, `content: "Device: Unknown"`, and empty device_name/category — a malformed default entry, not actual search output
2. `technique_card.method` is a template-string substitution, not actual research
3. `confidence: 0` — the tool itself reports no useful results
4. The verification steps are generic placeholders derived from query keywords

**Expected:** For "sidechain bass to kick", the atlas should return:
- Compressor device info (sidechain capability, threshold/ratio/attack recipes)
- Glue Compressor info
- Ableton's native sidechain routing guide
- Related memory techniques

The tool's own output lists relevant devices in the `technique_card.devices` array (Compressor, Glue Compressor, Auto Filter, Operator, Analog) — so it DOES know the devices, but doesn't flow them into `findings`.

**Fix direction:** Audit `mcp_server/tools/_research_engine.py` — the atlas-search step is likely returning raw enrichment data that the findings builder ignores, emitting a default "Unknown Device" template instead.

**Impact:** High. The whole research engine returns junk for a core workflow.

---
|
|
810
|
+
|
|
811
|
+
### BUG-B44 · `🟢 fixed (Batch 12)` · create_preview_set "strong" variant missing compiled_plan

**Reproducer:** `create_preview_set(request_text="make this more magical and dusty")` returns 3 variants:

- **safe** — has `compiled_plan` with `move_id: "make_punchier"`, 2 steps
- **strong** — has `move_id: "make_kick_bass_lock"` but **NO compiled_plan field**
- **unexpected** — has `compiled_plan` with `move_id: "reduce_repetition_fatigue"`, 1 step

**Why it's a bug:** Per the livepilot-core skill's Wonder Mode routing section:

> "Do not describe a branch as previewable unless it has a valid `compiled_plan`"

The strong variant is shown with `status: "pending"` and labels that imply executability ("Best balance of impact and safety") — but it silently lacks a compiled_plan. A user committing this variant would hit a missing-plan error.

**Fix direction:** In `mcp_server/preview_studio/engine.py`, when building variants, ensure every variant gets a compiled plan OR explicitly marks `executable: false` with a reason.

**Impact:** Medium. Leads to silent execution failures or misleading UI.

---
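A minimal sketch of the fix direction above, assuming a hypothetical `finalize_variant` helper in the variant builder (the function name and field names are illustrative, not the shipped code): every variant either carries a plan with steps or is explicitly marked non-executable with a reason.

```python
# Hypothetical sketch: a variant without a usable compiled_plan must be
# marked executable=False with a blocked_reason, never left ambiguous.
def finalize_variant(variant: dict) -> dict:
    plan = variant.get("compiled_plan")
    if plan and plan.get("steps"):
        variant["executable"] = True
    else:
        variant["executable"] = False
        variant["blocked_reason"] = "no compiled_plan produced for move %r" % variant.get("move_id")
    return variant
```

With this guard in place, a client can filter on `executable` before offering a commit action instead of discovering the missing plan at execution time.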
### BUG-B45 · `🟢 fixed (Batch 12)` · create_preview_set variants had empty user-facing description fields

**Reproducer:** Each variant in the preview set returns:

```json
{
  "summary": "",
  "what_changed": "",
  "render_ref": "",
  "why_it_matters": "Best balance of impact and safety",
  "what_preserved": "Maintains Glitch Chops (lead role)..."
}
```

**Why it's a bug:** `why_it_matters` is populated (useful!) and `what_preserved` is populated. But `what_changed` is empty — the USER needs to know what the variant actually CHANGES, not just why it matters. That's the primary decision criterion. `summary` and `render_ref` are also empty.

**Fix direction:** The compiled_plan has step descriptions like "Read current levels for all tracks" and "Verify all tracks still producing audio". Aggregate the plan's step descriptions OR the move's `intent` field into `what_changed`. Example:

```python
variant["what_changed"] = compiled_plan.get("intent", "") or \
    " → ".join(s["description"] for s in compiled_plan["steps"])
```

**Impact:** Medium-High. Preview sets are core UX. Variants without `what_changed` are unusable for creative decisions.

---
### BUG-B46 · `🟢 fixed (Batch 12)` · generate_constrained_variants returned empty-move variants

**Reproducer:** `generate_constrained_variants(request_text="reduce energy without losing groove", constraints=["subtraction_only"])` returns 3 variants, all with:

```json
{"move_id": "", "what_preserved": "... | Constraints: subtraction_only"}
```

Compare to the unconstrained `create_preview_set`, which populated real `move_id` values (make_punchier, make_kick_bass_lock, reduce_repetition_fatigue).

**Why it's a bug:** The constraint filter appears to eliminate ALL available moves that match "subtraction_only", leaving variants with no executable plan. The tool says `"note": "Variants with violating plans have been filtered"` — but instead of reporting zero variants, it still returns 3 shell variants with empty move_ids.

**Fix direction:** Either:

1. Make the constraint filter more lenient — if no move matches, find the closest "subtraction-like" move (e.g., `tighten_low_end` involves reducing sub mud)
2. Return an empty variants list plus an explanatory note: "No moves match constraints [subtraction_only] — try loosening constraints"
3. Mark the variants explicitly as `executable: false` with a `blocked_reason` field

**Impact:** Medium. Constrained variant generation is silent about its failures.

---
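Fix option 2 above can be sketched as follows; this is a hedged illustration (the `tags` field and `filter_moves` name are hypothetical, not the shipped schema): when filtering removes every candidate move, say so explicitly instead of emitting shell variants.

```python
# Hypothetical sketch: report an empty result with a reason rather than
# returning 3 variants whose move_id is "".
def filter_moves(moves: list, constraints: list) -> dict:
    surviving = [m for m in moves if all(c in m.get("tags", []) for c in constraints)]
    if not surviving:
        return {
            "variants": [],
            "note": f"No moves match constraints {constraints} — try loosening constraints",
        }
    return {"variants": surviving, "note": ""}
```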
### BUG-B49 · `🟢 fixed (Batch 14)` · analyze_sample now runs real offline spectral analysis

**Reproducer:** `analyze_sample(file_path="/Users/.../JJP_90SS2_86_vocal_lead_hurt_you_Dm.wav")` returns:

```json
{"key": "Dm", "key_confidence": 0.5, "bpm": 86, "bpm_confidence": 0.5,
 "material_type": "vocal", "material_confidence": 0.4,
 "frequency_center": 0, "frequency_spread": 0, "brightness": 0,
 "transient_density": 0, "duration_seconds": 0, "has_clear_downbeat": false}
```

Every spectral/temporal field is zero. Key/BPM/material come from filename parsing (confidence 0.5 = filename-only).

**Why it's a bug:** The tool's own docstring says "Falls back to filename-only analysis if M4L bridge unavailable." But `check_flucoma` confirms all 6 FluCoMa streams are active. The bridge IS available. The tool defaults to filename-only even when proper analysis should be possible.

**Fix direction:** In `mcp_server/sample_engine/analyzer.py`, when `file_path` is given, read the file via soundfile/librosa (offline — no M4L needed) and compute:

- duration (trivial — frames read / sample rate)
- spectral centroid + spread (numpy FFT)
- transient density (onset detection via librosa)
- has_clear_downbeat (tempo estimation)

The M4L bridge isn't even the right dependency for file-based analysis — that's how `analyze_loudness` works offline via numpy.

**Impact:** High. Sample analysis is the foundation for sample-engine decisions. Returning zeros means every downstream critic has no real data.

---
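The centroid/spread/duration metrics named in the fix direction can be sketched with pure numpy. This is a minimal illustration, assuming mono float samples are already loaded (e.g. via `soundfile.read`); the function name is hypothetical and the shipped analyzer may weight things differently.

```python
import numpy as np

def offline_spectral_stats(samples: np.ndarray, sample_rate: int) -> dict:
    # Duration is just frames / sample rate -- no bridge needed.
    duration = len(samples) / sample_rate
    # Magnitude spectrum via real FFT, normalized to a distribution.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    weights = spectrum / (spectrum.sum() or 1.0)
    # Spectral centroid = weighted mean frequency; spread = weighted std dev.
    centroid = float((freqs * weights).sum())
    spread = float(np.sqrt(((freqs - centroid) ** 2 * weights).sum()))
    return {"duration_seconds": duration,
            "frequency_center": centroid,
            "frequency_spread": spread}
```

Transient density and downbeat estimation would layer librosa onset/tempo calls on top of the same loaded buffer.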
### BUG-B51 · `🟢 fixed (Batch 16)` · compare_phrase_impact returned identical scores for distinct sections

**Reproducer:** `compare_phrase_impact(section_indices=[2, 5], target="drop")` on Deep Flow (sec_02) vs Sun Peak (sec_05):

```json
{"rankings": [
  {"section_index": 2, "section_name": "Deep Flow", "composite_impact": 0.285,
   "arrival_strength": 0.3, "anticipation_strength": 0.2, "contrast_quality": 0,
   "repetition_fatigue": 0.5, "section_clarity": 0.7, "groove_continuity": 0.7,
   "payoff_balance": 0.25},
  {"section_index": 5, "section_name": "Sun Peak", "composite_impact": 0.285, ...identical}
 ],
 "delta_analysis": {"strongest": "Deep Flow", "weakest": "Sun Peak",
  "composite_delta": 0, "biggest_gap_dimension": ""}}
```

Every single dimension is identical. Different sections, different clip content, but the phrase analyzer can't tell them apart.

**Why it's a bug:** Deep Flow has active tracks `[0,1,2,3,4,5,6,7,8]` and Sun Peak has `[0,1,2,3,4,5,6,7,8]` — the same track set but different clips (confirmed by different arrangement clip names for Pad Lush across sections). The phrase analyzer is likely reading only section-level energy/density (both 0.9 — identical) rather than the actual clip/note contents.

**Fix direction:** In `score_phrase_impact` (the per-section tool that `compare_phrase_impact` wraps), read the actual NOTE data from clips in each section to differentiate. Section energy+density alone isn't enough — two sections with the same density can have very different impact (e.g., a busy verse vs a held chorus chord).

**Impact:** Medium. `compare_phrase_impact` can't actually compare when sections have similar energy/density.

---
### BUG-B52 · `🟢 fixed (commit 7142319)` · export_clip_midi ignores custom filename parameter

**Reproducer:** `export_clip_midi(track_index=3, clip_index=0, filename="/tmp/livepilot_debug_pad_intro.mid")` returns:

```json
{"file_path": "/Users/visansilviugeorge/Documents/LivePilot/outputs/midi/livepilot_debug_pad_intro.mid",
 "note_count": 6, "duration_beats": 30, "tempo": 119}
```

The file was written to the default `~/Documents/LivePilot/outputs/midi/` directory, not the specified `/tmp/` path. Only the basename was respected; the dirname was overridden.

**Why it's a bug:** The `filename` parameter is documented as "Auto-generates filename from track/clip if not provided." When it is provided, users expect their path to be honored. Instead, the tool splits the input into dirname+basename, discards the dirname, and uses its own default output directory.

**Fix direction:** In `mcp_server/tools/_midi_io_engine.py::export_clip_midi`, respect the full absolute path when provided. If the user writes `/tmp/foo.mid`, write to `/tmp/foo.mid`, not `~/Documents/LivePilot/outputs/midi/foo.mid`.

**Impact:** Low-Medium. Users who try to export to specific locations get a silent redirect, which creates unexpected files in the Documents tree.

---
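The path-resolution rule in the fix direction can be sketched with `pathlib`; this is a hedged illustration (the helper name and default-directory constant are hypothetical): an absolute path is honored as-is, a bare name lands in the default output directory, and omitting the argument auto-generates a name.

```python
from pathlib import Path

# Hypothetical default used only for bare filenames / omitted argument.
DEFAULT_MIDI_DIR = Path.home() / "Documents" / "LivePilot" / "outputs" / "midi"

def resolve_export_path(filename, default_name: str) -> Path:
    if filename is None:
        return DEFAULT_MIDI_DIR / default_name
    p = Path(filename)
    if p.is_absolute():
        return p                      # user gave a full path -- respect it
    return DEFAULT_MIDI_DIR / p.name  # bare name -- use default directory
```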
### BUG-B54 · `🟢 fixed (Batch 12)` · generate_reference_inspired_variants refuses to run on empty principles

**Reproducer:** Chain:

1. `distill_reference_principles(reference_description="cold 90s hip-hop...")` → returns `principles: []` (BUG-B17)
2. `map_reference_principles_to_song()` → returns `mappings: []`, `mapping_count: 0`
3. `generate_reference_inspired_variants(request_text="...")` → returns 3 variants with `principles_applied: []`, `move_id: ""`, empty `what_changed` / `summary`

**Why it's a bug:** The entire reference-engine chain — distill → map → generate_variants — silently degrades to empty output. Each step accepts the upstream empty data and passes empty data forward. The user gets 3 shell "variants" that claim to be "reference-inspired" but have no reference material driving them.

**Fix direction:** Add failure-cascade detection. If `distill_reference_principles` returns empty, subsequent tools should:

- Refuse to run and return an explanatory error
- OR fall back to a generic variant builder (not branded as reference-inspired)
- OR emit a prominent warning in the output that the reference chain is broken

**Impact:** High. This is a multi-step workflow that can look like it's working while producing nothing useful.

---
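The first fix option (refuse to run) can be sketched as a guard clause at the top of the downstream tool. All names and fields here are hypothetical illustrations of the pattern, not the shipped signatures.

```python
# Hypothetical sketch: fail loudly when the upstream principle chain
# produced nothing, instead of passing emptiness forward.
def generate_reference_inspired_variants(principles: list, request_text: str) -> dict:
    if not principles:
        return {
            "error": "reference chain broken: distill_reference_principles returned no principles",
            "variants": [],
            "hint": "re-run distill_reference_principles or use a generic variant builder",
        }
    return {"error": "", "variants": [{"principles_applied": principles, "request": request_text}]}
```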
### BUG-B53 · `🟢 fixed (Batch 12)` · wonder_mode vs create_preview_set parity — preview variants no longer shells

**Reproducer:** Same session; two similar tools produce dramatically different quality.

**`enter_wonder_mode(request_text="...")`** variant output:

```json
{"variant_id": "wm_..._strong", "move_id": "open_chorus",
 "what_changed": "Targets energy (+0.4), width (+0.3), contrast (+0.3)",
 "compiled_plan": {"move_id": "open_chorus", "step_count": 8, "steps": [...]},
 "score": 0.799, "score_breakdown": {"taste": 0.6, "identity": 0.7, "novelty": 0.946, "coherence": 1},
 "distinctness_reason": "Different approach: set_track_pan, set_track_send, set_track_volume"}
```

**`create_preview_set(request_text="...")`** variant output (strong):

```json
{"variant_id": "ps_..._strong", "move_id": "make_kick_bass_lock",
 "what_changed": "",          // empty
 "compiled_plan": null,       // MISSING entirely
 "score": 0, ...}             // no breakdown
```

**Why it's a bug:** Both tools generate creative variants. Wonder mode is CORRECT: rich compiled_plan + what_changed + scoring. The preview set has the same three-variant shape (safe/strong/unexpected) but is missing most fields (BUG-B44 + BUG-B45).

**Root cause hypothesis:** Two different code paths in `mcp_server/preview_studio/engine.py` — one for wonder mode, one for direct preview. They should share a common variant builder.

**Fix direction:** Unify variant construction. Wonder mode's flow is the correct template — preview_set should use the same logic.

**Impact:** Medium. Users invoking `create_preview_set` directly (outside wonder mode) get inferior output.

---
### BUG-B50 · `🟢 fixed (Batch 13)` · build_reference_profile style corpus was incomplete

**Reproducer:** `build_reference_profile(style="burial")` returns partial data:

```json
{"source_type": "style",
 "loudness_posture": 0, "spectral_contour": {}, "width_depth": {},
 "density_arc": [0.75],
 "section_pacing": [{"label": "sparse_intro"}, {"label": "gradual_buildup"}, {"label": "sudden_strip_back"}],
 "harmonic_character": "atmospheric_filtered",
 "transition_tendencies": ["conceal", "drift", "punctuate"]}
```

**Why it's partial:** "burial" IS in the built-in style list (confirmed working for `get_style_tactics`), and the section_pacing / harmonic_character / transition_tendencies fields HAVE data. But `loudness_posture: 0`, `spectral_contour: {}`, and `width_depth: {}` are empty — so reference gap analysis against Burial can't compare loudness or spectral character.

**Fix direction:** Extend the built-in style corpus (`mcp_server/reference_engine/styles.py` or similar) with loudness_posture + spectral_contour + width_depth for each style. For Burial: approximately -12 LUFS integrated, a dark spectrum (centroid ~2 kHz), and wide + deep stereo depth.

**Impact:** Medium. Reference gap analysis works partially — structural comparisons work, spectral/loudness don't.

---
### BUG-B42 · `🟢 fixed (Batch 10)` · build_world_model.weak_foundation false-positive during stopped playback

**Reproducer:** `build_world_model()` during `is_playing: false` returns:

```json
{"sonic": {"spectrum": {...all zeros...}, "rms": 0},
 "issues": {"sonic": [{
   "type": "weak_foundation", "severity": 0.6,
   "evidence": ["sub band energy: 0.00 with bass tracks present"]
 }]}}
```

**Why it's wrong:** The sub band energy is 0 because **playback is stopped**, not because the mix has a weak foundation. The critic fires based on spectrum data without checking playback state.

**Fix direction:** In `mcp_server/tools/_agent_os_engine/critics.py::run_sonic_critic`, check `is_playing` before computing spectrum-based critics. When not playing, either skip the sonic critic OR return a "playback_required" issue.

**Impact:** Medium. Users probing `build_world_model` on a static session get misleading "weak_foundation" warnings.

---
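The playback gate described in the fix direction can be sketched as follows. This is a hedged illustration of the pattern: the state shape, threshold, and field names are assumptions, not the shipped critic.

```python
# Hypothetical sketch: spectrum-based checks only run while the transport
# is playing; otherwise return a single "playback_required" issue rather
# than a false weak_foundation warning.
def run_sonic_critic(state: dict) -> list:
    if not state.get("is_playing", False):
        return [{"type": "playback_required", "severity": 0.0,
                 "evidence": ["transport stopped — spectrum data is all zeros"]}]
    issues = []
    sub_energy = state.get("spectrum", {}).get("sub", 0.0)
    if sub_energy < 0.05 and state.get("has_bass_tracks", False):
        issues.append({"type": "weak_foundation", "severity": 0.6,
                       "evidence": [f"sub band energy: {sub_energy:.2f} with bass tracks present"]})
    return issues
```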
### BUG-B38 · `🟢 fixed (Batch 14)` · evaluate_sample_fit frequency_fit critic now marks itself unavailable

**Reproducer:** Same call as B37. Output includes:

```json
"frequency_fit": {
  "score": 0.5,
  "recommendation": "No spectral data — verify frequency fit by ear",
  "adjustments": [{"note": "stub — spectral overlap analysis not yet implemented"}]
}
```

**Why it's a bug (or an unfinished feature):** The tool explicitly returns a "stub" marker. The feature isn't implemented, yet it runs in production returning a default 0.5 score.

**Fix direction:** Either:

1. Implement spectral overlap analysis (read master spectrum + sample spectrum, compute overlap)
2. Remove the critic entirely until implemented (don't return 0.5 as if it's meaningful)
3. Gate the stub behind `capability.available == false` so it returns "unavailable" rather than "fair"

**Impact:** Low (known stub, not a regression), but it degrades evaluate_sample_fit's meaningfulness.

---
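Fix option 1 could start from a simple overlap metric between two band-energy spectra. This is only a sketch under assumptions: both inputs are same-length band-energy arrays, and how the engine maps overlap to a fit score (high overlap may indicate masking) is left to the real implementation.

```python
import numpy as np

# Hypothetical helper: overlap of two normalized spectra.
# 1.0 = identical spectral shape, 0.0 = fully disjoint bands.
def spectral_overlap(master_bands: np.ndarray, sample_bands: np.ndarray) -> float:
    a = master_bands / (master_bands.sum() or 1.0)
    b = sample_bands / (sample_bands.sum() or 1.0)
    return float(np.minimum(a, b).sum())
```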
### BUG-B23 · `🟢 fixed (Batch 8)` · suggest_next_chord figure/quality mismatch

**Reproducer:** `suggest_next_chord(track=3, clip=0)` on the Pad Lush D-minor clip returns:

```json
{
  "key": "D minor",
  "suggestions": [
    {"figure": "IV", "chord_name": "G-minor triad", "quality": "minor", "midi_pitches": [67, 70, 74]},
    {"figure": "V", "chord_name": "A-minor triad", "quality": "minor", "midi_pitches": [69, 72, 76]}
  ]
}
```

**Musical issues:**

1. **IV in D minor is Gm** (G-Bb-D) — the pitches are correct (67 G, 70 Bb, 74 D), but the label IV (uppercase = major) mismatches the quality "minor" → it should be **iv** (lowercase for minor).
2. **V in D minor is A major** (A-C#-E) in common practice. The tool returns A minor (A-C-E, 69/72/76). In modal/natural minor this is **v** (lowercase). Again, an uppercase figure mismatches a minor quality.

**Why it's a bug:** Roman-numeral figures (IV/V) conventionally use uppercase for major chords and lowercase for minor. The tool returns uppercase figures with minor qualities — pick one convention or match them correctly.

**Fix direction:** In the figure-labeling logic, derive case from chord quality: if the triad is minor → lowercase figure (iv, v, vi, etc.). If major → uppercase (IV, V, VI). If diminished → lowercase + "°".

**Impact:** Low-Medium. Musicians reading the figures get confused; downstream progression critics that trust the figure may misclassify.

---
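The casing rule in the fix direction is small enough to show directly; the function name is hypothetical, but the convention it encodes (uppercase major, lowercase minor, lowercase + ° for diminished) is standard Roman-numeral notation.

```python
# Hypothetical sketch: derive the Roman-numeral case from triad quality.
def format_figure(degree_numeral: str, quality: str) -> str:
    numeral = degree_numeral.upper()
    if quality == "major":
        return numeral
    if quality == "minor":
        return numeral.lower()
    if quality == "diminished":
        return numeral.lower() + "\u00b0"   # e.g. vii°
    if quality == "augmented":
        return numeral + "+"
    return numeral
```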
### BUG-B16 · `🟢 fixed (Batch 11)` · get_session_story returned empty after build_song_brain

**Reproducer:** Just called `build_song_brain()`, which returned `brain_id: "a7e6ef3b70a9"` with a full identity_summary. Immediately after, `get_session_story()` returns:

```json
{"song_id": "", "identity_summary": "Dominant texture: drums", ...,
 "threads": [], "recent_turns": [], "mood_arc": [], "total_turns": 0}
```

**Why it's confusing:** Some fields ARE populated (identity_summary matches) but `song_id`, `threads`, and `recent_turns` are empty. Is the session story a separate data store from song_brain? Or is it expected to hydrate from the ledger + threads, which are empty because no moves were recorded?

**Fix direction:**

1. If session_story is meant to be the canonical narrative, it should pull `song_id`/`mood_arc` from the last-built SongBrain
2. If it's ledger-based, document clearly that it's empty on fresh sessions with no action history
3. At minimum, include a `song_brain_id` field so clients know which brain was used

**Impact:** Low. Not a user-blocker, but it wastes trust — the partial population reads as "something's wrong."

---
## E. Cross-engine data consistency

### BUG-E1 · `🟢 fixed (Batch 3)` · project_brain.role_graph empty — section_id key mismatch

**Reproducer:** `build_project_brain()` returned `role_graph: {"roles": [], "confidence": {"overall": 0, ...}}` while `analyze_composition()` on the same session returned 49 role assignments.

**Root cause:** `build_role_graph` expects a `notes_map` keyed on the same section IDs that `build_section_graph_from_scenes` emits (`sec_{i:02d}` using the raw enumerate index). `build_project_brain` in `tools.py` was building the notes_map keyed on the scene display name instead (`scene.get("section_id") or scene.get("name") or f"scene_{idx}"`). Every `notes_map.get("sec_00", {})` lookup missed, `active_tracks` stayed empty, and role inference produced zero entries.

A second, related issue: `_ce_build_sections` skips unnamed scenes, which means section IDs can be non-contiguous (`sec_00`, `sec_02`). The notes_map loop must skip unnamed scenes *and* use the raw enumerate index to keep the IDs aligned.

**Fix (landed):** `mcp_server/project_brain/tools.py::build_project_brain` now builds `notes_map` with the same `f"sec_{scene_idx:02d}"` scheme, skipping unnamed scenes but preserving the raw index. Regression tests `test_notes_map_keys_match_section_ids` and `test_empty_scene_names_advance_section_counter_consistently` in `tests/test_project_brain.py` enforce the alignment invariant.

**Impact:** Medium. Engines that rely on `project_brain` for role info now see the same data as `analyze_composition`.

---
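The alignment invariant described above can be sketched in a few lines: the section ID derives from the raw enumerate index even when unnamed scenes are skipped, so IDs become non-contiguous rather than shifting. The helper name is hypothetical.

```python
# Hypothetical sketch of the E1 keying rule: skip unnamed scenes but key
# on the RAW index, so "sec_00", "sec_02" stay aligned across builders.
def section_ids(scenes: list) -> list:
    return [f"sec_{idx:02d}" for idx, scene in enumerate(scenes) if scene.get("name")]
```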
### BUG-E3 · `🟢 fixed (Batch 5)` · get_harmony_field hijacked by percussion tracks

**Reproducer (live Dabrye session, section "Intro Dust"):**
`get_harmony_field(section_index=0)` returned `{"key": "C", "mode": "major", "chord_progression": ["C chord"] × 4}` while `analyze_harmony(track=3, clip=0)` on the Pad Lush clip in the same section returned `{"key": "D minor", "chords": ["D-minor triad", ...]}`. Two tools, same section, contradictory answers.

**Root cause (diagnosed on the live session):** `get_harmony_field` iterated `section.tracks_active` in track-index order and took the **first track with notes** to lock in scale info (`if not scale_info:` guard). Track 1 "Perc Hats" came before track 3 "Pad Lush" in active_tracks. Perc Hats' Ghost Hats clip contained four MIDI notes, all at pitch 60 (C4) with 0.1-beat durations — a single-pitch staccato percussion trigger. `detect_key` on that pool matched "C major" (C is in the C major scale and there's no disambiguation), and the loop never consulted the Pad Lush track's actual D/F/A harmony.

**Fix (landed):**

1. New helper `harmonic_score(notes, track_name)` in `mcp_server/tools/_composition_engine/harmony.py` returning 0.0–1.0. It combines unique pitch classes, median duration, pitch range, minimum pitch, and track-name hints (`"kick"/"hat"/"perc"/"drum"` etc. vs `"pad"/"bass"/"lead"/"keys"`).
2. `mcp_server/tools/composition.py::get_harmony_field` now builds a scored candidate list of all active tracks, sorts by score descending, **aggregates notes from every track ≥ 0.3** for key detection, and uses the **top-scoring single track** for chord extraction. It falls back to the highest-scoring track if nothing passes the threshold.

**Verification (live session):** To be re-measured after plugin-cache sync. Scoring on the real data: Perc Hats `0.15` (below threshold), Pad Lush `0.95` (above) — the aggregator consults only the pad.

Regression tests (`tests/test_composition_engine.py::TestHarmonicScoreBugE3` + `tests/test_composition_tools.py::TestGetHarmonyFieldE3`):

- Percussion hits score <0.3
- Sustained Dm triad scores >0.6
- Track-name nudges bounded in [0,1]
- Monophonic bass passes threshold (harmonic, not drum)
- Long drone note not misclassified as drum
- Pad decisively beats perc in the Dabrye reproducer
- Integration: full `get_harmony_field` on fake-Ableton with perc + pad returns D/F/A tonic, not "C major"
- Integration: chord_progression reflects pad content, not perc

**Impact:** High — now closed. Every harmonic critic that uses `get_harmony_field` (transition analysis, voice-leading, chromatic-mediant suggestions) gets the true key.

---
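A `harmonic_score`-style heuristic can be sketched as below. The weights and thresholds here are illustrative assumptions, not the shipped values; the point is the shape of the signal: single-pitch, short-duration, drum-named material scores low, while sustained multi-pitch pad/bass material scores high.

```python
import statistics

# Illustrative sketch of a harmonic-content heuristic in [0, 1].
def harmonic_score(notes: list, track_name: str) -> float:
    if not notes:
        return 0.0
    pitch_classes = {n["pitch"] % 12 for n in notes}
    median_dur = statistics.median(n["duration"] for n in notes)
    score = 0.0
    score += min(len(pitch_classes) / 4.0, 1.0) * 0.5   # pitch-class variety
    score += min(median_dur / 1.0, 1.0) * 0.3           # sustained notes
    name = track_name.lower()
    if any(hint in name for hint in ("kick", "hat", "perc", "drum")):
        score -= 0.3                                     # drum-name penalty
    if any(hint in name for hint in ("pad", "bass", "lead", "keys")):
        score += 0.2                                     # harmonic-name bonus
    return max(0.0, min(1.0, score))
```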
### BUG-E4 · `🟢 fixed (Batch 6)` · get_performance_state role labels differed from analyze_composition

**Reproducer:** Same sections, different role labels:

| Section | analyze_composition.section_type | get_performance_state.role |
|---|---|---|
| Intro Dust | intro | intro ✓ |
| Groove Build | build | build ✓ |
| Deep Flow | **drop** | **verse** |
| Breakdown | breakdown | breakdown ✓ |
| Re-Entry | verse | **chorus** |
| Sun Peak | **drop** | **chorus** |
| Outro Dust | outro | outro ✓ |

**Why it's wrong:** 3 of 7 sections disagree. "Drop" vs "verse" vs "chorus" — these aren't equivalent terms. The performance engine and the composition engine have independent role-inference logic that produces contradictory labels.

**Fix direction:** Same as BUG-B21 — unify section-role classification in one place (`_composition_engine.sections`) and have the performance engine import it instead of re-deriving.

**Impact:** High. A critic told to "make the chorus punchier" would act on section 5 (Sun Peak) via the performance engine but section 2 (Deep Flow) via the composition engine. Silent misfire.

---
### BUG-E6 · `🟢 fixed (Batch 6)` · build_world_model vs check_flucoma disagreed on FluCoMa availability

**Reproducer:**

```
check_flucoma() → {"flucoma_available": true, "active_streams": 6,
  "streams": {"spectral_shape": true, "mel_bands": true, "chroma": true,
   "onset": true, "novelty": true, "loudness": true}}
build_world_model().technical → {"flucoma_available": false}
```

**Why it's wrong:** One says yes, the other says no, with 6 confirmed active streams sending data.

**Fix direction:** `build_world_model.technical.flucoma_available` should call `check_flucoma` internally OR read the same bridge state. Currently it infers availability from a different signal (possibly the `capability_state.flucoma` domain, which isn't populated correctly).

**Impact:** Medium. Downstream engines that use `build_world_model.technical` to decide whether to request FluCoMa data will falsely skip it.

---
### BUG-E5 · `🟢 fixed (Batch 6)` · get_performance_state energy_level values differed from get_section_graph.energy

See BUG-B21 for the full cross-engine energy table. This is the specific manifestation in the performance engine — it reports energies `[0.2, 0.6, 0.4, 0.3, 0.7, 0.7, 0.2]` while the section graph reports `[0.7, 0.9, 0.9, 0.5, 0.6, 0.9, 0.4]`. Not just scaled, but reordered (Deep Flow is the peak in composition, mid-tier in performance).

**Impact:** High. Same root cause as B21/E4. One engine's peak is another engine's dip.

---
### BUG-E2 · `🟢 fixed (Batch 3)` · project_brain.automation_graph empty — didn't scan clip envelopes

**Reproducer:** `build_project_brain()` returned `automation_graph.automated_params: []` while `get_clip_automation(track=3, clip=0)` on the same Pad Lush clip returned 3 real envelopes (Send A, Osc 1 Pos, Filter 1 Freq).

**Root cause:** `build_automation_graph` was only scanning `track_infos[].devices[].parameters[].is_automated` — a flag that reflects mapping state (whether a parameter is routable to automation), NOT whether an actual envelope exists on any clip. Automation envelopes in Live live on the Clip object, not on the device parameter. The previous logic could never find them.

**Fix (landed):**

1. `mcp_server/project_brain/tools.py::build_project_brain` — walk each session clip slot, call `get_clip_automation(track, clip)`, and aggregate the envelope descriptors into a list keyed by `sec_{scene_idx:02d}`.
2. `mcp_server/project_brain/builder.py::build_project_state_from_data` — accepts a new `clip_automation` param and forwards it to the automation-graph builder.
3. `mcp_server/project_brain/automation_graph.py::build_automation_graph` — now accepts `clip_automation`. Clip envelopes are the source of truth; device-hint entries are only added if they don't duplicate an envelope entry. Each entry is tagged `source="clip_envelope"` or `source="device_hint"` for downstream disambiguation.
4. `density_by_section` is now computed from real per-section envelope counts (normalized by the max) instead of the section-density × track-ratio approximation. It falls back to the old logic if there is no clip data.

Regression tests (in `tests/test_project_brain.py::TestBugE2AutomationGraphWiring`):

- `test_clip_envelopes_populate_automation_graph`
- `test_no_duplicate_when_both_device_hint_and_envelope_match`
- `test_density_by_section_reflects_real_envelope_counts`

**Impact:** Medium. Critics that reason about "is this track under-automated?" now see reality.

---
## C. Audit follow-ups (from fresh-audit pass, v1.10.6)

### BUG-C1 · `🔴 open (deferred)` · analyzer.py refactor skipped

`mcp_server/tools/analyzer.py` is 971 LOC. It was deferred from the 1.10.6 engine-modularization pass because the file has `@mcp.tool()` decorations, and relocating them could disturb FastMCP's tool-registration order.

**Fix direction:** Same package-facade pattern as `_composition_engine` and `_agent_os_engine`, but leave the `@mcp.tool()` functions in a single sub-module and split only the helpers (cursor management, patch utilities, spectrum adapters).

---
### BUG-C2 · `⚪️ low-priority` · sample_engine/techniques.py size

`mcp_server/sample_engine/techniques.py` is 908 LOC, but it's a data catalog (30+ `_register(...)` calls). Splitting doesn't improve anything materially — the data just spreads across more files.

**Fix direction:** If split, the minimum surgery is two files — `_catalog.py` (registry + public API) and `_data.py` (all `_register()` calls). Low ROI.

---
### BUG-C3 · `🟡 open` · FastMCP private-internals coupling
|
|
1227
|
+
|
|
1228
|
+
`mcp_server/server.py::_get_all_tools()` reaches into FastMCP private attributes (`_tool_manager._tools` for 0.x, `_local_provider._components` for 3.x) to iterate the tool registry. Pinned to `fastmcp>=3.0.0,<3.3.0` specifically because of this fragility.
|
|
1229
|
+
|
|
1230
|
+
**Fix direction:**
|
|
1231
|
+
1. File the upstream FR (see BUG-C4)
2. When upstream adds a public API, migrate + remove the version ceiling
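
Until the public API lands, a defensive version can at least fail loudly instead of silently. A sketch (the private attribute paths are the ones named above; the surrounding code is illustrative, not the real `server.py`):

```python
def get_all_tools(mcp) -> dict:
    """Enumerate registered tools across known FastMCP internals, failing loudly otherwise."""
    # FastMCP 0.x keeps tools on mcp._tool_manager._tools
    manager = getattr(mcp, "_tool_manager", None)
    if manager is not None and hasattr(manager, "_tools"):
        return dict(manager._tools)
    # FastMCP 3.x keeps components on mcp._local_provider._components
    provider = getattr(mcp, "_local_provider", None)
    if provider is not None and hasattr(provider, "_components"):
        return dict(provider._components)
    raise RuntimeError(
        "FastMCP internals changed — update get_all_tools() or migrate to the "
        "public tool-enumeration API once upstream ships it"
    )
```

The explicit `RuntimeError` turns a future upstream refactor into an obvious startup failure rather than an empty tool list.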

---

### BUG-C4 · `🔴 open` · Upstream FastMCP FR not filed

The draft lives at `docs/FASTMCP_UPSTREAM_FR.md` (local-only per the `docs/*.md` gitignore). It needs to be filed as a GitHub issue on https://github.com/jlowin/fastmcp asking for a public tool-enumeration API.

**Action:** `gh issue create --repo jlowin/fastmcp --body-file docs/FASTMCP_UPSTREAM_FR.md --title "Feature request: public tool-enumeration API"`

---

## D. Current session (Dabrye Core) creative trackers

### BUG-D1 · `🟡 needs listen-test` · Splice vocal D#min vs Dm session

Track 6 is the Splice audio clip `AU_THF2_128_vocal_full_female_chorus_brains_in_the_body_dry_D#min.wav`. The filename claims D#min, but the session is in Dm.

**Decision tree:**
1. Unmute and listen — if the D# reads as ambient fog (at volume 0.48, no sends, dry), keep as-is
2. If it reads as dissonant, open the clip's Sample tab and set Transpose (coarse) to **−1 semitone**
3. Alternative, if Option 2 glitches the audio: insert a pitch-shift M4L device

Blocked on BUG-A4 + A5 to automate detection/correction.
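
The detection step that A4/A5 would automate is simple arithmetic: compare the filename's labeled root against the session root and pick the smallest signed shift. A sketch (helper names are hypothetical):

```python
# Pitch classes for the enharmonic spellings Splice filenames use.
PITCH_CLASSES = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4,
                 "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8, "A": 9,
                 "A#": 10, "Bb": 10, "B": 11}

def transpose_to_session(sample_root: str, session_root: str) -> int:
    """Smallest signed semitone shift that pulls the sample's key onto the session key."""
    delta = (PITCH_CLASSES[session_root] - PITCH_CLASSES[sample_root]) % 12
    return delta - 12 if delta > 6 else delta
```

For this clip, `transpose_to_session("D#", "D")` gives the −1 semitone called out in Option 2.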

---

### BUG-D2 · `🔴 open (creative opportunity)` · No clip automation

`build_project_brain`'s automation_graph is empty — no filter sweeps, volume curves, or energy arc in any clip. Missing classic Dabrye-style automation moves:
- Filter sweep on RHODES before VOX LEAD entry ("vacuum before reveal")
- Volume crescendo on PERC into a drop
- Delay feedback automation on VOX LEAD for dub-style handoffs

**Action:** After VOX LEAD is unmuted + balanced, write automation envelopes to fill the emotional-arc gap (currently scored 0.637 with payoff_strength 0.0).

---

### BUG-D3 · `🟢 fixed (user-side)` · VOX LEAD Simpler Warp

**Originally open:** Simpler was in Classic/Trigger mode with an 86 BPM sample in a 90 BPM session → 4.7% tempo drift.
**Fixed:** User clicked Warp toggle in Simpler's Sample tab (Complex Pro mode), 2026-04-17.

---

## Session-resolved bugs (1.10.6 release)

These were closed during the v1.10.6 cleanup — listed here for historical reference.

- ✅ **79 silent `except Exception: pass` sites** across `mcp_server/` — converted to `logger.debug("<func> failed: %s", exc)` breadcrumbs
- ✅ **Credit-floor docstring lying** in `SpliceGRPCClient.download_sample()` — defensive guard added via `can_afford(1, budget=1)` check
- ✅ **Version drift** across 13 files — bumped 1.10.5 → 1.10.6 everywhere, including the .amxd binary patch
- ✅ **livepilot.mcpb committed to git** — `git rm --cached` + added to `.gitignore`
- ✅ **CI single Python version** — added 3.11 alongside 3.12 (covers Ableton 12.3's embedded Python)
- ✅ **OSC convention undocumented** — added contract comments to both `livepilot_bridge.js` and `mcp_server/m4l_bridge.py`
- ✅ **`_composition_engine.py` (1530 LOC)** — split into a 6-module package with facade (`models`, `sections`, `critics`, `gestures`, `harmony`, `analysis`)
- ✅ **`_agent_os_engine.py` (947 LOC)** — split into a 6-module package (`models`, `world_model`, `critics`, `evaluation`, `techniques`, `taste`); `_clamp` promoted to models to resolve a circular dependency

---

## Debug session notes — 2026-04-17 (second session, 119 BPM project)

Second project loaded in the same session (Prefuse73-adjacent, 10 tracks, 49 clips, 7 named sections: Intro Dust → Groove Build → Deep Flow → Breakdown → Re-Entry → Sun Peak → Outro Dust). Exercised a wide set of MCP tools to surface bugs. 6 new bugs logged (B5-B9, E1-E2).

### Project fingerprint
- 119 BPM, 4/4
- 10 tracks (Kick, Perc Hats, Congas, Pad Lush, Glitch Chops, Snare Rim, Bass, Texture, Atmo FX, Splice vocal)
- 2 return tracks (A-Verb Space, B-Delay Dub)
- 8 scenes (one empty), 49 total clips
- **Key: D minor** (confirmed 0.874 confidence via Pad Lush MIDI)
- **Auto-detected key from master bus: C# major** (possible analyzer misdetection on D-minor content — not a bug necessarily, but worth noting)

### Things that work well (good signals — don't regress these)

- ✅ `analyze_composition` correctly identified 7 sections + 49 role assignments across all sections
- ✅ `identify_scale` returns a modal ladder: D minor at 0.874 → 7 modal alternatives at 0.751 (same pitch collection, different tonics). Proper Krumhansl-Schmuckler behavior.
- ✅ `get_clip_automation` correctly enumerates envelopes when queried directly: Pad Lush "Intro Wash" has 3 envelopes (Send A, Wavetable Osc 1 Pos, Wavetable Filter 1 Freq)
- ✅ `propose_next_best_move` returns sensible semantic-move ranks sorted by match_score
- ✅ `analyze_mix` flagged a legitimate `support_too_loud` on Texture (vol 0.60 vs avg 0.38)
- ✅ `get_master_spectrum` returns real content with low age_ms while the session was playing
- ✅ `find_and_load_device` works cleanly when `insert_device` fails (graceful fallback path validated)
- ✅ `memory_list` returns 12 existing techniques across sessions — including prior Prefuse73 work, a "CRITICAL: verify track meters" preference, and the 2026-04-17 bug tracker + Dabrye template

### Session observations (findings, not bugs)

- **Pad Lush** uses Wavetable (InstrumentVector) + Saturator (Drive 51%, Dry/Wet 45%) + Echo (Duck On enabled, L/R Div -3/-3 asymmetric, Wobble On amount 0.15). Well-sculpted wet pad.
- **Bass** uses UltraAnalog (OSC1 saw, octave -1, level 85%; OSC2 sine, octave -2, level 70%; F1 LPF 24dB, drive 2, freq 28%; glide 15%) + Auto Filter Legacy. Classic analog bass architecture.
- **Texture track** is the loudest at 0.60 — a candidate for gain staging
- **Scenes 1+2 (Groove Build + Deep Flow) both at energy 0.9** → legitimate `no_adjacent_contrast` form issue
- **Scenes 3+4 (Breakdown + Re-Entry) at 0.5/0.6** → another `no_adjacent_contrast` issue
- **Splice vocal** (track 9) contains the same `JJP_90SS2_86_vocal_lead_hurt_you_Dm` sample reused from the Dabrye session
- **Fatigue level: 0.93** across the 8-motif arrangement — but loop-based scene design means high motif recurrence by design, so the critic is over-triggering here (possible BUG-B1-adjacent tuning issue)

### Current bug totals

| Category | Open | Fixed | Notes |
|---|---|---|---|
| **A** server/LOM gaps | 2 | 3 | A1/A4/A5 fixed in Batch 2. A2, A3 remain (M4L-bridge route — needs .amxd re-freeze). |
| **B** critics/analyzers | **1** | **45** | **Batches 4-17**: 45 bugs closed across chord naming, harmonization, critics, variants, reference engine, sample engine, atlas, and docs. Only B36 remained open; its fix lands in Batch 17 (next commit). |
| **C** audit follow-ups | 3 | 0 | v1.10.6 deferred items (refactor + upstream coupling). |
| **D** creative trackers | 2 | 1 | Dabrye session D3 fixed; D1 unblocked by A5; D2 is a creative opportunity, not a code bug. |
| **E** cross-engine consistency | 0 | 6 | **All E bugs closed.** |
| **Total** | **7** | **57** | **BUGS.md near-empty**: 56 bugs fixed across 17 batches + 1 bonus robustness fix. The remaining 7 are all deferred external dependencies (A2/A3 M4L bridge .amxd re-freeze, C1/C3/C4 upstream/refactor, D1/D2 creative workflow items). |

### Additional findings (wave 3 — song brain + transitions + theory + FluCoMa)

**Big positive discovery:** Arrangement view is **fully built out** in this session — Pad Lush alone has 43 arrangement clips across 960 beats with poetic names ("Intro Wash — distant pad", "Sun Wash — harmonic bed", "Full Wash — the one chord moment", "Float — the stillness", "Sun Chord — the harmonic peak", "Farewell — the pad says goodbye"). Both session-view clips and arrangement view are populated — session view feels like the "working draft" and arrangement view is "the final pass." LivePilot's composition critics only read session view, missing this richness.

**Working correctly (confirmed):**
- ✅ `build_song_brain` returns a structured model with identity, sacred elements, energy arc, and open questions
- ✅ `detect_identity_drift` correctly reports 0 drift when nothing has changed since the last brain
- ✅ `analyze_transition` produces a structured archetype + scoring + targeted issues (despite BUG-B15)
- ✅ `get_transition_analysis` enumerates all 6 adjacent-section boundaries with specific recommendations per boundary
- ✅ `detect_theory_issues` correctly finds zero issues for a legitimate D minor pad clip (no parallel fifths, in-key, clean)
- ✅ `check_safety` properly escalates `delete_track` to "caution" + requires_confirmation=true when affecting 10 tracks
- ✅ `get_automation_recipes` returns 15 recipes (filter_sweep_up/down, dub_throw, tape_stop, build_rise, sidechain_pump, fade_in/out, tremolo, auto_pan, stutter, breathing, washout, vinyl_crackle, stereo_narrow) — **rich creative library**
- ✅ `analyze_for_automation` correctly identifies device types (Drift → timbre_evolution, Auto Filter → filter_sweep, sends → dub_throw) and maps them to recipe names
- ✅ `get_arrangement_clips` returns precise clip timing (43 entries on Pad Lush with start/end times, lengths, loop states)
- ✅ `get_spectral_shape` (FluCoMa) returns real descriptor values (centroid 979 Hz, spread 1390, skewness 3.98, crest 35.57) — the FluCoMa bridge IS functional for this tool

**Unverified — playback-state dependent (re-verify when audio is confirmed playing):**
- ⚠️ `get_chroma` returned all zeros (the session may have paused between probes)
- ⚠️ `get_onsets` returned `detected: false`
- ⚠️ `get_mel_spectrum` values are in the 1e-6 range (essentially silent)
- ⚠️ `analyze_for_automation` returned an all-zero spectrum
- These are consistent with playback being stopped during the probe, not tool bugs. Re-verify with confirmed-playing audio.

---

### Additional findings (wave 4 — reference engine + generative + performance + phrase)

**Biggest finding — 3 engines disagree on section "energy" and "role"** (BUG-B21/E4/E5). Composition engine, performance engine, and emotional arc engine each compute these fields independently with different algorithms. They even disagree on *ordering* (Deep Flow is a peak in composition but mid-tier in performance). Anything that mixes signals from multiple engines silently misfires.

**Reference engine is substantially limited** (BUG-B17/B18/B19):
- Only 6 built-in styles: burial, daft punk, techno, ambient, trap, lo-fi
- Prefuse73 (for which the user has 3 saved techniques!) returns NOT_FOUND
- `distill_reference_principles` accepts any description text but returns empty fields — text-to-principle distillation is either unimplemented or gated on style lookup
- The memory store and the style_tactics store are disconnected — saved techniques don't feed back as tactics

**BUG-E3 — `get_harmony_field` returns WRONG KEY** for the same underlying clip that `analyze_harmony` correctly analyzes. Section 0 "Intro Dust" comes back as C major with 4 identical "C chord" entries, while direct analysis of the Pad Lush MIDI returns D minor with proper chord content. The section-level harmony engine is reading the wrong data source.
**Working correctly (wave 4 positives):**
- ✅ `get_section_graph` returns the same 7 sections as `analyze_composition` (internally consistent)
- ✅ `generate_euclidean_rhythm(3, 8)` produces the correct tresillo pattern `[1,0,0,1,0,0,1,0]` with proper timing, and identifies the named rhythm
- ✅ `suggest_next_chord` detects D minor correctly and suggests IV and V (despite figure-case bug B23)
- ✅ `plan_scene_handoff(0→1)` returns a structured 5-step gesture sequence with energy path `[0.2, 0.3, 0.4, 0.5, 0.6]`
- ✅ `get_performance_safe_moves` returns 8 safe + 2 energy moves with a proper blocked_moves list (`arrangement_edit`, `clip_create_delete`, `device_chain_surgery`, `note_edit`, `track_create_delete`) — good safety discipline
- ✅ `detect_payoff_failure`: `overall_health: "healthy"`, 0 failures — reasonable
- ✅ `get_sample_opportunities`: flags "no Simpler/Sampler devices — samples could add character" with confidence 0.4 (legitimate, since track 9 is the only audio track)
- ✅ `get_emotional_arc` returns a tension_curve + a legitimate `peak_too_early` issue (position 2/7)
- ✅ `check_safety("delete_track")` properly escalates to caution + requires confirmation when affecting 10 tracks
- ✅ `get_action_ledger_summary`, `get_promotion_candidates`, `get_section_outcomes` properly empty for a fresh session (no false data)
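
The tresillo result above is the standard Euclidean-rhythm construction: spread `pulses` onsets as evenly as possible over `steps` slots. A one-line sketch of the usual modular form (illustrative, not LivePilot's `generate_euclidean_rhythm`):

```python
def euclidean_rhythm(pulses: int, steps: int) -> list[int]:
    """Onset at step i whenever the running multiple of `pulses` wraps modulo `steps`."""
    return [1 if (i * pulses) % steps < pulses else 0 for i in range(steps)]
```

With (3, 8) this yields the 3-3-2 tresillo directly; other (pulses, steps) pairs give rotations of the named patterns, which is why tools in this family usually also expose a rotation parameter.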

### Additional findings (wave 5 — generative + theory + translation + sound design + sample fit)

**Biggest finding — `evaluate_sample_fit` can't detect the session key** (BUG-B37). Core workflow for sample recommendation is crippled because the sample engine has its own (broken) key inference that doesn't use the harmonic engines' data. This is the third distinct "can't detect key" or "wrong key" bug after E3 (harmony_field wrong key) and the master-bus C# vs Dm detection. **Root cause: 3+ engines independently compute "what key is this song in" with different algorithms.**

**Harmonization engine is broken** (BUG-B26/B27):
- 4-voice output with the bass stuck on a tonic pedal (5 of 6 bass notes are D2)
- The soprano line is an exact duplicate of the input melody (not a harmonization)
- The creative tool is unusable as-is

**`evaluate_sample_fit`'s frequency_fit critic is an explicit stub** (BUG-B38) — it returns a default 0.5 score, with the adjustments array containing `"note": "stub — spectral overlap analysis not yet implemented"`. This is running in production.
**Working correctly in wave 5 (strong signals):**
- ✅ `classify_progression(Dm-Gm-Am-Dm)` correctly identifies a "diatonic cycle fragment" (despite one transform returning "?", the classification heuristic still works)
- ✅ `navigate_tonnetz(Dm, depth=2)` returns structured P/L/R + all 9 depth-2 transforms (PP, PL, PR, LP, LL, LR, RP, RL, RR)
- ✅ `suggest_chromatic_mediants(Dm)` returns 6 valid mediants + cinematic picks (Bb minor, F# minor)
- ✅ `find_voice_leading_path(Dm→Bb)` finds the 1-transform L path (a tuning issue with voice smoothness, not correctness)
- ✅ `transform_motif([2,2,-1,2], "inversion")` correctly inverts to `[-2,-2,1,-2]` — verified by checking the output pitches
- ✅ `generate_tintinnabuli` returns voices following Pärt's nearest-triad-tone rule (with a couple of questionable choices; minor issues)
- ✅ `transform_section("insert_bridge_before_final_chorus")` returns a dry-run before/after section graph — 7 → 8 sections, bar delta +8, a proper non-mutating preview
- ✅ `score_phrase_impact(section=5, target="drop")` returns a multi-dimensional score (arrival, anticipation, contrast, fatigue, clarity, groove, payoff)
- ✅ `score_transition` returns a structured boundary-clarity + payoff + redirection + identity + cliche-risk breakdown
- ✅ `check_translation` returns `overall_robustness: "robust"` with mono-safe + small-speaker-safe + low-end-stable + front-element-present all true
- ✅ `measure_hook_salience` includes a natural-language `interpretation` field — a nice UX touch
- ✅ `plan_mix_move` correctly proposes `gain_staging` for Texture (tracking back to the analyze_mix finding)
- ✅ `get_mix_summary` gives a lightweight 10-track summary with anchor tracks, loudest/quietest
- ✅ `evaluate_sample_fit` produces both `surgeon_plan` and `alchemist_plan` — rich tool output despite the key-detection bug
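
The motif transforms verified above operate on interval lists (signed semitone steps), so the classic operations are sign and order flips. A sketch of that family (function name borrowed from the tool for readability; the body is illustrative):

```python
def transform_motif(intervals: list[int], mode: str) -> list[int]:
    """Apply a classic serial transform to a motif given as signed semitone intervals."""
    if mode == "inversion":      # mirror each melodic step's direction
        return [-i for i in intervals]
    if mode == "retrograde":     # play the motif backwards: reverse order AND direction
        return [-i for i in reversed(intervals)]
    if mode == "augmentation":   # widen every interval (intervallic, not rhythmic)
        return [i * 2 for i in intervals]
    raise ValueError(f"unknown mode: {mode}")
```

Working at the interval level is what makes these transforms key-agnostic: the result can be re-anchored on any starting pitch.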

---

### Additional findings (wave 6 — atlas + browser + generative + world model + FluCoMa)

**CONFIRMED: FluCoMa tools ARE available and working** (`check_flucoma` returns `active_streams: 6` with all 6 named streams `true`). The earlier zero-output observations were 100% playback-state-dependent — not tool bugs. The FluCoMa subsystem is healthy; `get_chroma`/`get_onsets`/`get_mel_spectrum`/`analyze_for_automation` spectrum all return zeros only because `is_playing: false` at probe time.
**NEW systemic finding — atlas vs atlas_device_info data parity broken** (BUG-B40). The atlas has rich enrichment data (character_tags, genre_affinity, starter_recipes, gotchas, sonic_description, complexity, synthesis_type, introduced_in). But `atlas_compare` doesn't read that enrichment — it gets a stripped-down view. Same data, different access paths produce different answers.
**Rich enrichment proof — `atlas_device_info("Wavetable")` is outstanding:**
- character_tags: `["modern", "versatile", "lush", "massive", "evolving"]`
- use_cases: leads/pads/bass/textures/plucks
- genre_affinity: primary (edm/pop/future_bass), secondary (synthwave/cinematic/ambient/dnb)
- **10 key_parameters** with ranges, sweet_spots, and type info
- **3 starter_recipes** with exact param values (Supersaw Lead, Glassy Pad, Digital Bass)
- **5 pairs_well_with** relationships with rationale
- **5 gotchas** with practical advice (CPU cost of unison, mod matrix power, etc.)
- complexity level, synthesis_type, introduced_in version

This is DEEP corpus knowledge. The data is there. Access paths need consolidation.
**Working correctly in wave 6 (strong signals):**
- ✅ `atlas_suggest(intent="evolving pad")` returns 5 synths (Analog, Drift, Emit, Meld, Poli) with parameter recipes per device
- ✅ `atlas_device_info("Wavetable")` returns the richest corpus entry I've seen — 10 params, 3 recipes, 5 gotchas, 5 pairings
- ✅ `atlas_search("warm analog bass")` returns 5 results with enrichment + scoring (despite ranking bug B41)
- ✅ `get_browser_tree` returns the full 11-category tree (instruments 32, audio_effects 70, drums **684**, samples **22,291**, user_library 10, plugins 4 — rich data)
- ✅ `get_automation_state(track=3, device=0)` on Wavetable returns 93 total params, 0 automated — lightweight and accurate
- ✅ `search_samples("dark vocal chop", key="Dm")` returns 5 Splice results with full metadata (hash, bpm, key, tags, pack, duration, price)
- ✅ `list_semantic_moves(domain="mix")` returns 6 mix moves with targets/protect/risk + a 7-domain list
- ✅ `layer_euclidean_rhythms` correctly stacks tresillo (3/8) + cinquillo (5/8) + Brazilian necklace (7/16) with proper naming
- ✅ `generate_phase_shift(3 voices, shift 0.125)` produces 44 notes with velocity-encoded voicing
- ✅ `generate_additive_process(direction="forward", reps=2)` produces a 4-stage build — 20 notes
- ✅ `generate_automation_curve("exponential", duration=8, density=32)` returns 32 precise curve points
- ✅ `get_device_presets("Drift")` returns **250+ presets** organized by category — a massive corpus
- ✅ `get_anti_preferences` + `get_taste_graph` + `explain_taste_inference` properly empty for a fresh session (no phantom data)
- ✅ `compile_goal_vector` validates targets + splits measurable/unmeasurable dimensions correctly
- ✅ `build_world_model` returns topology + sonic + technical + role inference + structured issues (with B42 and E6 inconsistency caveats)
- ✅ `check_flucoma` — proper diagnostic return with per-stream availability
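
For reference, the "32 precise curve points" shape reduces to sampling a normalized easing function over the duration. A sketch of that sampling step, assuming a normalized 0→1 exponential with an illustrative steepness constant `k` (not LivePilot's actual curve math):

```python
import math

def automation_curve(shape: str, duration: float, density: int, k: float = 4.0):
    """Sample `density` (time_in_beats, value) points for a 0→1 envelope over `duration`."""
    points = []
    for n in range(density):
        t = n / (density - 1)                          # normalized position, 0..1 inclusive
        if shape == "exponential":
            v = (math.exp(k * t) - 1) / (math.exp(k) - 1)  # eased: slow start, fast finish
        elif shape == "linear":
            v = t
        else:
            raise ValueError(f"unknown shape: {shape}")
        points.append((round(t * duration, 4), round(v, 4)))
    return points
```

Using `density - 1` in the divisor pins the first point to (0, 0) and the last to (duration, 1), which is what a clip envelope writer needs at the boundaries.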

**Interesting discovery — `get_browser_tree` returned a `current_project` category with 21 .als files.** The user has 21 LivePilot-adjacent projects on disk, including `prefuse73 demo.als`, `dabrye 73.als`, `dabrye prefuse 1.9.21.als`, `boc demo debug.als`, `shybuia house.als`, `manele.als` (a Romanian genre!), `aicaldos.als`, `LIVEPILOT V2.als`, and more. A rich body of work; each is a potential reference source for the style_tactics corpus (currently limited to 6 built-in styles — see BUG-B18/B19).

---

### Additional findings (wave 7 — preview studio + experiment + research + compose + device forge)

**Biggest finding — `research_technique` is essentially broken** (BUG-B43). For a clear query like "sidechain bass to kick", it returns a phantom "Unknown Device" finding with confidence 0 and a template-substitution `technique_card` that has no real research content. The atlas HAS the data (Compressor info, sidechain recipes) but the research flow doesn't connect to it.
**Preview studio has shape but missing flesh** (BUG-B44, B45, B46):
- Variants are missing compiled_plan where they shouldn't be
- The `what_changed` field is empty — users can't see what variants actually do
- Constrained variants can't find matching moves → they emit empty-move shells

**`analyze_sample` never opens the file** (BUG-B49). Despite FluCoMa being fully available and the user's entire ecosystem depending on sample analysis, the tool returns filename-parsed key/bpm with every spectral/temporal field set to zero. Should use offline librosa/soundfile for duration, spectral centroid, onset density.
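
A dependency-light sketch of the three offline measurements named here (duration, spectral centroid, onset density) over a raw mono signal. This uses numpy only; in a real fix, librosa/soundfile would replace the hand-rolled parts, and the function name and crude onset heuristic are illustrative:

```python
import numpy as np

def analyze_signal(y: np.ndarray, sr: int) -> dict:
    """Offline descriptors for a mono float signal `y` at sample rate `sr`."""
    duration = len(y) / sr
    # Spectral centroid: magnitude-weighted mean frequency of the full spectrum.
    mag = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    centroid = float(freqs @ mag / mag.sum()) if mag.sum() > 0 else 0.0
    # Crude onset density: frames whose RMS jumps well above the previous frame.
    frame = 1024
    rms = np.array([np.sqrt(np.mean(y[i:i + frame] ** 2))
                    for i in range(0, len(y) - frame, frame)])
    onsets = int(np.sum(rms[1:] > 2.0 * rms[:-1]))
    return {"duration_s": duration,
            "spectral_centroid_hz": centroid,
            "onset_density": onsets / duration if duration else 0.0}
```

Even this much would replace the all-zero spectral/temporal fields with real values, independent of the FluCoMa bridge and of playback state.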

**`compose` works, but conservatively, when credits=0** — it correctly generated a 5-layer plan with Splice queries, then dropped all layers when the credit budget prohibited downloads, ending with a single-step plan (set_tempo). Working as designed, but the output is degenerate for users without a credit budget.
**Working correctly in wave 7 (strong signals):**
- ✅ `list_genexpr_templates`: 15 templates across 8 categories — Lorenz/Henon (chaos), Karplus-Strong/phase-distortion/wavefolder/bitcrusher (synthesis + distortion), FDN/granular-delay/chorus/ring-mod (delay/mod), stochastic-resonance (texture). A rich GenExpr DSP library for M4L generation.
- ✅ `plan_arrangement(style="hiphop", target_bars=128)`: produces a **complete 8-section blueprint** (intro 12b → verse 24b → chorus 12b → verse 24b → chorus 12b → bridge 12b → chorus 12b → outro 12b = 120 bars) with per-section energy/density targets, tracks_entering/exiting, sample_hints, AND a gesture_plan (7 transitions with gesture_templates like "pre_arrival_vacuum", "re_entry_spotlight", "outro_decay_dissolve"). Beautifully structured output.
- ✅ `apply_creative_constraint_set(["subtraction_only", "no_new_tracks"])`: confirms both constraints with descriptions + reasons — good UX
- ✅ `suggest_sample_technique` for the Hurt You vocal: 3 rich techniques
  - `vocal_chop_rhythm` (Burial-style staccato) — 7 steps
  - `vocal_harmony_stack` (Bon Iver Prismizer) — 4 steps
  - `syllable_instrument` (vocal as instrument) — 5 steps
  Each has name, difficulty, philosophy, inspiration, step_count, steps_preview. This is the technique library showing its best form.
- ✅ FluCoMa tools (re-verified): `get_novelty` = 0.0135 real, `get_momentary_loudness` = -104.6 LUFS (playback paused = low), `get_spectral_shape` centroid 998Hz + crest 38 — all real data
- ✅ `get_browser_items("instruments/Drift")` — 13 folders with is_folder flags for tree navigation
- ✅ `analyze_sound_design` on 4 more tracks (Perc Hats/Glitch Chops/Bass/Atmo FX) — structured patch models. Most have 0 issues (reasonable — the tracks are well-designed). Perc Hats is flagged as "generic_chain" (Erosion + Echo lacks filter/saturation for character).
- ✅ `create_experiment(move_ids=["make_punchier", "tighten_low_end"])` — a clean experiment with 2 branches, proper IDs
- ✅ `discard_preview_set` + `discard_experiment` — cleanup returns `{"discarded": true}`, confirming state clears
- ✅ `build_reference_profile(style="burial")` returns actual structured data (unlike "prefuse73", which NOT_FOUND'd in an earlier wave) — partial but working

**Interesting observation:** `get_technique_card("dusty kick")` returned 0 cards despite the user's memory containing Prefuse73/Dabrye techniques that absolutely involve dusty kicks. Another instance of BUG-B18 (style_tactics/technique_card disconnected from memory store).

---

### Additional findings (wave 8 — wonder mode + hook dev + gesture + action ledger + MIDI I/O + fabric eval)

**Biggest positive finding — `enter_wonder_mode` produces excellent output.** Session ID ws_b3ce483b9b9f returned 3 variants (strong/safe/unexpected) each with:
- A full compiled_plan (5-8 steps with verify_after)
- A populated what_changed (e.g., "Targets energy (+0.4), width (+0.3), contrast (+0.3)")
- Score + score_breakdown (taste/identity/novelty/coherence)
- A distinctness_reason per variant ("Different family: sound_design")
- Warnings when devices are missing ("No Saturator on Pad Lush — using volume+reverb for warmth")

This is what `create_preview_set` SHOULD be producing. The shared variant-builder has bugs in the preview_set path (B44, B45) but works correctly in wonder_mode (B53 — cross-tool inconsistency).
**Cleanup confirmed:** `discard_wonder_session(ws_b3ce483b9b9f)` returns `{"discarded": true, "thread_still_open": true}` — the creative thread `ecb79c394a` stays open by design (per the tool description: "the problem isn't solved").
**Working correctly in wave 8 (strong signals):**
- ✅ `enter_wonder_mode` — rich diagnosis + 3 quality variants with compiled plans
- ✅ `develop_hook(hook_id="track_...", mode="variation")` — 4 concrete tactics: transpose, invert/retrograde, rhythmic displacement, fragmentation (BUG-B31 is specifically about the empty-hook_id path, not the general tool)
- ✅ `measure_hook_salience(hook_id)` — structured scoring with interpretation
- ✅ `plan_gesture(intent="reveal", target_tracks=[9], start_bar=16)` — a proper gesture plan with curve_family (exponential), direction (up), parameter_hints (filter_cutoff, send_level, utility_width)
- ✅ `apply_gesture_template("pre_arrival_vacuum")` — returns 2 nested gestures (inhale bars 36-39, release bars 40-41) with all fields populated
- ✅ `resume_last_intent` — correctly finds the wonder thread I just opened
- ✅ `get_turn_budget(mode="improve")` — returns 6 resource pools (latency, risk, novelty, changes, undos, research) with proper defaults
- ✅ `get_recent_actions(limit=20)` — a proper ledger with 20 entries showing my probe history; some marked `"ok": false, "error": "INVALID_PARAM"` for my probes of empty clip slots (expected)
- ✅ `get_last_move` returns `{}` when no moves are in the ledger (honest empty)
- ✅ `get_session_memory` returns an empty entries list (no session memory yet)
- ✅ `evaluate_with_fabric(engine="sonic")` — score 0.6304, keep_change=true, goal_progress 0.014, measured deltas per dimension, memory_candidate=true
- ✅ `export_clip_midi` — wrote 6 notes, 30 beats, tempo 119 to disk (despite the BUG-B52 filename-path issue)
- ✅ `discard_wonder_session` — clean cleanup with thread preservation

**Per-track sound design wrap-up:**
- Track 2 Congas: 3 issues (no_modulation_sources, too_few_blocks, no_modulation — stacked flags for same cause)
- Track 5 Snare Rim: 2 issues (too_few_blocks + no_modulation — same BUG-B35 pattern — critics don't understand simple drums are supposed to be simple)

---

### Additional findings (wave 9 — arrangement reads + reference comparisons + taste/ranking + memory ops + display values)
**This was a green wave.** Most tools probed work correctly. The single new bug (B54) is a cascade of B17 (distill_reference_principles returning empty), which causes the entire reference-engine chain to silently degrade.
**Standout positive findings (deep confirmation):**
- ✅ **`get_display_values` is an excellent debugging tool** — on the Analog synth it returned all 172 parameters with human-readable strings:
  - `"F1 Freq": "193 Hz"` (filter freq in actual Hz)
  - `"AMP1 Level": "-7.7 dB"` (level in dB)
  - `"OSC1 Shape": "Saw"` (enum name instead of index)
  - `"OSC1 Octave": "-1"` (signed int)
  - `"LFO1 Speed": "0.4 Hz"` (frequency)
  - `"FEG1 Attack": "7 ms"` (time in ms)
  - For Saturator: `"Drive": "2.0 dB"`, `"Type": "Analog Clip"`, `"Dry/Wet": "30 %"`
  - This is **exactly what's needed to close BUG-B4 and BUG-B9** — the display_value strings show actual units. Tools that set parameters should always read value_string back after setting, rather than relying on raw 0-1 normalization.
- ✅ **`get_scene_matrix` returns the full session grid** (10 tracks × 8 scenes with clip states, names, colors) — complete structural overview
- ✅ **`memory_get(a50d7cc1-...)` returns the FULL Dabrye Core template** I saved — qualities + payload + track_roles + scenes + creative_moves_applied + pending_manual_steps. A perfect round-trip.
- ✅ **`memory_favorite` works** — marked the bug tracker as `favorite: true, rating: 5`; updated_at advanced
- ✅ **`explain_preference_vs_identity`** produces a rich breakdown: taste_score 0.96 + identity_score 0.7 + composite 0.791 + recommendation + tension explanation + weight notes (0.65 identity / 0.35 taste)
- ✅ **`rank_by_taste_and_identity`** — 3 candidates ranked with composite + per-score explanations + a per-candidate recommendation ("recommended" vs "consider")
- ✅ **`rank_moves_by_taste`** — ranks 3 moves by taste_score with full metadata preserved
- ✅ **`evaluate_mix_move`** — PROPERLY enforced the hard rule: it rejected my test because the measurable delta on "punch" was -0.0389 (worse). `hard_rule_failures: ["HARD RULE: measurable delta <= 0"]`, `keep_change: false`, `decision_mode: "measured_reject"`. This is safety-critical logic working correctly.
- ✅ **`compare_to_reference`** (offline, no Ableton needed) — returns proper LUFS deltas, centroid deltas, band_deltas, stereo_width
- ✅ **`get_arrangement_notes`** returns arrangement-view MIDI data (6 notes in the Pad Lush Intro Wash arrangement clip; pitches 50/53/57/60 = D-F-A-C — the same material as the session clip, confirmed)
- ✅ **`get_plugin_parameters`, `get_plugin_presets`** correctly ERROR when called on non-plugin devices, with clear messages: *"Device is InstrumentVector, not a plugin... Check get_device_info().is_plugin first"*
- ✅ **`get_warp_markers`** for the vocal audio clip — returns 2 markers at (beat 0, sample 0) and (beat 32.03, sample 22.35) — confirming the Ableton warp maths (22.35 s × 86 BPM / 60 = 32.03 beats, tempo-matched to a 32-beat clip at the 90 BPM session tempo)
- ✅ **`get_freeze_status`** — a simple boolean query that works
- ✅ **`get_taste_dimensions`** — returns 8 structured dimensions (transition_boldness, automation_density, dryness_preference, harmonic_boldness, width_preference, native_vs_plugin, density_tolerance, fx_intensity) with evidence_count 0 on a fresh session
|
|
1548
|
+
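
The "read the display string back" advice above can be made concrete. A minimal sketch, assuming a generic `call_tool` helper and a hypothetical `set_device_parameter` tool (only `get_display_values` is confirmed above):

```python
# Set a parameter, then verify via its human-readable display string rather
# than trusting the raw 0-1 float (the BUG-B4/B9 failure mode above).
# `call_tool` and `set_device_parameter` are stand-ins, not confirmed APIs.

def set_and_verify(call_tool, track, device, param, normalized_value):
    call_tool("set_device_parameter", track=track, device=device,
              param=param, value=normalized_value)
    display = call_tool("get_display_values", track=track, device=device)
    # e.g. "193 Hz" or "-7.7 dB": actual units, not a unitless position
    # on an unknown 20-135 or 0-30 scale.
    return display[param]
```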
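
The composite in the `explain_preference_vs_identity` entry above is internally consistent with the weights the tool reports; recomputing it:

```python
# Recompute the composite from the disclosed weights
# (0.65 identity / 0.35 taste) and the two reported scores.
identity_score, taste_score = 0.7, 0.96
composite = 0.65 * identity_score + 0.35 * taste_score
print(round(composite, 3))  # 0.791 -- matches the tool's output
```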
---

### Highest-leverage fixes (if we fix bugs next session)

1. **BUG-E1 + E2 (project_brain missing data)** — `project_brain` is supposed to be the canonical V2 engine state, and missing role and automation data silently degrades every engine that depends on it. Fixing these two gives the V2 orchestration layer its full information picture.
2. **BUG-B9 (Auto Filter Legacy scale)** — can silently silence tracks when automation recipes assume 0-1 normalization on 20-135 or 0-30 scales. A real field risk.
3. **BUG-A2 / A3 (M4L bridge extensions)** — roughly 30 minutes each; flips "wontfix" to "fixable" for Simpler warp and Compressor sidechain routing.
4. **BUG-B5 + B2 (chord naming on partial chords)** — the same root-inference bug hit twice; one fix closes both.
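
The hazard in item 2 is worth stating numerically. A sketch, assuming a linear mapping for illustration (the real Auto Filter frequency curve may be non-linear, which only makes the trap worse):

```python
# Why assuming 0-1 normalization on a 20-135 scale bites (BUG-B9).
# The linear mapping here is an illustrative assumption, not the actual
# Auto Filter Legacy curve.

def denormalize(norm, lo, hi):
    """Map a 0-1 normalized position onto a raw parameter range."""
    return lo + norm * (hi - lo)

# A recipe that writes the normalized value 0.85 where a RAW 20-135 value
# is expected lands near the 20 Hz floor and silently mutes the track;
# the value it intended was:
intended = denormalize(0.85, 20, 135)  # about 117.75
```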

---

## How to use this file across sessions

New session startup:
```bash
cat /Users/visansilviugeorge/Desktop/DREAM\ AI/LivePilot/BUGS.md
```

To tell a fresh Claude session to pick up where we left off:

> "Read BUGS.md in the LivePilot repo. Let's work through bug {X}." (e.g. BUG-A1, BUG-B1, BUG-D2)

When a bug is fixed, update its status flag to `🟢 fixed` and either move it to the resolved section at the bottom or keep it inline for traceability. Add new bugs with incrementing IDs within their category.
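
A small triage helper can make session startup faster. This is a hypothetical sketch: the heading and flag patterns are stand-ins (only the `🟢 fixed` flag is established above), so adjust them to the file's actual conventions:

```shell
# Count fixed vs. total bug entries by status flag, using a throwaway sample
# file so the patterns are demonstrable; point grep at the real BUGS.md.
cd "$(mktemp -d)"
printf '## BUG-A1 🟢 fixed\n## BUG-A2\n## BUG-B1\n' > BUGS.sample.md
echo "total: $(grep -c '^## BUG-' BUGS.sample.md)"
echo "fixed: $(grep -c '🟢 fixed' BUGS.sample.md)"
```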
|