@martian-engineering/lossless-claw 0.2.4 → 0.2.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,6 +2,15 @@
 
 Lossless Context Management plugin for [OpenClaw](https://github.com/openclaw/openclaw), based on the [LCM paper](https://papers.voltropy.com/LCM). Replaces OpenClaw's built-in sliding-window compaction with a DAG-based summarization system that preserves every message while keeping active context within model token limits.
 
+## Table of contents
+
+- [What it does](#what-it-does)
+- [Quick start](#quick-start)
+- [Configuration](#configuration)
+- [Documentation](#documentation)
+- [Development](#development)
+- [License](#license)
+
 ## What it does
 
 Two ways to learn: read the below, or [check out this super cool animated visualization](https://losslesscontext.ai).
@@ -18,7 +27,7 @@ Nothing is lost. Raw messages stay in the database. Summaries link back to their
 
 **It feels like talking to an agent that never forgets. Because it doesn't. In normal operation, you'll never need to think about compaction again.**
 
-## Installation
+## Quick start
 
 ### Prerequisites
 
@@ -68,168 +77,6 @@ If you need to set it manually, ensure the context engine slot points at lossles
 
 Restart OpenClaw after configuration changes.
 
-### Optional: enable FTS5 for fast full-text search
-
-`lossless-claw` works without FTS5 as of the current release. When FTS5 is unavailable in the
-Node runtime that runs the OpenClaw gateway, the plugin:
-
-- keeps persisting messages and summaries
-- falls back from `"full_text"` search to a slower `LIKE`-based search
-- loses FTS ranking/snippet quality
-
-If you want native FTS5 search performance and ranking, the **exact Node runtime that runs the
-gateway** must have SQLite FTS5 compiled in.
-
-#### Probe the gateway runtime
-
-Run this with the same `node` binary your gateway uses:
-
-```bash
-node --input-type=module - <<'NODE'
-import { DatabaseSync } from 'node:sqlite';
-const db = new DatabaseSync(':memory:');
-const options = db.prepare('pragma compile_options').all().map((row) => row.compile_options);
-
-console.log(options.filter((value) => value.includes('FTS')).join('\n') || 'no fts compile options');
-
-try {
-  db.exec("CREATE VIRTUAL TABLE t USING fts5(content)");
-  console.log("fts5: ok");
-} catch (err) {
-  console.log("fts5: fail");
-  console.log(err instanceof Error ? err.message : String(err));
-}
-NODE
-```
-
-Expected output:
-
-```text
-ENABLE_FTS5
-fts5: ok
-```
-
-If you get `fts5: fail`, build or install an FTS5-capable Node and point the gateway at that runtime.
-
-#### Build an FTS5-capable Node on macOS
-
-This workflow was verified with Node `v22.15.0`.
-
-```bash
-cd ~/Projects
-git clone --depth 1 --branch v22.15.0 https://github.com/nodejs/node.git node-fts5
-cd node-fts5
-```
-
-Edit `deps/sqlite/sqlite.gyp` and add `SQLITE_ENABLE_FTS5` to the `defines` list for the `sqlite`
-target:
-
-```diff
- 'defines': [
-   'SQLITE_DEFAULT_MEMSTATUS=0',
-+  'SQLITE_ENABLE_FTS5',
-   'SQLITE_ENABLE_MATH_FUNCTIONS',
-   'SQLITE_ENABLE_SESSION',
-   'SQLITE_ENABLE_PREUPDATE_HOOK'
- ],
-```
-
-Important:
-
-- patch `deps/sqlite/sqlite.gyp`, not only `node.gyp`
-- `node:sqlite` uses the embedded SQLite built from `deps/sqlite/sqlite.gyp`
-
-Build the runtime:
-
-```bash
-./configure --prefix="$PWD/out-install"
-make -j8 node
-```
-
-Expose the binary under a Node-compatible basename that OpenClaw recognizes:
-
-```bash
-mkdir -p ~/Projects/node-fts5/bin
-ln -sfn ~/Projects/node-fts5/out/Release/node ~/Projects/node-fts5/bin/node-22.15.0
-```
-
-Use a basename like `node-22.15.0`, `node`, or `nodejs`. Names like
-`node-v22.15.0-fts5` may not be recognized correctly by OpenClaw's CLI/runtime parsing.
-
-Verify the new runtime:
-
-```bash
-~/Projects/node-fts5/bin/node-22.15.0 --version
-~/Projects/node-fts5/bin/node-22.15.0 --input-type=module - <<'NODE'
-import { DatabaseSync } from 'node:sqlite';
-const db = new DatabaseSync(':memory:');
-db.exec("CREATE VIRTUAL TABLE t USING fts5(content)");
-console.log("fts5: ok");
-NODE
-```
-
-#### Point the OpenClaw gateway at that runtime on macOS
-
-Back up the existing LaunchAgent plist first:
-
-```bash
-cp ~/Library/LaunchAgents/ai.openclaw.gateway.plist \
-   ~/Library/LaunchAgents/ai.openclaw.gateway.plist.bak-$(date +%Y%m%d-%H%M%S)
-```
-
-Replace the runtime path, then reload the agent:
-
-```bash
-/usr/libexec/PlistBuddy -c 'Set :ProgramArguments:0 /Users/youruser/Projects/node-fts5/bin/node-22.15.0' \
-  ~/Library/LaunchAgents/ai.openclaw.gateway.plist
-
-launchctl bootout gui/$UID ~/Library/LaunchAgents/ai.openclaw.gateway.plist 2>/dev/null || true
-launchctl bootstrap gui/$UID ~/Library/LaunchAgents/ai.openclaw.gateway.plist
-launchctl kickstart -k gui/$UID/ai.openclaw.gateway
-```
-
-Verify the live runtime:
-
-```bash
-launchctl print gui/$UID/ai.openclaw.gateway | sed -n '1,80p'
-```
-
-You should see:
-
-```text
-program = /Users/youruser/Projects/node-fts5/bin/node-22.15.0
-```
-
-#### Verify `lossless-claw`
-
-Check the logs:
-
-```bash
-tail -n 60 ~/.openclaw/logs/gateway.log
-tail -n 60 ~/.openclaw/logs/gateway.err.log
-```
-
-You want:
-
-- `[gateway] [lcm] Plugin loaded ...`
-- no new `no such module: fts5`
-
-Then force one turn through the gateway and verify the DB fills:
-
-```bash
-/Users/youruser/Projects/node-fts5/bin/node-22.15.0 \
-  /path/to/openclaw/dist/index.js \
-  agent --session-id fts5-smoke --message 'Reply with exactly: ok' --timeout 60
-
-sqlite3 ~/.openclaw/lcm.db '
-select count(*) as conversations from conversations;
-select count(*) as messages from messages;
-select count(*) as summaries from summaries;
-'
-```
-
-Those counts should increase after a real turn.
-
 ## Configuration
 
 LCM is configured through a combination of plugin config and environment variables. Environment variables take precedence for backward compatibility.
@@ -332,212 +179,14 @@ For most long-lived LCM setups, a good starting point is:
 }
 ```
 
-## How it works
-
-See [docs/architecture.md](docs/architecture.md) for the full technical deep-dive. Here's the summary:
-
-### The DAG
-
-LCM builds a directed acyclic graph of summaries:
-
-```
-Raw messages → Leaf summaries (d0) → Condensed (d1) → Condensed (d2) → ...
-```
-
-- **Leaf summaries** (depth 0) are created from chunks of raw messages. They preserve timestamps, decisions, file operations, and key details.
-- **Condensed summaries** (depth 1+) merge multiple summaries at the same depth into a higher-level node. Each depth tier uses a different prompt strategy optimized for its level of abstraction.
-- **Parent links** connect each condensed summary to its source summaries, enabling drill-down via `lcm_expand_query`.
-
-### Context assembly
-
-Each turn, the assembler builds model context by:
-
-1. Fetching the conversation's **context items** (an ordered list of summary and message references)
-2. Resolving each item into an `AgentMessage`
-3. Protecting the **fresh tail** (most recent N messages) from eviction
-4. Filling remaining token budget from oldest to newest, dropping the oldest items first if over budget
-5. Wrapping summaries in XML with metadata (id, depth, timestamps, descendant count)
-
-The model sees something like:
-
-```xml
-<summary id="sum_abc123" kind="condensed" depth="1" descendant_count="8"
-         earliest_at="2026-02-17T07:37:00" latest_at="2026-02-17T15:43:00">
-  <parents>
-    <summary_ref id="sum_def456" />
-    <summary_ref id="sum_ghi789" />
-  </parents>
-  <content>
-  ...summary text...
-  </content>
-</summary>
-```
-
-This gives the model enough information to know what was discussed, when, and how to drill deeper via the expansion tools.
-
-### Compaction triggers
-
-Compaction runs in two modes:
-
-- **Proactive (after each turn):** If raw messages outside the fresh tail exceed `leafChunkTokens`, a leaf pass runs. If `incrementalMaxDepth != 0`, condensation follows (cascading to the configured depth, or unlimited with `-1`).
-- **Reactive (overflow/manual):** When total context exceeds `contextThreshold × tokenBudget`, a full sweep runs: all eligible leaf chunks are compacted, then condensation proceeds depth-by-depth until stable.
-
-### Depth-aware prompts
-
-Each summary depth gets a tailored prompt:
-
-| Depth | Kind | Strategy |
-|-------|------|----------|
-| 0 | Leaf | Narrative with timestamps, file tracking, preserves operational detail |
-| 1 | Condensed | Chronological session summary, deduplicates against `previous_context` |
-| 2 | Condensed | Arc-focused: goals, outcomes, what carries forward. Self-contained. |
-| 3+ | Condensed | Durable context only: key decisions, relationships, lessons learned |
-
-All summaries end with an "Expand for details about:" footer listing what was compressed, guiding agents on when to use `lcm_expand_query`.
-
-### Large file handling
-
-Files over `largeFileTokenThreshold` (default 25k tokens) embedded in messages are intercepted during ingestion:
-
-1. Content is stored to `~/.openclaw/lcm-files/<conversation_id>/<file_id>.<ext>`
-2. A ~200 token exploration summary replaces the file in the message
-3. The `lcm_describe` tool can retrieve the full file content on demand
-
-This prevents large file pastes from consuming the entire context window.
-
-## Agent tools
-
-LCM registers four tools that agents can use to search and recall compacted history:
-
-### `lcm_grep`
-
-Full-text and regex search across messages and summaries.
-
-```
-lcm_grep(pattern: "database migration", mode: "full_text")
-lcm_grep(pattern: "config\\.threshold", mode: "regex", scope: "summaries")
-```
-
-Parameters:
-- `pattern` — Search string (regex or full-text)
-- `mode` — `"regex"` (default) or `"full_text"`
-- `scope` — `"messages"`, `"summaries"`, or `"both"` (default)
-- `conversationId` — Scope to a specific conversation
-- `allConversations` — Search across all conversations
-- `since` / `before` — ISO timestamp filters
-- `limit` — Max results (default 50, max 200)
-
-### `lcm_describe`
-
-Inspect a specific summary or stored file by ID.
-
-```
-lcm_describe(id: "sum_abc123")
-lcm_describe(id: "file_def456")
-```
-
-Returns the full content, metadata, parent/child relationships, and token counts. For files, returns the stored content.
-
-### `lcm_expand_query`
-
-Deep recall via delegated sub-agent. Finds relevant summaries, expands them by walking the DAG down to source material, and answers a focused question.
-
-```
-lcm_expand_query(
-  query: "database migration",
-  prompt: "What migration strategy was decided on?"
-)
-
-lcm_expand_query(
-  summaryIds: ["sum_abc123"],
-  prompt: "What were the exact config changes?"
-)
-```
-
-Parameters:
-- `prompt` — The question to answer (required)
-- `query` — Text query to find relevant summaries (when you don't have IDs)
-- `summaryIds` — Specific summary IDs to expand (when you have them)
-- `maxTokens` — Answer length cap (default 2000)
-- `conversationId` / `allConversations` — Scope control
-
-Returns a compact answer with cited summary IDs.
-
-### `lcm_expand`
-
-Low-level DAG expansion (sub-agent only). Main agents should use `lcm_expand_query` instead; this tool is available to delegated sub-agents spawned by `lcm_expand_query`.
-
-## TUI
-
-The repo includes an interactive terminal UI (`tui/`) for inspecting, repairing, and managing the LCM database. It's a separate Go binary — not part of the npm package.
-
-### Install
-
-**From GitHub releases** (recommended):
-
-Download the latest binary for your platform from [Releases](https://github.com/Martian-Engineering/lossless-claw/releases).
-
-**Build from source:**
-
-```bash
-cd tui
-go build -o lcm-tui .
-# or: make build
-# or: go install github.com/Martian-Engineering/lossless-claw/tui@latest
-```
-
-Requires Go 1.24+.
-
-### Usage
-
-```bash
-lcm-tui [--db path/to/lcm.db] [--sessions path/to/sessions/dir]
-```
+## Documentation
 
-Defaults to `~/.openclaw/lcm.db` and auto-discovers session directories.
-
-### Features
-
-- **Conversation browser** — List all conversations with message/summary counts and token totals
-- **Summary DAG view** — Navigate the full summary hierarchy with depth, kind, token counts, and parent/child relationships
-- **Context view** — See exactly what the model sees: ordered context items with token breakdowns (summaries + fresh tail messages)
-- **Dissolve** — Surgically restore a condensed summary back to its parent summaries (with ordinal shift preview)
-- **Rewrite** — Re-summarize nodes using actual OpenClaw prompts with scrollable diffs and auto-accept mode
-- **Repair** — Fix corrupted summaries (fallback truncations, empty content) using proper LLM summarization
-- **Transplant** — Deep-copy summary DAGs between conversations (preserves all messages, message_parts, summary_messages)
-- **Previous context viewer** — Inspect the `previous_context` text used during summarization
-
-### Keybindings
-
-| Key | Action |
-|-----|--------|
-| `c` | Context view (from conversation list) |
-| `s` | Summary DAG view |
-| `d` | Dissolve a condensed summary |
-| `r` | Rewrite a summary |
-| `R` | Repair corrupted summaries |
-| `t` | Transplant summaries between conversations |
-| `p` | View previous_context |
-| `Enter` | Expand/select |
-| `Esc`/`q` | Back/quit |
-
-## Database
-
-LCM uses SQLite via Node's built-in `node:sqlite` module. The default database path is `~/.openclaw/lcm.db`.
-
-### Schema overview
-
-- **conversations** — Maps session IDs to conversation IDs
-- **messages** — Every ingested message with role, content, token count, timestamps
-- **message_parts** — Structured content blocks (text, tool calls, reasoning, files) linked to messages
-- **summaries** — The summary DAG nodes with content, depth, kind, token counts, timestamps
-- **summary_messages** — Links leaf summaries to their source messages
-- **summary_parents** — Links condensed summaries to their parent summaries
-- **context_items** — The ordered context list for each conversation (what the model sees)
-- **large_files** — Metadata for intercepted large files
-- **expansion_grants** — Delegation grants for sub-agent expansion queries
-
-Migrations run automatically on first use. The schema is forward-compatible; new columns are added with defaults.
+- [Configuration guide](docs/configuration.md)
+- [Architecture](docs/architecture.md)
+- [Agent tools](docs/agent-tools.md)
+- [TUI Reference](docs/tui.md)
+- [lcm-tui](tui/README.md)
+- [Optional: enable FTS5 for fast full-text search](docs/fts5.md)
 
 ## Development
 
package/docs/fts5.md ADDED
@@ -0,0 +1,161 @@
+# Optional: enable FTS5 for fast full-text search
+
+`lossless-claw` works without FTS5 as of the current release. When FTS5 is unavailable in the
+Node runtime that runs the OpenClaw gateway, the plugin:
+
+- keeps persisting messages and summaries
+- falls back from `"full_text"` search to a slower `LIKE`-based search
+- loses FTS ranking/snippet quality
+
+If you want native FTS5 search performance and ranking, the **exact Node runtime that runs the
+gateway** must have SQLite FTS5 compiled in.
+
+## Probe the gateway runtime
+
+Run this with the same `node` binary your gateway uses:
+
+```bash
+node --input-type=module - <<'NODE'
+import { DatabaseSync } from 'node:sqlite';
+const db = new DatabaseSync(':memory:');
+const options = db.prepare('pragma compile_options').all().map((row) => row.compile_options);
+
+console.log(options.filter((value) => value.includes('FTS')).join('\n') || 'no fts compile options');
+
+try {
+  db.exec("CREATE VIRTUAL TABLE t USING fts5(content)");
+  console.log("fts5: ok");
+} catch (err) {
+  console.log("fts5: fail");
+  console.log(err instanceof Error ? err.message : String(err));
+}
+NODE
+```
+
+Expected output:
+
+```text
+ENABLE_FTS5
+fts5: ok
+```
+
+If you get `fts5: fail`, build or install an FTS5-capable Node and point the gateway at that runtime.
+
+## Build an FTS5-capable Node on macOS
+
+This workflow was verified with Node `v22.15.0`.
+
+```bash
+cd ~/Projects
+git clone --depth 1 --branch v22.15.0 https://github.com/nodejs/node.git node-fts5
+cd node-fts5
+```
+
+Edit `deps/sqlite/sqlite.gyp` and add `SQLITE_ENABLE_FTS5` to the `defines` list for the `sqlite`
+target:
+
+```diff
+ 'defines': [
+   'SQLITE_DEFAULT_MEMSTATUS=0',
++  'SQLITE_ENABLE_FTS5',
+   'SQLITE_ENABLE_MATH_FUNCTIONS',
+   'SQLITE_ENABLE_SESSION',
+   'SQLITE_ENABLE_PREUPDATE_HOOK'
+ ],
+```
+
+Important:
+
+- patch `deps/sqlite/sqlite.gyp`, not only `node.gyp`
+- `node:sqlite` uses the embedded SQLite built from `deps/sqlite/sqlite.gyp`
+
+Build the runtime:
+
+```bash
+./configure --prefix="$PWD/out-install"
+make -j8 node
+```
+
+Expose the binary under a Node-compatible basename that OpenClaw recognizes:
+
+```bash
+mkdir -p ~/Projects/node-fts5/bin
+ln -sfn ~/Projects/node-fts5/out/Release/node ~/Projects/node-fts5/bin/node-22.15.0
+```
+
+Use a basename like `node-22.15.0`, `node`, or `nodejs`. Names like
+`node-v22.15.0-fts5` may not be recognized correctly by OpenClaw's CLI/runtime parsing.
+
+Verify the new runtime:
+
+```bash
+~/Projects/node-fts5/bin/node-22.15.0 --version
+~/Projects/node-fts5/bin/node-22.15.0 --input-type=module - <<'NODE'
+import { DatabaseSync } from 'node:sqlite';
+const db = new DatabaseSync(':memory:');
+db.exec("CREATE VIRTUAL TABLE t USING fts5(content)");
+console.log("fts5: ok");
+NODE
+```
+
+## Point the OpenClaw gateway at that runtime on macOS
+
+Back up the existing LaunchAgent plist first:
+
+```bash
+cp ~/Library/LaunchAgents/ai.openclaw.gateway.plist \
+   ~/Library/LaunchAgents/ai.openclaw.gateway.plist.bak-$(date +%Y%m%d-%H%M%S)
+```
+
+Replace the runtime path, then reload the agent:
+
+```bash
+/usr/libexec/PlistBuddy -c 'Set :ProgramArguments:0 /Users/youruser/Projects/node-fts5/bin/node-22.15.0' \
+  ~/Library/LaunchAgents/ai.openclaw.gateway.plist
+
+launchctl bootout gui/$UID ~/Library/LaunchAgents/ai.openclaw.gateway.plist 2>/dev/null || true
+launchctl bootstrap gui/$UID ~/Library/LaunchAgents/ai.openclaw.gateway.plist
+launchctl kickstart -k gui/$UID/ai.openclaw.gateway
+```
+
+Verify the live runtime:
+
+```bash
+launchctl print gui/$UID/ai.openclaw.gateway | sed -n '1,80p'
+```
+
+You should see:
+
+```text
+program = /Users/youruser/Projects/node-fts5/bin/node-22.15.0
+```
+
+## Verify `lossless-claw`
+
+Check the logs:
+
+```bash
+tail -n 60 ~/.openclaw/logs/gateway.log
+tail -n 60 ~/.openclaw/logs/gateway.err.log
+```
+
+You want:
+
+- `[gateway] [lcm] Plugin loaded ...`
+- no new `no such module: fts5`
+
+Then force one turn through the gateway and verify the DB fills:
+
+```bash
+/Users/youruser/Projects/node-fts5/bin/node-22.15.0 \
+  /path/to/openclaw/dist/index.js \
+  agent --session-id fts5-smoke --message 'Reply with exactly: ok' --timeout 60
+
+sqlite3 ~/.openclaw/lcm.db '
+select count(*) as conversations from conversations;
+select count(*) as messages from messages;
+select count(*) as summaries from summaries;
+'
+```
+
+Those counts should increase after a real turn.
package/index.ts CHANGED
@@ -42,7 +42,10 @@ function normalizeAgentId(agentId: string | undefined): string {
 type PluginEnvSnapshot = {
   lcmSummaryModel: string;
   lcmSummaryProvider: string;
+  pluginSummaryModel: string;
+  pluginSummaryProvider: string;
   openclawProvider: string;
+  openclawDefaultModel: string;
   agentDir: string;
   home: string;
 };
@@ -61,12 +64,30 @@ function snapshotPluginEnv(env: NodeJS.ProcessEnv = process.env): PluginEnvSnaps
   return {
     lcmSummaryModel: env.LCM_SUMMARY_MODEL?.trim() ?? "",
     lcmSummaryProvider: env.LCM_SUMMARY_PROVIDER?.trim() ?? "",
+    pluginSummaryModel: "",
+    pluginSummaryProvider: "",
     openclawProvider: env.OPENCLAW_PROVIDER?.trim() ?? "",
+    openclawDefaultModel: "",
     agentDir: env.OPENCLAW_AGENT_DIR?.trim() || env.PI_CODING_AGENT_DIR?.trim() || "",
     home: env.HOME?.trim() ?? "",
   };
 }
 
+/** Read OpenClaw's configured default model from the validated runtime config. */
+function readDefaultModelFromConfig(config: unknown): string {
+  if (!config || typeof config !== "object") {
+    return "";
+  }
+
+  const model = (config as { agents?: { defaults?: { model?: unknown } } }).agents?.defaults?.model;
+  if (typeof model === "string") {
+    return model.trim();
+  }
+
+  const primary = (model as { primary?: unknown } | undefined)?.primary;
+  return typeof primary === "string" ? primary.trim() : "";
+}
+
 /** Resolve common provider API keys from environment. */
 function resolveApiKey(provider: string, readEnv: ReadEnvFn): string | undefined {
   const keyMap: Record<string, string[]> = {
@@ -596,6 +617,7 @@ function readLatestAssistantReply(messages: unknown[]): string | undefined {
 /** Construct LCM dependencies from plugin API/runtime surfaces. */
 function createLcmDependencies(api: OpenClawPluginApi): LcmDependencies {
   const envSnapshot = snapshotPluginEnv();
+  envSnapshot.openclawDefaultModel = readDefaultModelFromConfig(api.config);
   const readEnv: ReadEnvFn = (key) => process.env[key];
   const pluginConfig =
     api.pluginConfig && typeof api.pluginConfig === "object" && !Array.isArray(api.pluginConfig)
@@ -603,6 +625,18 @@ function createLcmDependencies(api: OpenClawPluginApi): LcmDependencies {
       : undefined;
   const config = resolveLcmConfig(process.env, pluginConfig);
 
+  // Read model overrides from plugin config
+  if (pluginConfig) {
+    const summaryModel = pluginConfig.summaryModel;
+    const summaryProvider = pluginConfig.summaryProvider;
+    if (typeof summaryModel === "string") {
+      envSnapshot.pluginSummaryModel = summaryModel.trim();
+    }
+    if (typeof summaryProvider === "string") {
+      envSnapshot.pluginSummaryProvider = summaryProvider.trim();
+    }
+  }
+
   return {
     config,
     complete: async ({
@@ -789,7 +823,11 @@ function createLcmDependencies(api: OpenClawPluginApi): LcmDependencies {
       }
     },
     resolveModel: (modelRef, providerHint) => {
-      const raw = (modelRef ?? envSnapshot.lcmSummaryModel).trim();
+      const raw =
+        (modelRef?.trim() ||
+          envSnapshot.pluginSummaryModel ||
+          envSnapshot.lcmSummaryModel ||
+          envSnapshot.openclawDefaultModel).trim();
       if (!raw) {
        throw new Error("No model configured for LCM summarization.");
      }
@@ -803,8 +841,9 @@ function createLcmDependencies(api: OpenClawPluginApi): LcmDependencies {
      }
 
      const provider = (
-        envSnapshot.lcmSummaryProvider ||
        providerHint?.trim() ||
+        envSnapshot.pluginSummaryProvider ||
+        envSnapshot.lcmSummaryProvider ||
        envSnapshot.openclawProvider ||
        "openai"
      ).trim();
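Net effect of the hunks above: explicit model refs still win, then the new plugin-config override, then the legacy `LCM_SUMMARY_MODEL` env var, then OpenClaw's own default model. A minimal standalone sketch of that precedence (the `Snapshot` type and `pickModel` are illustrative names, not the plugin's actual exports):

```typescript
// Sketch of the model-resolution precedence introduced in this diff.
// Field names mirror PluginEnvSnapshot; the values here are hypothetical.
type Snapshot = {
  pluginSummaryModel: string;   // plugin config: summaryModel
  lcmSummaryModel: string;      // env: LCM_SUMMARY_MODEL
  openclawDefaultModel: string; // agents.defaults.model from OpenClaw config
};

function pickModel(modelRef: string | undefined, snap: Snapshot): string {
  // Explicit ref > plugin config > env var > OpenClaw default.
  const raw =
    (modelRef?.trim() ||
      snap.pluginSummaryModel ||
      snap.lcmSummaryModel ||
      snap.openclawDefaultModel).trim();
  if (!raw) {
    throw new Error("No model configured for LCM summarization.");
  }
  return raw;
}

const snap: Snapshot = {
  pluginSummaryModel: "gpt-5.4",
  lcmSummaryModel: "env-model",
  openclawDefaultModel: "default-model",
};
console.log(pickModel(undefined, snap));  // plugin config wins when no explicit ref
console.log(pickModel("explicit", snap)); // an explicit ref always wins
```

The provider lookup changes the same way, with one extra twist visible in the last hunk: `providerHint` now outranks the `LCM_SUMMARY_PROVIDER` env var, where previously the env var won.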
@@ -16,6 +16,14 @@
     "dbPath": {
       "label": "Database Path",
       "help": "Path to LCM SQLite database (default: ~/.openclaw/lcm.db)"
+    },
+    "summaryModel": {
+      "label": "Summary Model",
+      "help": "Model override for LCM summarization (e.g., 'gpt-5.4' or 'openai-resp/gpt-5.4')"
+    },
+    "summaryProvider": {
+      "label": "Summary Provider",
+      "help": "Provider override for LCM summarization (e.g., 'openai-resp')"
     }
   },
   "configSchema": {
@@ -56,6 +64,12 @@
       "largeFileThresholdTokens": {
         "type": "integer",
         "minimum": 1000
+      },
+      "summaryModel": {
+        "type": "string"
+      },
+      "summaryProvider": {
+        "type": "string"
       }
     }
   }
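The two new manifest fields are what the plugin-config override path in `index.ts` reads. A hypothetical config fragment (only the `summaryModel`/`summaryProvider` keys are confirmed by this diff; the surrounding nesting of plugin config in OpenClaw's config file is an assumption):

```json
{
  "plugins": {
    "@martian-engineering/lossless-claw": {
      "summaryModel": "openai-resp/gpt-5.4",
      "summaryProvider": "openai-resp"
    }
  }
}
```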
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@martian-engineering/lossless-claw",
-  "version": "0.2.4",
+  "version": "0.2.6",
   "description": "Lossless Context Management plugin for OpenClaw — DAG-based conversation summarization with incremental compaction",
   "type": "module",
   "main": "index.ts",
package/src/engine.ts CHANGED
@@ -1509,6 +1509,10 @@ export class LcmContextEngine implements ContextEngine {
       observedTokens !== undefined
         ? await this.compaction.evaluate(conversationId, tokenBudget, observedTokens)
         : await this.compaction.evaluate(conversationId, tokenBudget);
+    const targetTokens =
+      params.compactionTarget === "threshold" ? decision.threshold : tokenBudget;
+    const liveContextStillExceedsTarget =
+      observedTokens !== undefined && observedTokens >= targetTokens;
 
     if (!forceCompaction && !decision.shouldCompact) {
       return {
@@ -1533,27 +1537,28 @@
     });
 
     return {
-      ok: true,
+      ok: sweepResult.actionTaken || !liveContextStillExceedsTarget,
       compacted: sweepResult.actionTaken,
       reason: sweepResult.actionTaken
         ? "compacted"
         : manualCompactionRequested
           ? "nothing to compact"
-          : "already under target",
+          : liveContextStillExceedsTarget
+            ? "live context still exceeds target"
+            : "already under target",
       result: {
         tokensBefore: decision.currentTokens,
         tokensAfter: sweepResult.tokensAfter,
         details: {
           rounds: sweepResult.actionTaken ? 1 : 0,
-          targetTokens:
-            params.compactionTarget === "threshold" ? decision.threshold : tokenBudget,
+          targetTokens,
         },
       },
     };
   }
 
   // When forced, use the token budget as target
-  const targetTokens = forceCompaction
+  const convergenceTargetTokens = forceCompaction
     ? tokenBudget
     : params.compactionTarget === "threshold"
       ? decision.threshold
@@ -1562,7 +1567,7 @@
   const compactResult = await this.compaction.compactUntilUnder({
     conversationId,
     tokenBudget,
-    targetTokens,
+    targetTokens: convergenceTargetTokens,
     ...(observedTokens !== undefined ? { currentTokens: observedTokens } : {}),
     summarize,
   });
@@ -1581,7 +1586,7 @@
       tokensAfter: compactResult.finalTokens,
       details: {
         rounds: compactResult.rounds,
-        targetTokens,
+        targetTokens: convergenceTargetTokens,
       },
     },
   };
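The engine change above makes the sweep path honest: `ok` is no longer unconditionally `true`, and a no-op sweep that leaves live context over the target now reports failure with its own reason. A small pure-function sketch of that outcome logic (hypothetical names; the real logic lives inline in `LcmContextEngine`):

```typescript
// Sketch of the new sweep outcome: ok fails only when nothing was compacted
// AND the observed live context still meets or exceeds the target tokens.
type SweepOutcome = { ok: boolean; reason: string };

function sweepOutcome(
  actionTaken: boolean,
  manualCompactionRequested: boolean,
  observedTokens: number | undefined,
  targetTokens: number,
): SweepOutcome {
  const stillExceeds = observedTokens !== undefined && observedTokens >= targetTokens;
  return {
    ok: actionTaken || !stillExceeds,
    reason: actionTaken
      ? "compacted"
      : manualCompactionRequested
        ? "nothing to compact"
        : stillExceeds
          ? "live context still exceeds target"
          : "already under target",
  };
}

// A no-op sweep with 120k observed tokens against a 100k target:
console.log(sweepOutcome(false, false, 120_000, 100_000));
// → ok: false, reason: "live context still exceeds target"
```

The `targetTokens` rename to `convergenceTargetTokens` in the later hunks is just disambiguation: the early-return sweep branch and the `compactUntilUnder` convergence loop now compute their targets independently.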