@martian-engineering/lossless-claw 0.5.2 → 0.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +49 -11
- package/docs/configuration.md +44 -0
- package/openclaw.plugin.json +114 -0
- package/package.json +2 -1
- package/skills/lossless-claw/SKILL.md +33 -0
- package/skills/lossless-claw/references/architecture.md +52 -0
- package/skills/lossless-claw/references/config.md +263 -0
- package/skills/lossless-claw/references/diagnostics.md +79 -0
- package/skills/lossless-claw/references/recall-tools.md +55 -0
- package/skills/lossless-claw/references/session-lifecycle.md +59 -0
- package/src/assembler.ts +321 -34
- package/src/compaction.ts +220 -19
- package/src/db/config.ts +74 -21
- package/src/db/migration.ts +50 -13
- package/src/engine.ts +742 -133
- package/src/plugin/index.ts +156 -73
- package/src/plugin/lcm-command.ts +759 -0
- package/src/plugin/lcm-doctor-apply.ts +546 -0
- package/src/plugin/lcm-doctor-shared.ts +210 -0
- package/src/store/conversation-store.ts +60 -21
- package/src/store/parse-utc-timestamp.ts +25 -0
- package/src/store/summary-store.ts +460 -11
- package/src/summarize.ts +553 -224
- package/src/tools/lcm-expand-query-tool.ts +195 -59
- package/src/tools/lcm-expansion-recursion-guard.ts +87 -0
- package/src/types.ts +1 -0
package/README.md
CHANGED

@@ -7,6 +7,7 @@ Lossless Context Management plugin for [OpenClaw](https://github.com/openclaw/op
 - [What it does](#what-it-does)
 - [Quick start](#quick-start)
 - [Configuration](#configuration)
+- [Commands And Skill](#commands-and-skill)
 - [Documentation](#documentation)
 - [Development](#development)
 - [License](#license)

@@ -27,6 +28,16 @@ Nothing is lost. Raw messages stay in the database. Summaries link back to their
 
 **It feels like talking to an agent that never forgets. Because it doesn't. In normal operation, you'll never need to think about compaction again.**
 
+## Commands And Skill
+
+The plugin now ships a bundled `lossless-claw` skill plus a small native command surface:
+
+- `/lcm` shows version, enablement/selection state, DB path and size, summary counts, and summary-health status
+- `/lcm doctor` scans for broken or truncated summaries
+- `/lossless` is an alias for `/lcm` on native command surfaces
+
+The bundled skill focuses on configuration, diagnostics, architecture, and recall-tool usage. Its reference set lives under `skills/lossless-claw/references/`.
+
 ## Quick start
 
 ### Prerequisites

@@ -59,6 +70,8 @@ openclaw plugins install --link /path/to/lossless-claw
 
 The install command records the plugin, enables it, and applies compatible slot selection (including `contextEngine` when applicable).
 
+> **Note:** If your OpenClaw config uses `plugins.allow`, make sure both `lossless-claw` and any active plugins you rely on remain allowlisted. In some setups, narrowing the allowlist can prevent plugin-backed integrations from loading, even if `lossless-claw` itself is installed correctly. Restart the gateway after plugin config changes.
+
 ### Configure OpenClaw
 
 In most cases, no manual JSON edits are needed after `openclaw plugins install`.

@@ -92,14 +105,17 @@ Add a `lossless-claw` entry under `plugins.entries` in your OpenClaw config:
       "lossless-claw": {
         "enabled": true,
         "config": {
-          "freshTailCount":
+          "freshTailCount": 64,
+          "leafChunkTokens": 80000,
+          "newSessionRetainDepth": 2,
           "contextThreshold": 0.75,
-          "incrementalMaxDepth":
+          "incrementalMaxDepth": 1,
           "ignoreSessionPatterns": [
            "agent:*:cron:**"
           ],
           "summaryModel": "anthropic/claude-haiku-4-5",
-          "expansionModel": "anthropic/claude-haiku-4-5"
+          "expansionModel": "anthropic/claude-haiku-4-5",
+          "delegationTimeoutMs": 300000
         }
       }
     }

@@ -107,7 +123,7 @@ Add a `lossless-claw` entry under `plugins.entries` in your OpenClaw config:
 }
 ```
 
-`summaryModel` and `summaryProvider` let you pin compaction summarization to a cheaper or faster model than your main OpenClaw session model. `expansionModel` does the same for `lcm_expand_query` sub-agent calls (drilling into summaries to recover detail). When unset,
+`leafChunkTokens` controls how many source tokens can accumulate in a leaf compaction chunk before summarization is triggered. The default is `20000`, but quota-limited summary providers may benefit from a larger value to reduce compaction frequency. `summaryModel` and `summaryProvider` let you pin compaction summarization to a cheaper or faster model than your main OpenClaw session model. `expansionModel` does the same for `lcm_expand_query` sub-agent calls (drilling into summaries to recover detail). `delegationTimeoutMs` controls how long `lcm_expand_query` waits for that delegated sub-agent to finish before returning a timeout error; it defaults to `120000` (120s). When unset, the model settings still fall back to OpenClaw's configured default model/provider. See [Expansion model override requirements](#expansion-model-override-requirements) for the required `subagent` trust policy when using `expansionModel`.
 
 ### Environment variables
 

@@ -119,11 +135,12 @@ Add a `lossless-claw` entry under `plugins.entries` in your OpenClaw config:
 | `LCM_STATELESS_SESSION_PATTERNS` | `""` | Comma-separated glob patterns for session keys that may read from LCM but never write to it |
 | `LCM_SKIP_STATELESS_SESSIONS` | `true` | Enable stateless-session write skipping for matching session keys |
 | `LCM_CONTEXT_THRESHOLD` | `0.75` | Fraction of context window that triggers compaction (0.0–1.0) |
-| `LCM_FRESH_TAIL_COUNT` | `
+| `LCM_FRESH_TAIL_COUNT` | `64` | Number of recent messages protected from compaction |
+| `LCM_NEW_SESSION_RETAIN_DEPTH` | `2` | Context retained after `/new` (`-1` keeps all context, `2` keeps d2+) |
 | `LCM_LEAF_MIN_FANOUT` | `8` | Minimum raw messages per leaf summary |
 | `LCM_CONDENSED_MIN_FANOUT` | `4` | Minimum summaries per condensed node |
 | `LCM_CONDENSED_MIN_FANOUT_HARD` | `2` | Relaxed fanout for forced compaction sweeps |
-| `LCM_INCREMENTAL_MAX_DEPTH` | `
+| `LCM_INCREMENTAL_MAX_DEPTH` | `1` | How deep incremental compaction goes (0 = leaf only, 1 = one condensed pass, -1 = unlimited) |
 | `LCM_LEAF_CHUNK_TOKENS` | `20000` | Max source tokens per leaf compaction chunk |
 | `LCM_LEAF_TARGET_TOKENS` | `1200` | Target token count for leaf summaries |
 | `LCM_CONDENSED_TARGET_TOKENS` | `2000` | Target token count for condensed summaries |

@@ -136,7 +153,7 @@ Add a `lossless-claw` entry under `plugins.entries` in your OpenClaw config:
 | `LCM_SUMMARY_BASE_URL` | *(from OpenClaw / provider default)* | Base URL override for summarization API calls |
 | `LCM_EXPANSION_MODEL` | *(from OpenClaw)* | Model override for `lcm_expand_query` sub-agent (e.g. `anthropic/claude-haiku-4-5`) |
 | `LCM_EXPANSION_PROVIDER` | *(from OpenClaw)* | Provider override for `lcm_expand_query` sub-agent |
-| `
+| `LCM_DELEGATION_TIMEOUT_MS` | `120000` | Max time to wait for delegated `lcm_expand_query` sub-agent completion |
 | `LCM_PRUNE_HEARTBEAT_OK` | `false` | Retroactively delete `HEARTBEAT_OK` turn cycles from LCM storage |
 
 ### Expansion model override requirements

@@ -177,8 +194,10 @@ Plugin config equivalents:
 - `ignoreSessionPatterns`
 - `statelessSessionPatterns`
 - `skipStatelessSessions`
+- `newSessionRetainDepth`
 - `summaryModel`
 - `summaryProvider`
+- `delegationTimeoutMs`
 
 Environment variables still win over plugin config when both are set.
 

@@ -196,17 +215,36 @@ If `summaryModel` already includes a provider prefix such as `anthropic/claude-s
 ### Recommended starting configuration
 
 ```
-LCM_FRESH_TAIL_COUNT=
-
+LCM_FRESH_TAIL_COUNT=64
+LCM_LEAF_CHUNK_TOKENS=20000
+LCM_INCREMENTAL_MAX_DEPTH=1
 LCM_CONTEXT_THRESHOLD=0.75
 ```
 
-- **freshTailCount=
-- **
+- **freshTailCount=64** protects the last 64 messages from compaction, giving the model more recent context for continuity.
+- **leafChunkTokens=20000** limits how large each leaf compaction chunk can grow before LCM summarizes it. Increase this when your summary provider is quota-limited and frequent leaf compactions are exhausting that quota.
+- **incrementalMaxDepth=1** runs one condensed pass after each leaf compaction by default. Set to `0` for leaf-only behavior, a larger positive integer for a deeper cap, or `-1` for unlimited cascading.
- **contextThreshold=0.75** triggers compaction when context reaches 75% of the model's window, leaving headroom for the model's response.
 
 ### Session exclusion patterns
 
+### Session reset semantics
+
+Lossless-claw distinguishes OpenClaw's two session-reset commands:
+
+- `/new` keeps the active conversation row and all stored summaries, but prunes `context_items` so the next turn rebuilds context from retained summaries instead of the fresh tail.
+- `/reset` archives the active conversation row and creates a new active row for the same stable `sessionKey`, giving the next turn a clean LCM conversation while preserving prior history.
+
+`newSessionRetainDepth` (or `LCM_NEW_SESSION_RETAIN_DEPTH`) controls how much summary structure survives `/new`:
+
+- `-1`: keep all existing context items
+- `0`: keep all summaries, drop only fresh-tail messages
+- `1`: keep d1+ summaries
+- `2`: keep d2+ summaries; recommended default
+- `3+`: keep only deeper, more abstract summaries
+
+Lossless-claw currently applies these storage semantics through the `before_reset` hook only. User-facing confirmation text after `/new` or `/reset` must be emitted by OpenClaw's command handlers.
+
 Use `ignoreSessionPatterns` or `LCM_IGNORE_SESSION_PATTERNS` to keep low-value sessions completely out of LCM. Matching sessions do not create conversations, do not store messages, and do not participate in compaction or delegated expansion grants.
 
 Pattern rules:
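The README diff above states that environment variables win over plugin config when both are set. A minimal TypeScript sketch of that precedence rule, with illustrative names rather than the plugin's actual internals:

```typescript
// Illustrative sketch (not the plugin's code) of the documented precedence:
// environment variable > plugin config > built-in default.
function resolveSetting(
  envValue: string | undefined,
  pluginValue: number | undefined,
  defaultValue: number,
): number {
  if (envValue !== undefined && envValue !== "") {
    const parsed = Number(envValue);
    if (!Number.isNaN(parsed)) return parsed; // env var wins when parseable
  }
  return pluginValue ?? defaultValue; // then plugin config, then default
}

// e.g. LCM_FRESH_TAIL_COUNT=100 overrides freshTailCount: 32 (default 64)
const freshTailCount = resolveSetting("100", 32, 64); // 100
```

The same three-tier resolution applies to every setting that has both an environment-variable and a plugin-config spelling.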
package/docs/configuration.md
CHANGED

@@ -26,6 +26,7 @@ Set recommended environment variables:
 
 ```bash
 export LCM_FRESH_TAIL_COUNT=32
+export LCM_NEW_SESSION_RETAIN_DEPTH=2
 export LCM_INCREMENTAL_MAX_DEPTH=-1
 ```
 

@@ -51,6 +52,18 @@ For most use cases, 0.75 is a good balance.
 
 For coding conversations with tool calls (which generate many messages per logical turn), 32 is recommended.
 
+### /new retain depth
+
+`LCM_NEW_SESSION_RETAIN_DEPTH` (default `2`) controls what survives OpenClaw's `/new` command.
+
+- `-1` keeps all existing context items, making `/new` a transcript-only reset from lossless-claw's perspective.
+- `0` drops only fresh-tail message items and keeps all summaries.
+- `1` drops d0 summaries and keeps d1+.
+- `2` drops d0 and d1 summaries, keeping d2+ project-arc context. This is the recommended default.
+- `3+` keeps only deeper, more abstract summaries.
+
+`/new` never deletes the summaries themselves. It only prunes `context_items`, so the summary DAG remains available for later retrieval and expansion.
+
 ### Leaf fanout
 
 `LCM_LEAF_MIN_FANOUT` (default `8`) is the minimum number of raw messages that must be available outside the fresh tail before a leaf pass runs.

@@ -91,6 +104,25 @@ The actual summary size depends on the LLM's output; these values are guidelines
 - Smaller chunks create summaries more frequently from less material.
 - This also affects the condensed minimum input threshold (10% of this value).
 
+### Maximum assembly token budget
+
+`LCM_MAX_ASSEMBLY_TOKEN_BUDGET` (default: none) caps the token budget used for context assembly and compaction threshold evaluation. When set, this takes precedence over both the 128k fallback and runtime-provided budgets.
+
+Set this if you're using a model with a smaller context window:
+
+- **8k models:** `LCM_MAX_ASSEMBLY_TOKEN_BUDGET=7000`
+- **32k models:** `LCM_MAX_ASSEMBLY_TOKEN_BUDGET=30000`
+- **128k+ models:** No need to set (128k fallback is appropriate)
+
+### Summary size cap
+
+`LCM_SUMMARY_MAX_OVERAGE_FACTOR` (default: `3`) controls the hard ceiling on summary sizes relative to the target tokens (`leafTargetTokens` for leaf summaries, `condensedTargetTokens` for condensed summaries).
+
+If a summary exceeds `overage_factor * target_tokens`, it is deterministically truncated. A warning is logged when any summary exceeds `1.5 * target_tokens`.
+
+- **Lower values** (e.g., 2) enforce tighter summaries but may truncate more often with weaker summarizer models.
+- **Higher values** (e.g., 5) allow more LLM flexibility but risk storing oversized summaries.
+
 ## Model selection
 
 LCM uses the same model as the parent OpenClaw session for summarization by default. You can override this:

@@ -113,10 +145,22 @@ When more than one source is present, compaction summarization resolves in this
 
 If `summaryModel` already includes a provider prefix such as `anthropic/claude-sonnet-4-20250514`, `summaryProvider` is ignored for that choice.
 
+For delegated `lcm_expand_query` runs, you can extend the sub-agent wait window with `delegationTimeoutMs` (plugin config) or `LCM_DELEGATION_TIMEOUT_MS` (environment variable). The default is `120000` milliseconds.
+
 ## Session controls
 
 ### Excluding sessions entirely
 
+### `/new` vs `/reset`
+
+Lossless-claw treats the two OpenClaw reset commands differently:
+
+- `/new` keeps the active LCM conversation and prunes active context according to `newSessionRetainDepth`.
+- `/reset` archives the active conversation row and creates a fresh active row for the same stable `sessionKey`.
+
+This preserves lossless history while still giving users a real clean-slate command.
+OpenClaw's command handlers still own the user-facing post-command disclosure text; lossless-claw applies only the underlying storage transition through `before_reset`.
+
 Use `ignoreSessionPatterns` or `LCM_IGNORE_SESSION_PATTERNS` to keep low-value sessions completely out of LCM. Matching sessions do not create conversations, do not store messages, and do not participate in compaction or delegated expansion grants.
 
 - Matching uses the full session key.
package/openclaw.plugin.json
CHANGED

@@ -1,5 +1,8 @@
 {
   "id": "lossless-claw",
+  "skills": [
+    "skills/lossless-claw"
+  ],
   "uiHints": {
     "contextThreshold": {
       "label": "Context Threshold",

@@ -13,6 +16,30 @@
       "label": "Fresh Tail Count",
       "help": "Number of recent messages protected from compaction"
     },
+    "leafChunkTokens": {
+      "label": "Leaf Chunk Tokens",
+      "help": "Maximum source tokens per leaf compaction chunk before summarization"
+    },
+    "bootstrapMaxTokens": {
+      "label": "Bootstrap Max Tokens",
+      "help": "Maximum raw parent-history tokens imported into a brand-new conversation bootstrap; oldest turns are dropped first"
+    },
+    "newSessionRetainDepth": {
+      "label": "New Session Retain Depth",
+      "help": "Context retained after /new (-1 keeps all context, 2 keeps d2+)"
+    },
+    "leafTargetTokens": {
+      "label": "Leaf Target Tokens",
+      "help": "Target token count for leaf summaries"
+    },
+    "condensedTargetTokens": {
+      "label": "Condensed Target Tokens",
+      "help": "Target token count for condensed summaries"
+    },
+    "maxExpandTokens": {
+      "label": "Max Expand Tokens",
+      "help": "Token cap for lcm_expand_query expansion calls"
+    },
     "dbPath": {
       "label": "Database Path",
       "help": "Path to LCM SQLite database (default: ~/.openclaw/lcm.db)"

@@ -37,6 +64,14 @@
       "label": "Summary Provider",
       "help": "Provider override used only when summaryModel is a bare model name (e.g., 'openai-resp')"
     },
+    "largeFileSummaryModel": {
+      "label": "Large File Summary Model",
+      "help": "Model override for large-file summarization"
+    },
+    "largeFileSummaryProvider": {
+      "label": "Large File Summary Provider",
+      "help": "Provider override for large-file summarization"
+    },
     "expansionModel": {
       "label": "Expansion Model",
       "help": "Model override for lcm_expand_query sub-agent (e.g., 'anthropic/claude-haiku-4-5')"

@@ -44,6 +79,30 @@
     "expansionProvider": {
       "label": "Expansion Provider",
       "help": "Provider override for lcm_expand_query sub-agent (e.g., 'anthropic')"
+    },
+    "delegationTimeoutMs": {
+      "label": "Delegation Timeout (ms)",
+      "help": "Maximum time to wait for delegated lcm_expand_query sub-agent completion before timing out"
+    },
+    "maxAssemblyTokenBudget": {
+      "label": "Max Assembly Token Budget",
+      "help": "Hard ceiling for assembly token budget — caps runtime-provided and fallback budgets. Set for smaller context-window models (e.g., 30000 for 32k models)"
+    },
+    "summaryMaxOverageFactor": {
+      "label": "Summary Max Overage Factor",
+      "help": "Maximum allowed overage factor for summaries relative to target tokens (default 3). Summaries exceeding this are deterministically truncated."
+    },
+    "customInstructions": {
+      "label": "Custom Instructions",
+      "help": "Natural language instructions injected into all summarization prompts (e.g., formatting rules, tone control)"
+    },
+    "timezone": {
+      "label": "Timezone",
+      "help": "IANA timezone used for summary timestamps"
+    },
+    "pruneHeartbeatOk": {
+      "label": "Prune HEARTBEAT_OK",
+      "help": "Retroactively delete HEARTBEAT_OK turn cycles from LCM storage"
     }
   },
   "configSchema": {

@@ -66,6 +125,30 @@
       "type": "integer",
       "minimum": 1
     },
+    "leafChunkTokens": {
+      "type": "integer",
+      "minimum": 1
+    },
+    "bootstrapMaxTokens": {
+      "type": "integer",
+      "minimum": 1
+    },
+    "newSessionRetainDepth": {
+      "type": "integer",
+      "minimum": -1
+    },
+    "leafTargetTokens": {
+      "type": "integer",
+      "minimum": 1
+    },
+    "condensedTargetTokens": {
+      "type": "integer",
+      "minimum": 1
+    },
+    "maxExpandTokens": {
+      "type": "integer",
+      "minimum": 1
+    },
     "leafMinFanout": {
       "type": "integer",
       "minimum": 2

@@ -106,11 +189,42 @@
     "summaryProvider": {
       "type": "string"
     },
+    "largeFileSummaryModel": {
+      "type": "string"
+    },
+    "largeFileSummaryProvider": {
+      "type": "string"
+    },
     "expansionModel": {
       "type": "string"
     },
     "expansionProvider": {
       "type": "string"
+    },
+    "delegationTimeoutMs": {
+      "type": "integer",
+      "minimum": 1
+    },
+    "maxAssemblyTokenBudget": {
+      "type": "integer",
+      "minimum": 1000
+    },
+    "summaryMaxOverageFactor": {
+      "type": "number",
+      "minimum": 1
+    },
+    "customInstructions": {
+      "type": "string"
+    },
+    "timezone": {
+      "type": "string"
+    },
+    "pruneHeartbeatOk": {
+      "type": "boolean"
+    },
+    "databasePath": {
+      "description": "Path to LCM SQLite database (alias for dbPath)",
+      "type": "string"
     }
   }
 }
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@martian-engineering/lossless-claw",
-  "version": "0.
+  "version": "0.6.0",
   "description": "Lossless Context Management plugin for OpenClaw — DAG-based conversation summarization with incremental compaction",
   "type": "module",
   "main": "index.ts",

@@ -24,6 +24,7 @@
   "files": [
     "index.ts",
     "src/**/*.ts",
+    "skills/",
     "openclaw.plugin.json",
     "docs/",
     "README.md",
package/skills/lossless-claw/SKILL.md
ADDED

@@ -0,0 +1,33 @@
+---
+name: lossless-claw
+description: Configure, diagnose, and use lossless-claw effectively in OpenClaw, with emphasis on key settings, summary health, and recall-tool usage.
+---
+
+# Lossless Claw
+
+Use this skill when the task is about operating, tuning, or debugging the `lossless-claw` OpenClaw plugin.
+
+Start here:
+
+1. Confirm whether the user needs configuration help, diagnostics, recall-tool guidance, or session-lifecycle guidance.
+2. If they need a quick health check, tell them to run `/lossless` (`/lcm` is the shorter alias).
+3. If they suspect summary corruption or truncation, use `/lossless doctor`.
+4. If they ask how `/new` or `/reset` interacts with LCM, read the session-lifecycle reference before answering.
+5. Load the relevant reference file instead of improvising details from memory.
+
+Reference map:
+
+- Configuration (complete config surface on current main): `references/config.md`
+- Internal model and data flow: `references/architecture.md`
+- Diagnostics and summary-health workflow: `references/diagnostics.md`
+- Recall tools and when to use them: `references/recall-tools.md`
+- `/new` and `/reset` behavior with current lossless-claw session mapping: `references/session-lifecycle.md`
+
+Working rules:
+
+- Prioritize explaining why a setting matters, not just what it does.
+- Prefer the native plugin command surface for MVP workflows (`/lossless`, with `/lcm` as alias).
+- Do not assume the Go TUI is installed.
+- Do not recommend advanced rewrite/backfill/transplant/dissolve flows unless the user explicitly asks for non-MVP internals.
+- For exact evidence retrieval from compacted history, guide the user toward recall tools instead of guessing from summaries.
+- When users compare `/lossless` to `/status`, explain that they report different layers: `/lossless` shows LCM-side frontier/summary metrics, while `/status` shows the last assembled runtime prompt snapshot.
package/skills/lossless-claw/references/architecture.md
ADDED

@@ -0,0 +1,52 @@
+# Architecture
+
+`lossless-claw` stores full conversation history in SQLite and uses summaries to keep active context within model limits.
+
+## Core flow
+
+1. Messages are persisted into the LCM database.
+2. Older messages are compacted into leaf summaries.
+3. Leaf summaries can be condensed into higher-depth summaries.
+4. Context assembly mixes summaries with the fresh raw tail.
+5. Recall tools let agents drill back into compacted material when precision matters.
+
+## Mental model
+
+Think of LCM as two layers:
+
+- durable storage of the full conversation record
+- a summary DAG used to present compacted context efficiently
+
+The summary DAG is not the source of truth. Raw messages remain the ground truth.
+
+## Why summary quality matters
+
+Bad summaries do not stay local:
+
+- poor leaf summaries degrade condensed summaries
+- poor condensed summaries degrade future recall
+- aggressive truncation reduces the precision of downstream answers
+
+That is why configuration choices around compaction thresholds and summary model quality matter operationally.
+
+## What `/lcm` tells you
+
+The MVP command surface focuses on operational facts:
+
+- package version
+- whether the plugin is enabled and selected
+- database path and size
+- summary counts
+- total summarized source-token coverage when available
+- broken or truncated summary presence
+
+## What `/lcm doctor` tells you
+
+The MVP doctor flow is diagnostic only.
+
+It looks for known summary-health markers that indicate:
+
+- deterministic fallback summaries
+- truncated summary artifacts near the end of stored content
+
+This gives users one place to answer the question “is my summary graph healthy?” without introducing a broader mutation surface.
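The architecture notes above warn that aggressive truncation degrades downstream recall; the `summaryMaxOverageFactor` setting in the configuration diff is the knob that governs when truncation happens. A hedged sketch of that cap, using a naive whitespace token count purely for illustration (the plugin's real tokenizer and function names will differ):

```typescript
// Illustrative cap: a summary longer than overageFactor * targetTokens is
// deterministically truncated; anything past 1.5 * targetTokens is flagged
// for a warning, mirroring the documented behavior.
function capSummary(
  text: string,
  targetTokens: number,
  overageFactor = 3,
): { text: string; truncated: boolean; warn: boolean } {
  const tokens = text.split(/\s+/).filter(Boolean); // naive token count
  const warn = tokens.length > 1.5 * targetTokens;
  const cap = Math.floor(overageFactor * targetTokens);
  if (tokens.length <= cap) return { text, truncated: false, warn };
  return { text: tokens.slice(0, cap).join(" "), truncated: true, warn };
}
```

Lowering the factor tightens summaries at the cost of more frequent truncation with weaker summarizer models, which is exactly the quality trade-off the architecture reference describes.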
|