@martian-engineering/lossless-claw 0.5.3 → 0.6.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -7,6 +7,7 @@ Lossless Context Management plugin for [OpenClaw](https://github.com/openclaw/op
  - [What it does](#what-it-does)
  - [Quick start](#quick-start)
  - [Configuration](#configuration)
+ - [Commands And Skill](#commands-and-skill)
  - [Documentation](#documentation)
  - [Development](#development)
  - [License](#license)
@@ -27,6 +28,16 @@ Nothing is lost. Raw messages stay in the database. Summaries link back to their
 
  **It feels like talking to an agent that never forgets. Because it doesn't. In normal operation, you'll never need to think about compaction again.**
 
+ ## Commands And Skill
+
+ The plugin now ships a bundled `lossless-claw` skill plus a small native command surface:
+
+ - `/lcm` shows version, enablement/selection state, DB path and size, summary counts, and summary-health status
+ - `/lcm doctor` scans for broken or truncated summaries
+ - `/lossless` is an alias for `/lcm` on native command surfaces
+
+ The bundled skill focuses on configuration, diagnostics, architecture, and recall-tool usage. Its reference set lives under `skills/lossless-claw/references/`.
+
  ## Quick start
 
  ### Prerequisites
@@ -96,6 +107,7 @@ Add a `lossless-claw` entry under `plugins.entries` in your OpenClaw config:
  "config": {
  "freshTailCount": 64,
  "leafChunkTokens": 80000,
+ "newSessionRetainDepth": 2,
  "contextThreshold": 0.75,
  "incrementalMaxDepth": 1,
  "ignoreSessionPatterns": [
@@ -124,6 +136,7 @@ Add a `lossless-claw` entry under `plugins.entries` in your OpenClaw config:
  | `LCM_SKIP_STATELESS_SESSIONS` | `true` | Enable stateless-session write skipping for matching session keys |
  | `LCM_CONTEXT_THRESHOLD` | `0.75` | Fraction of context window that triggers compaction (0.0–1.0) |
  | `LCM_FRESH_TAIL_COUNT` | `64` | Number of recent messages protected from compaction |
+ | `LCM_NEW_SESSION_RETAIN_DEPTH` | `2` | Context retained after `/new` (`-1` keeps all context, `2` keeps d2+) |
  | `LCM_LEAF_MIN_FANOUT` | `8` | Minimum raw messages per leaf summary |
  | `LCM_CONDENSED_MIN_FANOUT` | `4` | Minimum summaries per condensed node |
  | `LCM_CONDENSED_MIN_FANOUT_HARD` | `2` | Relaxed fanout for forced compaction sweeps |
@@ -141,7 +154,6 @@ Add a `lossless-claw` entry under `plugins.entries` in your OpenClaw config:
  | `LCM_EXPANSION_MODEL` | *(from OpenClaw)* | Model override for `lcm_expand_query` sub-agent (e.g. `anthropic/claude-haiku-4-5`) |
  | `LCM_EXPANSION_PROVIDER` | *(from OpenClaw)* | Provider override for `lcm_expand_query` sub-agent |
  | `LCM_DELEGATION_TIMEOUT_MS` | `120000` | Max time to wait for delegated `lcm_expand_query` sub-agent completion |
- | `LCM_AUTOCOMPACT_DISABLED` | `false` | Disable automatic compaction after turns |
  | `LCM_PRUNE_HEARTBEAT_OK` | `false` | Retroactively delete `HEARTBEAT_OK` turn cycles from LCM storage |
 
  ### Expansion model override requirements
@@ -182,6 +194,7 @@ Plugin config equivalents:
  - `ignoreSessionPatterns`
  - `statelessSessionPatterns`
  - `skipStatelessSessions`
+ - `newSessionRetainDepth`
  - `summaryModel`
  - `summaryProvider`
  - `delegationTimeoutMs`
@@ -215,6 +228,23 @@ LCM_CONTEXT_THRESHOLD=0.75
 
  ### Session exclusion patterns
 
+ ### Session reset semantics
+
+ Lossless-claw distinguishes OpenClaw's two session-reset commands:
+
+ - `/new` keeps the active conversation row and all stored summaries, but prunes `context_items` so the next turn rebuilds context from retained summaries instead of the fresh tail.
+ - `/reset` archives the active conversation row and creates a new active row for the same stable `sessionKey`, giving the next turn a clean LCM conversation while preserving prior history.
+
+ `newSessionRetainDepth` (or `LCM_NEW_SESSION_RETAIN_DEPTH`) controls how much summary structure survives `/new`:
+
+ - `-1`: keep all existing context items
+ - `0`: keep all summaries, drop only fresh-tail messages
+ - `1`: keep d1+ summaries
+ - `2`: keep d2+ summaries; recommended default
+ - `3+`: keep only deeper, more abstract summaries
+
+ Lossless-claw currently applies these storage semantics through the `before_reset` hook only. User-facing confirmation text after `/new` or `/reset` must be emitted by OpenClaw's command handlers.
+
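The retain-depth rule above can be expressed as a simple filter. The following is a minimal sketch only, assuming a hypothetical item shape in which each context item carries a `kind` (`message` or `summary`) and summaries carry a `depth`; this is not lossless-claw's actual schema or code.

```python
def retained_after_new(items, retain_depth):
    """Sketch of the /new retain-depth rule (illustrative field names)."""
    if retain_depth == -1:
        return list(items)  # -1: keep every existing context item
    kept = []
    for item in items:
        if item["kind"] == "message":
            continue  # fresh-tail messages are dropped for any retain_depth >= 0
        if item["depth"] >= retain_depth:
            kept.append(item)  # keep summaries at or above the retain depth
    return kept
```

With `retain_depth=0` this keeps all summaries and drops only raw messages; with `retain_depth=2` only d2+ summaries survive, matching the table above.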
  Use `ignoreSessionPatterns` or `LCM_IGNORE_SESSION_PATTERNS` to keep low-value sessions completely out of LCM. Matching sessions do not create conversations, do not store messages, and do not participate in compaction or delegated expansion grants.
 
  Pattern rules:
@@ -26,6 +26,7 @@ Set recommended environment variables:
 
  ```bash
  export LCM_FRESH_TAIL_COUNT=32
+ export LCM_NEW_SESSION_RETAIN_DEPTH=2
  export LCM_INCREMENTAL_MAX_DEPTH=-1
  ```
 
@@ -51,6 +52,18 @@ For most use cases, 0.75 is a good balance.
 
  For coding conversations with tool calls (which generate many messages per logical turn), 32 is recommended.
 
+ ### /new retain depth
+
+ `LCM_NEW_SESSION_RETAIN_DEPTH` (default `2`) controls what survives OpenClaw's `/new` command.
+
+ - `-1` keeps all existing context items, making `/new` a transcript-only reset from lossless-claw's perspective.
+ - `0` drops only fresh-tail message items and keeps all summaries.
+ - `1` drops d0 summaries and keeps d1+.
+ - `2` drops d0 and d1 summaries, keeping d2+ project-arc context. This is the recommended default.
+ - `3+` keeps only deeper, more abstract summaries.
+
+ `/new` never deletes the summaries themselves. It only prunes `context_items`, so the summary DAG remains available for later retrieval and expansion.
+
  ### Leaf fanout
 
  `LCM_LEAF_MIN_FANOUT` (default `8`) is the minimum number of raw messages that must be available outside the fresh tail before a leaf pass runs.
@@ -138,6 +151,16 @@ For delegated `lcm_expand_query` runs, you can extend the sub-agent wait window
 
  ### Excluding sessions entirely
 
+ ### `/new` vs `/reset`
+
+ Lossless-claw treats the two OpenClaw reset commands differently:
+
+ - `/new` keeps the active LCM conversation and prunes active context according to `newSessionRetainDepth`.
+ - `/reset` archives the active conversation row and creates a fresh active row for the same stable `sessionKey`.
+
+ This preserves lossless history while still giving users a real clean-slate command.
+ OpenClaw's command handlers still own the user-facing post-command disclosure text; lossless-claw applies only the underlying storage transition through `before_reset`.
+
  Use `ignoreSessionPatterns` or `LCM_IGNORE_SESSION_PATTERNS` to keep low-value sessions completely out of LCM. Matching sessions do not create conversations, do not store messages, and do not participate in compaction or delegated expansion grants.
 
  - Matching uses the full session key.
@@ -1,5 +1,8 @@
  {
  "id": "lossless-claw",
+ "skills": [
+ "skills/lossless-claw"
+ ],
  "uiHints": {
  "contextThreshold": {
  "label": "Context Threshold",
@@ -17,6 +20,26 @@
  "label": "Leaf Chunk Tokens",
  "help": "Maximum source tokens per leaf compaction chunk before summarization"
  },
+ "bootstrapMaxTokens": {
+ "label": "Bootstrap Max Tokens",
+ "help": "Maximum raw parent-history tokens imported into a brand-new conversation bootstrap; oldest turns are dropped first"
+ },
+ "newSessionRetainDepth": {
+ "label": "New Session Retain Depth",
+ "help": "Context retained after /new (-1 keeps all context, 2 keeps d2+)"
+ },
+ "leafTargetTokens": {
+ "label": "Leaf Target Tokens",
+ "help": "Target token count for leaf summaries"
+ },
+ "condensedTargetTokens": {
+ "label": "Condensed Target Tokens",
+ "help": "Target token count for condensed summaries"
+ },
+ "maxExpandTokens": {
+ "label": "Max Expand Tokens",
+ "help": "Token cap for lcm_expand_query expansion calls"
+ },
  "dbPath": {
  "label": "Database Path",
  "help": "Path to LCM SQLite database (default: ~/.openclaw/lcm.db)"
@@ -41,6 +64,14 @@
  "label": "Summary Provider",
  "help": "Provider override used only when summaryModel is a bare model name (e.g., 'openai-resp')"
  },
+ "largeFileSummaryModel": {
+ "label": "Large File Summary Model",
+ "help": "Model override for large-file summarization"
+ },
+ "largeFileSummaryProvider": {
+ "label": "Large File Summary Provider",
+ "help": "Provider override for large-file summarization"
+ },
  "expansionModel": {
  "label": "Expansion Model",
  "help": "Model override for lcm_expand_query sub-agent (e.g., 'anthropic/claude-haiku-4-5')"
@@ -64,6 +95,14 @@
  "customInstructions": {
  "label": "Custom Instructions",
  "help": "Natural language instructions injected into all summarization prompts (e.g., formatting rules, tone control)"
+ },
+ "timezone": {
+ "label": "Timezone",
+ "help": "IANA timezone used for summary timestamps"
+ },
+ "pruneHeartbeatOk": {
+ "label": "Prune HEARTBEAT_OK",
+ "help": "Retroactively delete HEARTBEAT_OK turn cycles from LCM storage"
  }
  },
  "configSchema": {
@@ -90,6 +129,26 @@
  "type": "integer",
  "minimum": 1
  },
+ "bootstrapMaxTokens": {
+ "type": "integer",
+ "minimum": 1
+ },
+ "newSessionRetainDepth": {
+ "type": "integer",
+ "minimum": -1
+ },
+ "leafTargetTokens": {
+ "type": "integer",
+ "minimum": 1
+ },
+ "condensedTargetTokens": {
+ "type": "integer",
+ "minimum": 1
+ },
+ "maxExpandTokens": {
+ "type": "integer",
+ "minimum": 1
+ },
  "leafMinFanout": {
  "type": "integer",
  "minimum": 2
@@ -130,6 +189,12 @@
  "summaryProvider": {
  "type": "string"
  },
+ "largeFileSummaryModel": {
+ "type": "string"
+ },
+ "largeFileSummaryProvider": {
+ "type": "string"
+ },
  "expansionModel": {
  "type": "string"
  },
@@ -150,6 +215,16 @@
  },
  "customInstructions": {
  "type": "string"
+ },
+ "timezone": {
+ "type": "string"
+ },
+ "pruneHeartbeatOk": {
+ "type": "boolean"
+ },
+ "databasePath": {
+ "description": "Path to LCM SQLite database (alias for dbPath)",
+ "type": "string"
  }
  }
  }
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@martian-engineering/lossless-claw",
- "version": "0.5.3",
+ "version": "0.6.1",
  "description": "Lossless Context Management plugin for OpenClaw — DAG-based conversation summarization with incremental compaction",
  "type": "module",
  "main": "index.ts",
@@ -24,6 +24,7 @@
  "files": [
  "index.ts",
  "src/**/*.ts",
+ "skills/",
  "openclaw.plugin.json",
  "docs/",
  "README.md",
@@ -0,0 +1,33 @@
+ ---
+ name: lossless-claw
+ description: Configure, diagnose, and use lossless-claw effectively in OpenClaw, with emphasis on key settings, summary health, and recall-tool usage.
+ ---
+
+ # Lossless Claw
+
+ Use this skill when the task is about operating, tuning, or debugging the `lossless-claw` OpenClaw plugin.
+
+ Start here:
+
+ 1. Confirm whether the user needs configuration help, diagnostics, recall-tool guidance, or session-lifecycle guidance.
+ 2. If they need a quick health check, tell them to run `/lossless` (`/lcm` is the shorter alias).
+ 3. If they suspect summary corruption or truncation, use `/lossless doctor`.
+ 4. If they ask how `/new` or `/reset` interacts with LCM, read the session-lifecycle reference before answering.
+ 5. Load the relevant reference file instead of improvising details from memory.
+
+ Reference map:
+
+ - Configuration (complete config surface on current main): `references/config.md`
+ - Internal model and data flow: `references/architecture.md`
+ - Diagnostics and summary-health workflow: `references/diagnostics.md`
+ - Recall tools and when to use them: `references/recall-tools.md`
+ - `/new` and `/reset` behavior with current lossless-claw session mapping: `references/session-lifecycle.md`
+
+ Working rules:
+
+ - Prioritize explaining why a setting matters, not just what it does.
+ - Prefer the native plugin command surface for MVP workflows (`/lossless`, with `/lcm` as alias).
+ - Do not assume the Go TUI is installed.
+ - Do not recommend advanced rewrite/backfill/transplant/dissolve flows unless the user explicitly asks for non-MVP internals.
+ - For exact evidence retrieval from compacted history, guide the user toward recall tools instead of guessing from summaries.
+ - When users compare `/lossless` to `/status`, explain that they report different layers: `/lossless` shows LCM-side frontier/summary metrics, while `/status` shows the last assembled runtime prompt snapshot.
@@ -0,0 +1,52 @@
+ # Architecture
+
+ `lossless-claw` stores full conversation history in SQLite and uses summaries to keep active context within model limits.
+
+ ## Core flow
+
+ 1. Messages are persisted into the LCM database.
+ 2. Older messages are compacted into leaf summaries.
+ 3. Leaf summaries can be condensed into higher-depth summaries.
+ 4. Context assembly mixes summaries with the fresh raw tail.
+ 5. Recall tools let agents drill back into compacted material when precision matters.
+
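Step 4 of the core flow can be sketched in a few lines. This is an illustrative sketch only, not the plugin's real API: `assemble_context` and its parameters are hypothetical names standing in for whatever the implementation actually does.

```python
def assemble_context(summaries, raw_messages, fresh_tail_count):
    """Sketch of context assembly: compacted summaries plus a raw fresh tail."""
    fresh_tail = raw_messages[-fresh_tail_count:]  # newest messages stay raw
    # Older material arrives as summaries; the model sees both, in order.
    return summaries + fresh_tail
```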
+ ## Mental model
+
+ Think of LCM as two layers:
+
+ - durable storage of the full conversation record
+ - a summary DAG used to present compacted context efficiently
+
+ The summary DAG is not the source of truth. Raw messages remain the ground truth.
+
+ ## Why summary quality matters
+
+ Bad summaries do not stay local:
+
+ - poor leaf summaries degrade condensed summaries
+ - poor condensed summaries degrade future recall
+ - aggressive truncation reduces the precision of downstream answers
+
+ That is why configuration choices around compaction thresholds and summary model quality matter operationally.
+
+ ## What `/lcm` tells you
+
+ The MVP command surface focuses on operational facts:
+
+ - package version
+ - whether the plugin is enabled and selected
+ - database path and size
+ - summary counts
+ - total summarized source-token coverage when available
+ - broken or truncated summary presence
+
+ ## What `/lcm doctor` tells you
+
+ The MVP doctor flow is diagnostic only.
+
+ It looks for known summary-health markers that indicate:
+
+ - deterministic fallback summaries
+ - truncated summary artifacts near the end of stored content
+
+ This gives users one place to answer the question “is my summary graph healthy?” without introducing a broader mutation surface.
@@ -0,0 +1,263 @@
+ # Configuration
+
+ This reference covers the current `lossless-claw` config surface on `main`, based on `openclaw.plugin.json`.
+
+ `lossless-claw` is most effective when the operator understands which settings change compaction behavior and why.
+
+ ## First checks
+
+ - Ensure the plugin is installed and enabled.
+ - Ensure the context-engine slot points at `lossless-claw` when you want it to own compaction.
+ - Run `/lossless` (`/lcm` alias) to confirm the plugin is active and see the live DB path.
+
+ ## High-impact settings
+
+ These are the settings most operators should understand first.
+
+ ### `contextThreshold`
+
+ Controls how full the model context can get before LCM compacts older material.
+
+ - Lower values compact earlier.
+ - Higher values compact later.
+
+ Why it matters:
+
+ - Too low increases summarization cost and churn.
+ - Too high risks hitting the model window with large tool output or long replies.
+
+ Good default:
+
+ - `0.75`
+
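The threshold check reduces to a single comparison. A minimal sketch, assuming a hypothetical helper name and that usage is measured in tokens against the model's window:

```python
def should_compact(used_tokens, context_window_tokens, context_threshold=0.75):
    """Sketch: compaction triggers once usage crosses the configured fraction."""
    return used_tokens / context_window_tokens >= context_threshold
```

For example, with a 200k-token window and the default `0.75`, compaction becomes eligible once roughly 150k tokens are in use.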
+ ### `freshTailCount`
+
+ Keeps the newest messages raw instead of compacting them.
+
+ Why it matters:
+
+ - Higher values preserve near-term conversational nuance.
+ - Lower values free context budget sooner.
+
+ Good starting range:
+
+ - `32` to `64`
+
+ ### `leafChunkTokens`
+
+ Caps how much raw material gets summarized into one leaf summary.
+
+ Why it matters:
+
+ - Larger chunks reduce summarization frequency.
+ - Smaller chunks create more summaries and more DAG fragmentation.
+
+ Use this when:
+
+ - Your summarizer is rate-limited or expensive.
+ - You want fewer but broader leaf summaries.
+
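The chunking behavior behind `leafChunkTokens` can be pictured as greedy packing. The sketch below is a hypothetical illustration of the cap, not the plugin's actual chunker: it packs message indices into chunks whose token totals stay at or under the cap, starting a new chunk when the next message would overflow.

```python
def chunk_for_leaves(message_token_counts, leaf_chunk_tokens):
    """Sketch: greedily pack messages into leaf chunks under a token cap."""
    chunks, current, current_tokens = [], [], 0
    for i, tokens in enumerate(message_token_counts):
        if current and current_tokens + tokens > leaf_chunk_tokens:
            chunks.append(current)  # close the chunk before it would overflow
            current, current_tokens = [], 0
        current.append(i)
        current_tokens += tokens
    if current:
        chunks.append(current)
    return chunks
```

A larger cap yields fewer, broader chunks and thus fewer summarization calls, which is the trade-off described above.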
+ ### `incrementalMaxDepth`
+
+ Controls how far automatic condensation cascades after leaf compaction.
+
+ Why it matters:
+
+ - `0` keeps only leaf summaries moving automatically.
+ - `1` is a practical default for long-running sessions.
+ - `-1` allows unlimited cascading, which can be useful for very long histories but is more aggressive.
+
+ ### `summaryModel` and `summaryProvider`
+
+ Override the model used for compaction summarization.
+
+ Why they matter:
+
+ - Summary quality compounds upward in the DAG.
+ - Cheaper models can reduce cost, but weak summaries create weak recalled context later.
+
+ Guidance:
+
+ - Pick a cheaper model only if it remains reliably structured and faithful.
+ - `summaryProvider` only matters when `summaryModel` is a bare model name rather than a canonical provider/model ref.
+
+ ### `expansionModel` and `expansionProvider`
+
+ Override the model used by delegated recall flows such as `lcm_expand_query`.
+
+ Why they matter:
+
+ - This lets recall-heavy work use a different cost/latency profile than normal compaction.
+ - These are recall-path settings, not compaction-path settings.
+
+ ## Complete config surface
+
+ ## Core enablement and storage
+
+ ### `enabled`
+
+ Boolean on/off switch for the plugin entry.
+
+ Use this when:
+
+ - you need the plugin installed but temporarily disabled
+ - you want to distinguish “installed” from “selected and active”
+
+ ### `dbPath`
+
+ Overrides the SQLite DB location.
+
+ Why it matters:
+
+ - useful for custom deployments, testing, or isolating environments
+ - wrong path selection is a common reason operators think LCM is empty or not growing
+
+ ### `largeFileThresholdTokens`
+
+ Threshold for externalizing oversized tool/file payloads out of the main transcript into large-file storage.
+
+ Why it matters:
+
+ - lower values externalize more aggressively
+ - higher values keep more payload inline but can bloat storage and compaction inputs
+
+ ## Compaction timing and shape
+
+ ### `contextThreshold`
+
+ See high-impact settings above.
+
+ ### `freshTailCount`
+
+ See high-impact settings above.
+
+ ### `leafChunkTokens`
+
+ See high-impact settings above.
+
+ ### `leafMinFanout`
+
+ Minimum number of leaf items required before creating a leaf compaction grouping.
+
+ Why it matters:
+
+ - higher values avoid tiny leaf summaries
+ - lower values compact sooner but can create overly granular summaries
+
+ ### `condensedMinFanout`
+
+ Preferred minimum fanout for condensed summaries during normal condensation.
+
+ Why it matters:
+
+ - controls how eagerly summaries get grouped upward
+ - affects DAG breadth and readability of higher-level summaries
+
+ ### `condensedMinFanoutHard`
+
+ Hard lower bound for condensed fanout decisions.
+
+ Why it matters:
+
+ - acts as the guardrail when normal fanout preferences cannot be met cleanly
+ - mostly useful for advanced tuning or pathological summary-tree shapes
+
+ ### `incrementalMaxDepth`
+
+ See high-impact settings above.
+
+ ## Session-selection controls
+
+ ### `ignoreSessionPatterns`
+
+ Glob-style session-key patterns that should never enter LCM.
+
+ Why it matters:
+
+ - keeps low-value automation or noisy sessions out of the DB
+ - useful for excluding certain agent lanes or ephemeral traffic entirely
+
+ ### `statelessSessionPatterns`
+
+ Patterns for sessions that may read from LCM but should not write to it.
+
+ Why it matters:
+
+ - useful for sub-agents and ephemeral workers
+ - prevents recall helpers from polluting the main history
+
+ ### `skipStatelessSessions`
+
+ Boolean that changes how stateless matches are treated.
+
+ Why it matters:
+
+ - when enabled, matching stateless sessions skip LCM persistence entirely
+ - use carefully, because it affects whether those sessions behave as readers only or are effectively bypassed for writes
+
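Glob-style matching against a full session key can be sketched with Python's `fnmatch` as a stand-in; lossless-claw's actual matcher may differ in edge cases (case sensitivity, character classes), so treat this purely as an illustration of the pattern semantics:

```python
import fnmatch

def session_is_ignored(session_key, ignore_patterns):
    """Sketch: a session is ignored if any glob matches the full session key."""
    return any(fnmatch.fnmatch(session_key, pattern) for pattern in ignore_patterns)
```

So a pattern list like `["cron:*"]` would exclude every session whose key starts with `cron:` while leaving other sessions untouched.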
+ ## Recall-path and delegation controls
+
+ ### `expansionModel`
+
+ See high-impact settings above.
+
+ ### `expansionProvider`
+
+ See high-impact settings above.
+
+ ### `delegationTimeoutMs`
+
+ Maximum time to wait for delegated recall completion.
+
+ Why it matters:
+
+ - lower values fail faster under slow sub-agent paths
+ - higher values tolerate deeper recall but can make calls feel stuck longer
+
+ ### `maxAssemblyTokenBudget`
+
+ Hard ceiling for assembled LCM token budget.
+
+ Why it matters:
+
+ - useful when the runtime model window is smaller than the surrounding system assumes
+ - can prevent oversized assembly on smaller-context models
+
+ ## Summary quality and prompt controls
+
+ ### `summaryMaxOverageFactor`
+
+ Maximum allowed overage factor before an oversized summary is truncated/downgraded.
+
+ Why it matters:
+
+ - guards against runaway summaries that are much larger than their target budget
+ - useful when summary models are verbose or unstable
+
+ ### `customInstructions`
+
+ Natural-language instructions injected into summarization prompts.
+
+ Why it matters:
+
+ - lets operators steer formatting or emphasis without patching code
+ - should be used sparingly; low-quality instructions can degrade summary quality system-wide
+
+ ## Practical operator workflow
+
+ 1. Install and enable the plugin.
+ 2. Set the context-engine slot to `lossless-claw`.
+ 3. Start with conservative defaults.
+ 4. Run `/lossless` after startup to confirm path, size, and summary health.
+ 5. If recall feels weak, revisit `freshTailCount`, `leafChunkTokens`, and summarizer model quality before changing anything else.
+ 6. Touch advanced knobs like fanout, large-file thresholds, custom instructions, and assembly caps only after a concrete symptom appears.
+
+ ## Reading the status output
+
+ `/lossless` is the right command for LCM-local metrics.
+
+ Useful interpretation notes:
+
+ - `tokens in context` is the current LCM frontier token count in the live LCM state.
+ - `compression ratio` is shown as a rounded `1:N`, which is easier to read than a tiny percentage for heavily compacted conversations.
+ - `/status` may still show a different context number because it reflects the runtime prompt that was actually assembled and sent on the last turn.
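The rounded `1:N` display described above can be sketched as follows. This is a hypothetical formula that matches the description (summarized source tokens over current frontier tokens), not the plugin's exact arithmetic:

```python
def compression_ratio_label(source_tokens, frontier_tokens):
    """Sketch: render compression as a rounded 1:N label."""
    if frontier_tokens <= 0:
        return "1:1"  # nothing compacted yet, or no frontier to compare against
    return f"1:{round(source_tokens / frontier_tokens)}"
```

Read `1:20` as "roughly twenty source tokens are represented by each token in the current frontier", which is easier to scan than an equivalent 5% figure.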
@@ -0,0 +1,79 @@
+ # Diagnostics
+
+ For the MVP, use the native command surface first.
+
+ ## Fast path
+
+ ### `/lossless` (`/lcm` alias)
+
+ Use this when you need a quick health snapshot.
+
+ It should answer:
+
+ - Is `lossless-claw` enabled?
+ - Is it selected as the context engine?
+ - Which DB is active?
+ - Is the DB growing as expected?
+ - Are summaries present?
+ - Are broken or truncated summaries present?
+
+ ### `/lossless doctor`
+
+ Use this when summary corruption or truncation is suspected.
+
+ It is the single user-facing diagnostic entrypoint for summary-health issues in the MVP.
+
+ What it should help confirm:
+
+ - whether broken summaries exist
+ - whether truncation markers exist
+ - which conversations are affected most
+
+ ## Interpreting common states
+
+ ### `/lossless` tokens vs `/status` context
+
+ These numbers are related, but they are not the same metric.
+
+ - `/lossless` reports LCM-side conversation metrics such as the current frontier token count and compression ratio.
+ - `/status` reports the last assembled runtime prompt snapshot for the active model.
+
+ Why they can differ:
+
+ - runtime assembly can trim or omit frontier material before the request is sent
+ - model-specific token budgeting and packing happen after LCM frontier selection
+ - `/status` reflects a last-run snapshot, while `/lossless` reads live LCM state from the DB
+
+ Treat `/lossless` as the LCM health/shape view, and `/status` as the runtime request view.
+
+ ### No summaries yet
+
+ Usually means one of:
+
+ - the conversation has not crossed compaction thresholds yet
+ - the plugin is not selected as the context engine
+ - writes are being skipped because the session matches stateless or ignored patterns
+
+ ### DB exists but stays tiny
+
+ Usually means one of:
+
+ - the plugin is not receiving traffic
+ - the wrong DB path is configured
+ - the plugin is enabled but not selected
+
+ ### Broken or truncated summaries detected
+
+ Treat this as a signal to inspect summary health before trusting compacted context heavily.
+
+ For MVP guidance:
+
+ - keep the user on `/lossless doctor`
+ - explain the count and affected conversations
+ - avoid advertising separate repair-vs-doctor command families
+
+ ## Safe operator advice
+
+ - Do not guess exact historical details from compacted context alone.
+ - When a user wants a fact pattern verified, use recall tools to recover evidence.
+ - Prefer changing one configuration knob at a time and then re-checking `/lossless`.