@joshuaswarren/openclaw-engram 9.0.82 → 9.0.83

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -189,6 +189,7 @@ Start with zero config. Enable features as your needs grow:
  | **+ Search tuning** | Choose from 6 search backends (QMD, Orama, LanceDB, Meilisearch, remote, noop) |
  | **+ Capture control** | `implicit`, `explicit`, or `hybrid` capture modes for memory write policy |
  | **+ Memory OS** | Memory boxes, graph reasoning, compounding, shared context, identity continuity |
+ | **+ LCM** | Lossless Context Management — never lose conversation context to compaction |
  | **+ Advanced** | Trust zones, causal trajectories, harmonic retrieval, evaluation harness, poisoning defense |

  Use a preset to jump to a recommended level: `conservative`, `balanced`, `research-max`, or `local-llm-heavy`.
@@ -245,6 +246,36 @@ These capabilities can be enabled progressively:
  - **Native Knowledge** — Search curated markdown (workspace docs, Obsidian vaults) without extracting into memory
  - **Behavior Loop Tuning** — Runtime self-tuning of extraction and recall parameters

+ ### Lossless Context Management (LCM)
+
+ When your AI agent hits its context window limit, the runtime silently compresses old messages — and that context is gone forever. LCM fixes this by proactively archiving every message into a local SQLite database and building a hierarchical summary DAG (directed acyclic graph) alongside it. When context gets compacted, LCM injects compressed session history back into recall, so your agent never loses track of what happened earlier in the conversation.
+
+ - **Proactive archiving** — Every message is indexed with full-text search before compaction can discard it
+ - **Hierarchical summaries** — Leaf summaries cover ~8 turns, depth-1 covers ~32, depth-2 ~128, etc.
+ - **Fresh tail protection** — Recent turns always use the most detailed (leaf-level) summaries
+ - **Three-level summarization** — Normal LLM summary, aggressive bullet compression, and deterministic truncation (guaranteed convergence, no LLM needed)
+ - **MCP expansion tools** — Agents can search, describe, or expand any part of conversation history on demand
+ - **Zero data loss** — Raw messages are retained for the configured retention period (default 90 days)
+
+ Enable it in your `openclaw.json`:
+
+ ```jsonc
+ {
+   "plugins": {
+     "entries": {
+       "openclaw-engram": {
+         "config": {
+           "lcmEnabled": true
+           // All other LCM settings have sensible defaults
+         }
+       }
+     }
+   }
+ }
+ ```
+
+ See the [LCM Guide](docs/guides/lossless-context-management.md) for architecture details, configuration options, and how it complements native compaction.
+
  ### Advanced (opt-in)

  - **Objective-State Recall** — Surfaces file/process/tool state snapshots alongside semantic memory
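The 8/32/128 figures in the hierarchical-summaries bullet above imply roughly 4× turn coverage per DAG depth level. A minimal sketch of that arithmetic — the ×4 fan-out is an assumption inferred from those numbers, not something the package documents:

```python
# Approximate turn coverage per summary depth in the LCM summary DAG.
# LEAF_TURNS and FANOUT are assumptions inferred from the README's
# "~8 turns, depth-1 ~32, depth-2 ~128" figures.
LEAF_TURNS = 8   # a leaf summary covers roughly 8 turns
FANOUT = 4       # assumed children aggregated per internal node

def coverage(depth: int) -> int:
    """Turns covered by a single summary node at the given depth."""
    return LEAF_TURNS * FANOUT ** depth

print([coverage(d) for d in range(3)])  # [8, 32, 128]
```

Under this reading, each additional depth level lets one summary node stand in for four nodes below it, which is why old history compresses geometrically while the fresh tail stays at leaf-level detail.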
@@ -289,6 +320,9 @@ Available via both stdio and HTTP transports:
  | `engram.suggestion_submit` | Queue a memory for review |
  | `engram.entity_get` | Look up a known entity |
  | `engram.review_queue_list` | View the governance review queue |
+ | `engram_context_search` | Full-text search across all archived conversation history (LCM) |
+ | `engram_context_describe` | Get a compressed summary of a turn range (LCM) |
+ | `engram_context_expand` | Retrieve raw lossless messages for a turn range (LCM) |

  ### MCP over HTTP

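Since the three LCM tools above are exposed as ordinary MCP tools, a client would invoke them with a standard JSON-RPC `tools/call` request. A sketch of such a payload — the envelope follows the MCP wire format, but the argument names (`query`, `limit`) are illustrative assumptions, not the package's documented schema:

```python
import json

# Hypothetical MCP tools/call request for the LCM search tool.
# Only the tool name comes from the README; the argument shape is assumed.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "engram_context_search",
        "arguments": {"query": "database migration decision", "limit": 5},
    },
}
print(json.dumps(request, indent=2))
```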
@@ -346,6 +380,7 @@ All settings live in `openclaw.json` under `plugins.entries.openclaw-engram.conf
  | `recallBudgetChars` | `maxMemoryTokens * 4` | Recall budget (default ~8K chars; set 64K+ for large-context models) |
  | `memoryDir` | `~/.openclaw/workspace/memory/local` | Memory storage root |
  | `memoryOsPreset` | unset | Quick config: `conservative`, `balanced`, `research-max`, `local-llm-heavy` |
+ | `lcmEnabled` | `false` | Enable Lossless Context Management (proactive session archive + summary DAG) |

  **[See the full config reference for all 60+ settings](docs/config-reference.md)** including search backend configuration, namespace policies, Memory OS features, governance, evaluation harness, trust zones, causal trajectories, and more.

@@ -368,6 +403,7 @@ All settings live in `openclaw.json` under `plugins.entries.openclaw-engram.conf
  - [Graph Reasoning](docs/architecture/graph-reasoning.md) — Opt-in graph traversal
  - [Evaluation Harness](docs/evaluation-harness.md) — Benchmarks and CI delta gates
  - [Operations](docs/operations.md) — Backup, export, maintenance
+ - [Lossless Context Management](docs/guides/lossless-context-management.md) — Never lose context to compaction
  - [Enable All Features](docs/enable-all-v8.md) — Full-feature config profile
  - [Migration Guide](docs/guides/migrations.md) — Upgrading from older versions

@@ -3810,4 +3810,4 @@ export {
  serializeEntityFile,
  StorageManager
  };
- //# sourceMappingURL=chunk-MWOB4CMS.js.map
+ //# sourceMappingURL=chunk-C26MLXQM.js.map