@martian-engineering/lossless-claw 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Josh Lehman / Martian Engineering

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

package/README.md ADDED
@@ -0,0 +1,384 @@
# lossless-claw

Lossless Context Management plugin for [OpenClaw](https://github.com/openclaw/openclaw), based on the [LCM paper](https://voltropy.com/LCM). Replaces OpenClaw's built-in sliding-window compaction with a DAG-based summarization system that preserves every message while keeping active context within model token limits.

## What it does

When a conversation grows beyond the model's context window, OpenClaw, like most agents, normally truncates older messages. LCM instead:

1. **Persists every message** in a SQLite database, organized by conversation
2. **Summarizes chunks** of older messages using your configured LLM
3. **Condenses summaries** into higher-level nodes as they accumulate, forming a DAG (directed acyclic graph)
4. **Assembles context** each turn by combining summaries + recent raw messages
5. **Provides tools** (`lcm_grep`, `lcm_describe`, `lcm_expand_query`) so agents can search and recall details from compacted history

Nothing is lost. Raw messages stay in the database. Summaries link back to their source messages. Agents can drill into any summary to recover the original detail.

**It feels like talking to an agent that never forgets, because it doesn't. In normal operation, you'll never need to think about compaction again.**

## Installation

### Prerequisites

- OpenClaw with context engine support (josh/context-engine branch or equivalent)
- Node.js 22+
- An LLM provider configured in OpenClaw (used for summarization)

### Install the plugin

```bash
# Clone the repo
git clone https://github.com/Martian-Engineering/lossless-claw.git
cd lossless-claw

# Install dependencies
npm install
```

### Configure OpenClaw

Add the plugin to your OpenClaw config (`~/.openclaw/openclaw.json`):

```json
{
  "plugins": {
    "paths": [
      "/path/to/lossless-claw"
    ],
    "slots": {
      "contextEngine": "lossless-claw"
    }
  }
}
```

The `slots.contextEngine` setting tells OpenClaw to route all context management through LCM instead of the built-in legacy engine.

Restart OpenClaw after configuration changes.

## Configuration

LCM is configured through a combination of plugin config and environment variables. Environment variables take precedence for backward compatibility.

### Plugin config

Add a `lossless-claw` block under `plugins.config` in your OpenClaw config:

```json
{
  "plugins": {
    "config": {
      "lossless-claw": {
        "enabled": true,
        "freshTailCount": 32,
        "contextThreshold": 0.75,
        "incrementalMaxDepth": 1
      }
    }
  }
}
```

### Environment variables

| Variable | Default | Description |
|----------|---------|-------------|
| `LCM_ENABLED` | `true` | Enable/disable the plugin |
| `LCM_DATABASE_PATH` | `~/.openclaw/lcm.db` | Path to the SQLite database |
| `LCM_CONTEXT_THRESHOLD` | `0.75` | Fraction of context window that triggers compaction (0.0–1.0) |
| `LCM_FRESH_TAIL_COUNT` | `32` | Number of recent messages protected from compaction |
| `LCM_LEAF_MIN_FANOUT` | `8` | Minimum raw messages per leaf summary |
| `LCM_CONDENSED_MIN_FANOUT` | `4` | Minimum summaries per condensed node |
| `LCM_CONDENSED_MIN_FANOUT_HARD` | `2` | Relaxed fanout for forced compaction sweeps |
| `LCM_INCREMENTAL_MAX_DEPTH` | `0` | How deep incremental compaction goes (0 = leaf only) |
| `LCM_LEAF_CHUNK_TOKENS` | `20000` | Max source tokens per leaf compaction chunk |
| `LCM_LEAF_TARGET_TOKENS` | `1200` | Target token count for leaf summaries |
| `LCM_CONDENSED_TARGET_TOKENS` | `2000` | Target token count for condensed summaries |
| `LCM_MAX_EXPAND_TOKENS` | `4000` | Token cap for sub-agent expansion queries |
| `LCM_LARGE_FILE_TOKEN_THRESHOLD` | `25000` | File blocks above this size are intercepted and stored separately |
| `LCM_SUMMARY_MODEL` | *(from OpenClaw)* | Model for summarization (e.g. `anthropic/claude-sonnet-4-20250514`) |
| `LCM_SUMMARY_PROVIDER` | *(from OpenClaw)* | Provider override for summarization |

### Recommended starting configuration

```
LCM_FRESH_TAIL_COUNT=32
LCM_INCREMENTAL_MAX_DEPTH=1
LCM_CONTEXT_THRESHOLD=0.75
```

- **freshTailCount=32** protects the last 32 messages from compaction, giving the model enough recent context for continuity.
- **incrementalMaxDepth=1** enables automatic condensation of leaf summaries after each compaction pass (without this, only leaf summaries are created and condensation only happens during manual `/compact` or overflow).
- **contextThreshold=0.75** triggers compaction when context reaches 75% of the model's window, leaving headroom for the model's response.

## How it works

See [docs/architecture.md](docs/architecture.md) for the full technical deep-dive. Here's the summary:

### The DAG

LCM builds a directed acyclic graph of summaries:

```
Raw messages → Leaf summaries (d0) → Condensed (d1) → Condensed (d2) → ...
```

- **Leaf summaries** (depth 0) are created from chunks of raw messages. They preserve timestamps, decisions, file operations, and key details.
- **Condensed summaries** (depth 1+) merge multiple summaries at the same depth into a higher-level node. Each depth tier uses a different prompt strategy optimized for its level of abstraction.
- **Parent links** connect each condensed summary to its source summaries, enabling drill-down via `lcm_expand_query`.

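The node relationships above can be sketched as plain data. This is a hypothetical model for illustration only; the plugin's real types live in `src/` and will differ:

```typescript
// Hypothetical DAG node shape -- illustrative, not the plugin's actual types.
interface SummaryNode {
  id: string;               // e.g. "sum_abc123"
  kind: "leaf" | "condensed";
  depth: number;            // 0 = leaf, 1+ = condensed
  parentIds: string[];      // condensed: the source summaries it merged
  messageIds: string[];     // leaf: the raw messages it covers
}

// Drill down from any node to the raw message IDs it ultimately summarizes,
// following parent links the way lcm_expand_query does conceptually.
function sourceMessages(id: string, nodes: Map<string, SummaryNode>): string[] {
  const node = nodes.get(id);
  if (!node) return [];
  if (node.kind === "leaf") return node.messageIds;
  return node.parentIds.flatMap((pid) => sourceMessages(pid, nodes));
}
```

Because every condensed node keeps its parent links, the walk always bottoms out at raw messages, which is why "nothing is lost".
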
### Context assembly

Each turn, the assembler builds model context by:

1. Fetching the conversation's **context items** (an ordered list of summary and message references)
2. Resolving each item into an `AgentMessage`
3. Protecting the **fresh tail** (most recent N messages) from eviction
4. Filling the remaining token budget from oldest to newest, dropping the oldest items first if over budget
5. Wrapping summaries in XML with metadata (id, depth, timestamps, descendant count)

The model sees something like:

```xml
<summary id="sum_abc123" kind="condensed" depth="1" descendant_count="8"
         earliest_at="2026-02-17T07:37:00" latest_at="2026-02-17T15:43:00">
  <parents>
    <summary_ref id="sum_def456" />
    <summary_ref id="sum_ghi789" />
  </parents>
  <content>
    ...summary text...
  </content>
</summary>
```

This gives the model enough information to know what was discussed, when, and how to drill deeper via the expansion tools.

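The budget-filling step only drops items from the assembled view, never from the database. A rough sketch of the policy with invented names (the real logic lives in `src/assembler.ts` and is surely more involved):

```typescript
// Hedged sketch of fresh-tail protection plus budget filling.
// "Drop oldest first" is implemented by keeping the newest contiguous
// run of older items that still fits alongside the protected tail.
interface ContextItem { id: string; tokens: number }

function assemble(items: ContextItem[], freshTailCount: number, budget: number): ContextItem[] {
  const tail = items.slice(-freshTailCount);  // protected fresh tail
  const older = items.slice(0, Math.max(0, items.length - freshTailCount));
  const tailTokens = tail.reduce((n, i) => n + i.tokens, 0);
  let remaining = budget - tailTokens;

  const kept: ContextItem[] = [];
  // Walk newest-to-oldest so the oldest items are the first to fall off.
  for (let i = older.length - 1; i >= 0; i--) {
    if (older[i].tokens > remaining) break;  // this item and everything older is dropped
    kept.unshift(older[i]);
    remaining -= older[i].tokens;
  }
  return [...kept, ...tail];
}
```
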
### Compaction triggers

Compaction runs in two modes:

- **Proactive (after each turn):** If raw messages outside the fresh tail exceed `leafChunkTokens`, a leaf pass runs. If `incrementalMaxDepth > 0`, condensation follows.
- **Reactive (overflow/manual):** When total context exceeds `contextThreshold × tokenBudget`, a full sweep runs: all eligible leaf chunks are compacted, then condensation proceeds depth-by-depth until stable.

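Under stated assumptions (names taken from the config table above, checks simplified into a single decision), the two triggers amount to:

```typescript
// Simplified sketch of the trigger conditions; the real checks live in
// src/compaction.ts and run at different points in the turn lifecycle.
interface Counters {
  rawTokensOutsideFreshTail: number;  // un-summarized raw messages past the tail
  totalContextTokens: number;         // everything currently assembled
}
interface TriggerConfig {
  leafChunkTokens: number;   // LCM_LEAF_CHUNK_TOKENS
  contextThreshold: number;  // LCM_CONTEXT_THRESHOLD
  tokenBudget: number;       // model context window
}

function compactionMode(c: Counters, cfg: TriggerConfig): "proactive" | "reactive" | "none" {
  // Overflow wins in this sketch; in practice the two checks run separately.
  if (c.totalContextTokens > cfg.contextThreshold * cfg.tokenBudget) return "reactive";
  if (c.rawTokensOutsideFreshTail > cfg.leafChunkTokens) return "proactive";
  return "none";
}
```
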
### Depth-aware prompts

Each summary depth gets a tailored prompt:

| Depth | Kind | Strategy |
|-------|------|----------|
| 0 | Leaf | Narrative with timestamps, file tracking, preserves operational detail |
| 1 | Condensed | Chronological session summary, deduplicates against `previous_context` |
| 2 | Condensed | Arc-focused: goals, outcomes, what carries forward. Self-contained. |
| 3+ | Condensed | Durable context only: key decisions, relationships, lessons learned |

All summaries end with an "Expand for details about:" footer listing what was compressed, guiding agents on when to use `lcm_expand_query`.

### Large file handling

Files over `largeFileTokenThreshold` (default 25k tokens) embedded in messages are intercepted during ingestion:

1. Content is stored to `~/.openclaw/lcm-files/<conversation_id>/<file_id>.<ext>`
2. A ~200-token exploration summary replaces the file in the message
3. The `lcm_describe` tool can retrieve the full file content on demand

This prevents large file pastes from consuming the entire context window.

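A minimal sketch of the interception rule and storage layout described above. The threshold and path shape come from the docs; the function names are invented, and the actual implementation is in `src/large-files.ts`:

```typescript
// Hypothetical helpers mirroring the documented interception behavior.
import { join } from "node:path";
import { homedir } from "node:os";

// Default mirrors LCM_LARGE_FILE_TOKEN_THRESHOLD.
function shouldIntercept(tokenCount: number, threshold = 25_000): boolean {
  return tokenCount > threshold;
}

// ~/.openclaw/lcm-files/<conversation_id>/<file_id>.<ext>
function storagePath(conversationId: number, fileId: string, ext: string): string {
  return join(homedir(), ".openclaw", "lcm-files", String(conversationId), `${fileId}.${ext}`);
}
```
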
## Agent tools

LCM registers four tools that agents can use to search and recall compacted history:

### `lcm_grep`

Full-text and regex search across messages and summaries.

```
lcm_grep(pattern: "database migration", mode: "full_text")
lcm_grep(pattern: "config\\.threshold", mode: "regex", scope: "summaries")
```

Parameters:

- `pattern` — Search string (regex or full-text)
- `mode` — `"regex"` (default) or `"full_text"`
- `scope` — `"messages"`, `"summaries"`, or `"both"` (default)
- `conversationId` — Scope to a specific conversation
- `allConversations` — Search across all conversations
- `since` / `before` — ISO timestamp filters
- `limit` — Max results (default 50, max 200)

### `lcm_describe`

Inspect a specific summary or stored file by ID.

```
lcm_describe(id: "sum_abc123")
lcm_describe(id: "file_def456")
```

Returns the full content, metadata, parent/child relationships, and token counts. For files, returns the stored content.

### `lcm_expand_query`

Deep recall via a delegated sub-agent. Finds relevant summaries, expands them by walking the DAG down to source material, and answers a focused question.

```
lcm_expand_query(
  query: "database migration",
  prompt: "What migration strategy was decided on?"
)

lcm_expand_query(
  summaryIds: ["sum_abc123"],
  prompt: "What were the exact config changes?"
)
```

Parameters:

- `prompt` — The question to answer (required)
- `query` — Text query to find relevant summaries (when you don't have IDs)
- `summaryIds` — Specific summary IDs to expand (when you have them)
- `maxTokens` — Answer length cap (default 2000)
- `conversationId` / `allConversations` — Scope control

Returns a compact answer with cited summary IDs.

### `lcm_expand`

Low-level DAG expansion (sub-agent only). Main agents should use `lcm_expand_query` instead; this tool is only available to the delegated sub-agents spawned by `lcm_expand_query`.

## TUI

The repo includes an interactive terminal UI (`tui/`) for inspecting, repairing, and managing the LCM database. It's a separate Go binary — not part of the npm package.

### Install

**From GitHub releases** (recommended):

Download the latest binary for your platform from [Releases](https://github.com/Martian-Engineering/lossless-claw/releases).

**Build from source:**

```bash
cd tui
go build -o lcm-tui .
# or: make build
# or: go install github.com/Martian-Engineering/lossless-claw/tui@latest
```

Requires Go 1.24+.

### Usage

```bash
lcm-tui [--db path/to/lcm.db] [--sessions path/to/sessions/dir]
```

Defaults to `~/.openclaw/lcm.db` and auto-discovers session directories.

### Features

- **Conversation browser** — List all conversations with message/summary counts and token totals
- **Summary DAG view** — Navigate the full summary hierarchy with depth, kind, token counts, and parent/child relationships
- **Context view** — See exactly what the model sees: ordered context items with token breakdowns (summaries + fresh tail messages)
- **Dissolve** — Surgically restore a condensed summary back to its parent summaries (with ordinal shift preview)
- **Rewrite** — Re-summarize nodes using actual OpenClaw prompts with scrollable diffs and auto-accept mode
- **Repair** — Fix corrupted summaries (fallback truncations, empty content) using proper LLM summarization
- **Transplant** — Deep-copy summary DAGs between conversations (preserves all messages, message_parts, summary_messages)
- **Previous context viewer** — Inspect the `previous_context` text used during summarization

### Keybindings

| Key | Action |
|-----|--------|
| `c` | Context view (from conversation list) |
| `s` | Summary DAG view |
| `d` | Dissolve a condensed summary |
| `r` | Rewrite a summary |
| `R` | Repair corrupted summaries |
| `t` | Transplant summaries between conversations |
| `p` | View previous_context |
| `Enter` | Expand/select |
| `Esc`/`q` | Back/quit |

## Database

LCM uses SQLite via Node's built-in `node:sqlite` module. The default database path is `~/.openclaw/lcm.db`.

### Schema overview

- **conversations** — Maps session IDs to conversation IDs
- **messages** — Every ingested message with role, content, token count, timestamps
- **message_parts** — Structured content blocks (text, tool calls, reasoning, files) linked to messages
- **summaries** — The summary DAG nodes with content, depth, kind, token counts, timestamps
- **summary_messages** — Links leaf summaries to their source messages
- **summary_parents** — Links condensed summaries to their parent summaries
- **context_items** — The ordered context list for each conversation (what the model sees)
- **large_files** — Metadata for intercepted large files
- **expansion_grants** — Delegation grants for sub-agent expansion queries

Migrations run automatically on first use. The schema is forward-compatible; new columns are added with defaults.

## Development

```bash
# Run tests
npx vitest

# Type check
npx tsc --noEmit

# Run a specific test file
npx vitest test/engine.test.ts
```

### Project structure

```
index.ts                      # Plugin entry point and registration
src/
  engine.ts                   # LcmContextEngine — implements ContextEngine interface
  assembler.ts                # Context assembly (summaries + messages → model context)
  compaction.ts               # CompactionEngine — leaf passes, condensation, sweeps
  summarize.ts                # Depth-aware prompt generation and LLM summarization
  retrieval.ts                # RetrievalEngine — grep, describe, expand operations
  expansion.ts                # DAG expansion logic for lcm_expand_query
  expansion-auth.ts           # Delegation grants for sub-agent expansion
  expansion-policy.ts         # Depth/token policy for expansion
  large-files.ts              # File interception, storage, and exploration summaries
  integrity.ts                # DAG integrity checks and repair utilities
  transcript-repair.ts        # Tool-use/result pairing sanitization
  types.ts                    # Core type definitions (dependency injection contracts)
  openclaw-bridge.ts          # Bridge utilities
  db/
    config.ts                 # LcmConfig resolution from env vars
    connection.ts             # SQLite connection management
    migration.ts              # Schema migrations
  store/
    conversation-store.ts     # Message persistence and retrieval
    summary-store.ts          # Summary DAG persistence and context item management
    fts5-sanitize.ts          # FTS5 query sanitization
  tools/
    lcm-grep-tool.ts          # lcm_grep tool implementation
    lcm-describe-tool.ts      # lcm_describe tool implementation
    lcm-expand-tool.ts        # lcm_expand tool (sub-agent only)
    lcm-expand-query-tool.ts  # lcm_expand_query tool (main agent wrapper)
    lcm-conversation-scope.ts # Conversation scoping utilities
    common.ts                 # Shared tool utilities
test/                         # Vitest test suite
specs/                        # Design specifications
openclaw.plugin.json          # Plugin manifest with config schema and UI hints
tui/                          # Interactive terminal UI (Go)
  main.go                     # Entry point and bubbletea app
  data.go                     # Data loading and SQLite queries
  dissolve.go                 # Summary dissolution
  repair.go                   # Corrupted summary repair
  rewrite.go                  # Summary re-summarization
  transplant.go               # Cross-conversation DAG copy
  prompts/                    # Depth-aware prompt templates
.goreleaser.yml               # GoReleaser config for TUI binary releases
```

## License

MIT

@@ -0,0 +1,187 @@
# Agent tools

LCM provides four tools for agents to search, inspect, and recall information from compacted conversation history.

## Usage patterns

### Escalation pattern: grep → describe → expand_query

Most recall tasks follow this escalation:

1. **`lcm_grep`** — Find relevant summaries or messages by keyword/regex
2. **`lcm_describe`** — Inspect a specific summary's full content (cheap, no sub-agent)
3. **`lcm_expand_query`** — Deep recall: spawn a sub-agent to expand the DAG and answer a focused question

Start with grep. If the snippet is enough, stop. If you need full summary content, use describe. If you need details that were compressed away, use expand_query.

### When to expand

Summaries are lossy by design. The "Expand for details about:" footer at the end of each summary lists what was dropped. Use `lcm_expand_query` when you need:

- Exact commands, error messages, or config values
- File paths and specific code changes
- Decision rationale beyond what the summary captured
- Tool call sequences and their outputs
- Verbatim quotes or specific data points

`lcm_expand_query` is bounded (~120s, scoped sub-agent) and relatively cheap. Don't ration it.

## Tool reference

### lcm_grep

Search across messages and/or summaries using regex or full-text search.

**Parameters:**

| Param | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `pattern` | string | ✅ | — | Search pattern |
| `mode` | string | | `"regex"` | `"regex"` or `"full_text"` |
| `scope` | string | | `"both"` | `"messages"`, `"summaries"`, or `"both"` |
| `conversationId` | number | | current | Specific conversation to search |
| `allConversations` | boolean | | `false` | Search all conversations |
| `since` | string | | — | ISO timestamp lower bound |
| `before` | string | | — | ISO timestamp upper bound |
| `limit` | number | | 50 | Max results (1–200) |

**Returns:** Array of matches with:

- `id` — Message or summary ID
- `type` — `"message"` or `"summary"`
- `snippet` — Truncated content around the match
- `conversationId` — Which conversation
- `createdAt` — Timestamp
- For summaries: `depth`, `kind`, `summaryId`

**Examples:**

```
# Full-text search across all conversations
lcm_grep(pattern: "database migration", mode: "full_text", allConversations: true)

# Regex search in summaries only
lcm_grep(pattern: "config\\.threshold.*0\\.[0-9]+", scope: "summaries")

# Recent messages containing a specific term
lcm_grep(pattern: "deployment", since: "2026-02-19T00:00:00Z", scope: "messages")
```

### lcm_describe

Look up metadata and content for a specific summary or stored file.

**Parameters:**

| Param | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `id` | string | ✅ | — | `sum_xxx` for summaries, `file_xxx` for files |
| `conversationId` | number | | current | Scope to a specific conversation |
| `allConversations` | boolean | | `false` | Allow cross-conversation lookups |

**Returns for summaries:**

- Full summary content
- Metadata: depth, kind, token count, created timestamp
- Time range (earliestAt, latestAt)
- Descendant count
- Parent summary IDs (for condensed summaries)
- Child summary IDs
- Source message IDs (for leaf summaries)
- File IDs referenced in the summary

**Returns for files:**

- File content (full text)
- Metadata: fileName, mimeType, byteSize
- Exploration summary
- Storage path

**Examples:**

```
# Inspect a summary from context
lcm_describe(id: "sum_abc123def456")

# Retrieve a stored large file
lcm_describe(id: "file_789abc012345")
```

### lcm_expand_query

Answer a focused question by expanding summaries through the DAG. Spawns a bounded sub-agent that walks parent links down to source material and returns a compact answer.

**Parameters:**

| Param | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `prompt` | string | ✅ | — | The question to answer |
| `query` | string | ✅* | — | Text query to find summaries (if no `summaryIds`) |
| `summaryIds` | string[] | ✅* | — | Specific summary IDs to expand (if no `query`) |
| `maxTokens` | number | | 2000 | Answer length cap |
| `conversationId` | number | | current | Scope to a specific conversation |
| `allConversations` | boolean | | `false` | Search across all conversations |

*One of `query` or `summaryIds` is required.

**Returns:**

- `answer` — The focused answer text
- `citedIds` — Summary IDs that contributed to the answer
- `expandedSummaryCount` — How many summaries were expanded
- `totalSourceTokens` — Total tokens read from the DAG
- `truncated` — Whether the answer was truncated to fit `maxTokens`

**Examples:**

```
# Find and expand summaries about a topic
lcm_expand_query(
  query: "OAuth authentication fix",
  prompt: "What was the root cause and what commits fixed it?"
)

# Expand specific summaries you already have
lcm_expand_query(
  summaryIds: ["sum_abc123", "sum_def456"],
  prompt: "What were the exact file changes?"
)

# Cross-conversation search
lcm_expand_query(
  query: "deployment procedure",
  prompt: "What's the current deployment process?",
  allConversations: true
)
```

### lcm_expand

Low-level DAG expansion tool. **Only available to sub-agents** spawned by `lcm_expand_query`. Main agents should always use `lcm_expand_query` instead.

This tool is what the expansion sub-agent uses internally to walk the summary DAG, read source messages, and build its answer.

## Tips for agent developers

### Configuring agent prompts

Add instructions to your agent's system prompt so it knows when to use LCM tools:

```markdown
## Memory & Context

Use LCM tools for recall:
1. `lcm_grep` — Search all conversations by keyword/regex
2. `lcm_describe` — Inspect a specific summary (cheap, no sub-agent)
3. `lcm_expand_query` — Deep recall with sub-agent expansion

When summaries in context have an "Expand for details about:" footer
listing something you need, use `lcm_expand_query` to get the full detail.
```

### Conversation scoping

By default, tools operate on the current conversation. Use `allConversations: true` to search across all of them (all agents, all sessions). Use `conversationId` to target a specific conversation you already know about (e.g. from previous grep results).

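For example, using the same tool-call notation as the reference above (IDs and patterns illustrative):

```
# Current conversation (the default)
lcm_grep(pattern: "rollback")

# Every conversation, every session
lcm_grep(pattern: "rollback", allConversations: true)

# A specific conversation found via an earlier grep
lcm_describe(id: "sum_abc123", conversationId: 42)
```
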
### Performance considerations

- `lcm_grep` and `lcm_describe` are fast (direct database queries)
- `lcm_expand_query` spawns a sub-agent and takes ~30–120 seconds
- The sub-agent has a 120-second timeout with cleanup guarantees
- Token caps (`LCM_MAX_EXPAND_TOKENS`) prevent runaway expansion