@martian-engineering/lossless-claw 0.1.2 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,5 +1,7 @@
  # lossless-claw

+ > ⚠️ **Current requirement:** This plugin requires a custom OpenClaw build with [PR #22201](https://github.com/openclaw/openclaw/pull/22201) applied until that PR is merged upstream.
+
  Lossless Context Management plugin for [OpenClaw](https://github.com/openclaw/openclaw), based on the [LCM paper](https://voltropy.com/LCM). Replaces OpenClaw's built-in sliding-window compaction with a DAG-based summarization system that preserves every message while keeping active context within model token limits.

  ## What it does
@@ -26,45 +28,37 @@ Nothing is lost. Raw messages stay in the database. Summaries link back to their

  ### Install the plugin

- **From npm** (recommended):
+ Use OpenClaw's plugin installer (recommended):

  ```bash
- npm install @martian-engineering/lossless-claw
+ openclaw plugins install @martian-engineering/lossless-claw
  ```

- **From source** (for development):
+ If you're running from a local OpenClaw checkout, use:

  ```bash
- git clone https://github.com/Martian-Engineering/lossless-claw.git
- cd lossless-claw
- npm install
+ pnpm openclaw plugins install @martian-engineering/lossless-claw
  ```

- ### Configure OpenClaw
-
- Add the plugin to your OpenClaw config (`~/.openclaw/openclaw.json`):
+ For local plugin development, link your working copy instead of copying files:

- ```json
- {
-   "plugins": {
-     "paths": [
-       "node_modules/@martian-engineering/lossless-claw"
-     ],
-     "slots": {
-       "contextEngine": "lossless-claw"
-     }
-   }
- }
+ ```bash
+ openclaw plugins install --link /path/to/lossless-claw
+ # or from a local OpenClaw checkout:
+ # pnpm openclaw plugins install --link /path/to/lossless-claw
  ```

- If installed from source, use the absolute path to the cloned repo instead:
+ The install command records the plugin, enables it, and selects compatible slots (including `contextEngine` when applicable).
+
+ ### Configure OpenClaw
+
+ In most cases, no manual JSON edits are needed after `openclaw plugins install`.
+
+ If you need to set it manually, ensure the context engine slot points at lossless-claw:

  ```json
  {
    "plugins": {
-     "paths": [
-       "/path/to/lossless-claw"
-     ],
      "slots": {
        "contextEngine": "lossless-claw"
      }
@@ -72,8 +66,6 @@ If installed from source, use the absolute path to the cloned repo instead:
    }
  }
  ```

- The `slots.contextEngine` setting tells OpenClaw to route all context management through LCM instead of the built-in legacy engine.
-
  Restart OpenClaw after configuration changes.

  ## Configuration
@@ -67,8 +67,10 @@ The **leaf pass** converts raw messages into leaf summaries:
  3. Concatenate message content with timestamps.
  4. Resolve the most recent prior summary for continuity (passed as `previous_context` so the LLM avoids repeating known information).
  5. Send to the LLM with the leaf prompt.
- 6. If the summary is larger than the input (LLM failure), retry with the aggressive prompt. If still too large, fall back to deterministic truncation.
- 7. Persist the summary, link to source messages, and replace the message range in context_items.
+ 6. Normalize provider response blocks (Anthropic/OpenAI text, output_text, and nested content/summary shapes) into plain text.
+ 7. If normalization is empty, log provider/model/block-type diagnostics and fall back to deterministic truncation.
+ 8. If the summary is larger than the input (LLM failure), retry with the aggressive prompt. If still too large, fall back to deterministic truncation.
+ 9. Persist the summary, link to source messages, and replace the message range in context_items.
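The retry-and-fallback ladder in steps 6-9 can be sketched as a small standalone model. This is illustrative only, not the plugin's code: `summarize` stands in for the LLM call, and `estimateTokens` mirrors the roughly-4-characters-per-token heuristic the plugin uses.

```typescript
// Illustrative model of the leaf-pass fallback ladder (steps 6-9 above).
// Assumption: `summarize` stands in for the real LLM call; the names here
// are hypothetical and do not appear in the plugin.
type Summarize = (text: string, mode: "normal" | "aggressive") => string;

function estimateTokens(text: string): number {
  // Same heuristic the plugin uses elsewhere: roughly 4 characters per token.
  return Math.ceil(text.length / 4);
}

function summarizeWithFallback(
  input: string,
  summarize: Summarize,
  maxTokens: number,
): string {
  // Normal attempt first; empty output falls through to truncation below.
  let summary = summarize(input, "normal").trim();
  // Retry with the aggressive prompt if the "summary" grew past the input.
  if (summary && estimateTokens(summary) > estimateTokens(input)) {
    summary = summarize(input, "aggressive").trim();
  }
  // Deterministic truncation guarantees compaction always makes progress.
  if (!summary || estimateTokens(summary) > estimateTokens(input)) {
    summary = input.slice(0, maxTokens * 4);
  }
  return summary;
}
```

The point of the final branch is that compaction never stalls: even a misbehaving model yields a bounded replacement.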

  ### Condensation

@@ -215,8 +217,8 @@ All mutating operations (ingest, compact) are serialized per-session using a pro

  LCM needs to call an LLM for summarization. It resolves credentials through a three-tier cascade:

- 1. **Explicit API key** — If provided in legacy params
+ 1. **Auth profiles** — OpenClaw's OAuth/token/API-key profile system (`auth-profiles.json`), checked in priority order
  2. **Environment variables** — Standard provider env vars (`ANTHROPIC_API_KEY`, etc.)
- 3. **Auth profiles** — OpenClaw's OAuth/token/API-key profile system (`auth-profiles.json`)
+ 3. **Custom provider key** — From models config (e.g., `models.json`)

  For OAuth providers (e.g., Anthropic via Claude Max), LCM handles token refresh and credential persistence automatically.
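The cascade amounts to first-non-empty-wins resolution over the three tiers. The sketch below is a hedged illustration: the type and function names are invented, and the plugin's real resolution (OAuth refresh, profile priority) is richer than this.

```typescript
// Sketch of the three-tier credential cascade described above.
// All names here are illustrative; they do not appear in the plugin.
type CredentialSources = {
  authProfileKey?: string; // tier 1: auth-profiles.json
  envKey?: string; // tier 2: e.g. ANTHROPIC_API_KEY
  modelsConfigKey?: string; // tier 3: models.json custom provider key
};

function resolveApiKey(sources: CredentialSources): string | undefined {
  // The first non-empty tier wins, checked in priority order.
  for (const key of [sources.authProfileKey, sources.envKey, sources.modelsConfigKey]) {
    if (key && key.trim()) {
      return key.trim();
    }
  }
  return undefined;
}
```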
@@ -2,24 +2,25 @@

  ## Quick start

- Install the plugin and add it to your OpenClaw config:
+ Install the plugin with OpenClaw's plugin installer:

  ```bash
- npm install @martian-engineering/lossless-claw
+ openclaw plugins install @martian-engineering/lossless-claw
  ```

- ```json
- {
-   "plugins": {
-     "paths": ["node_modules/@martian-engineering/lossless-claw"],
-     "slots": {
-       "contextEngine": "lossless-claw"
-     }
-   }
- }
+ If you're running from a local OpenClaw checkout:
+
+ ```bash
+ pnpm openclaw plugins install @martian-engineering/lossless-claw
+ ```
+
+ For local development of this plugin, link your working copy:
+
+ ```bash
+ openclaw plugins install --link /path/to/lossless-claw
  ```

- If installed from source, use the absolute path to the repo instead of `node_modules/...`.
+ `openclaw plugins install` handles plugin registration/enabling and slot selection automatically.

  Set recommended environment variables:

package/docs/tui.md CHANGED
@@ -176,7 +176,7 @@ Lists files that exceeded the large file threshold (default 25k tokens) and were
  Re-summarizes a single summary node using the current depth-aware prompt templates. The process:

  1. **Preview** — shows the prompt that will be sent, including source material, target token count, previous context, and time range
- 2. **API call** — sends to Anthropic's API (Claude Sonnet by default)
+ 2. **API call** — sends to the configured provider API (Anthropic by default)
  3. **Review** — shows old and new content side-by-side with token delta. Toggle unified diff view with `d`. Scroll with `j`/`k`.

  | Key (Preview) | Action |
@@ -280,6 +280,9 @@ lcm-tui rewrite 44 --depth 0 --apply
  # Rewrite everything bottom-up
  lcm-tui rewrite 44 --all --apply --diff

+ # Rewrite with OpenAI Responses API
+ lcm-tui rewrite 44 --summary sum_abc123 --provider openai --model gpt-5.3-codex --apply
+
  # Use custom prompt templates
  lcm-tui rewrite 44 --all --apply --prompt-dir ~/.config/lcm-tui/prompts
  ```
@@ -292,7 +295,8 @@ lcm-tui rewrite 44 --all --apply --prompt-dir ~/.config/lcm-tui/prompts
  | `--apply` | Write changes to database |
  | `--dry-run` | Show before/after without writing (default) |
  | `--diff` | Show unified diff |
- | `--model <model>` | Anthropic model (default: `claude-sonnet-4-20250514`) |
+ | `--provider <id>` | API provider (inferred from `--model` when omitted) |
+ | `--model <model>` | API model (default depends on provider) |
  | `--prompt-dir <path>` | Custom prompt template directory |
  | `--timestamps` | Inject timestamps into source text (default: true) |
  | `--tz <timezone>` | Timezone for timestamps (default: system local) |
@@ -348,6 +352,56 @@ Everything runs in a single transaction.
  | `--apply` | Execute transplant |
  | `--dry-run` | Show what would be transplanted (default) |

+ ### `lcm-tui backfill`
+
+ Imports a pre-LCM JSONL session into `conversations/messages/context_items`, runs iterative depth-aware compaction with the configured provider + prompt templates, optionally forces a single-root fold, and can transplant the result to another conversation.
+
+ ```bash
+ # Preview import + compaction plan (no writes)
+ lcm-tui backfill my-agent session_abc123
+
+ # Import + compact
+ lcm-tui backfill my-agent session_abc123 --apply
+
+ # Re-run compaction for an already-imported session
+ lcm-tui backfill my-agent session_abc123 --apply --recompact
+
+ # Force a single summary root when possible
+ lcm-tui backfill my-agent session_abc123 --apply --recompact --single-root
+
+ # Import + compact + transplant into an active conversation
+ lcm-tui backfill my-agent session_abc123 --apply --transplant-to 653
+
+ # Backfill using OpenAI
+ lcm-tui backfill my-agent session_abc123 --apply --provider openai --model gpt-5.3-codex
+ ```
+
+ All write paths are transactional:
+ 1. Import transaction (conversation/messages/message_parts/context)
+ 2. Per-pass compaction transactions (leaf/condensed replacements)
+ 3. Optional transplant transaction (reuse of transplant command internals)
+
+ An idempotency guard prevents duplicate imports for the same `session_id`.
+
+ | Flag | Description |
+ |------|-------------|
+ | `--apply` | Execute import/compaction/transplant |
+ | `--dry-run` | Show what would run, without writes (default) |
+ | `--recompact` | Re-run compaction for already-imported sessions (message import remains idempotent) |
+ | `--single-root` | Force condensed folding until one summary remains when possible |
+ | `--transplant-to <conv_id>` | Transplant backfilled summaries into target conversation |
+ | `--title <text>` | Override imported conversation title |
+ | `--leaf-chunk-tokens <n>` | Max source tokens per leaf chunk |
+ | `--leaf-target-tokens <n>` | Target output tokens for leaf summaries |
+ | `--condensed-target-tokens <n>` | Target output tokens for condensed summaries |
+ | `--leaf-fanout <n>` | Min leaves required for d1 condensation |
+ | `--condensed-fanout <n>` | Min summaries required for d2+ condensation |
+ | `--hard-fanout <n>` | Min summaries for forced single-root passes |
+ | `--fresh-tail <n>` | Preserve freshest N raw messages from leaf compaction |
+ | `--provider <id>` | API provider (inferred from model when omitted) |
+ | `--model <id>` | API model (default depends on provider) |
+ | `--prompt-dir <path>` | Custom depth-prompt directory |
+
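The idempotency guard and the `--recompact` interaction can be modeled with a tiny in-memory sketch. This is illustrative only: the real TUI checks its SQLite database, and `BackfillGuard` is an invented name.

```typescript
// Illustrative model of the backfill idempotency guard keyed by session_id.
// Assumption: an in-memory Set stands in for the TUI's SQLite lookup.
class BackfillGuard {
  private imported = new Set<string>();

  /** True if work should run: first import always, repeats only with --recompact. */
  tryBegin(sessionId: string, recompact = false): boolean {
    if (this.imported.has(sessionId)) {
      // --recompact re-runs compaction, but the message import stays idempotent.
      return recompact;
    }
    this.imported.add(sessionId);
    return true;
  }
}
```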
  ### `lcm-tui prompts`

  Manage and inspect depth-aware prompt templates. Templates control how the LLM summarizes at each depth level.
@@ -404,21 +458,31 @@ All templates end with an `"Expand for details about:"` footer listing topics av

  ## Authentication

- The TUI needs an Anthropic API key for rewrite and repair operations. It resolves credentials in this order:
+ The TUI resolves API keys by provider for rewrite, repair, and backfill compaction operations.
+
+ - Anthropic: `ANTHROPIC_API_KEY`
+ - OpenAI: `OPENAI_API_KEY`

- 1. `ANTHROPIC_API_KEY` environment variable
- 2. OpenClaw config (`~/.openclaw/openclaw.json`) reads the `anthropic:default` auth profile mode
+ Resolution order:
+ 1. Provider API key environment variable
+ 2. OpenClaw config (`~/.openclaw/openclaw.json`) — checks matching provider auth profile mode
  3. OpenClaw env file
  4. `~/.zshrc` export
- 5. Various credential file candidates under `~/.openclaw/`
+ 5. Credential file candidates under `~/.openclaw/`
+
+ If the provider auth profile mode is `oauth` (not `api_key`), set the provider API key environment variable explicitly.
+
+ Interactive rewrite (`w`/`W`) can be configured with:
+ - `LCM_TUI_SUMMARY_PROVIDER`
+ - `LCM_TUI_SUMMARY_MODEL`

- If the auth profile mode is `oauth` (not `api_key`), the TUI cannot use it — set `ANTHROPIC_API_KEY` explicitly for repair/rewrite commands.
+ It also honors `LCM_SUMMARY_PROVIDER` / `LCM_SUMMARY_MODEL` as fallback.

  ## Database

- The TUI operates directly on the SQLite database at `~/.openclaw/lcm.db`. All write operations (rewrite, dissolve, repair, transplant) use transactions. Changes take effect on the next conversation turn — the running OpenClaw instance picks up database changes automatically.
+ The TUI operates directly on the SQLite database at `~/.openclaw/lcm.db`. All write operations (rewrite, dissolve, repair, transplant, backfill) use transactions. Changes take effect on the next conversation turn — the running OpenClaw instance picks up database changes automatically.

- **Backup recommendation:** Before batch operations (repair `--all`, rewrite `--all`, transplant), copy the database:
+ **Backup recommendation:** Before batch operations (repair `--all`, rewrite `--all`, transplant, backfill), copy the database:

  ```bash
  cp ~/.openclaw/lcm.db ~/.openclaw/lcm.db.bak-$(date +%Y%m%d)
@@ -428,7 +492,7 @@ cp ~/.openclaw/lcm.db ~/.openclaw/lcm.db.bak-$(date +%Y%m%d)

  **"No LCM summaries found"** — The session may not have an associated conversation in the LCM database. Check that the `conv_id` column shows a non-zero value in the session list. Sessions without LCM tracking won't have summaries.

- **Rewrite returns empty/bad content** — Check the API key is valid and the model is accessible. The TUI uses `claude-sonnet-4-20250514` by default; override with `--model` if needed.
+ **Rewrite returns empty/bad content** — Check provider/model access and API key. If normalization still yields empty text, the TUI now returns diagnostics including `provider`, `model`, and response `block_types` to help pinpoint adapter mismatches.

  **Dissolve fails with "not condensed"** — Only condensed summaries (depth > 0) can be dissolved. Leaf summaries have no parent summaries to restore.

package/index.ts CHANGED
@@ -49,6 +49,13 @@ type PluginEnvSnapshot = {

  type ReadEnvFn = (key: string) => string | undefined;

+ type CompleteSimpleOptions = {
+   apiKey?: string;
+   maxTokens: number;
+   temperature?: number;
+   reasoning?: string;
+ };
+
  /** Capture plugin env values once during initialization. */
  function snapshotPluginEnv(env: NodeJS.ProcessEnv = process.env): PluginEnvSnapshot {
    return {
@@ -130,13 +137,17 @@ type PiAiModule = {
      contextWindow?: number;
      maxTokens?: number;
    },
-   request: { messages: Array<{ role: string; content: unknown; timestamp?: number }> },
+   request: {
+     systemPrompt?: string;
+     messages: Array<{ role: string; content: unknown; timestamp?: number }>;
+   },
    options: {
      apiKey?: string;
      maxTokens: number;
      temperature?: number;
+     reasoning?: string;
    },
- ) => Promise<{ content?: Array<{ type: string; text?: string }> }>;
+ ) => Promise<Record<string, unknown> & { content?: Array<{ type: string; text?: string }> }>;
  getModel?: (provider: string, modelId: string) => unknown;
  getModels?: (provider: string) => unknown[];
  getEnvApiKey?: (provider: string) => string | undefined;
@@ -173,6 +184,39 @@ function inferApiFromProvider(provider: string): string {
    return map[normalized] ?? "openai-responses";
  }

+ /** Codex Responses rejects `temperature`; omit it for that API family. */
+ export function shouldOmitTemperatureForApi(api: string | undefined): boolean {
+   return (api ?? "").trim().toLowerCase() === "openai-codex-responses";
+ }
+
+ /** Build provider-aware options for pi-ai completeSimple. */
+ export function buildCompleteSimpleOptions(params: {
+   api: string | undefined;
+   apiKey: string | undefined;
+   maxTokens: number;
+   temperature: number | undefined;
+   reasoning: string | undefined;
+ }): CompleteSimpleOptions {
+   const options: CompleteSimpleOptions = {
+     apiKey: params.apiKey,
+     maxTokens: params.maxTokens,
+   };
+
+   if (
+     typeof params.temperature === "number" &&
+     Number.isFinite(params.temperature) &&
+     !shouldOmitTemperatureForApi(params.api)
+   ) {
+     options.temperature = params.temperature;
+   }
+
+   if (typeof params.reasoning === "string" && params.reasoning.trim()) {
+     options.reasoning = params.reasoning.trim();
+   }
+
+   return options;
+ }
+
  /** Select provider-specific config values with case-insensitive provider keys. */
  function findProviderConfigValue<T>(
    map: Record<string, T> | undefined,
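Since the two helpers added above are exported, their behavior can be exercised directly. The snippet below restates the same logic standalone so it runs outside the plugin; the `"anthropic-messages"` api id in the usage is just an arbitrary non-Codex value for illustration.

```typescript
// Standalone restatement of the two exported helpers above, for illustration.
type CompleteSimpleOptions = {
  apiKey?: string;
  maxTokens: number;
  temperature?: number;
  reasoning?: string;
};

function shouldOmitTemperatureForApi(api: string | undefined): boolean {
  // Codex Responses rejects `temperature`, so it is dropped for that family.
  return (api ?? "").trim().toLowerCase() === "openai-codex-responses";
}

function buildCompleteSimpleOptions(params: {
  api: string | undefined;
  apiKey: string | undefined;
  maxTokens: number;
  temperature: number | undefined;
  reasoning: string | undefined;
}): CompleteSimpleOptions {
  const options: CompleteSimpleOptions = {
    apiKey: params.apiKey,
    maxTokens: params.maxTokens,
  };
  if (
    typeof params.temperature === "number" &&
    Number.isFinite(params.temperature) &&
    !shouldOmitTemperatureForApi(params.api)
  ) {
    options.temperature = params.temperature;
  }
  if (typeof params.reasoning === "string" && params.reasoning.trim()) {
    options.reasoning = params.reasoning.trim();
  }
  return options;
}

// Codex Responses: temperature is dropped; reasoning passes through.
const codex = buildCompleteSimpleOptions({
  api: "openai-codex-responses",
  apiKey: "k",
  maxTokens: 512,
  temperature: 0.2,
  reasoning: "medium",
});
// Any other API family keeps temperature.
const other = buildCompleteSimpleOptions({
  api: "anthropic-messages",
  apiKey: "k",
  maxTokens: 512,
  temperature: 0.2,
  reasoning: undefined,
});
```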
@@ -566,8 +610,10 @@ function createLcmDependencies(api: OpenClawPluginApi): LcmDependencies {
    agentDir,
    runtimeConfig,
    messages,
+   system,
    maxTokens,
    temperature,
+   reasoning,
  }) => {
    try {
      const piAiModuleId = "@mariozechner/pi-ai";
@@ -644,24 +690,62 @@ function createLcmDependencies(api: OpenClawPluginApi): LcmDependencies {
          });
        }

+       const completeOptions = buildCompleteSimpleOptions({
+         api: resolvedModel.api,
+         apiKey: resolvedApiKey,
+         maxTokens,
+         temperature,
+         reasoning,
+       });
+
        const result = await mod.completeSimple(
          resolvedModel,
          {
+           ...(typeof system === "string" && system.trim()
+             ? { systemPrompt: system.trim() }
+             : {}),
            messages: messages.map((message) => ({
              role: message.role,
              content: message.content,
              timestamp: Date.now(),
            })),
          },
-         {
-           apiKey: resolvedApiKey,
-           maxTokens,
-           temperature,
-         },
+         completeOptions,
        );

+       if (!isRecord(result)) {
+         return {
+           content: [],
+           request_provider: providerId,
+           request_model: modelId,
+           request_api: resolvedModel.api,
+           request_reasoning:
+             typeof reasoning === "string" && reasoning.trim() ? reasoning.trim() : "(none)",
+           request_has_system:
+             typeof system === "string" && system.trim().length > 0 ? "true" : "false",
+           request_temperature:
+             typeof completeOptions.temperature === "number"
+               ? String(completeOptions.temperature)
+               : "(omitted)",
+           request_temperature_sent:
+             typeof completeOptions.temperature === "number" ? "true" : "false",
+         };
+       }
+
        return {
-         content: Array.isArray(result?.content) ? result.content : [],
+         ...result,
+         content: Array.isArray(result.content) ? result.content : [],
+         request_provider: providerId,
+         request_model: modelId,
+         request_api: resolvedModel.api,
+         request_reasoning:
+           typeof reasoning === "string" && reasoning.trim() ? reasoning.trim() : "(none)",
+         request_has_system: typeof system === "string" && system.trim().length > 0 ? "true" : "false",
+         request_temperature:
+           typeof completeOptions.temperature === "number"
+             ? String(completeOptions.temperature)
+             : "(omitted)",
+         request_temperature_sent: typeof completeOptions.temperature === "number" ? "true" : "false",
        };
      } catch (err) {
        console.error(`[lcm] completeSimple error:`, err instanceof Error ? err.message : err);
@@ -715,8 +799,8 @@ function createLcmDependencies(api: OpenClawPluginApi): LcmDependencies {
  }

  const provider = (
-   providerHint?.trim() ||
    envSnapshot.lcmSummaryProvider ||
+   providerHint?.trim() ||
    envSnapshot.openclawProvider ||
    "openai"
  ).trim();
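The reordering above means the `LCM_SUMMARY_PROVIDER` env value now outranks the caller's hint. A minimal model of the new precedence (function and parameter names invented for illustration):

```typescript
// Minimal model of the provider precedence after this change: the
// LCM_SUMMARY_PROVIDER env value now outranks the caller's hint.
function pickProvider(
  env: { lcmSummaryProvider?: string; openclawProvider?: string },
  providerHint?: string,
): string {
  return (
    env.lcmSummaryProvider ||
    providerHint?.trim() ||
    env.openclawProvider ||
    "openai"
  ).trim();
}
```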
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@martian-engineering/lossless-claw",
-   "version": "0.1.2",
+   "version": "0.1.5",
    "description": "Lossless Context Management plugin for OpenClaw — DAG-based conversation summarization with incremental compaction",
    "type": "module",
    "main": "index.ts",
package/src/summarize.ts CHANGED
@@ -24,6 +24,14 @@ export type LcmSummarizerLegacyParams = {
  type SummaryMode = "normal" | "aggressive";

  const DEFAULT_CONDENSED_TARGET_TOKENS = 2000;
+ const LCM_SUMMARIZER_SYSTEM_PROMPT =
+   "You are a context-compaction summarization engine. Follow user instructions exactly and return plain text summary content only.";
+ const DIAGNOSTIC_MAX_DEPTH = 4;
+ const DIAGNOSTIC_MAX_ARRAY_ITEMS = 8;
+ const DIAGNOSTIC_MAX_OBJECT_KEYS = 16;
+ const DIAGNOSTIC_MAX_CHARS = 1200;
+ const DIAGNOSTIC_SENSITIVE_KEY_PATTERN =
+   /(api[-_]?key|authorization|token|secret|password|cookie|set-cookie|private[-_]?key|bearer)/i;

  /** Normalize provider ids for stable config/profile lookup. */
  function normalizeProviderId(provider: string): string {
@@ -78,13 +86,315 @@ function estimateTokens(text: string): number {
    return Math.ceil(text.length / 4);
  }

- /** Narrows completion response blocks to plain text blocks. */
- function isTextBlock(block: unknown): block is { type: string; text: string } {
-   if (!block || typeof block !== "object" || Array.isArray(block)) {
-     return false;
+ /** Narrow unknown values to plain object records. */
+ function isRecord(value: unknown): value is Record<string, unknown> {
+   return !!value && typeof value === "object" && !Array.isArray(value);
+ }
+
+ /**
+  * Normalize text fragments from provider-specific block shapes.
+  *
+  * Deduplicates exact repeated fragments while preserving first-seen order so
+  * providers that mirror output in multiple fields don't duplicate summaries.
+  */
+ function normalizeTextFragments(chunks: string[]): string {
+   const normalized: string[] = [];
+   const seen = new Set<string>();
+
+   for (const chunk of chunks) {
+     const trimmed = chunk.trim();
+     if (!trimmed || seen.has(trimmed)) {
+       continue;
+     }
+     seen.add(trimmed);
+     normalized.push(trimmed);
+   }
+   return normalized.join("\n").trim();
+ }
+
+ /** Collect all nested `type` labels for diagnostics on normalization failures. */
116
+ function collectBlockTypes(value: unknown, out: Set<string>): void {
117
+ if (Array.isArray(value)) {
118
+ for (const entry of value) {
119
+ collectBlockTypes(entry, out);
120
+ }
121
+ return;
122
+ }
123
+ if (!isRecord(value)) {
124
+ return;
125
+ }
126
+
127
+ if (typeof value.type === "string" && value.type.trim()) {
128
+ out.add(value.type.trim());
129
+ }
130
+ for (const nested of Object.values(value)) {
131
+ collectBlockTypes(nested, out);
132
+ }
133
+ }
134
+
135
+ /** Collect text payloads from common provider response shapes. */
136
+ function collectTextLikeFields(value: unknown, out: string[]): void {
137
+ if (Array.isArray(value)) {
138
+ for (const entry of value) {
139
+ collectTextLikeFields(entry, out);
140
+ }
141
+ return;
142
+ }
143
+ if (!isRecord(value)) {
144
+ return;
145
+ }
146
+
147
+ for (const key of ["text", "output_text", "thinking"]) {
148
+ appendTextValue(value[key], out);
149
+ }
150
+ for (const key of ["content", "summary", "output", "message", "response"]) {
151
+ if (key in value) {
152
+ collectTextLikeFields(value[key], out);
153
+ }
154
+ }
155
+ }
156
+
157
+ /** Append raw textual values and nested text wrappers (`value`, `text`). */
158
+ function appendTextValue(value: unknown, out: string[]): void {
159
+ if (typeof value === "string") {
160
+ out.push(value);
161
+ return;
162
+ }
163
+ if (Array.isArray(value)) {
164
+ for (const entry of value) {
165
+ appendTextValue(entry, out);
166
+ }
167
+ return;
168
+ }
169
+ if (!isRecord(value)) {
170
+ return;
171
+ }
172
+
173
+ if (typeof value.value === "string") {
174
+ out.push(value.value);
175
+ }
176
+ if (typeof value.text === "string") {
177
+ out.push(value.text);
178
+ }
179
+ }
180
+
181
+ /** Normalize provider completion content into a plain-text summary payload. */
182
+ function normalizeCompletionSummary(content: unknown): { summary: string; blockTypes: string[] } {
183
+ const chunks: string[] = [];
184
+ const blockTypeSet = new Set<string>();
185
+
186
+ collectTextLikeFields(content, chunks);
187
+ collectBlockTypes(content, blockTypeSet);
188
+
189
+ const blockTypes = [...blockTypeSet].sort((a, b) => a.localeCompare(b));
190
+ return {
191
+ summary: normalizeTextFragments(chunks),
192
+ blockTypes,
193
+ };
194
+ }
195
+
196
+ /** Format normalized block types for concise diagnostics. */
197
+ function formatBlockTypes(blockTypes: string[]): string {
198
+ if (blockTypes.length === 0) {
199
+ return "(none)";
200
+ }
201
+ return blockTypes.join(",");
202
+ }
203
+
204
+ /** Truncate long diagnostic text values to keep logs bounded and readable. */
205
+ function truncateDiagnosticText(value: string, maxChars = DIAGNOSTIC_MAX_CHARS): string {
206
+ if (value.length <= maxChars) {
207
+ return value;
208
+ }
209
+ return `${value.slice(0, maxChars)}...[truncated:${value.length - maxChars} chars]`;
210
+ }
211
+
212
+ /** Build a JSON-safe, redacted, depth-limited clone for diagnostic logging. */
213
+ function sanitizeForDiagnostics(value: unknown, depth = 0): unknown {
214
+ if (depth >= DIAGNOSTIC_MAX_DEPTH) {
215
+ return "[max-depth]";
216
+ }
217
+ if (typeof value === "string") {
218
+ return truncateDiagnosticText(value);
219
+ }
220
+ if (
221
+ value === null ||
222
+ typeof value === "number" ||
223
+ typeof value === "boolean" ||
224
+ typeof value === "bigint"
225
+ ) {
226
+ return value;
227
+ }
228
+ if (value === undefined) {
229
+ return "[undefined]";
230
+ }
231
+ if (typeof value === "function") {
232
+ return "[function]";
233
+ }
234
+ if (typeof value === "symbol") {
235
+ return "[symbol]";
236
+ }
237
+ if (Array.isArray(value)) {
238
+ const head = value
239
+ .slice(0, DIAGNOSTIC_MAX_ARRAY_ITEMS)
240
+ .map((entry) => sanitizeForDiagnostics(entry, depth + 1));
241
+ if (value.length > DIAGNOSTIC_MAX_ARRAY_ITEMS) {
242
+ head.push(`[+${value.length - DIAGNOSTIC_MAX_ARRAY_ITEMS} more items]`);
243
+ }
244
+ return head;
245
+ }
246
+ if (!isRecord(value)) {
247
+ return String(value);
248
+ }
249
+
250
+ const out: Record<string, unknown> = {};
251
+ const entries = Object.entries(value);
252
+ for (const [key, entry] of entries.slice(0, DIAGNOSTIC_MAX_OBJECT_KEYS)) {
253
+ out[key] = DIAGNOSTIC_SENSITIVE_KEY_PATTERN.test(key)
254
+ ? "[redacted]"
255
+ : sanitizeForDiagnostics(entry, depth + 1);
256
+ }
257
+ if (entries.length > DIAGNOSTIC_MAX_OBJECT_KEYS) {
258
+ out.__truncated_keys__ = entries.length - DIAGNOSTIC_MAX_OBJECT_KEYS;
259
+ }
260
+ return out;
261
+ }
262
+
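The redaction rule in `sanitizeForDiagnostics` can be shown in isolation with a one-level sketch using the same regex as `DIAGNOSTIC_SENSITIVE_KEY_PATTERN` above. `redactShallow` is an invented name; the real function also limits depth, array length, and key count.

```typescript
// One-level illustration of the diagnostic redaction rule: any key matching
// the sensitive-key pattern is replaced with "[redacted]" before logging.
const SENSITIVE_KEY =
  /(api[-_]?key|authorization|token|secret|password|cookie|set-cookie|private[-_]?key|bearer)/i;

function redactShallow(obj: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = SENSITIVE_KEY.test(key) ? "[redacted]" : value;
  }
  return out;
}
```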
+ /** Encode diagnostic payloads in a compact JSON string with safety guards. */
+ function formatDiagnosticPayload(value: unknown): string {
+   try {
+     const json = JSON.stringify(sanitizeForDiagnostics(value));
+     if (!json) {
+       return "\"\"";
+     }
+     return truncateDiagnosticText(json);
+   } catch {
+     return "\"[unserializable]\"";
+   }
+ }
+
+ /**
+  * Extract safe diagnostic metadata from a provider response envelope.
+  *
+  * Picks common metadata fields (request id, model echo, usage counters) without
+  * leaking secrets like API keys or auth tokens. The result object from
+  * `deps.complete` is typed narrowly but real provider responses carry extra
+  * fields that are useful for debugging empty-summary incidents.
+  */
+ function extractResponseDiagnostics(result: unknown): string {
+   if (!isRecord(result)) {
+     return "";
+   }
+
+   const parts: string[] = [];
+
+   // Envelope-shape diagnostics for empty-block incidents.
+   const topLevelKeys = Object.keys(result).slice(0, 24);
+   if (topLevelKeys.length > 0) {
+     parts.push(`keys=${topLevelKeys.join(",")}`);
+   }
+   if ("content" in result) {
+     const contentVal = result.content;
+     if (Array.isArray(contentVal)) {
+       parts.push(`content_kind=array`);
+       parts.push(`content_len=${contentVal.length}`);
+     } else if (contentVal === null) {
+       parts.push(`content_kind=null`);
+     } else {
+       parts.push(`content_kind=${typeof contentVal}`);
+     }
+     parts.push(`content_preview=${formatDiagnosticPayload(contentVal)}`);
+   } else {
+     parts.push("content_kind=missing");
+   }
+
+   // Preview common non-content payload envelopes used by provider SDKs.
+   const envelopePayload: Record<string, unknown> = {};
+   for (const key of ["summary", "output", "message", "response"]) {
+     if (key in result) {
+       envelopePayload[key] = result[key];
+     }
+   }
+   if (Object.keys(envelopePayload).length > 0) {
+     parts.push(`payload_preview=${formatDiagnosticPayload(envelopePayload)}`);
+   }
+
+   // Request / response id — present in most provider envelopes.
+   for (const key of ["id", "request_id", "x-request-id"]) {
+     const val = result[key];
+     if (typeof val === "string" && val.trim()) {
+       parts.push(`${key}=${val.trim()}`);
+     }
+   }
+
+   // Model echo — useful when the provider selects a different checkpoint.
+   if (typeof result.model === "string" && result.model.trim()) {
+     parts.push(`resp_model=${result.model.trim()}`);
+   }
+   if (typeof result.provider === "string" && result.provider.trim()) {
+     parts.push(`resp_provider=${result.provider.trim()}`);
+   }
+   for (const key of [
+     "request_provider",
+     "request_model",
+     "request_api",
+     "request_reasoning",
+     "request_has_system",
+     "request_temperature",
+     "request_temperature_sent",
+   ]) {
+     const val = result[key];
+     if (typeof val === "string" && val.trim()) {
+       parts.push(`${key}=${val.trim()}`);
+     }
    }
- const record = block as { type?: unknown; text?: unknown };
- return record.type === "text" && typeof record.text === "string";
+
+   // Usage counters: safe numeric diagnostics.
+   if (isRecord(result.usage)) {
+     const u = result.usage;
+     const tokens: string[] = [];
+     for (const k of [
+       "prompt_tokens",
+       "completion_tokens",
+       "total_tokens",
+       "input",
+       "output",
+       "cacheRead",
+       "cacheWrite",
+     ]) {
+       if (typeof u[k] === "number") {
+         tokens.push(`${k}=${u[k]}`);
+       }
+     }
+     if (tokens.length > 0) {
+       parts.push(tokens.join(","));
+     }
+   }
+
+   // Finish reason — helps explain empty content.
+   const finishReason =
+     typeof result.finish_reason === "string"
+       ? result.finish_reason
+       : typeof result.stopReason === "string"
+         ? result.stopReason
+         : typeof result.stop_reason === "string"
+           ? result.stop_reason
+           : undefined;
+   if (finishReason) {
+     parts.push(`finish=${finishReason}`);
+   }
+
+   // Provider-level error payloads (most useful when finish=error and content is empty).
+   const errorMessage = result.errorMessage;
+   if (typeof errorMessage === "string" && errorMessage.trim()) {
+     parts.push(`error_message=${truncateDiagnosticText(errorMessage.trim(), 400)}`);
+   }
+   const errorPayload = result.error;
+   if (errorPayload !== undefined) {
+     parts.push(`error_preview=${formatDiagnosticPayload(errorPayload)}`);
+   }
+
+   return parts.join("; ");
  }

  /**
@@ -416,6 +726,7 @@ export async function createLcmSummarizeFromLegacyParams(params: {
416
726
  authProfileId,
417
727
  agentDir,
418
728
  runtimeConfig: params.legacyParams.config,
729
+ system: LCM_SUMMARIZER_SYSTEM_PROMPT,
419
730
  messages: [
420
731
  {
421
732
  role: "user",
@@ -426,18 +737,112 @@ export async function createLcmSummarizeFromLegacyParams(params: {
426
737
  temperature: aggressive ? 0.1 : 0.2,
427
738
  });
428
739
 
429
- const summary = result.content
430
- .filter(isTextBlock)
431
- .map((block) => block.text.trim())
432
- .filter(Boolean)
433
- .join("\n")
434
- .trim();
740
+ const normalized = normalizeCompletionSummary(result.content);
741
+ let summary = normalized.summary;
742
+ let summarySource: "content" | "envelope" | "retry" | "fallback" = "content";
435
743
 
744
+ // --- Empty-summary hardening: envelope → retry → deterministic fallback ---
436
745
  if (!summary) {
437
- console.error(`[lcm] summarize got empty content from LLM (${result.content.length} blocks, types: ${result.content.map(b => b.type).join(",")}), falling back to truncation`);
746
+ // Envelope-aware extraction: some providers place summary text in
747
+ // top-level response fields (output, message, response) rather than
748
+ // inside the content array. Re-run normalization against the full
749
+ // response envelope before spending an API call on a retry.
750
+ const envelopeNormalized = normalizeCompletionSummary(result);
751
+ if (envelopeNormalized.summary) {
752
+ summary = envelopeNormalized.summary;
753
+ summarySource = "envelope";
754
+ console.error(
755
+ `[lcm] recovered summary from response envelope; provider=${provider}; model=${model}; ` +
756
+ `block_types=${formatBlockTypes(envelopeNormalized.blockTypes)}; source=envelope`,
757
+ );
758
+ }
759
+ }
760
+
761
+ if (!summary) {
762
+ const responseDiag = extractResponseDiagnostics(result);
763
+ const diagParts = [
764
+ `[lcm] empty normalized summary on first attempt`,
765
+ `provider=${provider}`,
766
+ `model=${model}`,
767
+ `block_types=${formatBlockTypes(normalized.blockTypes)}`,
768
+ `response_blocks=${result.content.length}`,
769
+ ];
770
+ if (responseDiag) {
771
+ diagParts.push(responseDiag);
772
+ }
773
+ console.error(`${diagParts.join("; ")}; retrying with conservative settings`);
774
+
775
+ // Single retry with conservative parameters: low temperature and low
776
+ // reasoning budget to coax a textual response from providers that
777
+ // sometimes return reasoning-only or empty blocks on the first pass.
778
+ try {
779
+ const retryResult = await params.deps.complete({
780
+ provider,
781
+ model,
782
+ apiKey,
783
+ providerApi,
784
+ authProfileId,
785
+ agentDir,
786
+ runtimeConfig: params.legacyParams.config,
787
+ system: LCM_SUMMARIZER_SYSTEM_PROMPT,
788
+ messages: [
789
+ {
790
+ role: "user",
791
+ content: prompt,
792
+ },
793
+ ],
794
+ maxTokens: targetTokens,
795
+ temperature: 0.05,
796
+ reasoning: "low",
797
+ });
798
+
799
+ const retryNormalized = normalizeCompletionSummary(retryResult.content);
800
+ summary = retryNormalized.summary;
801
+
802
+ if (summary) {
803
+ summarySource = "retry";
804
+ console.error(
805
+ `[lcm] retry succeeded; provider=${provider}; model=${model}; ` +
806
+ `block_types=${formatBlockTypes(retryNormalized.blockTypes)}; source=retry`,
807
+ );
808
+ } else {
809
+ const retryDiag = extractResponseDiagnostics(retryResult);
810
+ const retryParts = [
811
+ `[lcm] retry also returned empty summary`,
812
+ `provider=${provider}`,
813
+ `model=${model}`,
814
+ `block_types=${formatBlockTypes(retryNormalized.blockTypes)}`,
815
+ `response_blocks=${retryResult.content.length}`,
816
+ ];
817
+ if (retryDiag) {
818
+ retryParts.push(retryDiag);
819
+ }
820
+ console.error(`${retryParts.join("; ")}; falling back to truncation`);
821
+ }
822
+ } catch (retryErr) {
823
+ // Retry is best-effort; log and proceed to deterministic fallback.
824
+ console.error(
825
+ `[lcm] retry failed; provider=${provider} model=${model}; error=${
826
+ retryErr instanceof Error ? retryErr.message : String(retryErr)
827
+ }; falling back to truncation`,
828
+ );
829
+ }
830
+ }
831
+
832
+ if (!summary) {
833
+ summarySource = "fallback";
834
+ console.error(
835
+ `[lcm] all extraction attempts exhausted; provider=${provider}; model=${model}; source=fallback`,
836
+ );
438
837
  return buildDeterministicFallbackSummary(text, targetTokens);
439
838
  }
440
839
 
840
+ if (summarySource !== "content") {
841
+ console.error(
842
+ `[lcm] summary resolved via non-content path; provider=${provider}; model=${model}; source=${summarySource}`,
843
+ );
844
+ }
845
+
441
846
  return summary;
442
847
  };
443
848
  }
package/src/types.ts CHANGED
@@ -11,6 +11,17 @@ import type { LcmConfig } from "./db/config.js";
  * Minimal LLM completion interface needed by LCM for summarization.
  * Matches the signature of completeSimple from @mariozechner/pi-ai.
  */
+export type CompletionContentBlock = {
+  type: string;
+  text?: string;
+  [key: string]: unknown;
+};
+
+export type CompletionResult = {
+  content: CompletionContentBlock[];
+  [key: string]: unknown;
+};
+
 export type CompleteFn = (params: {
   provider?: string;
   model: string;
@@ -24,7 +35,7 @@ export type CompleteFn = (params: {
   maxTokens: number;
   temperature?: number;
   reasoning?: string;
-}) => Promise<{ content: Array<{ type: string; text?: string }> }>;
+}) => Promise<CompletionResult>;
 
 /**
  * Gateway RPC call interface.