engrm 0.4.29 → 0.4.31

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -226,6 +226,8 @@ The MCP server exposes tools that supported agents can call directly:
  | `recent_handoffs` | List recent saved handoffs for the current project or workspace |
  | `load_handoff` | Open a saved handoff as a resume point for a new session |
  | `refresh_chat_recall` | Rehydrate the separate chat lane from a Claude transcript when a long session feels under-captured |
+ | `repair_recall` | Rehydrate recent project/session recall from transcript or Claude history fallback when chat feels missing |
+ | `resume_thread` | Build one clear resume point from handoff, current thread, recent chat, and unified recall |
  | `recent_chat` | Inspect the separate synced chat lane without mixing it into durable memory |
  | `search_chat` | Search recent chat recall with hybrid lexical + semantic matching, separately from reusable memory observations |
  | `search_recall` | Search durable memory and chat recall together when you do not want to guess the right lane |
@@ -250,6 +252,10 @@ If you are evaluating Engrm as an MCP server, start with this small set first:
  - verify which tools are actually producing durable memory and which plugins they exercise
  - `capture_quality`
  - check whether raw chronology is healthy across the workspace before judging memory quality
+ - `resume_thread`
+ - get one direct “where were we?” resume point with freshness, source, tool trail, and next actions
+ - `repair_recall`
+ - repair weak project recall from transcript or Claude history fallback before you give up on continuity

  These are the tools we should be comfortable pointing people to publicly first:

@@ -257,6 +263,7 @@ These are the tools we should be comfortable pointing people to publicly first:
  - local-first execution
  - durable memory output instead of raw transcript dumping
  - easy local inspection after capture
+ - clear continuity recovery when switching devices or resuming long sessions

  ### Thin Tools, Thick Memory

@@ -328,6 +335,19 @@ For long sessions, Engrm now also supports transcript-backed chat hydration:
  - fills gaps in the separate chat lane with transcript-backed messages
  - keeps those rows marked separately from hook-edge chat so recall can prefer the fuller thread

+ - `repair_recall`
+ - scans recent sessions for the current project
+ - rehydrates recall from transcript files when they exist
+ - falls back to Claude `history.jsonl` when transcript/session alignment is missing
+ - reports whether recovered chat is `transcript-backed`, `history-backed`, or still only `hook-only`
+
+ - `resume_thread`
+ - gives OpenClaw or Claude one direct “where were we?” action
+ - combines the best handoff, the current thread, recent outcomes, recent chat, and unified recall
+ - reports whether the resume point is `strong`, `usable`, or `thin`
+ - can attempt recall repair first when continuity is still weak
+ - makes Engrm usable as the primary live continuity layer instead of forcing agents to choose between low-level recall tools
+
  Before Claude compacts, Engrm now also:

  - refreshes transcript-backed chat recall for the active session
@@ -346,22 +366,25 @@ Recommended flow:
  ```text
  1. capture_status
  2. memory_console
- 3. activity_feed
- 4. recent_sessions
- 5. session_story
- 6. tool_memory_index
- 7. session_tool_memory
- 8. project_memory_index
- 9. workspace_memory_index
+ 3. resume_thread
+ 4. activity_feed
+ 5. recent_sessions
+ 6. session_story
+ 7. tool_memory_index
+ 8. session_tool_memory
+ 9. project_memory_index
+ 10. workspace_memory_index
  ```

  What each tool is good for:

  - `capture_status` tells you whether prompt/tool hooks are live on this machine
- - `capture_quality` shows whether chat recall is transcript-backed or still hook-only across the workspace
+ - `capture_quality` shows whether chat recall is transcript-backed, history-backed, or still hook-only across the workspace
  - `memory_console` gives the quickest project snapshot, including whether continuity is `fresh`, `thin`, or `cold`
- - `memory_console`, `project_memory_index`, and `session_context` now also show whether project chat recall is transcript-backed or only hook-captured
- - when chat continuity is only hook-captured, the workbench and startup hints now prefer `refresh_chat_recall`
+ - `resume_thread` is the fastest “get me back into the live thread” path when you want freshness, source, next actions, tool trail, and chat in one place
+ - `memory_console`, `project_memory_index`, and `session_context` now also show whether project chat recall is transcript-backed, history-backed, or only hook-captured
+ - `memory_console`, `project_memory_index`, and `session_context` also expose resume-readiness directly, so you can see whether a repo is `live`, `recent`, or `stale` before drilling deeper
+ - when chat continuity is only partial, the workbench and startup hints now prefer `repair_recall`, and still suggest `refresh_chat_recall` when a single session likely just needs transcript hydration
  - the workbench and startup hints now also prefer `search_recall` as the first “what were we just talking about?” path when recent prompts/chat/observations exist
  - `search_chat` now uses hybrid lexical + semantic ranking when sqlite-vec and local embeddings are available, so recent conversation recall is less dependent on exact wording
  - `activity_feed` shows the merged chronology across prompts, tools, chat, handoffs, observations, and summaries
@@ -371,7 +394,7 @@ What each tool is good for:
  - `session_tool_memory` shows which tool calls in one session turned into reusable memory and which did not
  - `project_memory_index` shows typed memory by repo, including continuity state and hot files
  - `workspace_memory_index` shows coverage across all repos on the machine
- - `recent_chat` / `search_chat` now report transcript-vs-hook coverage too, and `search_chat` will also mark when semantic ranking was available, so weak OpenClaw recall is easier to diagnose and refresh
+ - `recent_chat` / `search_chat` now report transcript-vs-history-vs-hook coverage too, and `search_chat` will also mark when semantic ranking was available, so weak OpenClaw recall is easier to diagnose and repair

  ### Thin Tool Workflow

@@ -4007,6 +4007,22 @@ async function saveTranscriptResults(db, config, results, sessionId, cwd) {
  }

  // src/tools/recent-chat.ts
+ function summarizeChatSources(messages) {
+ return messages.reduce((summary, message) => {
+ summary[getChatCaptureOrigin(message)] += 1;
+ return summary;
+ }, { transcript: 0, history: 0, hook: 0 });
+ }
+ function getChatCoverageState(messagesOrSummary) {
+ const summary = Array.isArray(messagesOrSummary) ? summarizeChatSources(messagesOrSummary) : messagesOrSummary;
+ if (summary.transcript > 0)
+ return "transcript-backed";
+ if (summary.history > 0)
+ return "history-backed";
+ if (summary.hook > 0)
+ return "hook-only";
+ return "none";
+ }
  function getChatCaptureOrigin(message) {
  if (message.source_kind === "transcript")
  return "transcript";
@@ -4038,7 +4054,7 @@ function getSessionStory(db, input) {
  prompts,
  chat_messages: chatMessages,
  chat_source_summary: summarizeChatSources(chatMessages),
- chat_coverage_state: chatMessages.some((message) => message.source_kind === "transcript") ? "transcript-backed" : chatMessages.length > 0 ? "hook-only" : "none",
+ chat_coverage_state: getChatCoverageState(chatMessages),
  tool_events: toolEvents,
  observations,
  handoffs,
@@ -4136,12 +4152,6 @@ function collectProvenanceSummary(observations) {
  }
  return Array.from(counts.entries()).map(([tool, count]) => ({ tool, count })).sort((a, b) => b.count - a.count || a.tool.localeCompare(b.tool)).slice(0, 6);
  }
- function summarizeChatSources(messages) {
- return messages.reduce((summary, message) => {
- summary[getChatCaptureOrigin(message)] += 1;
- return summary;
- }, { transcript: 0, history: 0, hook: 0 });
- }

  // src/tools/handoffs.ts
  async function upsertRollingHandoff(db, config, input) {
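The coverage-state precedence introduced in the hunk above can be sketched as follows. This is a simplified standalone version: messages are reduced to bare `{ source_kind }` objects, and the origin helper assumes `history` and hook-edge rows are mapped analogously to the `transcript` case shown in the diff (only the transcript branch of `getChatCaptureOrigin` is visible here).

```javascript
// Hypothetical stand-alone sketch of the new coverage helpers.
// Assumption: origins other than "transcript" map to "history" or "hook".
function getChatCaptureOrigin(message) {
  if (message.source_kind === "transcript") return "transcript";
  if (message.source_kind === "history") return "history";
  return "hook";
}

// Count messages per capture origin, as in the added summarizeChatSources.
function summarizeChatSources(messages) {
  return messages.reduce((summary, message) => {
    summary[getChatCaptureOrigin(message)] += 1;
    return summary;
  }, { transcript: 0, history: 0, hook: 0 });
}

// Any transcript row wins, then history, then hook; empty chat means "none".
function getChatCoverageState(messages) {
  const summary = summarizeChatSources(messages);
  if (summary.transcript > 0) return "transcript-backed";
  if (summary.history > 0) return "history-backed";
  if (summary.hook > 0) return "hook-only";
  return "none";
}

console.log(getChatCoverageState([]));                         // "none"
console.log(getChatCoverageState([{ source_kind: "hook" }]));  // "hook-only"
console.log(getChatCoverageState([
  { source_kind: "hook" },
  { source_kind: "history" },
]));                                                           // "history-backed"
```

A single history-backed row is enough to lift a session out of `hook-only`, which is why `repair_recall`'s Claude-history fallback changes what the README-level tools report.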
@@ -2204,6 +2204,22 @@ function computeObservationPriority(obs, nowEpoch) {
  }

  // src/tools/recent-chat.ts
+ function summarizeChatSources(messages) {
+ return messages.reduce((summary, message) => {
+ summary[getChatCaptureOrigin(message)] += 1;
+ return summary;
+ }, { transcript: 0, history: 0, hook: 0 });
+ }
+ function getChatCoverageState(messagesOrSummary) {
+ const summary = Array.isArray(messagesOrSummary) ? summarizeChatSources(messagesOrSummary) : messagesOrSummary;
+ if (summary.transcript > 0)
+ return "transcript-backed";
+ if (summary.history > 0)
+ return "history-backed";
+ if (summary.hook > 0)
+ return "hook-only";
+ return "none";
+ }
  function getChatCaptureOrigin(message) {
  if (message.source_kind === "transcript")
  return "transcript";
@@ -2235,7 +2251,7 @@ function getSessionStory(db, input) {
  prompts,
  chat_messages: chatMessages,
  chat_source_summary: summarizeChatSources(chatMessages),
- chat_coverage_state: chatMessages.some((message) => message.source_kind === "transcript") ? "transcript-backed" : chatMessages.length > 0 ? "hook-only" : "none",
+ chat_coverage_state: getChatCoverageState(chatMessages),
  tool_events: toolEvents,
  observations,
  handoffs,
@@ -2333,12 +2349,6 @@ function collectProvenanceSummary(observations) {
  }
  return Array.from(counts.entries()).map(([tool, count]) => ({ tool, count })).sort((a, b) => b.count - a.count || a.tool.localeCompare(b.tool)).slice(0, 6);
  }
- function summarizeChatSources(messages) {
- return messages.reduce((summary, message) => {
- summary[getChatCaptureOrigin(message)] += 1;
- return summary;
- }, { transcript: 0, history: 0, hook: 0 });
- }

  // src/tools/save.ts
  import { relative, isAbsolute } from "node:path";
@@ -474,6 +474,22 @@ function normalizeItem(value) {
  }

  // src/tools/recent-chat.ts
+ function summarizeChatSources(messages) {
+ return messages.reduce((summary, message) => {
+ summary[getChatCaptureOrigin(message)] += 1;
+ return summary;
+ }, { transcript: 0, history: 0, hook: 0 });
+ }
+ function getChatCoverageState(messagesOrSummary) {
+ const summary = Array.isArray(messagesOrSummary) ? summarizeChatSources(messagesOrSummary) : messagesOrSummary;
+ if (summary.transcript > 0)
+ return "transcript-backed";
+ if (summary.history > 0)
+ return "history-backed";
+ if (summary.hook > 0)
+ return "hook-only";
+ return "none";
+ }
  function getChatCaptureOrigin(message) {
  if (message.source_kind === "transcript")
  return "transcript";
@@ -505,7 +521,7 @@ function getSessionStory(db, input) {
  prompts,
  chat_messages: chatMessages,
  chat_source_summary: summarizeChatSources(chatMessages),
- chat_coverage_state: chatMessages.some((message) => message.source_kind === "transcript") ? "transcript-backed" : chatMessages.length > 0 ? "hook-only" : "none",
+ chat_coverage_state: getChatCoverageState(chatMessages),
  tool_events: toolEvents,
  observations,
  handoffs,
@@ -603,12 +619,6 @@ function collectProvenanceSummary(observations) {
  }
  return Array.from(counts.entries()).map(([tool, count]) => ({ tool, count })).sort((a, b) => b.count - a.count || a.tool.localeCompare(b.tool)).slice(0, 6);
  }
- function summarizeChatSources(messages) {
- return messages.reduce((summary, message) => {
- summary[getChatCaptureOrigin(message)] += 1;
- return summary;
- }, { transcript: 0, history: 0, hook: 0 });
- }

  // src/tools/save.ts
  import { relative, isAbsolute } from "node:path";
@@ -2929,6 +2939,16 @@ function getRecentOutcomes2(db, projectId, userId, recentSessions) {
  }

  // src/tools/project-memory-index.ts
+ function classifyResumeFreshness(sourceTimestamp) {
+ if (!sourceTimestamp)
+ return "stale";
+ const ageMs = Date.now() - sourceTimestamp * 1000;
+ if (ageMs <= 15 * 60 * 1000)
+ return "live";
+ if (ageMs <= 3 * 24 * 60 * 60 * 1000)
+ return "recent";
+ return "stale";
+ }
  function classifyContinuityState(recentRequestsCount, recentToolsCount, recentHandoffsCount, recentChatCount, recentSessions, recentOutcomesCount) {
  const hasRaw = recentRequestsCount > 0 || recentToolsCount > 0;
  const hasResume = recentHandoffsCount > 0 || recentChatCount > 0;
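The freshness cutoffs added above are fixed: anything from the last 15 minutes is `live`, anything within 3 days is `recent`, and everything else (including a missing timestamp) is `stale`. A sketch, with one deviation from the diff for testability: an optional `nowMs` parameter replaces the direct `Date.now()` call so the thresholds can be exercised deterministically. Timestamps are epoch seconds, hence the `* 1000`.

```javascript
// Sketch of classifyResumeFreshness; nowMs is an added test hook,
// defaulting to the wall clock as in the bundled version.
function classifyResumeFreshness(sourceTimestamp, nowMs = Date.now()) {
  if (!sourceTimestamp) return "stale";
  const ageMs = nowMs - sourceTimestamp * 1000;       // epoch seconds -> ms age
  if (ageMs <= 15 * 60 * 1000) return "live";         // within 15 minutes
  if (ageMs <= 3 * 24 * 60 * 60 * 1000) return "recent"; // within 3 days
  return "stale";
}

const now = Date.now();
console.log(classifyResumeFreshness(null));                        // "stale"
console.log(classifyResumeFreshness(now / 1000 - 60, now));        // "live"   (1 minute ago)
console.log(classifyResumeFreshness(now / 1000 - 86400, now));     // "recent" (1 day ago)
console.log(classifyResumeFreshness(now / 1000 - 7 * 86400, now)); // "stale"  (1 week ago)
```

These are the same `live` / `recent` / `stale` labels the README hunks describe `memory_console`, `project_memory_index`, and `session_context` exposing as resume-readiness.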
@@ -3070,7 +3090,7 @@ import { existsSync as existsSync3, readFileSync as readFileSync2, writeFileSync
  import { join as join3 } from "node:path";
  import { homedir } from "node:os";
  var STATE_PATH = join3(homedir(), ".engrm", "config-fingerprint.json");
- var CLIENT_VERSION = "0.4.29";
+ var CLIENT_VERSION = "0.4.31";
  function hashFile(filePath) {
  try {
  if (!existsSync3(filePath))
@@ -5580,6 +5600,10 @@ function formatVisibleStartupBrief(context) {
  }
  }
  lines.push(`${c2.cyan}Continuity:${c2.reset} ${continuityState} \u2014 ${truncateInline(describeContinuityState(continuityState), 160)}`);
+ const resumeReadiness = buildResumeReadinessLine(context);
+ if (resumeReadiness) {
+ lines.push(`${c2.cyan}Resume:${c2.reset} ${truncateInline(resumeReadiness, 160)}`);
+ }
  if (promptLines.length > 0) {
  lines.push(`${c2.cyan}Asked recently:${c2.reset}`);
  for (const item of promptLines) {
@@ -5700,6 +5724,23 @@ function buildLatestHandoffLines(context) {
  }
  return Array.from(new Set(lines.filter(Boolean))).slice(0, 2);
  }
+ function buildResumeReadinessLine(context) {
+ const latestSession = context.recentSessions?.[0] ?? null;
+ const latestHandoff = context.recentHandoffs?.[0] ?? null;
+ const latestChatEpoch = (context.recentChatMessages ?? []).length > 0 ? context.recentChatMessages?.[context.recentChatMessages.length - 1]?.created_at_epoch ?? null : null;
+ const sourceTimestamp = latestChatEpoch ?? latestSession?.completed_at_epoch ?? latestSession?.started_at_epoch ?? latestHandoff?.created_at_epoch ?? null;
+ if (!sourceTimestamp)
+ return null;
+ const freshness = classifyResumeFreshness(sourceTimestamp);
+ const sourceSessionId = latestSession?.session_id ?? latestHandoff?.session_id ?? null;
+ const sourceDevice = latestSession?.device_id ?? latestHandoff?.device_id ?? null;
+ const bits = [freshness];
+ if (sourceDevice)
+ bits.push(`from ${sourceDevice}`);
+ if (sourceSessionId)
+ bits.push(sourceSessionId);
+ return bits.join(" \xB7 ");
+ }
  function formatContextEconomics(data) {
  const totalMemories = Math.max(0, data.loaded + data.available);
  const parts = [];
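The `Resume:` startup line assembled above can be sketched end to end with a simplified context shape (the real context objects carry many more fields, and the freshness helper here duplicates the cutoffs from the project-memory-index hunk so the sketch is self-contained). The timestamp source order is: latest chat message, then session completion, then session start, then the latest handoff.

```javascript
// Self-contained sketch: duplicate of the freshness cutoffs used elsewhere.
function classifyResumeFreshness(sourceTimestamp) {
  if (!sourceTimestamp) return "stale";
  const ageMs = Date.now() - sourceTimestamp * 1000;
  if (ageMs <= 15 * 60 * 1000) return "live";
  if (ageMs <= 3 * 24 * 60 * 60 * 1000) return "recent";
  return "stale";
}

// Build "freshness · from device · session" (middle-dot separated), or null
// when there is no timestamped signal to resume from.
function buildResumeReadinessLine(context) {
  const latestSession = context.recentSessions?.[0] ?? null;
  const latestHandoff = context.recentHandoffs?.[0] ?? null;
  const chat = context.recentChatMessages ?? [];
  const latestChatEpoch = chat.length > 0 ? chat[chat.length - 1]?.created_at_epoch ?? null : null;
  // Prefer the freshest signal: chat, then session end/start, then handoff.
  const sourceTimestamp = latestChatEpoch
    ?? latestSession?.completed_at_epoch
    ?? latestSession?.started_at_epoch
    ?? latestHandoff?.created_at_epoch
    ?? null;
  if (!sourceTimestamp) return null;
  const bits = [classifyResumeFreshness(sourceTimestamp)];
  const sourceDevice = latestSession?.device_id ?? latestHandoff?.device_id ?? null;
  const sourceSessionId = latestSession?.session_id ?? latestHandoff?.session_id ?? null;
  if (sourceDevice) bits.push(`from ${sourceDevice}`);
  if (sourceSessionId) bits.push(sourceSessionId);
  return bits.join(" \u00B7 ");
}

console.log(buildResumeReadinessLine({})); // null — nothing to resume from
console.log(buildResumeReadinessLine({
  recentSessions: [{
    session_id: "s-123",          // hypothetical sample values
    device_id: "laptop",
    completed_at_epoch: Math.floor(Date.now() / 1000) - 60,
  }],
})); // "live · from laptop · s-123"
```

Because a null return suppresses the line entirely, cold repos show no `Resume:` row rather than a misleading `stale` one.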
@@ -5753,6 +5794,7 @@ function formatInspectHints(context, visibleObservationIds = []) {
  hints.push("activity_feed");
  }
  if ((context.recentPrompts?.length ?? 0) > 0 || (context.recentChatMessages?.length ?? 0) > 0 || context.observations.length > 0) {
+ hints.push("resume_thread");
  hints.push("search_recall");
  }
  if (context.observations.length > 0) {
@@ -5765,8 +5807,9 @@ function formatInspectHints(context, visibleObservationIds = []) {
  if ((context.recentChatMessages?.length ?? 0) > 0) {
  hints.push("recent_chat");
  }
- if (hasHookOnlyRecentChat(context)) {
+ if (hasNonTranscriptRecentChat(context)) {
  hints.push("refresh_chat_recall");
+ hints.push("repair_recall");
  }
  if (continuityState !== "fresh") {
  hints.push("recent_chat");
@@ -6240,7 +6283,7 @@ function hasFreshContinuitySignal(context) {
  function getStartupContinuityState(context) {
  return classifyContinuityState(context.recentPrompts?.length ?? 0, context.recentToolEvents?.length ?? 0, context.recentHandoffs?.length ?? 0, context.recentChatMessages?.length ?? 0, context.recentSessions ?? [], context.recentOutcomes?.length ?? 0);
  }
- function hasHookOnlyRecentChat(context) {
+ function hasNonTranscriptRecentChat(context) {
  const recentChat = context.recentChatMessages ?? [];
  return recentChat.length > 0 && !recentChat.some((message) => message.source_kind === "transcript");
  }
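The renamed guard above is what gates the new `repair_recall` startup hint: any recent chat that lacks even one transcript-backed row now triggers both repair suggestions. A sketch with simplified message shapes; `chatRepairHints` is a hypothetical wrapper used only to isolate this branch, since the real `formatInspectHints` pushes many other hints around it.

```javascript
// Guard from the hunk above: chat exists, but no row is transcript-backed.
function hasNonTranscriptRecentChat(context) {
  const recentChat = context.recentChatMessages ?? [];
  return recentChat.length > 0 && !recentChat.some((message) => message.source_kind === "transcript");
}

// Hypothetical helper isolating the hint branch shown in formatInspectHints.
function chatRepairHints(context) {
  return hasNonTranscriptRecentChat(context)
    ? ["refresh_chat_recall", "repair_recall"]
    : [];
}

console.log(chatRepairHints({ recentChatMessages: [] }));
// → [] — no chat at all, nothing to repair
console.log(chatRepairHints({ recentChatMessages: [{ source_kind: "hook" }] }));
// → ["refresh_chat_recall", "repair_recall"]
console.log(chatRepairHints({ recentChatMessages: [{ source_kind: "transcript" }] }));
// → [] — transcript coverage already present
```

Note the rename matters: under the old `hasHookOnlyRecentChat` wording, history-backed chat read as satisfied; under `hasNonTranscriptRecentChat`, history-backed chat still counts as repairable, matching the README's "only partial" phrasing.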
@@ -3082,7 +3082,7 @@ function buildBeacon(db, config, sessionId, metrics) {
  sentinel_used: valueSignals.security_findings_count > 0,
  risk_score: riskScore,
  stacks_detected: stacks,
- client_version: "0.4.29",
+ client_version: "0.4.31",
  context_observations_injected: metrics?.contextObsInjected ?? 0,
  context_total_available: metrics?.contextTotalAvailable ?? 0,
  recall_attempts: metrics?.recallAttempts ?? 0,
@@ -4216,6 +4216,22 @@ async function saveTranscriptResults(db, config, results, sessionId, cwd) {
  }

  // src/tools/recent-chat.ts
+ function summarizeChatSources(messages) {
+ return messages.reduce((summary, message) => {
+ summary[getChatCaptureOrigin(message)] += 1;
+ return summary;
+ }, { transcript: 0, history: 0, hook: 0 });
+ }
+ function getChatCoverageState(messagesOrSummary) {
+ const summary = Array.isArray(messagesOrSummary) ? summarizeChatSources(messagesOrSummary) : messagesOrSummary;
+ if (summary.transcript > 0)
+ return "transcript-backed";
+ if (summary.history > 0)
+ return "history-backed";
+ if (summary.hook > 0)
+ return "hook-only";
+ return "none";
+ }
  function getChatCaptureOrigin(message) {
  if (message.source_kind === "transcript")
  return "transcript";
@@ -4247,7 +4263,7 @@ function getSessionStory(db, input) {
  prompts,
  chat_messages: chatMessages,
  chat_source_summary: summarizeChatSources(chatMessages),
- chat_coverage_state: chatMessages.some((message) => message.source_kind === "transcript") ? "transcript-backed" : chatMessages.length > 0 ? "hook-only" : "none",
+ chat_coverage_state: getChatCoverageState(chatMessages),
  tool_events: toolEvents,
  observations,
  handoffs,
@@ -4345,12 +4361,6 @@ function collectProvenanceSummary(observations) {
  }
  return Array.from(counts.entries()).map(([tool, count]) => ({ tool, count })).sort((a, b) => b.count - a.count || a.tool.localeCompare(b.tool)).slice(0, 6);
  }
- function summarizeChatSources(messages) {
- return messages.reduce((summary, message) => {
- summary[getChatCaptureOrigin(message)] += 1;
- return summary;
- }, { transcript: 0, history: 0, hook: 0 });
- }

  // src/tools/handoffs.ts
  async function upsertRollingHandoff(db, config, input) {
@@ -3100,6 +3100,22 @@ async function saveTranscriptResults(db, config, results, sessionId, cwd) {
  }

  // src/tools/recent-chat.ts
+ function summarizeChatSources(messages) {
+ return messages.reduce((summary, message) => {
+ summary[getChatCaptureOrigin(message)] += 1;
+ return summary;
+ }, { transcript: 0, history: 0, hook: 0 });
+ }
+ function getChatCoverageState(messagesOrSummary) {
+ const summary = Array.isArray(messagesOrSummary) ? summarizeChatSources(messagesOrSummary) : messagesOrSummary;
+ if (summary.transcript > 0)
+ return "transcript-backed";
+ if (summary.history > 0)
+ return "history-backed";
+ if (summary.hook > 0)
+ return "hook-only";
+ return "none";
+ }
  function getChatCaptureOrigin(message) {
  if (message.source_kind === "transcript")
  return "transcript";
@@ -3131,7 +3147,7 @@ function getSessionStory(db, input) {
  prompts,
  chat_messages: chatMessages,
  chat_source_summary: summarizeChatSources(chatMessages),
- chat_coverage_state: chatMessages.some((message) => message.source_kind === "transcript") ? "transcript-backed" : chatMessages.length > 0 ? "hook-only" : "none",
+ chat_coverage_state: getChatCoverageState(chatMessages),
  tool_events: toolEvents,
  observations,
  handoffs,
@@ -3229,12 +3245,6 @@ function collectProvenanceSummary(observations) {
  }
  return Array.from(counts.entries()).map(([tool, count]) => ({ tool, count })).sort((a, b) => b.count - a.count || a.tool.localeCompare(b.tool)).slice(0, 6);
  }
- function summarizeChatSources(messages) {
- return messages.reduce((summary, message) => {
- summary[getChatCaptureOrigin(message)] += 1;
- return summary;
- }, { transcript: 0, history: 0, hook: 0 });
- }

  // src/tools/handoffs.ts
  async function upsertRollingHandoff(db, config, input) {