engrm 0.4.30 → 0.4.31

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -252,6 +252,10 @@ If you are evaluating Engrm as an MCP server, start with this small set first:
  - verify which tools are actually producing durable memory and which plugins they exercise
  - `capture_quality`
  - check whether raw chronology is healthy across the workspace before judging memory quality
+ - `resume_thread`
+ - get one direct “where were we?” resume point with freshness, source, tool trail, and next actions
+ - `repair_recall`
+ - repair weak project recall from transcript or Claude history fallback before you give up on continuity
 
  These are the tools we should be comfortable pointing people to publicly first:
 
@@ -259,6 +263,7 @@ These are the tools we should be comfortable pointing people to publicly first:
  - local-first execution
  - durable memory output instead of raw transcript dumping
  - easy local inspection after capture
+ - clear continuity recovery when switching devices or resuming long sessions
 
  ### Thin Tools, Thick Memory
 
@@ -339,6 +344,8 @@ For long sessions, Engrm now also supports transcript-backed chat hydration:
  - `resume_thread`
  - gives OpenClaw or Claude one direct “where were we?” action
  - combines the best handoff, the current thread, recent outcomes, recent chat, and unified recall
+ - reports whether the resume point is `strong`, `usable`, or `thin`
+ - can attempt recall repair first when continuity is still weak
  - makes Engrm usable as the primary live continuity layer instead of forcing agents to choose between low-level recall tools
 
  Before Claude compacts, Engrm now also:
@@ -359,13 +366,14 @@ Recommended flow:
  ```text
  1. capture_status
  2. memory_console
- 3. activity_feed
- 4. recent_sessions
- 5. session_story
- 6. tool_memory_index
- 7. session_tool_memory
- 8. project_memory_index
- 9. workspace_memory_index
+ 3. resume_thread
+ 4. activity_feed
+ 5. recent_sessions
+ 6. session_story
+ 7. tool_memory_index
+ 8. session_tool_memory
+ 9. project_memory_index
+ 10. workspace_memory_index
  ```
 
  What each tool is good for:
@@ -373,9 +381,10 @@ What each tool is good for:
  - `capture_status` tells you whether prompt/tool hooks are live on this machine
  - `capture_quality` shows whether chat recall is transcript-backed, history-backed, or still hook-only across the workspace
  - `memory_console` gives the quickest project snapshot, including whether continuity is `fresh`, `thin`, or `cold`
+ - `resume_thread` is the fastest “get me back into the live thread” path when you want freshness, source, next actions, tool trail, and chat in one place
  - `memory_console`, `project_memory_index`, and `session_context` now also show whether project chat recall is transcript-backed, history-backed, or only hook-captured
+ - `memory_console`, `project_memory_index`, and `session_context` also expose resume-readiness directly, so you can see whether a repo is `live`, `recent`, or `stale` before drilling deeper
  - when chat continuity is only partial, the workbench and startup hints now prefer `repair_recall`, and still suggest `refresh_chat_recall` when a single session likely just needs transcript hydration
- - `resume_thread` is now the fastest “get me back into the live thread” path when you do not want to think about which continuity lane to inspect
  - the workbench and startup hints now also prefer `search_recall` as the first “what were we just talking about?” path when recent prompts/chat/observations exist
  - `search_chat` now uses hybrid lexical + semantic ranking when sqlite-vec and local embeddings are available, so recent conversation recall is less dependent on exact wording
  - `activity_feed` shows the merged chronology across prompts, tools, chat, handoffs, observations, and summaries
@@ -2939,6 +2939,16 @@ function getRecentOutcomes2(db, projectId, userId, recentSessions) {
  }
 
  // src/tools/project-memory-index.ts
+ function classifyResumeFreshness(sourceTimestamp) {
+ if (!sourceTimestamp)
+ return "stale";
+ const ageMs = Date.now() - sourceTimestamp * 1000;
+ if (ageMs <= 15 * 60 * 1000)
+ return "live";
+ if (ageMs <= 3 * 24 * 60 * 60 * 1000)
+ return "recent";
+ return "stale";
+ }
  function classifyContinuityState(recentRequestsCount, recentToolsCount, recentHandoffsCount, recentChatCount, recentSessions, recentOutcomesCount) {
  const hasRaw = recentRequestsCount > 0 || recentToolsCount > 0;
  const hasResume = recentHandoffsCount > 0 || recentChatCount > 0;
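The new `classifyResumeFreshness` helper above is the core of this release's resume-readiness signal. Restated outside the bundle as a standalone sketch with the same cutoffs (15 minutes for `live`, 3 days for `recent`, a Unix timestamp in seconds as input, as in the hunk):

```javascript
// Classify how fresh a resume point is from an epoch-seconds timestamp.
// Mirrors the diff's cutoffs: <=15 min => "live", <=3 days => "recent".
function classifyResumeFreshness(sourceTimestamp) {
  if (!sourceTimestamp) return "stale";
  const ageMs = Date.now() - sourceTimestamp * 1000;
  if (ageMs <= 15 * 60 * 1000) return "live";
  if (ageMs <= 3 * 24 * 60 * 60 * 1000) return "recent";
  return "stale";
}

// A timestamp from five minutes ago classifies as "live".
const fiveMinutesAgo = Math.floor(Date.now() / 1000) - 5 * 60;
console.log(classifyResumeFreshness(fiveMinutesAgo)); // → "live"
```

Note the falsy check also catches a missing (`null`/`undefined`/`0`) timestamp, so absent data always reads as `stale` rather than throwing.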
@@ -3080,7 +3090,7 @@ import { existsSync as existsSync3, readFileSync as readFileSync2, writeFileSync
  import { join as join3 } from "node:path";
  import { homedir } from "node:os";
  var STATE_PATH = join3(homedir(), ".engrm", "config-fingerprint.json");
- var CLIENT_VERSION = "0.4.30";
+ var CLIENT_VERSION = "0.4.31";
  function hashFile(filePath) {
  try {
  if (!existsSync3(filePath))
@@ -5590,6 +5600,10 @@ function formatVisibleStartupBrief(context) {
  }
  }
  lines.push(`${c2.cyan}Continuity:${c2.reset} ${continuityState} \u2014 ${truncateInline(describeContinuityState(continuityState), 160)}`);
+ const resumeReadiness = buildResumeReadinessLine(context);
+ if (resumeReadiness) {
+ lines.push(`${c2.cyan}Resume:${c2.reset} ${truncateInline(resumeReadiness, 160)}`);
+ }
  if (promptLines.length > 0) {
  lines.push(`${c2.cyan}Asked recently:${c2.reset}`);
  for (const item of promptLines) {
@@ -5710,6 +5724,23 @@ function buildLatestHandoffLines(context) {
  }
  return Array.from(new Set(lines.filter(Boolean))).slice(0, 2);
  }
+ function buildResumeReadinessLine(context) {
+ const latestSession = context.recentSessions?.[0] ?? null;
+ const latestHandoff = context.recentHandoffs?.[0] ?? null;
+ const latestChatEpoch = (context.recentChatMessages ?? []).length > 0 ? context.recentChatMessages?.[context.recentChatMessages.length - 1]?.created_at_epoch ?? null : null;
+ const sourceTimestamp = latestChatEpoch ?? latestSession?.completed_at_epoch ?? latestSession?.started_at_epoch ?? latestHandoff?.created_at_epoch ?? null;
+ if (!sourceTimestamp)
+ return null;
+ const freshness = classifyResumeFreshness(sourceTimestamp);
+ const sourceSessionId = latestSession?.session_id ?? latestHandoff?.session_id ?? null;
+ const sourceDevice = latestSession?.device_id ?? latestHandoff?.device_id ?? null;
+ const bits = [freshness];
+ if (sourceDevice)
+ bits.push(`from ${sourceDevice}`);
+ if (sourceSessionId)
+ bits.push(sourceSessionId);
+ return bits.join(" \xB7 ");
+ }
  function formatContextEconomics(data) {
  const totalMemories = Math.max(0, data.loaded + data.available);
  const parts = [];
@@ -3082,7 +3082,7 @@ function buildBeacon(db, config, sessionId, metrics) {
  sentinel_used: valueSignals.security_findings_count > 0,
  risk_score: riskScore,
  stacks_detected: stacks,
- client_version: "0.4.30",
+ client_version: "0.4.31",
  context_observations_injected: metrics?.contextObsInjected ?? 0,
  context_total_available: metrics?.contextTotalAvailable ?? 0,
  recall_attempts: metrics?.recallAttempts ?? 0,
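The startup brief's new `Resume:` line is assembled by `buildResumeReadinessLine` in the hunk above. This sketch isolates just the string assembly with a simplified signature — the bundle derives freshness, device, and session from the context object, whereas here they are passed directly for illustration:

```javascript
// Compose the one-line "Resume:" readiness summary: freshness first,
// then the source device and session id when they are known.
function buildResumeReadinessLine(freshness, deviceId, sessionId) {
  const bits = [freshness];
  if (deviceId) bits.push(`from ${deviceId}`);
  if (sessionId) bits.push(sessionId);
  return bits.join(" \u00B7 "); // " · " separator, as in the bundle
}

console.log(buildResumeReadinessLine("live", "laptop", "sess-42"));
// → "live · from laptop · sess-42"
```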
package/dist/server.js CHANGED
@@ -16395,10 +16395,25 @@ function sanitizeFtsQuery(query) {
  }
 
  // src/tools/recent-chat.ts
+ function dedupeChatMessages(messages) {
+ const bestByKey = new Map;
+ for (const message of messages) {
+ const key = buildDedupKey(message);
+ const existing = bestByKey.get(key);
+ if (!existing || compareChatPriority(message, existing) < 0) {
+ bestByKey.set(key, message);
+ }
+ }
+ return Array.from(bestByKey.values()).sort((a, b) => {
+ if (b.created_at_epoch !== a.created_at_epoch)
+ return b.created_at_epoch - a.created_at_epoch;
+ return b.id - a.id;
+ });
+ }
  function getRecentChat(db, input) {
  const limit = Math.max(1, Math.min(input.limit ?? 20, 100));
  if (input.session_id) {
- const messages2 = db.getSessionChatMessages(input.session_id, limit).slice(-limit).reverse();
+ const messages2 = dedupeChatMessages(db.getSessionChatMessages(input.session_id, limit * 3)).slice(0, limit);
  return {
  messages: messages2,
  session_count: countDistinctSessions(messages2),
@@ -16419,7 +16434,7 @@ function getRecentChat(db, input) {
  projectName = project.name;
  }
  }
- const messages = db.getRecentChatMessages(projectId, limit, input.user_id);
+ const messages = dedupeChatMessages(db.getRecentChatMessages(projectId, limit * 3, input.user_id)).slice(0, limit);
  return {
  messages,
  project: projectName,
@@ -16456,6 +16471,25 @@ function getChatCaptureOrigin(message) {
  }
  return "hook";
  }
+ function buildDedupKey(message) {
+ return [
+ message.session_id,
+ message.role,
+ message.content.toLowerCase().replace(/\s+/g, " ").trim()
+ ].join("::");
+ }
+ function compareChatPriority(a, b) {
+ const originScore = (message) => {
+ const origin = getChatCaptureOrigin(message);
+ return origin === "transcript" ? 0 : origin === "history" ? 1 : 2;
+ };
+ const originDelta = originScore(a) - originScore(b);
+ if (originDelta !== 0)
+ return originDelta;
+ if (b.created_at_epoch !== a.created_at_epoch)
+ return b.created_at_epoch - a.created_at_epoch;
+ return b.id - a.id;
+ }
 
  // src/tools/search-chat.ts
  async function searchChat(db, input) {
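The dedupe logic added above keys messages by session, role, and whitespace-normalized content, and keeps the copy with the strongest capture origin (transcript over history over hook), then sorts newest-first. A self-contained sketch, with a plain `origin` field standing in for the bundle's `getChatCaptureOrigin`, and simplified so that ties on origin keep the first copy seen (the bundle's `compareChatPriority` additionally prefers the newer one):

```javascript
// Keep one copy per (session, role, normalized content), preferring
// transcript- over history- over hook-captured messages, then sort
// newest-first by epoch and id.
const ORIGIN_RANK = { transcript: 0, history: 1, hook: 2 };

function dedupeChatMessages(messages) {
  const bestByKey = new Map();
  for (const message of messages) {
    const key = [
      message.session_id,
      message.role,
      message.content.toLowerCase().replace(/\s+/g, " ").trim()
    ].join("::");
    const existing = bestByKey.get(key);
    if (!existing || ORIGIN_RANK[message.origin] < ORIGIN_RANK[existing.origin]) {
      bestByKey.set(key, message);
    }
  }
  return Array.from(bestByKey.values()).sort(
    (a, b) => b.created_at_epoch - a.created_at_epoch || b.id - a.id
  );
}

// Two captures of the same user turn collapse to the transcript copy.
const deduped = dedupeChatMessages([
  { id: 1, session_id: "s1", role: "user", content: "hello  world", origin: "hook", created_at_epoch: 10 },
  { id: 2, session_id: "s1", role: "user", content: "Hello world", origin: "transcript", created_at_epoch: 11 }
]);
console.log(deduped.length, deduped[0].origin); // → 1 "transcript"
```

Fetching `limit * 3` rows before deduping (as the changed call sites do) leaves headroom so the post-dedupe slice can still fill `limit` results.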
@@ -16490,7 +16524,7 @@ async function searchChat(db, input) {
  semantic = db.searchChatVec(queryEmbedding, projectId, limit * 3, input.user_id);
  }
  const messageIds = mergeChatResults(db, lexical, semantic, normalizedQuery, limit);
- const messages = messageIds.length > 0 ? db.getChatMessagesByIds(messageIds) : [];
+ const messages = messageIds.length > 0 ? dedupeChatMessages(db.getChatMessagesByIds(messageIds)).slice(0, limit) : [];
  return {
  messages,
  project: projectName,
@@ -16652,11 +16686,12 @@ function mergeRecallResults(memory, chat, limit, recentThreadQuery, sessionPrior
  }
  for (let index = 0;index < chat.length; index++) {
  const item = chat[index];
+ const origin = getChatCaptureOrigin(item);
  const base = 1 / (60 + index + 1);
  const ageHours = Math.max(0, (nowEpoch - item.created_at_epoch) / 3600);
  const immediacyBoost = ageHours < 1 ? 1.2 : ageHours < 6 ? 0.55 : 0;
  const recencyBoost = ageHours < 24 ? 0.18 : ageHours < 72 ? 0.08 : 0.02;
- const sourceBoost = item.source_kind === "transcript" ? 0.1 : 0.04;
+ const sourceBoost = origin === "transcript" ? 0.1 : origin === "history" ? 0.07 : 0.04;
  const continuityBoost = recentThreadQuery ? 0.35 : 0;
  const sessionBoost = sessionPriority.get(item.session_id) ?? 0;
  scored.push({
@@ -16666,8 +16701,8 @@ function mergeRecallResults(memory, chat, limit, recentThreadQuery, sessionPrior
  session_id: item.session_id,
  id: item.id,
  role: item.role,
- source_kind: getChatCaptureOrigin(item),
- title: `${item.role} [${getChatCaptureOrigin(item)}]`,
+ source_kind: origin,
+ title: `${item.role} [${origin}]`,
  detail: item.content.replace(/\s+/g, " ").trim()
  });
  }
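The changed `sourceBoost` term in the hunk above now distinguishes three capture origins rather than two, so history-backed chat ranks between transcript-backed and hook-only chat. Isolated as a tiny helper for clarity:

```javascript
// Per-message recall-score boost by capture origin, as in the updated
// mergeRecallResults: transcript is trusted most, history next, hooks least.
function sourceBoost(origin) {
  return origin === "transcript" ? 0.1 : origin === "history" ? 0.07 : 0.04;
}

console.log(sourceBoost("transcript"), sourceBoost("history"), sourceBoost("hook"));
// → 0.1 0.07 0.04
```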
@@ -18336,6 +18371,8 @@ function getProjectMemoryIndex(db, input) {
  limit: 20
  });
  const recentChatCount = recentChat.messages.length;
+ const latestSession = recentSessions[0] ?? null;
+ const latestSummary = latestSession ? db.getSessionSummary(latestSession.session_id) : null;
  const recentOutcomes = observations.filter((obs) => ["bugfix", "feature", "refactor", "change", "decision"].includes(obs.type)).map((obs) => obs.title.trim()).filter((title) => title.length > 0 && !looksLikeFileOperationTitle3(title)).slice(0, 8);
  const captureSummary = summarizeCaptureState(recentSessions);
  const topTypes = Object.entries(counts).map(([type, count]) => ({ type, count })).sort((a, b) => b.count - a.count || a.type.localeCompare(b.type)).slice(0, 5);
@@ -18350,11 +18387,16 @@ function getProjectMemoryIndex(db, input) {
  ].filter(Boolean).join(`
 `));
  const continuityState = classifyContinuityState(recentRequestsCount, recentToolsCount, recentHandoffsCount.length, recentChatCount, recentSessions, recentOutcomes.length);
+ const sourceTimestamp = pickResumeSourceTimestamp(latestSession, recentChat.messages);
  return {
  project: project.name,
  canonical_id: project.canonical_id,
  continuity_state: continuityState,
  continuity_summary: describeContinuityState(continuityState),
+ resume_freshness: classifyResumeFreshness(sourceTimestamp),
+ resume_source_session_id: latestSession?.session_id ?? null,
+ resume_source_device_id: latestSession?.device_id ?? null,
+ resume_next_actions: collectNextActions(latestSummary?.next_steps),
  observation_counts: counts,
  recent_sessions: recentSessions,
  recent_outcomes: recentOutcomes,
@@ -18379,6 +18421,28 @@ function getProjectMemoryIndex(db, input) {
  suggested_tools: suggestedTools
  };
  }
+ function pickResumeSourceTimestamp(latestSession, messages) {
+ const latestChatEpoch = messages.length > 0 ? messages[messages.length - 1]?.created_at_epoch ?? null : null;
+ return latestChatEpoch ?? latestSession?.completed_at_epoch ?? latestSession?.started_at_epoch ?? null;
+ }
+ function classifyResumeFreshness(sourceTimestamp) {
+ if (!sourceTimestamp)
+ return "stale";
+ const ageMs = Date.now() - sourceTimestamp * 1000;
+ if (ageMs <= 15 * 60 * 1000)
+ return "live";
+ if (ageMs <= 3 * 24 * 60 * 60 * 1000)
+ return "recent";
+ return "stale";
+ }
+ function collectNextActions(value) {
+ if (!value)
+ return [];
+ const normalized = value.split(/\n+/).map((line) => line.replace(/^[\s*-]+/, "").trim()).filter((line) => line.length > 0);
+ if (normalized.length > 1)
+ return normalized.slice(0, 5);
+ return value.split(/[.;](?:\s+|$)/).map((item) => item.replace(/^[\s*-]+/, "").trim()).filter((item) => item.length > 0).slice(0, 5);
+ }
  function classifyContinuityState(recentRequestsCount, recentToolsCount, recentHandoffsCount, recentChatCount, recentSessions, recentOutcomesCount) {
  const hasRaw = recentRequestsCount > 0 || recentToolsCount > 0;
  const hasResume = recentHandoffsCount > 0 || recentChatCount > 0;
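`collectNextActions` (duplicated in the bundle as `collectNextActions2`/`collectNextActions3` for other modules) turns a free-text `next_steps` blob into at most five action items: a newline-delimited list wins, otherwise the text is split on sentence boundaries. Restated as a standalone sketch:

```javascript
// Turn a free-text "next steps" blob into up to five clean action items.
// Prefer newline-separated bullets; fall back to sentence splitting.
function collectNextActions(value) {
  if (!value) return [];
  const normalized = value
    .split(/\n+/)
    .map((line) => line.replace(/^[\s*-]+/, "").trim())
    .filter((line) => line.length > 0);
  if (normalized.length > 1) return normalized.slice(0, 5);
  return value
    .split(/[.;](?:\s+|$)/)
    .map((item) => item.replace(/^[\s*-]+/, "").trim())
    .filter((item) => item.length > 0)
    .slice(0, 5);
}

console.log(collectNextActions("- fix tests\n- ship release"));
// → ["fix tests", "ship release"]
console.log(collectNextActions("Fix tests. Ship release."));
// → ["Fix tests", "Ship release"]
```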
@@ -18523,6 +18587,10 @@ function getMemoryConsole(db, input) {
  capture_mode: requests.length > 0 || tools.length > 0 ? "rich" : "observations-only",
  continuity_state: continuityState,
  continuity_summary: projectIndex?.continuity_summary ?? describeContinuityState(continuityState),
+ resume_freshness: projectIndex?.resume_freshness ?? "stale",
+ resume_source_session_id: projectIndex?.resume_source_session_id ?? sessions[0]?.session_id ?? null,
+ resume_source_device_id: projectIndex?.resume_source_device_id ?? sessions[0]?.device_id ?? null,
+ resume_next_actions: projectIndex?.resume_next_actions ?? [],
  sessions,
  requests,
  tools,
@@ -19399,14 +19467,22 @@ function getSessionContext(db, input) {
  limit: 8
  });
  const recentChatMessages = recentChat.messages.length;
+ const latestSession = context.recentSessions?.[0] ?? null;
+ const latestSummary = latestSession ? db.getSessionSummary(latestSession.session_id) : null;
  const captureState = recentRequests > 0 && recentTools > 0 ? "rich" : recentRequests > 0 || recentTools > 0 ? "partial" : "summary-only";
  const hotFiles = buildHotFiles(context);
  const continuityState = classifyContinuityState(recentRequests, recentTools, recentHandoffs, recentChatMessages, context.recentSessions ?? [], (context.recentOutcomes ?? []).length);
+ const latestChatEpoch = recentChat.messages.length > 0 ? recentChat.messages[recentChat.messages.length - 1]?.created_at_epoch ?? null : null;
+ const resumeTimestamp = latestChatEpoch ?? latestSession?.completed_at_epoch ?? latestSession?.started_at_epoch ?? null;
  return {
  project_name: context.project_name,
  canonical_id: context.canonical_id,
  continuity_state: continuityState,
  continuity_summary: describeContinuityState(continuityState),
+ resume_freshness: classifyResumeFreshness(resumeTimestamp),
+ resume_source_session_id: latestSession?.session_id ?? null,
+ resume_source_device_id: latestSession?.device_id ?? null,
+ resume_next_actions: collectNextActions2(latestSummary?.next_steps),
  session_count: context.session_count,
  total_active: context.total_active,
  recent_requests: recentRequests,
@@ -19429,6 +19505,14 @@ function getSessionContext(db, input) {
  preview
  };
  }
+ function collectNextActions2(value) {
+ if (!value)
+ return [];
+ const normalized = value.split(/\n+/).map((line) => line.replace(/^[\s*-]+/, "").trim()).filter((line) => line.length > 0);
+ if (normalized.length > 1)
+ return normalized.slice(0, 5);
+ return value.split(/[.;](?:\s+|$)/).map((item) => item.replace(/^[\s*-]+/, "").trim()).filter((item) => item.length > 0).slice(0, 5);
+ }
  function buildHotFiles(context) {
  const counts = new Map;
  for (const obs of context.observations) {
@@ -19896,45 +19980,53 @@ async function repairRecall(db, config2, input = {}) {
  }
 
  // src/tools/resume-thread.ts
- async function resumeThread(db, input = {}) {
+ async function resumeThread(db, config2, input = {}) {
  const cwd = input.cwd ?? process.cwd();
  const limit = Math.max(2, Math.min(input.limit ?? 5, 8));
+ const repairIfNeeded = input.repair_if_needed !== false;
  const detected = detectProject(cwd);
  const project = db.getProjectByCanonicalId(detected.canonical_id);
- const context = getSessionContext(db, {
- cwd,
- user_id: input.user_id,
- current_device_id: input.current_device_id
- });
- const handoffResult = loadHandoff(db, {
- cwd,
- project_scoped: true,
- user_id: input.user_id,
- current_device_id: input.current_device_id
- });
- const handoff = handoffResult.handoff;
- const recentChat = getRecentChat(db, {
- cwd,
- project_scoped: true,
- user_id: input.user_id,
- limit: Math.max(limit, 4)
- });
- const recentSessions = getRecentSessions(db, {
- cwd,
- project_scoped: true,
- user_id: input.user_id,
- limit: 3
- }).sessions;
- const recall = await searchRecall(db, {
- query: "what were we just talking about",
- cwd,
- project_scoped: true,
- user_id: input.user_id,
- limit
- });
+ let snapshot = await buildResumeSnapshot(db, cwd, input.user_id, input.current_device_id, limit);
+ let repairResult = null;
+ const shouldRepair = repairIfNeeded && snapshot.recentChat.coverage_state !== "transcript-backed" && (snapshot.recentChat.messages.length > 0 || snapshot.recentSessions.length > 0 || snapshot.context?.continuity_state !== "cold");
+ if (shouldRepair) {
+ repairResult = await repairRecall(db, config2, {
+ cwd,
+ user_id: input.user_id,
+ limit: Math.max(limit, 4)
+ });
+ if (repairResult.imported_chat_messages > 0) {
+ snapshot = await buildResumeSnapshot(db, cwd, input.user_id, input.current_device_id, limit);
+ }
+ }
+ const { context, handoff, recentChat, recentSessions, recall } = snapshot;
  const latestSession = recentSessions[0] ?? null;
+ const latestSummary = latestSession ? db.getSessionSummary(latestSession.session_id) : null;
  const inferredRequest = latestSession?.request?.trim() || null;
  const currentThread = extractCurrentThread(handoff) || latestSession?.current_thread?.trim() || inferredRequest || context?.recent_outcomes[0] || recentChat.messages[recentChat.messages.length - 1]?.content.replace(/\s+/g, " ").trim().slice(0, 180) || null;
+ const toolTrail = collectToolTrail(latestSession);
+ const hotFiles = collectHotFiles2(latestSession, context?.hot_files ?? []);
+ const nextActions = collectNextActions3(latestSummary?.next_steps);
+ const sourceTimestamp = pickSourceTimestamp(latestSession, recentChat.messages);
+ const resumeBasis = buildResumeBasis({
+ handoff,
+ continuityState: context?.continuity_state ?? "cold",
+ chatCoverageState: recentChat.coverage_state,
+ latestRequest: inferredRequest,
+ currentThread,
+ recallHits: recall.results,
+ recentOutcomes: context?.recent_outcomes ?? [],
+ toolTrail,
+ hotFiles,
+ nextActions
+ });
+ const resumeConfidence = classifyResumeConfidence({
+ handoff,
+ continuityState: context?.continuity_state ?? "cold",
+ chatCoverageState: recentChat.coverage_state,
+ recallHits: recall.results,
+ currentThread
+ });
  const suggestedTools = Array.from(new Set([
  "search_recall",
  ...recentChat.coverage_state !== "transcript-backed" && recentChat.messages.length > 0 ? ["repair_recall", "refresh_chat_recall"] : [],
@@ -19945,6 +20037,16 @@ async function resumeThread(db, input = {}) {
  project_name: project?.name ?? context?.project_name ?? null,
  continuity_state: context?.continuity_state ?? "cold",
  continuity_summary: context?.continuity_summary ?? "No fresh repo-local continuity yet; older memory should be treated cautiously.",
+ resume_freshness: classifyResumeFreshness2(sourceTimestamp),
+ resume_source_session_id: latestSession?.session_id ?? null,
+ resume_source_device_id: handoff?.device_id ?? latestSession?.device_id ?? null,
+ resume_confidence: resumeConfidence,
+ resume_basis: resumeBasis,
+ repair_attempted: shouldRepair,
+ repair_result: repairResult ? {
+ imported_chat_messages: repairResult.imported_chat_messages,
+ sessions_with_imports: repairResult.sessions_with_imports
+ } : null,
  latest_request: inferredRequest,
  current_thread: currentThread,
  handoff: handoff ? {
@@ -19952,6 +20054,9 @@ async function resumeThread(db, input = {}) {
  title: handoff.title,
  source: extractHandoffSource(handoff)
  } : null,
+ tool_trail: toolTrail,
+ hot_files: hotFiles,
+ next_actions: nextActions,
  recent_outcomes: context?.recent_outcomes ?? [],
  chat_coverage_state: recentChat.coverage_state,
  recent_chat: recentChat.messages.slice(-4).map((message) => ({
@@ -19964,6 +20069,45 @@ async function resumeThread(db, input = {}) {
  suggested_tools: suggestedTools
  };
  }
+ async function buildResumeSnapshot(db, cwd, userId, currentDeviceId, limit) {
+ const context = getSessionContext(db, {
+ cwd,
+ user_id: userId,
+ current_device_id: currentDeviceId
+ });
+ const handoffResult = loadHandoff(db, {
+ cwd,
+ project_scoped: true,
+ user_id: userId,
+ current_device_id: currentDeviceId
+ });
+ const recentChat = getRecentChat(db, {
+ cwd,
+ project_scoped: true,
+ user_id: userId,
+ limit: Math.max(limit, 4)
+ });
+ const recentSessions = getRecentSessions(db, {
+ cwd,
+ project_scoped: true,
+ user_id: userId,
+ limit: 3
+ }).sessions;
+ const recall = await searchRecall(db, {
+ query: "what were we just talking about",
+ cwd,
+ project_scoped: true,
+ user_id: userId,
+ limit
+ });
+ return {
+ context,
+ handoff: handoffResult.handoff,
+ recentChat,
+ recentSessions,
+ recall
+ };
+ }
  function extractCurrentThread(handoff) {
  const narrative = handoff?.narrative ?? "";
  const match = narrative.match(/Current thread:\s*(.+)/i);
@@ -19972,6 +20116,87 @@ function extractCurrentThread(handoff) {
  function extractHandoffSource(handoff) {
  return handoff.device_id ?? null;
  }
+ function pickSourceTimestamp(latestSession, messages) {
+ const latestChatEpoch = messages.length > 0 ? messages[messages.length - 1]?.created_at_epoch ?? null : null;
+ return latestChatEpoch ?? latestSession?.completed_at_epoch ?? latestSession?.started_at_epoch ?? null;
+ }
+ function classifyResumeFreshness2(sourceTimestamp) {
+ if (!sourceTimestamp)
+ return "stale";
+ const ageMs = Date.now() - sourceTimestamp * 1000;
+ if (ageMs <= 15 * 60 * 1000)
+ return "live";
+ if (ageMs <= 3 * 24 * 60 * 60 * 1000)
+ return "recent";
+ return "stale";
+ }
+ function classifyResumeConfidence(input) {
+ const hasStrongChat = input.chatCoverageState === "transcript-backed" || input.chatCoverageState === "history-backed";
+ const hasRecall = input.recallHits.length > 0;
+ if (input.handoff && input.currentThread && (input.continuityState === "fresh" || hasStrongChat || hasRecall)) {
+ return "strong";
+ }
+ if (input.currentThread && (input.continuityState !== "cold" || hasStrongChat || hasRecall)) {
+ return "usable";
+ }
+ return "thin";
+ }
+ function buildResumeBasis(input) {
+ const basis = [];
+ if (input.handoff)
+ basis.push("explicit handoff available");
+ if (input.currentThread)
+ basis.push("current thread recovered");
+ if (input.latestRequest)
+ basis.push("latest request recovered");
+ if (input.recentOutcomes.length > 0)
+ basis.push("recent outcomes available");
+ if (input.nextActions.length > 0)
+ basis.push("next actions available");
+ if (input.toolTrail.length > 0)
+ basis.push("recent tool trail available");
+ if (input.hotFiles.length > 0)
+ basis.push("hot files available");
+ if (input.recallHits.some((item) => item.kind === "chat"))
+ basis.push("live chat recall available");
+ if (input.chatCoverageState === "transcript-backed")
+ basis.push("transcript-backed chat continuity");
+ if (input.chatCoverageState === "history-backed")
+ basis.push("history-backed chat continuity");
+ if (input.continuityState === "fresh")
+ basis.push("fresh repo-local continuity");
+ if (basis.length === 0)
+ basis.push("thin fallback only");
+ return basis.slice(0, 6);
+ }
+ function collectToolTrail(session) {
+ const parsed = parseJsonArray4(session?.recent_tool_names);
+ return parsed.slice(0, 5);
+ }
+ function collectHotFiles2(session, fallback) {
+ const parsed = parseJsonArray4(session?.hot_files).map((path) => ({ path, count: 1 }));
+ if (parsed.length > 0)
+ return parsed.slice(0, 5);
+ return fallback.slice(0, 5);
+ }
+ function collectNextActions3(value) {
+ if (!value)
+ return [];
+ const normalized = value.split(/\n+/).map((line) => line.replace(/^[\s*-]+/, "").trim()).filter((line) => line.length > 0);
+ if (normalized.length > 1)
+ return normalized.slice(0, 5);
+ return value.split(/[.;](?:\s+|$)/).map((item) => item.replace(/^[\s*-]+/, "").trim()).filter((item) => item.length > 0).slice(0, 5);
+ }
+ function parseJsonArray4(value) {
+ if (!value)
+ return [];
+ try {
+ const parsed = JSON.parse(value);
+ return Array.isArray(parsed) ? parsed.filter((item) => typeof item === "string" && item.trim().length > 0) : [];
+ } catch {
+ return [];
+ }
+ }
 
  // src/tools/send-message.ts
  async function sendMessage(db, config2, input) {
@@ -20511,8 +20736,8 @@ function buildSessionHandoffMetadata(prompts, toolEvents, observations) {
  const recentToolNames = [...new Set(toolEvents.slice(-8).map((tool) => tool.tool_name).filter(Boolean))];
  const recentToolCommands = [...new Set(toolEvents.slice(-5).map((tool) => (tool.command ?? tool.file_path ?? "").trim()).filter(Boolean))];
  const hotFiles = [...new Set(observations.flatMap((obs) => [
- ...parseJsonArray4(obs.files_modified),
- ...parseJsonArray4(obs.files_read)
+ ...parseJsonArray5(obs.files_modified),
+ ...parseJsonArray5(obs.files_read)
  ]).filter(Boolean))].slice(0, 6);
  const recentOutcomes = observations.filter((obs) => ["bugfix", "feature", "refactor", "change", "decision"].includes(obs.type)).map((obs) => obs.title.trim()).filter((title) => title.length > 0).slice(0, 6);
  const captureState = prompts.length > 0 && toolEvents.length > 0 ? "rich" : prompts.length > 0 || toolEvents.length > 0 ? "partial" : "summary-only";
@@ -20570,7 +20795,7 @@ function compactFileHint(value) {
  return value;
  return parts.slice(-2).join("/");
  }
- function parseJsonArray4(value) {
+ function parseJsonArray5(value) {
  if (!value)
  return [];
  try {
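The rename above (`parseJsonArray4` → `parseJsonArray5`) just makes room for the new resume-thread helper of the same name; the behavior is unchanged. The shared shape is a defensive JSON-array parser, restated standalone:

```javascript
// Parse a JSON-encoded string into an array of non-empty strings;
// anything missing, malformed, or non-array collapses to [].
function parseJsonArray(value) {
  if (!value) return [];
  try {
    const parsed = JSON.parse(value);
    return Array.isArray(parsed)
      ? parsed.filter((item) => typeof item === "string" && item.trim().length > 0)
      : [];
  } catch {
    return [];
  }
}

console.log(parseJsonArray('["a.ts", " ", "b.ts"]')); // → ["a.ts", "b.ts"]
console.log(parseJsonArray("not json")); // → []
```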
@@ -21814,7 +22039,7 @@ process.on("SIGTERM", () => {
21814
22039
  });
21815
22040
  var server = new McpServer({
21816
22041
  name: "engrm",
21817
- version: "0.4.30"
22042
+ version: "0.4.31"
21818
22043
  });
21819
22044
  server.tool("save_observation", "Save an observation to memory", {
21820
22045
  type: exports_external.enum([
@@ -22262,19 +22487,31 @@ ${rows}`
  server.tool("resume_thread", "Build a clear resume point for the current project by combining handoff, live recall, current thread, and recent chat continuity.", {
  cwd: exports_external.string().optional().describe("Optional cwd override for the project to resume"),
  limit: exports_external.number().optional().describe("Max recall hits/chat snippets to include"),
- user_id: exports_external.string().optional().describe("Optional user override")
+ user_id: exports_external.string().optional().describe("Optional user override"),
+ repair_if_needed: exports_external.boolean().optional().describe("If true, attempt recall repair before resuming when continuity is still weak")
  }, async (params) => {
- const result = await resumeThread(db, {
+ const result = await resumeThread(db, config2, {
  cwd: params.cwd ?? process.cwd(),
  limit: params.limit,
  user_id: params.user_id ?? config2.user_id,
- current_device_id: config2.device_id
+ current_device_id: config2.device_id,
+ repair_if_needed: params.repair_if_needed
  });
  const projectLine = result.project_name ? `Project: ${result.project_name}
  ` : "";
  const handoffLine = result.handoff ? `Handoff: #${result.handoff.id} ${result.handoff.title}${result.handoff.source ? ` (${result.handoff.source})` : ""}
  ` : `Handoff: (none)
  `;
+ const basisLines = result.resume_basis.length > 0 ? result.resume_basis.map((item) => `- ${item}`).join(`
+ `) : "- (none)";
+ const toolTrailLines = result.tool_trail.length > 0 ? result.tool_trail.map((item) => `- ${item}`).join(`
+ `) : "- (none)";
+ const hotFileLines = result.hot_files.length > 0 ? result.hot_files.map((item) => `- ${item.path}${item.count > 1 ? ` (${item.count})` : ""}`).join(`
+ `) : "- (none)";
+ const nextActionLines = result.next_actions.length > 0 ? result.next_actions.map((item) => `- ${item}`).join(`
+ `) : "- (none)";
+ const repairLine = result.repair_attempted ? `Recall repair: attempted${result.repair_result ? ` · imported ${result.repair_result.imported_chat_messages} chat across ${result.repair_result.sessions_with_imports} session(s)` : ""}
+ ` : "";
  const outcomes = result.recent_outcomes.length > 0 ? result.recent_outcomes.map((item) => `- ${item}`).join(`
  `) : "- (none)";
  const chatLines = result.recent_chat.length > 0 ? result.recent_chat.map((item) => `- [${item.role}] [${item.source}] ${item.content.slice(0, 180)}`).join(`
@@ -22296,11 +22533,26 @@ server.tool("resume_thread", "Build a clear resume point for the current project
  {
  type: "text",
  text: `${projectLine}` + `Continuity: ${result.continuity_state} — ${result.continuity_summary}
- ` + `Current thread: ${result.current_thread ?? "(unknown)"}
+ ` + `Freshness: ${result.resume_freshness}
+ ` + `Source: ${result.resume_source_session_id ?? "(unknown session)"}${result.resume_source_device_id ? ` (${result.resume_source_device_id})` : ""}
+ ` + `Resume confidence: ${result.resume_confidence}
+ ` + repairLine + `Current thread: ${result.current_thread ?? "(unknown)"}
  ` + `Latest request: ${result.latest_request ?? "(none)"}
  ` + `${handoffLine}` + `Chat recall: ${result.chat_coverage_state}
  ` + `Suggested tools: ${result.suggested_tools.join(", ") || "(none)"}
 
+ ` + `Resume basis:
+ ${basisLines}
+
+ ` + `Tool trail:
+ ${toolTrailLines}
+
+ ` + `Hot files:
+ ${hotFileLines}
+
+ ` + `Next actions:
+ ${nextActionLines}
+
  ` + `Recent outcomes:
  ${outcomes}
 
@@ -22633,12 +22885,17 @@ server.tool("memory_console", "Show a high-signal local overview of what Engrm c
  {
  type: "text",
  text: `${projectLine}` + `${captureLine}` + `Continuity: ${result.continuity_state} — ${result.continuity_summary}
+ ` + `Resume readiness: ${result.resume_freshness} · ${result.resume_source_session_id ?? "(unknown session)"}${result.resume_source_device_id ? ` (${result.resume_source_device_id})` : ""}
  ` + `Chat recall: ${result.chat_coverage_state} · ${result.recent_chat.length} messages across ${result.recent_chat_sessions} sessions (transcript ${result.chat_source_summary.transcript}, history ${result.chat_source_summary.history}, hook ${result.chat_source_summary.hook})
  ` + `${typeof result.assistant_checkpoint_count === "number" ? `Assistant checkpoints: ${result.assistant_checkpoint_count}
  ` : ""}` + `Handoffs: ${result.saved_handoffs} saved, ${result.rolling_handoff_drafts} rolling drafts
  ` + `${typeof result.estimated_read_tokens === "number" ? `Estimated read cost: ~${result.estimated_read_tokens}t
  ` : ""}` + `Suggested tools: ${result.suggested_tools.join(", ") || "(none)"}
 
+ ` + `Next actions:
+ ${result.resume_next_actions.length > 0 ? result.resume_next_actions.map((item) => `- ${item}`).join(`
+ `) : "- (none)"}
+
  ` + `Top types:
  ${topTypes}
 
@@ -22832,6 +23089,7 @@ server.tool("session_context", "Preview the exact project memory context Engrm w
  text: `Project: ${result.project_name}
  ` + `Canonical ID: ${result.canonical_id}
  ` + `Continuity: ${result.continuity_state} — ${result.continuity_summary}
+ ` + `Resume readiness: ${result.resume_freshness} · ${result.resume_source_session_id ?? "(unknown session)"}${result.resume_source_device_id ? ` (${result.resume_source_device_id})` : ""}
  ` + `Loaded observations: ${result.session_count}
  ` + `Searchable total: ${result.total_active}
  ` + `Recent requests: ${result.recent_requests}
@@ -22842,6 +23100,7 @@ server.tool("session_context", "Preview the exact project memory context Engrm w
  ` + `Recent chat messages: ${result.recent_chat_messages}
  ` + `Chat recall: ${result.chat_coverage_state} · ${result.recent_chat_sessions} sessions (transcript ${result.chat_source_summary.transcript}, history ${result.chat_source_summary.history}, hook ${result.chat_source_summary.hook})
  ` + `Latest handoff: ${result.latest_handoff_title ?? "(none)"}
+ ` + `Next actions: ${result.resume_next_actions.length > 0 ? result.resume_next_actions.join(" | ") : "(none)"}
  ` + `Raw chronology active: ${result.raw_capture_active ? "yes" : "no"}
 
  ` + result.preview
@@ -22914,6 +23173,7 @@ server.tool("project_memory_index", "Show a typed local memory index for the cur
  text: `Project: ${result.project}
  ` + `Canonical ID: ${result.canonical_id}
  ` + `Continuity: ${result.continuity_state} — ${result.continuity_summary}
+ ` + `Resume readiness: ${result.resume_freshness} · ${result.resume_source_session_id ?? "(unknown session)"}${result.resume_source_device_id ? ` (${result.resume_source_device_id})` : ""}
  ` + `Recent requests captured: ${result.recent_requests_count}
  ` + `Recent tools captured: ${result.recent_tools_count}
 
@@ -22928,6 +23188,10 @@ server.tool("project_memory_index", "Show a typed local memory index for the cur
  ` + `Estimated read cost: ~${result.estimated_read_tokens}t
  ` + `Suggested tools: ${result.suggested_tools.join(", ") || "(none)"}
 
+ ` + `Next actions:
+ ${result.resume_next_actions.length > 0 ? result.resume_next_actions.map((item) => `- ${item}`).join(`
+ `) : "- (none)"}
+
  ` + `Observation counts:
  ${counts}
 
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "engrm",
- "version": "0.4.30",
- "description": "Shared memory across devices, sessions, and coding agents",
+ "version": "0.4.31",
+ "description": "Shared memory across devices, sessions, and agents, with thin MCP tools for durable capture and live continuity",
  "mcpName": "io.github.dr12hes/engrm",
  "type": "module",
  "main": "dist/server.js",