claude-mem-lite 2.1.4 → 2.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -10,7 +10,7 @@
  "plugins": [
  {
  "name": "claude-mem-lite",
- "version": "2.1.4",
+ "version": "2.1.6",
  "source": "./",
  "description": "Lightweight persistent memory system for Claude Code — FTS5 search, episode batching, error-triggered recall"
  }
@@ -1,6 +1,6 @@
  {
  "name": "claude-mem-lite",
- "version": "2.1.4",
+ "version": "2.2.0",
  "description": "Lightweight persistent memory system for Claude Code — FTS5 search, episode batching, error-triggered recall",
  "author": {
  "name": "sdsrss"
package/README.md CHANGED
@@ -95,6 +95,7 @@ The original sends **everything to the LLM and hopes it filters well**. claude-m
  - **Exploration bonus** -- New resources in the registry get a fair chance in composite ranking; zombie resources (high recommend, zero adopt) are penalized
  - **LLM concurrency control** -- File-based semaphore limits background workers to 2 concurrent LLM calls, preventing resource contention
  - **stdin overflow protection** -- Hook input truncated at 256KB with regex-based action salvage for oversized tool outputs
+ - **Cross-session handoff** -- Captures session state (request, completed work, next steps, key files) on `/clear` or `/exit`, then injects context when the next session detects continuation intent via explicit keywords or FTS5 term overlap

  ## Platform Support

@@ -224,7 +225,7 @@ rm -rf ~/claude-mem-lite/ # pre-v0.5 unhidden (if not auto-moved)

  ## Database Schema

- Four core tables with FTS5 virtual tables for search:
+ Five core tables with FTS5 virtual tables for search:

  **observations** -- Individual coding observations (decisions, bugfixes, features, etc.)
  ```
@@ -250,6 +251,12 @@ started_at, completed_at, status, prompt_counter
  id, content_session_id, prompt_text, prompt_number
  ```

+ **session_handoffs** -- Cross-session handoff snapshots (UPSERT, max 2 per project)
+ ```
+ project, type, session_id, working_on, completed, unfinished,
+ key_files, key_decisions, match_keywords, created_at_epoch
+ ```
+
  FTS5 indexes: `observations_fts`, `session_summaries_fts`, `user_prompts_fts`

  ## How It Works
@@ -258,7 +265,7 @@ FTS5 indexes: `observations_fts`, `session_summaries_fts`, `user_prompts_fts`

  ```
  SessionStart
- -> Generate session ID
+ -> Generate session ID (or save handoff snapshot on /clear)
  -> Mark stale sessions (>24h active) as abandoned
  -> Clean orphaned/stale lock files
  -> Query recent observations (24h)
@@ -281,11 +288,13 @@ PreToolUse (before tool execution)
  UserPromptSubmit
  -> Capture user prompt text to user_prompts table
  -> Increment session prompt counter
+ -> Handoff: detect continuation intent → inject previous session context
  -> Dispatch: recommend skill/agent based on user's actual prompt (Tier 0→1→2)
  -> Primary dispatch point — user intent is clearest here

  Stop
  -> Flush final episode buffer
+ -> Save handoff snapshot (on /exit)
  -> Collect dispatch feedback: adoption detection + outcome scoring
  -> Mark session completed
  -> Spawn LLM summary worker (poll-based wait)
@@ -419,6 +428,7 @@ claude-mem-lite/
  hook.mjs # Claude Code hooks: episode capture, error recall, session management
  hook-llm.mjs # Background LLM workers: episode extraction, session summaries
  hook-shared.mjs # Shared hook infrastructure: session management, DB access, LLM calls
+ hook-handoff.mjs # Cross-session handoff: state extraction, intent detection, injection
  hook-context.mjs # CLAUDE.md context injection and token budgeting
  hook-episode.mjs # Episode buffer management: atomic writes, pending entry merging
  hook-semaphore.mjs # LLM concurrency control: file-based semaphore for background workers
@@ -444,7 +454,7 @@ claude-mem-lite/
  convert-commands.mjs # Converts command .md → SKILL.md in managed plugins
  index-managed.mjs # Offline indexer for managed resources
  # Test & benchmark (dev only)
- tests/ # Unit, property, integration, contract, E2E, pipeline tests (581 tests)
+ tests/ # Unit, property, integration, contract, E2E, pipeline tests (789 tests)
  benchmark/ # BM25 search quality benchmarks + CI gate
  ```

@@ -466,7 +476,7 @@ The benchmark suite runs as a CI gate (`npm run benchmark:gate`) to prevent sear

  ```bash
  npm run lint # ESLint static analysis
- npm test # Run all 581 tests (vitest)
+ npm test # Run all 789 tests (vitest)
  npm run test:smoke # Run 5 core smoke tests
  npm run test:coverage # Run tests with V8 coverage (≥70% lines/functions, ≥60% branches)
  npm run benchmark # Run full search quality benchmark
package/README.zh-CN.md CHANGED
@@ -95,6 +95,7 @@
  - **探索奖励** -- 注册表中的新资源在复合排名中获得公平机会;高推荐零采纳的"僵尸"资源被惩罚
  - **LLM 并发控制** -- 基于文件的信号量将后台 worker 限制为 2 个并发 LLM 调用,防止资源争用
  - **stdin 溢出保护** -- Hook 输入在 256KB 处截断,对超大工具输出使用正则挽救关键信息
+ - **跨会话交接** -- 在 `/clear` 或 `/exit` 时捕获会话状态(请求、已完成工作、后续步骤、关键文件),下次会话检测到继续意图时自动注入上下文(支持显式关键词和 FTS5 术语重叠匹配)

  ## 平台支持

@@ -224,7 +225,7 @@ rm -rf ~/claude-mem-lite/ # v0.5 前的非隐藏目录(如未自动迁移)

  ## 数据库结构

- 四张核心表 + FTS5 虚拟表用于搜索:
+ 五张核心表 + FTS5 虚拟表用于搜索:

  **observations** -- 单条编码观察(决策、bug修复、功能等)
  ```
@@ -250,6 +251,12 @@ started_at, completed_at, status, prompt_counter
  id, content_session_id, prompt_text, prompt_number
  ```

+ **session_handoffs** -- 跨会话交接快照(UPSERT,每个项目最多 2 行)
+ ```
+ project, type, session_id, working_on, completed, unfinished,
+ key_files, key_decisions, match_keywords, created_at_epoch
+ ```
+
  FTS5 索引:`observations_fts`、`session_summaries_fts`、`user_prompts_fts`

  ## 工作原理
@@ -258,7 +265,7 @@ FTS5 索引:`observations_fts`、`session_summaries_fts`、`user_prompts_fts`

  ```
  SessionStart
- -> 生成会话 ID
+ -> 生成会话 ID(/clear 时保存交接快照)
  -> 标记过期会话(活跃 >24h)为 abandoned
  -> 清理孤儿/过期锁文件
  -> 查询最近观察(24 小时内)
@@ -281,11 +288,13 @@ PreToolUse(工具执行前)
  UserPromptSubmit
  -> 捕获用户提示文本到 user_prompts 表
  -> 递增会话提示计数器
+ -> 交接:检测继续意图 → 注入上一次会话上下文
  -> 调度:根据用户实际提示推荐 skill/agent(Tier 0→1→2)
  -> 主要调度触发点 — 用户意图在此最为明确

  Stop
  -> 刷新最终 episode 缓冲区
+ -> 保存交接快照(/exit 时)
  -> 收集调度反馈:采纳检测 + 结果评分
  -> 标记会话为已完成
  -> 启动 LLM 摘要 worker(轮询等待)
@@ -419,6 +428,7 @@ claude-mem-lite/
  hook.mjs # Claude Code 钩子:episode 捕获、错误回忆、会话管理
  hook-llm.mjs # 后台 LLM worker:episode 提取、会话摘要
  hook-shared.mjs # 共享钩子基础设施:会话管理、数据库访问、LLM 调用
+ hook-handoff.mjs # 跨会话交接:状态提取、意图检测、上下文注入
  hook-context.mjs # CLAUDE.md 上下文注入与 token 预算
  hook-episode.mjs # Episode 缓冲区管理:原子写入、待处理条目合并
  hook-semaphore.mjs # LLM 并发控制:基于文件的信号量
@@ -444,7 +454,7 @@ claude-mem-lite/
  convert-commands.mjs # 将 command .md 转换为托管插件中的 SKILL.md
  index-managed.mjs # 托管资源离线索引器
  # 测试和基准(仅开发)
- tests/ # 单元、属性、集成、契约、E2E、管线测试(581 个)
+ tests/ # 单元、属性、集成、契约、E2E、管线测试(789 个)
  benchmark/ # BM25 搜索质量基准 + CI 门控
  ```

@@ -466,7 +476,7 @@ claude-mem-lite/

  ```bash
  npm run lint # ESLint 静态分析
- npm test # 运行全部 581 个测试(vitest)
+ npm test # 运行全部 789 个测试(vitest)
  npm run test:smoke # 运行 5 个核心冒烟测试
  npm run test:coverage # 运行测试并生成 V8 覆盖率(≥70% 行/函数,≥60% 分支)
  npm run benchmark # 运行完整搜索质量基准测试
@@ -24,7 +24,7 @@ const ALLOWED_BASES = [

  function isAllowedPath(filePath) {
  if (!filePath) return false;
- return ALLOWED_BASES.some(base => filePath.startsWith(base));
+ return ALLOWED_BASES.some(base => filePath === base || filePath.startsWith(base + '/'));
  }

  // ─── Template Detection ──────────────────────────────────────────────────────
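The `isAllowedPath` hunk above closes a classic prefix-boundary hole: a bare `startsWith(base)` also accepts sibling paths that merely share the base as a string prefix. A minimal before/after sketch — the `ALLOWED_BASES` entry here is a hypothetical example, not the package's real list:

```javascript
// Hypothetical allow-list for illustration only.
const ALLOWED_BASES = ['/home/user/projects'];

// Old check: matches any path that string-prefixes a base.
function isAllowedPathOld(filePath) {
  if (!filePath) return false;
  return ALLOWED_BASES.some(base => filePath.startsWith(base));
}

// New check: exact base, or a true child under "base/".
function isAllowedPathNew(filePath) {
  if (!filePath) return false;
  return ALLOWED_BASES.some(base => filePath === base || filePath.startsWith(base + '/'));
}

// A sibling directory sharing the prefix slips past the old check:
console.log(isAllowedPathOld('/home/user/projects-evil/x'));    // true (bug)
console.log(isAllowedPathNew('/home/user/projects-evil/x'));    // false
console.log(isAllowedPathNew('/home/user/projects/src/a.mjs')); // true
```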
package/dispatch.mjs CHANGED
@@ -494,7 +494,7 @@ function inferTechFromPrompt(prompt) {
  [/\b(typescript|ts)\b/i, 'typescript'],
  [/\b(python|django|flask|fastapi)\b/i, 'python'],
  [/\b(rust|cargo)\b/i, 'rust'],
- [/\b(golang|go\s+\w+)\b/i, 'go'],
+ [/\b(golang|go\s+(?:build|test|run|get|mod|install|fmt|vet|generate|clean|work|tool))\b/i, 'go'],
  [/\b(java|spring|maven|gradle)\b/i, 'java'],
  [/\b(ruby|rails)\b/i, 'ruby'],
  [/\b(php|laravel|symfony)\b/i, 'php'],
@@ -578,7 +578,10 @@ export function needsHaikuDispatch(results) {

  if (results.length === 0) return true;

- const topScore = Math.abs(results[0].relevance);
+ // Prefer composite_score (includes behavioral signals) over raw BM25 relevance.
+ // Both are negative (more negative = better). Use absolute values for comparison.
+ const scoreOf = r => Math.abs(r.composite_score ?? r.relevance);
+ const topScore = scoreOf(results[0]);

  // Relative threshold: if only one result or few results, use absolute minimum
  // For larger result sets, use mean-relative threshold
@@ -588,14 +591,14 @@
  }

  // Compute mean relevance across results
- const meanScore = results.reduce((sum, r) => sum + Math.abs(r.relevance), 0) / results.length;
+ const meanScore = results.reduce((sum, r) => sum + scoreOf(r), 0) / results.length;

  // Top result should be significantly above mean (at least 1.5x)
  if (topScore < meanScore * 1.5 && topScore < 3.0) return true;

  // Top two results too close → ambiguous, need Haiku to disambiguate
  if (results.length > 1) {
- const gap = topScore - Math.abs(results[1].relevance);
+ const gap = topScore - scoreOf(results[1]);
  // Gap should be at least 10% of top score, or at least 0.5 absolute
  if (gap < Math.max(topScore * 0.1, 0.5)) return true;
  }
@@ -633,19 +636,19 @@ JSON: {"query":"search keywords for finding the right skill or agent","type":"sk
  // ─── Cooldown & Dedup (DB-persisted, survives process restarts) ─────────────

  export function isRecentlyRecommended(db, resourceId, sessionId) {
- // Check 1: Per-session recommendation cap (avoid overwhelming user with suggestions)
+ // Check 1 & 2: Session-scoped checks (cap + dedup) only when sessionId is available
  if (sessionId) {
  const sessionCount = db.prepare(
  'SELECT COUNT(*) as cnt FROM invocations WHERE session_id = ? AND recommended = 1'
  ).get(sessionId);
  if (sessionCount.cnt >= SESSION_RECOMMEND_CAP) return true;
- }

- // Check 2: Already recommended in this session (session dedup)
- const sessionHit = db.prepare(
- 'SELECT 1 FROM invocations WHERE resource_id = ? AND session_id = ? LIMIT 1'
- ).get(resourceId, sessionId);
- if (sessionHit) return true;
+ // Already recommended in this session (session dedup)
+ const sessionHit = db.prepare(
+ 'SELECT 1 FROM invocations WHERE resource_id = ? AND session_id = ? AND recommended = 1 LIMIT 1'
+ ).get(resourceId, sessionId);
+ if (sessionHit) return true;
+ }

  // Check 3: Recommended within cooldown window (cross-session cooldown)
  const cooldownHit = db.prepare(
@@ -702,7 +705,9 @@ function applyAdoptionDecay(results) {

  if (multiplier === 0) return null;
  if (multiplier < 1) {
- return { ...r, relevance: r.relevance * multiplier, _decayed: true };
+ // Composite scores are negative (more negative = more relevant).
+ // To penalize: multiply by multiplier (<1) to make less negative (worse rank).
+ return { ...r, composite_score: (r.composite_score ?? r.relevance) * multiplier, _decayed: true };
  }
  return r;
  }).filter(Boolean);
@@ -791,12 +796,18 @@ export async function dispatchOnSessionStart(db, userPrompt, sessionId) {
  if (haikuResult?.query) {
  const haikuQuery = buildQueryFromText(haikuResult.query);
  if (haikuQuery) {
- const haikuResults = retrieveResources(db, haikuQuery, {
+ let haikuResults = retrieveResources(db, haikuQuery, {
  type: haikuResult.type === 'either' ? undefined : haikuResult.type,
  limit: 3,
  projectDomains,
  });
- if (haikuResults.length > 0) results = haikuResults;
+ if (haikuResults.length > 0) {
+ // Apply same post-processing as Tier2 to prevent zombie/low-confidence bypass
+ haikuResults = reRankByKeywords(haikuResults, signals.rawKeywords);
+ haikuResults = applyAdoptionDecay(haikuResults);
+ haikuResults = passesConfidenceGate(haikuResults, signals);
+ if (haikuResults.length > 0) results = haikuResults;
+ }
  }
  }
  }
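The `needsHaikuDispatch` and `applyAdoptionDecay` hunks above both rest on the same sign convention: SQLite FTS5's `bm25()` returns negative values where more negative means more relevant, so comparisons run on absolute values and `composite_score` is preferred when present. A small self-contained sketch with made-up scores (not real BM25 output):

```javascript
// scoreOf mirrors the helper added in the diff: prefer composite_score,
// fall back to raw relevance, compare on magnitude.
const scoreOf = r => Math.abs(r.composite_score ?? r.relevance);

const results = [
  { relevance: -7.0, composite_score: -8.0 }, // composite wins
  { relevance: -6.0 },                        // no composite → raw BM25
];

const topScore = scoreOf(results[0]);       // 8
const gap = topScore - scoreOf(results[1]); // 2
// Ambiguity rule from the diff: gap must be at least max(10% of top, 0.5)
const ambiguous = gap < Math.max(topScore * 0.1, 0.5);
console.log(topScore, gap, ambiguous); // 8 2 false
```

With a gap of 2 against a 0.8 threshold, the top result is unambiguous and no Haiku disambiguation call would be needed.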
package/hook-context.mjs CHANGED
@@ -78,14 +78,13 @@ export function selectWithTokenBudget(db, project, budget = 2000) {
  LIMIT 10
  `).all(project, now_ms - windows.sessWindow);

- const now = Date.now();
  const selectedObs = [];
  const selectedSess = [];
  let totalTokens = 0;

  // Score each candidate: value = recency * importance, cost = tokens
  const scoredObs = obsPool.map(o => {
- const ageDays = (now - o.created_at_epoch) / 86400000;
+ const ageDays = (now_ms - o.created_at_epoch) / 86400000;
  const recency = 1 / (1 + ageDays);
  const impBoost = 0.5 + 0.5 * (o.importance || 1);
  const value = recency * impBoost;
@@ -94,7 +93,7 @@ export function selectWithTokenBudget(db, project, budget = 2000) {
  });

  const scoredSess = sessPool.map(s => {
- const ageDays = (now - s.created_at_epoch) / 86400000;
+ const ageDays = (now_ms - s.created_at_epoch) / 86400000;
  const recency = 1 / (1 + ageDays);
  const value = recency * 1.5; // Session summaries slightly boosted
  const cost = estimateTokens((s.request || '') + (s.completed || '') + (s.next_steps || ''));
@@ -155,7 +154,7 @@ export function updateClaudeMd(contextBlock) {
  const startIdx = content.indexOf(startTag);
  const endIdx = content.indexOf(endTag);

- if (startIdx !== -1 && endIdx !== -1) {
+ if (startIdx !== -1 && endIdx !== -1 && startIdx < endIdx) {
  // Replace existing section in-place — preserves surrounding content (including hint if present)
  content = content.slice(0, startIdx) + newSection + content.slice(endIdx + endTag.length);
  } else if (content.length > 0) {
package/hook-episode.mjs CHANGED
@@ -210,13 +210,21 @@ export function mergePendingEntries(episode) {

  /**
  * Check if an episode has significant content worth processing with LLM.
- * Significant = contains file edits or Bash errors.
+ * Significant = contains file edits, Bash errors, or a review/research pattern
+ * (5+ Read/Grep entries indicate investigation worth recording).
  * @param {object} episode The episode to check
  * @returns {boolean} true if the episode has significant content
  */
  export function episodeHasSignificantContent(episode) {
- return episode.entries.some(e =>
+ const hasEditsOrErrors = episode.entries.some(e =>
  EDIT_TOOLS.has(e.tool) ||
  (e.tool === 'Bash' && e.isError)
  );
+ if (hasEditsOrErrors) return true;
+
+ // Review/research pattern: reading many files indicates investigation
+ const readCount = episode.entries.filter(e =>
+ e.tool === 'Read' || e.tool === 'Grep'
+ ).length;
+ return readCount >= 5;
  }
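A usage sketch of the widened significance check above: edits and Bash errors still qualify as before, and a read-heavy "research" episode now qualifies once it crosses the threshold. The `EDIT_TOOLS` membership shown here is an assumption for illustration; the real set lives in `utils.mjs`:

```javascript
// Assumed tool set for this sketch only.
const EDIT_TOOLS = new Set(['Edit', 'Write', 'NotebookEdit']);

function episodeHasSignificantContent(episode) {
  const hasEditsOrErrors = episode.entries.some(e =>
    EDIT_TOOLS.has(e.tool) ||
    (e.tool === 'Bash' && e.isError)
  );
  if (hasEditsOrErrors) return true;
  // Review/research pattern: reading many files indicates investigation
  const readCount = episode.entries.filter(e =>
    e.tool === 'Read' || e.tool === 'Grep'
  ).length;
  return readCount >= 5;
}

const research = { entries: Array.from({ length: 5 }, () => ({ tool: 'Read' })) };
const lightBrowse = { entries: Array.from({ length: 4 }, () => ({ tool: 'Grep' })) };
const failedBash = { entries: [{ tool: 'Bash', isError: true }] };
console.log(episodeHasSignificantContent(research));    // true — new review pattern
console.log(episodeHasSignificantContent(lightBrowse)); // false — below threshold
console.log(episodeHasSignificantContent(failedBash));  // true — error, as before
```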
package/hook-llm.mjs CHANGED
@@ -183,6 +183,47 @@ export function buildDegradedTitle(episode) {
  return desc.replace(/ → (?:ERROR: )?\{.*$/, hasError ? ' (error)' : '');
  }

+ /**
+ * Build a rule-based observation from episode metadata for immediate DB persistence.
+ * Used as pre-save (before LLM) and as fallback when LLM is unavailable.
+ * @param {object} episode Episode with entries, files, filesRead arrays
+ * @returns {object} Observation object ready for saveObservation()
+ */
+ export function buildImmediateObservation(episode) {
+ const hasError = episode.entries.some(e => e.isError);
+ const hasEdit = episode.entries.some(e => EDIT_TOOLS.has(e.tool));
+ const readCount = episode.entries.filter(e => e.tool === 'Read' || e.tool === 'Grep').length;
+ const isReviewPattern = !hasEdit && !hasError && readCount >= 5;
+ const inferredType = hasError ? 'bugfix' : hasEdit ? 'change' : 'discovery';
+ const fileList = (episode.files || []).map(f => basename(f)).join(', ') || '(multiple)';
+
+ // Review/research episodes: use a descriptive title with file count
+ let title;
+ if (isReviewPattern) {
+ const allFiles = [...new Set([
+ ...(episode.files || []),
+ ...(episode.filesRead || []),
+ ])].map(f => basename(f));
+ const names = allFiles.slice(0, 4).join(', ');
+ const suffix = allFiles.length > 4 ? ` +${allFiles.length - 4} more` : '';
+ title = truncate(`Reviewed ${allFiles.length} files: ${names}${suffix}`, 120);
+ } else {
+ title = truncate(buildDegradedTitle(episode), 120);
+ }
+
+ return {
+ type: inferredType,
+ title,
+ subtitle: fileList,
+ narrative: episode.entries.map(e => e.desc).join('; '),
+ concepts: [],
+ facts: [],
+ files: episode.files,
+ filesRead: episode.filesRead || [],
+ importance: isReviewPattern ? Math.max(2, computeRuleImportance(episode)) : computeRuleImportance(episode),
+ };
+ }
+
  // ─── Background: LLM Episode Extraction (Tier 2 F) ──────────────────────────

  export async function handleLLMEpisode() {
@@ -282,20 +323,7 @@ importance: 1=routine, 2=notable (error fix, arch decision, config change), 3=cr
  try { unlinkSync(tmpFile); } catch {}
  return;
  }
- const hasError = episode.entries.some(e => e.isError);
- const hasEdit = episode.entries.some(e => EDIT_TOOLS.has(e.tool));
- const inferredType = hasError ? 'bugfix' : hasEdit ? 'change' : 'discovery';
- obs = {
- type: inferredType,
- title: truncate(buildDegradedTitle(episode), 120),
- subtitle: fileList,
- narrative: episode.entries.map(e => e.desc).join('; '),
- concepts: [],
- facts: [],
- files: episode.files,
- filesRead: episode.filesRead || [],
- importance: ruleImportance,
- };
+ obs = buildImmediateObservation(episode);
  }

  const db = openDb();
@@ -371,16 +399,25 @@ export async function handleLLMSummary() {
  if (recentObs.length < 1) return;

  const obsList = recentObs.map((o, i) =>
- `${i + 1}. [${o.type}] ${o.title}${o.narrative ? ': ' + truncate(o.narrative, 80) : ''}`
+ `${i + 1}. [${o.type}] ${o.title}${o.narrative ? ': ' + truncate(o.narrative, 200) : ''}`
  ).join('\n');

+ // Include user prompts for richer context
+ const userPrompts = db.prepare(`
+ SELECT prompt_text FROM user_prompts
+ WHERE content_session_id = ? ORDER BY prompt_number ASC LIMIT 10
+ `).all(sessionId).map(p => truncate(p.prompt_text, 300));
+ const promptCtx = userPrompts.length > 0
+ ? `\nUser requests: ${userPrompts.join(' → ')}\n`
+ : '';
+
  const prompt = `Summarize this coding session. Return ONLY valid JSON, no markdown fences.

- Project: ${project}
+ Project: ${project}${promptCtx}
  Observations (${recentObs.length} total):
  ${obsList}

- JSON: {"request":"what the user was working on","investigated":"what was explored/analyzed","learned":"key findings","completed":"what was accomplished","next_steps":"suggested follow-up"}`;
+ JSON: {"request":"what the user was working on","completed":"specific items accomplished with file names","remaining_items":"specific unfinished items from the original request — compare investigation scope with actual changes to infer what was NOT yet done; be precise with file:issue format, or empty string if all done","next_steps":"suggested follow-up"}`;

  if (!(await acquireLLMSlot())) {
  debugLog('WARN', 'llm-summary', 'semaphore timeout, skipping summary');
@@ -398,12 +435,13 @@ JSON: {"request":"what the user was working on","investigated":"what was explore
  if (llmParsed && llmParsed.request) {
  const now = new Date();
  db.prepare(`
- INSERT INTO session_summaries (memory_session_id, project, request, investigated, learned, completed, next_steps, files_read, files_edited, notes, created_at, created_at_epoch)
- VALUES (?, ?, ?, ?, ?, ?, ?, '[]', '[]', '', ?, ?)
+ INSERT INTO session_summaries (memory_session_id, project, request, investigated, learned, completed, next_steps, remaining_items, files_read, files_edited, notes, created_at, created_at_epoch)
+ VALUES (?, ?, ?, ?, ?, ?, ?, ?, '[]', '[]', '', ?, ?)
  `).run(
  sessionId, project,
  llmParsed.request || '', llmParsed.investigated || '', llmParsed.learned || '',
  llmParsed.completed || '', llmParsed.next_steps || '',
+ llmParsed.remaining_items || '',
  now.toISOString(), now.getTime()
  );
  }
package/hook-memory.mjs CHANGED
@@ -28,7 +28,7 @@ export function searchRelevantMemories(db, userPrompt, project, excludeIds = [])

  const selectStmt = db.prepare(`
  SELECT o.id, o.type, o.title, o.importance,
- bm25(observations_fts) as relevance
+ bm25(observations_fts, 10, 5, 5, 3, 3, 2) as relevance
  FROM observations_fts
  JOIN observations o ON o.id = observations_fts.rowid
  WHERE observations_fts MATCH ?
@@ -36,7 +36,7 @@ export function searchRelevantMemories(db, userPrompt, project, excludeIds = [])
  AND o.importance >= 2
  AND o.created_at_epoch > ?
  AND COALESCE(o.compressed_into, 0) = 0
- ORDER BY bm25(observations_fts)
+ ORDER BY bm25(observations_fts, 10, 5, 5, 3, 3, 2)
  LIMIT 10
  `);
  const rows = selectStmt.all(ftsQuery, project, cutoff);
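The extra arguments added to `bm25()` above are SQLite FTS5 per-column weights: one weight per indexed column, in the FTS table's column order, each scaling that column's contribution to the (negative) rank. Which column maps to which weight is an assumption here; the sketch below only illustrates the weighted-sum effect, not FTS5 internals:

```javascript
// Illustration of per-column weighting — NOT the actual FTS5 computation,
// and the column-to-weight mapping is a hypothetical example.
const weights = [10, 5, 5, 3, 3, 2]; // e.g. first column (title?) dominates
// Hypothetical unweighted per-column match contributions for one row
// (negative = match, as in FTS5's bm25 convention):
const perColumn = [-0.8, 0, 0, -0.2, 0, -0.4];
const relevance = perColumn.reduce((sum, c, i) => sum + c * weights[i], 0);
console.log(relevance < 0); // more negative = better rank
```

The practical effect in `searchRelevantMemories` is that a term hit in a heavily weighted column outranks the same hit in a lightly weighted one.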
package/hook-shared.mjs CHANGED
@@ -26,6 +26,12 @@ export const RELATED_OBS_WINDOW_MS = 7 * 86400000; // 7 days
  export const FALLBACK_OBS_WINDOW_MS = 7 * 24 * 60 * 60 * 1000; // 7 days
  export const RESOURCE_RESCAN_INTERVAL_MS = 60 * 60 * 1000; // 1 hour

+ // Handoff system constants
+ export const HANDOFF_EXPIRY_CLEAR = 3600000; // 1 hour
+ export const HANDOFF_EXPIRY_EXIT = 7 * 24 * 60 * 60 * 1000; // 7 days
+ export const HANDOFF_MATCH_THRESHOLD = 3; // min weighted score
+ export const CONTINUE_KEYWORDS = /继续|接着|上次|之前的|前面的|刚才|\bcontinue\b|\bresume\b|\bwhere[\s\-]+we[\s\-]+left\b|\bpick[\s\-]+up\b|\bcarry[\s\-]+on\b/i;
+
  // Ensure runtime directory exists
  try { if (!existsSync(RUNTIME_DIR)) mkdirSync(RUNTIME_DIR, { recursive: true }); } catch {}

@@ -121,9 +127,10 @@ export function spawnBackground(bgEvent, ...extraArgs) {

  export function sleep(ms) { return new Promise(r => setTimeout(r, ms)); }

- // ─── Injection Budget (per-session, in-memory) ──────────────────────────────
- // Limits total context injections across all hooks to prevent context bloat.
- // Reset at session-start. Each hook checks before injecting.
+ // ─── Injection Budget (per hook invocation, in-memory) ───────────────────────
+ // Limits context injections within a single hook process to prevent context bloat.
+ // Note: each hook event runs in a separate process, so this is per-invocation,
+ // not per-session. Session-level dedup is handled by cooldown/sessionId checks.

  export const MAX_INJECTIONS_PER_SESSION = 3;
  let _injectionCount = 0;
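The `CONTINUE_KEYWORDS` constant added above is the cheap Tier-0 path of continuation-intent detection (the FTS5 term-overlap fallback is handled separately in `hook-handoff.mjs`). Exercising the exact regex from the diff on sample prompts:

```javascript
// Continuation-intent regex as added to hook-shared.mjs (bilingual: Chinese
// keywords match anywhere; English keywords are word-bounded).
const CONTINUE_KEYWORDS = /继续|接着|上次|之前的|前面的|刚才|\bcontinue\b|\bresume\b|\bwhere[\s\-]+we[\s\-]+left\b|\bpick[\s\-]+up\b|\bcarry[\s\-]+on\b/i;

console.log(CONTINUE_KEYWORDS.test("let's continue the refactor")); // true
console.log(CONTINUE_KEYWORDS.test('pick up where we left off'));   // true
console.log(CONTINUE_KEYWORDS.test('继续上次的工作'));               // true
console.log(CONTINUE_KEYWORDS.test('start a fresh task'));          // false
```

Note the regex has no `g` flag, so `.test()` is stateless and safe to call repeatedly.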
package/hook.mjs CHANGED
@@ -10,7 +10,7 @@ import { readFileSync, writeFileSync, unlinkSync, readdirSync, renameSync, statS
10
10
  import {
11
11
  truncate, typeIcon, inferProject, detectBashSignificance,
12
12
  extractErrorKeywords, extractFilePaths, isRelatedToEpisode,
13
- makeEntryDesc, scrubSecrets, computeRuleImportance, EDIT_TOOLS, debugCatch, debugLog, fmtTime,
13
+ makeEntryDesc, scrubSecrets, EDIT_TOOLS, debugCatch, debugLog, fmtTime,
14
14
  } from './utils.mjs';
15
15
  import {
16
16
  readEpisodeRaw, episodeFile,
@@ -29,8 +29,9 @@ import {
29
29
  closeRegistryDb, spawnBackground, appendToolEvent, readAndClearToolEvents,
30
30
  resetInjectionBudget, hasInjectionBudget, incrementInjection,
31
31
  } from './hook-shared.mjs';
32
- import { handleLLMEpisode, handleLLMSummary, saveObservation, buildDegradedTitle } from './hook-llm.mjs';
32
+ import { handleLLMEpisode, handleLLMSummary, saveObservation, buildImmediateObservation } from './hook-llm.mjs';
33
33
  import { searchRelevantMemories } from './hook-memory.mjs';
34
+ import { buildAndSaveHandoff, detectContinuationIntent, renderHandoffInjection } from './hook-handoff.mjs';
34
35
 
35
36
  // Prevent recursive hooks from background claude -p calls
36
37
  // Background workers (llm-episode, llm-summary, resource-scan) are exempt — they're ours
@@ -88,21 +89,7 @@ function flushEpisode(episode) {
88
89
  // LLM background worker will upgrade title/narrative/importance later.
89
90
  if (isSignificant) {
90
91
  try {
91
- const hasError = episode.entries.some(e => e.isError);
92
- const hasEdit = episode.entries.some(e => EDIT_TOOLS.has(e.tool));
93
- const inferredType = hasError ? 'bugfix' : hasEdit ? 'change' : 'discovery';
94
- const fileList = (episode.files || []).map(f => basename(f)).join(', ') || '(multiple)';
95
- const obs = {
96
- type: inferredType,
97
- title: truncate(buildDegradedTitle(episode), 120),
98
- subtitle: fileList,
99
- narrative: episode.entries.map(e => e.desc).join('; '),
100
- concepts: [],
101
- facts: [],
102
- files: episode.files,
103
- filesRead: episode.filesRead || [],
104
- importance: computeRuleImportance(episode),
105
- };
92
+ const obs = buildImmediateObservation(episode);
106
93
  const id = saveObservation(obs, episode.project, episode.sessionId);
107
94
  if (id) episode.savedId = id;
108
95
  } catch (e) { debugCatch(e, 'flushEpisode-immediateSave'); }
@@ -159,7 +146,7 @@ async function handlePostToolUse() {
159
146
 
160
147
  // Skip noise
161
148
  if (SKIP_TOOLS.has(tool_name)) return;
162
- if (tool_name.startsWith('mem_') || tool_name.startsWith('mcp__mem__')) return;
149
+ if (tool_name.startsWith('mem_') || tool_name.startsWith('mcp__mem__') || tool_name.startsWith('mcp__plugin_claude-mem-lite')) return;
163
150
  if (tool_name.startsWith('mcp__sequential') || tool_name.startsWith('mcp__plugin_context7')) return;
164
151
 
165
152
  const resp = typeof tool_response === 'string' ? tool_response : JSON.stringify(tool_response || '');
@@ -318,6 +305,9 @@ async function handleStop() {
318
305
  const sessionId = getSessionId();
319
306
  const project = inferProject();
320
307
 
308
+ // Snapshot episode BEFORE flush for handoff extraction
309
+ const episodeSnapshot = readEpisodeRaw();
310
+
321
311
  // Flush remaining episode buffer (locked to prevent race with handlePostToolUse)
322
312
  if (acquireLock(1000)) {
323
313
  try {
@@ -340,6 +330,13 @@ async function handleStop() {
340
330
  if (episode && episode.entries && episode.entries.length > 0 && episodeHasSignificantContent(episode)) {
341
331
  if (!episode.sessionId) episode.sessionId = sessionId;
342
332
  if (!episode.project) episode.project = project;
333
+ // Immediate save: persist rule-based observation to DB before spawning background worker.
334
+ // Without this, data is lost if the background worker fails.
335
+ try {
336
+ const obs = buildImmediateObservation(episode);
337
+ const id = saveObservation(obs, episode.project, episode.sessionId);
338
+ if (id) episode.savedId = id;
339
+ } catch (e) { debugCatch(e, 'handleStop-fallback-immediateSave'); }
343
340
  const flushFile = join(RUNTIME_DIR, `ep-flush-${Date.now()}-${randomUUID().slice(0, 8)}.json`);
344
341
  writeFileSync(flushFile, JSON.stringify(episode));
345
342
  spawnBackground('llm-episode', flushFile);
@@ -350,7 +347,7 @@ async function handleStop() {
350
347
  } catch (e) { debugCatch(e, 'handleStop-fallback'); }
351
348
  }
352
349
 
353
- // Mark session completed (sync, instant)
350
+ // Mark session completed + save handoff (sync, instant)
354
351
  const db = openDb();
355
352
  if (db) {
356
353
  try {
@@ -358,6 +355,9 @@ async function handleStop() {
358
355
  UPDATE sdk_sessions SET status = 'completed', completed_at = ?, completed_at_epoch = ?
359
356
  WHERE content_session_id = ? AND status = 'active'
360
357
  `).run(new Date().toISOString(), Date.now(), sessionId);
358
+ // Save handoff snapshot for cross-session continuity
359
+ try { buildAndSaveHandoff(db, sessionId, project, 'exit', episodeSnapshot); }
360
+ catch (e) { debugCatch(e, 'handleStop-handoff'); }
361
361
   } finally {
     db.close();
   }
@@ -366,10 +366,11 @@ async function handleStop() {
   // Dispatch: collect feedback on recommendations using actual tool events
   // PostToolUse tracks Skill/Task/Edit/Write/Bash events in a JSONL file.
   // These events drive adoption detection (Skill/Task) and outcome detection (Edit/Bash errors).
+  // Always clear event file to prevent stale events accumulating if registry DB is unavailable.
   try {
+    const sessionEvents = readAndClearToolEvents();
     const rdb = getRegistryDb();
     if (rdb) {
-      const sessionEvents = readAndClearToolEvents();
       await collectFeedback(rdb, sessionId, sessionEvents);
     }
   } catch (e) { debugCatch(e, 'handleStop-feedback'); }
@@ -386,6 +387,9 @@ async function handleStop() {
 async function handleSessionStart() {
   resetInjectionBudget();
 
+  // Snapshot episode BEFORE flush for handoff extraction
+  const episodeSnapshot = readEpisodeRaw();
+
   // Flush any leftover episode buffer from previous session (e.g. after /clear)
   if (acquireLock()) {
     try {
@@ -464,6 +468,10 @@ async function handleSessionStart() {
   // ── Non-transactional operations (side effects, background work) ──
 
   if (prevSessionId) {
+    // Save handoff for cross-session continuity (/clear or /compact)
+    try { buildAndSaveHandoff(db, prevSessionId, prevProject || project, 'clear', episodeSnapshot); }
+    catch (e) { debugCatch(e, 'session-start-handoff'); }
+
     // Collect dispatch feedback for previous session
     try {
       const rdb = getRegistryDb();
@@ -774,6 +782,19 @@ async function handleUserPrompt() {
     now.toISOString(), now.getTime()
   );
 
+  // Cross-session handoff injection (first prompt only, before semantic memory)
+  if (counter?.prompt_counter === 1 && hasInjectionBudget()) {
+    try {
+      if (detectContinuationIntent(db, promptText, project)) {
+        const injection = renderHandoffInjection(db, project);
+        if (injection) {
+          process.stdout.write(injection + '\n');
+          incrementInjection();
+        }
+      }
+    } catch (e) { debugCatch(e, 'handleUserPrompt-handoff'); }
+  }
+
   // Semantic memory injection: search past observations for the user's prompt
   if (hasInjectionBudget()) {
     try {
@@ -857,6 +878,7 @@ async function handleResourceScan() {
   }
 
   // Upsert changed resources with fallback metadata (no Haiku)
+  let firstErr = true;
   for (const res of toIndex) {
     try {
       upsertResource(rdb, {
@@ -871,7 +893,7 @@ async function handleResourceScan() {
         trigger_patterns: `when user needs ${res.name.replace(/-/g, ' ').replace(/\//g, ' ')}`,
         capability_summary: `${res.type}: ${res.name.replace(/-/g, ' ')}`,
       });
-    } catch {}
+    } catch (e) { if (firstErr) { debugCatch(e, 'handleResourceScan-upsert'); firstErr = false; } }
   }
 
   // Disable resources no longer on filesystem
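The handoff-injection hunk above gates on `detectContinuationIntent`, which lives in `hook-handoff.mjs` and is not shown in this diff. A minimal sketch of the documented idea (explicit continuation keywords, or term overlap with the saved handoff's `match_keywords`) might look like this — the keyword list and the 0.3 overlap threshold are hypothetical, not taken from the real implementation:

```javascript
// Sketch only: hook-handoff.mjs is not part of this diff, so the real
// detectContinuationIntent may differ. This models the documented behavior:
// explicit continuation phrases, or term overlap with the stored handoff's
// match_keywords. CONTINUATION_KEYWORDS and the 0.3 threshold are assumptions.
const CONTINUATION_KEYWORDS = ['continue', 'resume', 'keep going', 'where were we'];

function sketchDetectContinuation(promptText, matchKeywords) {
  const prompt = promptText.toLowerCase();
  // Path 1: explicit continuation phrasing
  if (CONTINUATION_KEYWORDS.some(k => prompt.includes(k))) return true;
  // Path 2: overlap between prompt terms and saved handoff keywords
  const saved = new Set(matchKeywords.toLowerCase().split(/\s+/).filter(Boolean));
  const promptTerms = prompt.split(/\W+/).filter(w => w.length >= 3);
  if (promptTerms.length === 0 || saved.size === 0) return false;
  const overlap = promptTerms.filter(w => saved.has(w)).length;
  return overlap / promptTerms.length >= 0.3;
}
```

Either path returning true would trigger `renderHandoffInjection` on the first prompt of the session.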
package/install.mjs CHANGED
@@ -1213,7 +1213,7 @@ async function install() {
 const SOURCE_FILES = [
   'server.mjs', 'server-internals.mjs', 'tool-schemas.mjs',
   'hook.mjs', 'hook-shared.mjs', 'hook-llm.mjs', 'hook-memory.mjs',
-  'hook-semaphore.mjs', 'hook-episode.mjs', 'hook-context.mjs',
+  'hook-semaphore.mjs', 'hook-episode.mjs', 'hook-context.mjs', 'hook-handoff.mjs',
   'haiku-client.mjs', 'utils.mjs', 'schema.mjs', 'package.json', 'skill.md',
   'registry.mjs', 'registry-scanner.mjs', 'registry-indexer.mjs',
   'registry-retriever.mjs', 'resource-discovery.mjs',
@@ -1581,7 +1581,7 @@ async function install() {
 async function uninstall() {
   console.log('\nclaude-mem-lite uninstaller\n');
 
-  // 1. Remove MCP
+  // 1. Remove MCP (legacy hook-based install)
   try {
     execFileSync('claude', ['mcp', 'remove', '-s', 'user', 'mem'], { stdio: 'pipe' });
     ok('MCP server removed');
@@ -1589,7 +1589,7 @@ async function uninstall() {
     warn('MCP server not found or already removed');
   }
 
-  // 2. Remove hooks (match both npx and git-clone install paths)
+  // 2. Remove hooks from settings.json (match both npx and git-clone install paths)
   const settings = readSettings();
   if (settings.hooks) {
     for (const [event, configs] of Object.entries(settings.hooks)) {
@@ -1598,11 +1598,67 @@ async function uninstall() {
       if (settings.hooks[event].length === 0) delete settings.hooks[event];
     }
     if (Object.keys(settings.hooks).length === 0) delete settings.hooks;
-    writeSettings(settings);
-    ok('Hooks removed');
   }
 
-  // 3. Purge data if requested
+  // 3. Clean plugin system entries from settings.json
+  const pluginKey = 'claude-mem-lite@sdsrss';
+  const marketplaceKey = 'sdsrss';
+  if (settings.enabledPlugins) {
+    delete settings.enabledPlugins[pluginKey];
+  }
+  if (settings.extraKnownMarketplaces) {
+    delete settings.extraKnownMarketplaces[marketplaceKey];
+  }
+  writeSettings(settings);
+  ok('Hooks and plugin settings cleaned');
+
+  // 4. Clean plugin system registry files
+  const pluginsDir = join(homedir(), '.claude', 'plugins');
+
+  // 4a. Remove marketplace directory
+  const marketplaceDir = join(pluginsDir, 'marketplaces', marketplaceKey);
+  if (existsSync(marketplaceDir)) {
+    rmSync(marketplaceDir, { recursive: true, force: true });
+    ok('Marketplace directory removed');
+  }
+
+  // 4b. Remove cache directory
+  const cacheDir = join(pluginsDir, 'cache', marketplaceKey);
+  if (existsSync(cacheDir)) {
+    rmSync(cacheDir, { recursive: true, force: true });
+    ok('Plugin cache removed');
+  }
+
+  // 4c. Clean known_marketplaces.json
+  const knownPath = join(pluginsDir, 'known_marketplaces.json');
+  try {
+    const known = JSON.parse(readFileSync(knownPath, 'utf8'));
+    if (marketplaceKey in known) {
+      delete known[marketplaceKey];
+      writeFileSync(knownPath, JSON.stringify(known, null, 2) + '\n');
+      ok('Removed from known_marketplaces.json');
+    }
+  } catch { /* file may not exist */ }
+
+  // 4d. Clean installed_plugins.json
+  const installedPath = join(pluginsDir, 'installed_plugins.json');
+  try {
+    const installed = JSON.parse(readFileSync(installedPath, 'utf8'));
+    const plugins = installed.plugins || installed;
+    let cleaned = false;
+    for (const key of Object.keys(plugins)) {
+      if (key.includes('claude-mem-lite') || key.includes('sdsrss')) {
+        delete plugins[key];
+        cleaned = true;
+      }
+    }
+    if (cleaned) {
+      writeFileSync(installedPath, JSON.stringify(installed, null, 2) + '\n');
+      ok('Removed from installed_plugins.json');
+    }
+  } catch { /* file may not exist */ }
+
+  // 5. Purge data if requested
  if (flags.has('--purge')) {
    const expectedPurgePath = join(homedir(), '.claude-mem-lite');
    if (existsSync(DATA_DIR) && DATA_DIR === expectedPurgePath) {
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "claude-mem-lite",
-  "version": "2.1.4",
+  "version": "2.2.0",
   "description": "Lightweight persistent memory system for Claude Code",
   "type": "module",
   "engines": {
@@ -54,13 +54,13 @@ function fallbackExtract(resource) {
     infra: 'infrastructure,devops,cloud',
   };
 
-  let intentTags = '';
+  const intentTagSet = new Set();
   for (const [key, tags] of Object.entries(intentMap)) {
     if (name.includes(key) || content.includes(key)) {
-      intentTags = tags;
-      break;
+      for (const t of tags.split(',')) intentTagSet.add(t);
     }
   }
+  const intentTags = [...intentTagSet].join(',');
 
   // Infer domain tags from content
   const domainPatterns = [
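The hunk above changes `fallbackExtract` from keeping only the first matching tag group (`break` on first hit) to merging the tag groups of every matching key through a `Set`. A standalone sketch of the new behavior, using two illustrative `intentMap` entries rather than the real map:

```javascript
// Demo of the Set-based merge: when a resource name matches several
// intentMap keys, every matching tag group now contributes (deduplicated),
// instead of only the first match winning. These two entries are
// illustrative, not the package's actual intentMap.
const intentMap = {
  test: 'testing,qa,verification',
  deploy: 'deployment,release,infrastructure',
};

function mergeIntentTags(name, content) {
  const intentTagSet = new Set();
  for (const [key, tags] of Object.entries(intentMap)) {
    if (name.includes(key) || content.includes(key)) {
      for (const t of tags.split(',')) intentTagSet.add(t);
    }
  }
  return [...intentTagSet].join(',');
}
```

A name like `test-and-deploy` now yields tags from both groups, where the old code would have stopped after `test`.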
@@ -197,8 +197,9 @@ export function buildEnhancedQuery(signals) {
   // directly across name, intent_tags, capability_summary, trigger_patterns.
   if (signals.rawKeywords?.length > 0) {
     for (const kw of signals.rawKeywords) {
-      parts.push(`intent_tags:${kw}`);
-      parts.push(kw); // literal, no synonym expansion
+      const safeKw = expandToken(kw);
+      parts.push(`intent_tags:${safeKw}`);
+      parts.push(safeKw);
     }
   }
 
@@ -338,8 +339,9 @@ export function filterByProjectDomain(results, projectDomains) {
 // Sign convention: bm25() returns NEGATIVE (more negative = more relevant).
 // We keep the negative direction and SUBTRACT positive behavioral signals to make
 // better resources more negative. ORDER BY ... ASC puts most negative (best) first.
-const COMPOSITE_ORDER = `
-  ORDER BY (
+// Composite score expression (shared between SELECT and ORDER BY)
+// Sign convention: more negative = better. BM25 is negative, behavioral signals are subtracted.
+const COMPOSITE_EXPR = `(
     bm25(resources_fts, 5.0, 3.0, 3.0, 2.0, 2.0, 1.0, 1.0, 1.0) * 0.4
     - COALESCE(r.repo_stars * 1.0 / (r.repo_stars + 100.0), 0) * 0.15
     - (
@@ -369,29 +371,37 @@ const COMPOSITE_ORDER = `
         AND (r.adopt_count + 1.0) / (r.recommend_count + 2.0) < 0.1
       THEN 0.10
       ELSE 0 END
-  ) ASC
-`;
+)`;
+
+// COMPOSITE_ORDER kept for SEARCH_BY_TYPE_SQL and other queries
+const COMPOSITE_ORDER = `ORDER BY ${COMPOSITE_EXPR} ASC`;
 
 const SEARCH_SQL = `
-  SELECT r.*,
-         bm25(resources_fts, 5.0, 3.0, 3.0, 2.0, 2.0, 1.0, 1.0, 1.0) AS relevance
-  FROM resources_fts
-  JOIN resources r ON r.id = resources_fts.rowid
-  WHERE resources_fts MATCH ?
-    AND r.status = 'active'
-  ${COMPOSITE_ORDER}
+  SELECT *, composite_score FROM (
+    SELECT r.*,
+           bm25(resources_fts, 5.0, 3.0, 3.0, 2.0, 2.0, 1.0, 1.0, 1.0) AS relevance,
+           ${COMPOSITE_EXPR} AS composite_score
+    FROM resources_fts
+    JOIN resources r ON r.id = resources_fts.rowid
+    WHERE resources_fts MATCH ?
+      AND r.status = 'active'
+  ) sub
+  ORDER BY composite_score ASC
   LIMIT ?
 `;
 
 const SEARCH_BY_TYPE_SQL = `
-  SELECT r.*,
-         bm25(resources_fts, 5.0, 3.0, 3.0, 2.0, 2.0, 1.0, 1.0, 1.0) AS relevance
-  FROM resources_fts
-  JOIN resources r ON r.id = resources_fts.rowid
-  WHERE resources_fts MATCH ?
-    AND r.status = 'active'
-    AND r.type = ?
-  ${COMPOSITE_ORDER}
+  SELECT *, composite_score FROM (
+    SELECT r.*,
+           bm25(resources_fts, 5.0, 3.0, 3.0, 2.0, 2.0, 1.0, 1.0, 1.0) AS relevance,
+           ${COMPOSITE_EXPR} AS composite_score
+    FROM resources_fts
+    JOIN resources r ON r.id = resources_fts.rowid
+    WHERE resources_fts MATCH ?
+      AND r.status = 'active'
+      AND r.type = ?
+  ) sub
+  ORDER BY composite_score ASC
  LIMIT ?
 `;
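The refactor above factors the ranking expression into `COMPOSITE_EXPR` so the same formula can be both selected as `composite_score` and used for ordering. The sign convention can be shown with a tiny JS mirror of the formula — deliberately a simplified subset (only the BM25 and repo-star terms, with the 0.4 and 0.15 weights from the SQL above); the sample rows are invented:

```javascript
// Simplified JS mirror of COMPOSITE_EXPR's sign convention: bm25() is
// NEGATIVE for relevant rows, and positive behavioral signals (here just
// the saturating star term) are subtracted, so "more negative = better"
// and an ascending sort puts the best resource first.
function compositeScore({ bm25, repoStars }) {
  return bm25 * 0.4 - (repoStars / (repoStars + 100)) * 0.15;
}

const rows = [
  { name: 'weak-match', bm25: -0.5, repoStars: 0 },
  { name: 'strong-match', bm25: -4.2, repoStars: 500 },
];
rows.sort((a, b) => compositeScore(a) - compositeScore(b)); // ASC, like the SQL
```

After the sort, `strong-match` (more negative score) is first, matching `ORDER BY composite_score ASC`.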
 
package/registry.mjs CHANGED
@@ -4,7 +4,7 @@
 import Database from 'better-sqlite3';
 import { existsSync, mkdirSync } from 'fs';
 import { dirname } from 'path';
-// debugLog, debugCatch available from utils.mjs if needed
+import { debugCatch } from './utils.mjs';
 
 // ─── Schema ──────────────────────────────────────────────────────────────────
 
@@ -178,6 +178,9 @@ export function ensureRegistryDb(dbPath) {
     const schema = db.prepare(`SELECT sql FROM sqlite_master WHERE type='table' AND name='invocations'`).get();
     if (schema?.sql && !schema.sql.includes('user_prompt')) {
       db.transaction(() => {
+        // Clean up leftover from previous failed migration attempt
+        const hasOld = db.prepare(`SELECT 1 FROM sqlite_master WHERE type='table' AND name='invocations_old'`).get();
+        if (hasOld) db.exec(`DROP TABLE invocations_old`);
         db.exec(`ALTER TABLE invocations RENAME TO invocations_old`);
         db.exec(INVOCATIONS_SCHEMA);
         db.exec(`INSERT INTO invocations
@@ -187,7 +190,7 @@ export function ensureRegistryDb(dbPath) {
         db.exec(`DROP TABLE invocations_old`);
       })();
     }
-  } catch {}
+  } catch (e) { debugCatch(e, 'ensureRegistryDb-migration'); }
 
   db.exec(PREINSTALLED_SCHEMA);
 
@@ -223,7 +226,7 @@ const UPSERT_SQL = `
 */
 export function upsertResource(db, r) {
   return db.transaction(() => {
-    const result = db.prepare(UPSERT_SQL).run(
+    db.prepare(UPSERT_SQL).run(
      r.name, r.type, r.status || 'active', r.source || 'preinstalled',
      r.repo_url || null, r.repo_stars || 0, r.local_path,
      r.file_hash || null, r.invocation_name || '',
@@ -233,7 +236,6 @@ export function upsertResource(db, r) {
      r.keywords || '', r.tech_stack || '', r.use_cases || '', r.complexity || 'intermediate',
      r.indexed_at || null
    );
-    if (result.changes > 0 && result.lastInsertRowid) return Number(result.lastInsertRowid);
    const row = db.prepare('SELECT id FROM resources WHERE type = ? AND name = ?').get(r.type, r.name);
    return row?.id || 0;
  })();
package/schema.mjs CHANGED
@@ -74,6 +74,20 @@ const CORE_SCHEMA = `
     created_at_epoch INTEGER NOT NULL,
     FOREIGN KEY(content_session_id) REFERENCES sdk_sessions(content_session_id) ON DELETE CASCADE ON UPDATE CASCADE
   );
+
+  CREATE TABLE IF NOT EXISTS session_handoffs (
+    project TEXT NOT NULL,
+    type TEXT NOT NULL,
+    session_id TEXT NOT NULL,
+    working_on TEXT,
+    completed TEXT,
+    unfinished TEXT,
+    key_files TEXT,
+    key_decisions TEXT,
+    match_keywords TEXT,
+    created_at_epoch INTEGER,
+    PRIMARY KEY (project, type)
+  );
 `;
 
 // Column migrations (idempotent — only swallow "duplicate column" errors)
@@ -83,6 +97,7 @@ const MIGRATIONS = [
   'ALTER TABLE observations ADD COLUMN minhash_sig TEXT',
   'ALTER TABLE observations ADD COLUMN access_count INTEGER DEFAULT 0',
   'ALTER TABLE observations ADD COLUMN compressed_into INTEGER DEFAULT NULL',
+  'ALTER TABLE session_summaries ADD COLUMN remaining_items TEXT',
 ];
 
 /**
@@ -139,7 +154,7 @@ export function initSchema(db) {
 
   // FTS5 full-text search tables + triggers (idempotent)
   ensureFTS(db, 'observations_fts', 'observations', ['title', 'subtitle', 'narrative', 'text', 'facts', 'concepts']);
-  ensureFTS(db, 'session_summaries_fts', 'session_summaries', ['request', 'investigated', 'learned', 'completed', 'next_steps', 'notes']);
+  ensureFTS(db, 'session_summaries_fts', 'session_summaries', ['request', 'investigated', 'learned', 'completed', 'next_steps', 'notes', 'remaining_items']);
   ensureFTS(db, 'user_prompts_fts', 'user_prompts', ['prompt_text']);
 
   return db;
@@ -182,7 +197,12 @@ export function ensureDb() {
   db.pragma('synchronous = NORMAL');
   db.pragma('foreign_keys = OFF'); // Enabled after dedup migration
 
-  return initSchema(db);
+  try {
+    return initSchema(db);
+  } catch (e) {
+    try { db.close(); } catch {}
+    throw e;
+  }
 }
 
 /**
@@ -197,10 +217,12 @@ export function ensureDb() {
 */
 export function rebuildFTS(db) {
   const FTS_TABLES = ['observations_fts', 'session_summaries_fts', 'user_prompts_fts'];
+  const idRe = /^[a-z][a-z0-9_]*$/;
   const rebuilt = [];
   const errors = [];
   for (const fts of FTS_TABLES) {
     try {
+      if (!idRe.test(fts)) { errors.push(`${fts}: invalid identifier`); continue; }
       const exists = db.prepare(`SELECT 1 FROM sqlite_master WHERE type='table' AND name=?`).get(fts);
       if (!exists) { errors.push(`${fts}: not found`); continue; }
       db.exec(`INSERT INTO ${fts}(${fts}) VALUES('rebuild')`);
@@ -219,10 +241,12 @@ export function rebuildFTS(db) {
 */
 export function checkFTSIntegrity(db) {
   const FTS_TABLES = ['observations_fts', 'session_summaries_fts', 'user_prompts_fts'];
+  const idRe = /^[a-z][a-z0-9_]*$/;
   const details = [];
   let healthy = true;
   for (const fts of FTS_TABLES) {
     try {
+      if (!idRe.test(fts)) { details.push(`${fts}: invalid identifier`); healthy = false; continue; }
       const exists = db.prepare(`SELECT 1 FROM sqlite_master WHERE type='table' AND name=?`).get(fts);
       if (!exists) { details.push(`${fts}: missing`); healthy = false; continue; }
       db.exec(`INSERT INTO ${fts}(${fts}) VALUES('integrity-check')`);
@@ -240,7 +264,7 @@ export function ensureFTS(db, ftsName, tableName, columns) {
   if (exists) return;
 
   // Validate identifiers to prevent SQL injection
-  const idRe = /^[a-z_]+$/;
+  const idRe = /^[a-z][a-z0-9_]*$/;
   if (!idRe.test(ftsName) || !idRe.test(tableName) || !columns.every(c => idRe.test(c))) {
     throw new Error(`Invalid identifier in ensureFTS: ${ftsName}, ${tableName}`);
   }
@@ -50,7 +50,7 @@ case "$tool" in
     exit 0
     ;;
   # Prefix filters
-  mem_*|mcp__mem__*|mcp__sequential*|mcp__plugin_context7*)
+  mem_*|mcp__mem__*|mcp__plugin_claude-mem-lite*|mcp__sequential*|mcp__plugin_context7*)
     exit 0
     ;;
 esac
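The tightened identifier regex in the `ensureFTS` hunk above is worth seeing in isolation. The old pattern `/^[a-z_]+$/` rejected any digit (so a hypothetical FTS table named with a digit would fail validation) while accepting a bare run of underscores; the new pattern requires a leading letter followed by letters, digits, or underscores:

```javascript
// Standalone comparison of the old and new identifier checks from ensureFTS.
// "fts5_cache" is a hypothetical table name used only to show the digit case.
const oldIdRe = /^[a-z_]+$/;        // letters/underscores only, any position
const newIdRe = /^[a-z][a-z0-9_]*$/; // leading letter, then letters/digits/underscores

const samples = ['observations_fts', 'fts5_cache', '_private'];
const verdicts = samples.map(s => ({ s, old: oldIdRe.test(s), nu: newIdRe.test(s) }));
```

The real table names (`observations_fts`, `session_summaries_fts`, `user_prompts_fts`) pass both patterns, so the change only affects hypothetical future names with digits or leading underscores.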
package/server.mjs CHANGED
@@ -61,7 +61,7 @@ const RECENCY_HALF_LIFE_MS = 1209600000; // 14 days in milliseconds
 // ─── MCP Server ─────────────────────────────────────────────────────────────
 
 const server = new McpServer(
-  { name: 'claude-mem-lite', version: '2.0.0' },
+  { name: 'claude-mem-lite', version: '2.1.6' },
   {
     instructions: [
       'Proactively search memory to leverage past experience. This is your long-term memory across sessions.',
@@ -939,6 +939,12 @@ server.registerTool(
     const narrative = obs.map(o => `- ${o.title || '(untitled)'}`).join('\n');
     const sessionId = obs[0].project ? `compress-${obs[0].project}` : 'compress-manual';
 
+    // Use median timestamp of compressed observations instead of now,
+    // so the summary appears at the correct position in timeline/recency scoring.
+    const sortedEpochs = obs.map(o => o.created_at_epoch).sort((a, b) => a - b);
+    const medianEpoch = sortedEpochs[Math.floor(sortedEpochs.length / 2)];
+    const medianDate = new Date(medianEpoch);
+
     // Ensure session exists (INSERT OR IGNORE avoids race condition)
     const now = new Date();
     db.prepare(`
@@ -948,7 +954,7 @@ server.registerTool(
 
     const summaryResult = insertSummary.run(
       sessionId, proj, narrative, dominantType, title, narrative,
-      now.toISOString(), now.getTime()
+      medianDate.toISOString(), medianEpoch
    );
    const summaryId = Number(summaryResult.lastInsertRowid);
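The median-timestamp selection from the hunk above, isolated so its edge cases are visible. For an even count it picks the upper middle element (index `floor(n/2)`), which is close enough for recency scoring; the epoch values below are arbitrary samples:

```javascript
// Same logic as the + lines above: sort the observations' epochs and take
// the element at floor(n/2). Odd counts give the true median; even counts
// give the upper-middle value.
function medianEpochOf(observations) {
  const sorted = observations.map(o => o.created_at_epoch).sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}
```

This keeps a compression summary anchored at the era of the observations it replaces rather than at compression time.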
 
package/utils.mjs CHANGED
@@ -622,12 +622,11 @@ export function fmtTime(iso) {
 */
 export function isoWeekKey(epochMs) {
   const d = new Date(epochMs);
-  const tmp = new Date(d.getTime());
-  tmp.setHours(0, 0, 0, 0);
-  tmp.setDate(tmp.getDate() + 4 - (tmp.getDay() || 7));
-  const yearStart = new Date(tmp.getFullYear(), 0, 1);
+  const tmp = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
+  tmp.setUTCDate(tmp.getUTCDate() + 4 - (tmp.getUTCDay() || 7));
+  const yearStart = new Date(Date.UTC(tmp.getUTCFullYear(), 0, 1));
   const weekNum = Math.ceil(((tmp - yearStart) / 86400000 + 1) / 7);
-  const isoYear = tmp.getFullYear();
+  const isoYear = tmp.getUTCFullYear();
   return `${isoYear}-W${String(weekNum).padStart(2, '0')}`;
 }
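The UTC rewrite above can be exercised at the year boundaries where the ISO "Thursday trick" matters: 2021-01-01 (a Friday) belongs to ISO week 2020-W53, and 2024-12-30 (a Monday) already belongs to 2025-W01. The function is copied verbatim from the `+` lines:

```javascript
// Verbatim copy of the new UTC-based isoWeekKey for a quick boundary check.
// Shifting to the Thursday of the current ISO week makes the ISO year and
// week number fall out of a plain day-of-year division.
function isoWeekKey(epochMs) {
  const d = new Date(epochMs);
  const tmp = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
  tmp.setUTCDate(tmp.getUTCDate() + 4 - (tmp.getUTCDay() || 7));
  const yearStart = new Date(Date.UTC(tmp.getUTCFullYear(), 0, 1));
  const weekNum = Math.ceil(((tmp - yearStart) / 86400000 + 1) / 7);
  const isoYear = tmp.getUTCFullYear();
  return `${isoYear}-W${String(weekNum).padStart(2, '0')}`;
}
```

The old local-time version could shift these boundary dates into the wrong week depending on the host timezone, which is what the rewrite fixes.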
 
@@ -676,3 +675,63 @@ export function parseJsonFromLLM(text) {
   if (obj) try { return JSON.parse(obj[0]); } catch {}
   return null;
 }
+
+// ─── Handoff Utilities ──────────────────────────────────────────────────────
+
+/** Stop words for handoff keyword extraction (broader than ERROR_STOP_WORDS). */
+export const HANDOFF_STOP_WORDS = new Set([
+  'the', 'and', 'for', 'that', 'this', 'with', 'from', 'are', 'was', 'were',
+  'been', 'have', 'has', 'had', 'does', 'did', 'will', 'would', 'should', 'could',
+  'can', 'may', 'must', 'not', 'but', 'its', 'all', 'any', 'each', 'some',
+  'into', 'over', 'after', 'before', 'between', 'about', 'also', 'just', 'then',
+  'than', 'when', 'where', 'how', 'what', 'which', 'who', 'why', 'here', 'there',
+  'more', 'very', 'only', 'still', 'now', 'new', 'old', 'get', 'got', 'set',
+  'true', 'false', 'null', 'undefined', 'function', 'return', 'const', 'let', 'var',
+  'import', 'export', 'default', 'class', 'async', 'await', 'try', 'catch',
+]);
+
+/**
+ * Tokenize text for handoff keyword matching.
+ * Splits on whitespace/punctuation, lowercases, filters short tokens.
+ * @param {string} text Input text
+ * @returns {string[]} Array of lowercase tokens (length >= 3)
+ */
+export function tokenizeHandoff(text) {
+  if (!text) return [];
+  return text
+    .split(/[\s,;:.()[\]{}'"`<>→|/\\#@!?=+*&^%$~]+/)
+    .map(w => w.toLowerCase().replace(/^[.\-]+|[.\-]+$/g, ''))
+    .filter(w => w.length >= 3);
+}
+
+/**
+ * Check if a token is a "specific" term (file name, identifier, etc.)
+ * that should get double weight in intent matching.
+ * @param {string} token Lowercase token
+ * @returns {boolean}
+ */
+export function isSpecificTerm(token) {
+  if (!token || token.length < 3) return false;
+  if (token.includes('_') || token.includes('-')) return true;
+  if (HANDOFF_STOP_WORDS.has(token)) return false;
+  return token.length >= 4 && !/^\d+$/.test(token);
+}
+
+/**
+ * Extract match keywords from text and file paths for handoff intent matching.
+ * @param {string} text Combined text from prompts, observations, etc.
+ * @param {string[]} files Array of file paths
+ * @returns {string} Space-separated keywords
+ */
+export function extractMatchKeywords(text, files) {
+  const terms = new Set();
+  for (const f of files) {
+    const base = basename(f).replace(/\.[^.]+$/, '');
+    if (base.length >= 3) terms.add(base.toLowerCase());
+  }
+  const words = tokenizeHandoff(text);
+  for (const w of words) {
+    if (!HANDOFF_STOP_WORDS.has(w)) terms.add(w);
+  }
+  return [...terms].join(' ');
+}
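A small usage check for the tokenizer added above, with the function copied verbatim from the `+` lines so its splitting behavior is visible: dots are separators (so `schema.mjs` yields two tokens) and tokens under three characters are dropped:

```javascript
// Verbatim copy of tokenizeHandoff from the diff, plus a sample call.
function tokenizeHandoff(text) {
  if (!text) return [];
  return text
    .split(/[\s,;:.()[\]{}'"`<>→|/\\#@!?=+*&^%$~]+/)  // punctuation = separators
    .map(w => w.toLowerCase().replace(/^[.\-]+|[.\-]+$/g, ''))
    .filter(w => w.length >= 3);                        // drop short tokens like "in"
}

const tokens = tokenizeHandoff('Fix FTS5 rebuild in schema.mjs');
```

`tokens` here is `['fix', 'fts5', 'rebuild', 'schema', 'mjs']`; `extractMatchKeywords` then drops stop words and merges in file basenames before the result is stored as the handoff's `match_keywords`.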