get-claudia 1.55.6 → 1.55.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,32 @@
 
 All notable changes to Claudia will be documented in this file.
 
+## 1.55.8 (2026-03-15)
+
+### The Vector Search Fix
+
+v1.55.7 fixed FTS5 recall but broke vector/semantic search for every user. Three raw `sqlite3.connect()` calls skipped loading the `sqlite_vec` extension, and three KNN queries lacked the required `k = ?` constraint for JOINs. Net effect: 0% embedding coverage and silent fallback to keyword matching.
+
+- **`load_sqlite_vec()` helper** -- Extracted the vec0 extension loading logic from `Database._get_connection()` into a standalone public function. Any raw connection that touches vec0 tables calls this one function. Eliminates the entire class of "forgot to load extension" bugs.
+- **Backfill worker fix** -- `_backfill_worker()` now loads sqlite_vec on its raw connection. Previously it crashed within 60ms of starting because it couldn't query or write to vec0 tables. Degrades gracefully if the extension isn't available.
+- **Index repair fix** -- `_check_and_repair_indexes()` now loads sqlite_vec on its raw connection. Previously it always reported 0 embeddings (triggering an unnecessary backfill on every startup). Improved exception handling distinguishes "no such table" from "no such module."
+- **KNN `k = ?` constraint** -- Added `AND k = ?` to all three `embedding MATCH` queries in recall.py (`recall()`, `recall_episodes()`, `search_reflections()`). vec0's KNN queries require this constraint when JOINs are present because SQLite's query planner can't push an outer `LIMIT` into the virtual table scan.
+- **Smarter briefing message** -- The embedding health check now reads `_meta['indexes_repaired']` to distinguish "backfill in progress" from "backfill never started." No longer tells users to "Start Ollama" when the real problem was a code bug.
+- **9 new tests** -- Helper extraction, Database integration, backfill/repair patterns, KNN constraint behavior (both success and documented failure without `k`). All 615 tests pass.
+
+## 1.55.7 (2026-03-15)
+
+### The Recall Recovery Release
+
+After v1.55 consolidation, recall could return zero results despite all memories existing. A three-layer fix restores recall for affected users automatically on next startup.
+
+- **FTS5 rebuild after consolidation** -- `merge_all_databases()` now rebuilds the FTS5 full-text index after merging. The migration's separate SQLite connection bypassed the triggers that keep FTS5 in sync, leaving the index empty.
+- **LIKE fallback fix** -- `_keyword_search()` now falls through to SQL LIKE when FTS5 returns 0 rows (not just on exception). Previously, an empty-but-functional FTS5 table returned nothing and the LIKE fallback never activated.
+- **Startup index repair** -- New `_check_and_repair_indexes()` runs on every daemon startup (idempotent). Detects FTS5 and embedding gaps, rebuilds FTS5 instantly, and starts a background embedding backfill thread if Ollama is available.
+- **Background embedding backfill** -- Non-blocking thread generates missing vector embeddings in batches of 25. Tolerant of missing Ollama (logs a warning; recall uses the LIKE fallback). Progress logged to daemon.log.
+- **Embedding health in briefing** -- The session briefing now shows the embedding coverage percentage when it is below 90%, so Claudia can inform the user about regeneration status.
+- **5 new tests** -- FTS rebuild after merge, recall works after merge, standalone FTS rebuild, LIKE fallback when FTS empty, FTS MATCH preferred when populated. All 613 tests pass.
+
 ## 1.55.6 (2026-03-15)
 
 - **Post-consolidation status report** -- After merging databases, Claudia's first greeting includes live database stats (memories, entities, relationships, episodes, reflections, patterns) and explains the backup schedule going forward. The whats-new file now contains a full status table and backup retention details.
@@ -20,7 +20,7 @@ from pathlib import Path
 from .config import get_config, set_project_id
 from .daemon.health import start_health_server, stop_health_server
 from .daemon.scheduler import start_scheduler, stop_scheduler
-from .database import get_db
+from .database import get_db, load_sqlite_vec
 from .mcp.server import run_server as run_mcp_server
 
 logger = logging.getLogger(__name__)
@@ -367,6 +367,10 @@ Your database is automatically backed up on a schedule (requires the standalone
 
 All backups live in `~/.claudia/backups/`. Old backups are automatically cleaned up when they exceed retention limits.
 
+## Search index rebuild
+
+Your full-text search index has been rebuilt automatically. If Ollama is running, vector embeddings are being regenerated in the background (this takes a few minutes for large databases). Until embeddings finish, recall uses keyword search as a fallback.
+
 **What this means for the user:**
 - You remember everything from every project, always. No more fragmented memory.
 - Moving project directories no longer loses history.
@@ -381,6 +385,192 @@ _Surface this update in your first greeting with the database stats and backup e
             logger.debug(f"Could not write consolidation notice: {e}")
 
 
+def _check_and_repair_indexes(db_path: Path) -> None:
+    """Detect and repair empty FTS5 / embedding indexes after consolidation.
+
+    Runs on every daemon startup (idempotent). Handles three populations:
+    - Already broken (upgraded to v1.55.0-1.55.6): FTS empty, embeddings empty
+    - Fresh consolidation (upgrading now): FTS rebuilt in merge, embeddings backfilled here
+    - Normal startup (no issues): counts match, no-op
+
+    Stores '_meta["indexes_repaired"]' with timestamp when repairs happen.
+    """
+    from datetime import datetime as dt
+    from .migration import rebuild_fts_index
+
+    try:
+        conn = sqlite3.connect(str(db_path), timeout=10)
+        conn.row_factory = sqlite3.Row
+        load_sqlite_vec(conn)
+
+        # Count memories
+        mem_row = conn.execute("SELECT COUNT(*) as c FROM memories WHERE invalidated_at IS NULL").fetchone()
+        mem_count = mem_row["c"] if mem_row else 0
+
+        if mem_count == 0:
+            conn.close()
+            return  # No memories, nothing to repair
+
+        # Check FTS5 index
+        fts_count = 0
+        try:
+            fts_row = conn.execute("SELECT COUNT(*) as c FROM memories_fts").fetchone()
+            fts_count = fts_row["c"] if fts_row else 0
+        except Exception:
+            pass  # FTS5 table might not exist
+
+        # Check embeddings
+        emb_count = 0
+        try:
+            emb_row = conn.execute("SELECT COUNT(*) as c FROM memory_embeddings").fetchone()
+            emb_count = emb_row["c"] if emb_row else 0
+        except sqlite3.OperationalError as e:
+            if "no such table" in str(e):
+                pass  # Fresh install, vec0 tables not yet created
+            elif "no such module" in str(e):
+                logger.debug("sqlite_vec not loaded, cannot count embeddings")
+            else:
+                logger.warning(f"Unexpected error counting embeddings: {e}")
+
+        conn.close()
+
+        fts_gap = mem_count - fts_count
+        emb_gap = mem_count - emb_count
+        fts_threshold = max(10, int(mem_count * 0.1))  # 10% or at least 10
+        emb_threshold = max(10, int(mem_count * 0.1))
+
+        repaired = []
+
+        # Repair FTS5 if significantly fewer entries than memories
+        if fts_gap > fts_threshold:
+            logger.warning(
+                f"FTS5 index gap detected: {fts_count} indexed vs {mem_count} memories. "
+                f"Rebuilding FTS5 index..."
+            )
+            indexed = rebuild_fts_index(db_path)
+            repaired.append(f"fts5: {fts_count}->{indexed}")
+            logger.info(f"FTS5 repair complete: {indexed} rows indexed")
+
+        # Trigger embedding backfill if significantly fewer embeddings
+        if emb_gap > emb_threshold:
+            logger.warning(
+                f"Embedding gap detected: {emb_count} embeddings vs {mem_count} memories. "
+                f"Starting background backfill..."
+            )
+            _auto_backfill_embeddings(db_path, mem_count, emb_count)
+            repaired.append(f"embeddings: {emb_count}/{mem_count} (backfill started)")
+
+        # Record repair timestamp
+        if repaired:
+            try:
+                rc = sqlite3.connect(str(db_path), timeout=10)
+                rc.execute(
+                    "INSERT OR REPLACE INTO _meta (key, value, updated_at) "
+                    "VALUES ('indexes_repaired', ?, ?)",
+                    (", ".join(repaired), dt.now().isoformat()),
+                )
+                rc.commit()
+                rc.close()
+            except Exception:
+                pass
+        else:
+            logger.debug(
+                f"Index health OK: FTS5={fts_count}/{mem_count}, "
+                f"embeddings={emb_count}/{mem_count}"
+            )
+
+    except Exception as e:
+        # Non-fatal: log and continue
+        logger.error(f"Index repair check failed (non-fatal): {e}")
+
+
+def _auto_backfill_embeddings(db_path: Path, mem_count: int, emb_count: int) -> None:
+    """Start background thread to generate missing embeddings.
+
+    Non-blocking: the MCP server starts immediately while this runs.
+    Tolerant: if Ollama isn't running, logs a warning and exits.
+    Batched: processes 25 at a time with progress logging.
+    Idempotent: LEFT JOIN ensures only missing embeddings are generated.
+    """
+    import json as _json
+    import threading
+
+    def _backfill_worker():
+        try:
+            from .embeddings import get_embedding_service
+
+            svc = get_embedding_service()
+            if not svc.is_available_sync():
+                logger.warning(
+                    "Ollama not available for embedding backfill. "
+                    "Recall will use LIKE fallback until embeddings are generated. "
+                    "Start Ollama and restart the daemon, or run --backfill-embeddings."
+                )
+                return
+
+            conn = sqlite3.connect(str(db_path), timeout=30)
+            conn.row_factory = sqlite3.Row
+            if not load_sqlite_vec(conn):
+                logger.warning(
+                    "sqlite_vec not available in backfill thread. "
+                    "Cannot write to vec0 tables. Skipping embedding backfill."
+                )
+                conn.close()
+                return
+
+            # Find memories missing embeddings
+            missing = conn.execute(
+                "SELECT m.id, m.content FROM memories m "
+                "LEFT JOIN memory_embeddings me ON m.id = me.memory_id "
+                "WHERE me.memory_id IS NULL AND m.invalidated_at IS NULL"
+            ).fetchall()
+
+            if not missing:
+                logger.info("Embedding backfill: no missing embeddings found")
+                conn.close()
+                return
+
+            total = len(missing)
+            logger.info(f"Embedding backfill: generating embeddings for {total} memories...")
+
+            success = 0
+            failed = 0
+            batch_size = 25
+
+            for i, row in enumerate(missing, 1):
+                try:
+                    embedding = svc.embed_sync(row["content"])
+                    if embedding:
+                        conn.execute(
+                            "INSERT OR REPLACE INTO memory_embeddings (memory_id, embedding) VALUES (?, ?)",
+                            (row["id"], _json.dumps(embedding)),
+                        )
+                        success += 1
+                        if success % batch_size == 0:
+                            conn.commit()
+                    else:
+                        failed += 1
+                except Exception as e:
+                    failed += 1
+                    if failed <= 3:
+                        logger.debug(f"Embedding failed for memory {row['id']}: {e}")
+
+                if i % batch_size == 0 or i == total:
+                    logger.info(f"Embedding backfill progress: {i}/{total} (success={success}, failed={failed})")
+
+            conn.commit()
+            conn.close()
+
+            logger.info(f"Embedding backfill complete: {success} generated, {failed} failed out of {total}")
+
+        except Exception as e:
+            logger.error(f"Embedding backfill thread failed: {e}")
+
+    thread = threading.Thread(target=_backfill_worker, name="embedding-backfill", daemon=True)
+    thread.start()
+    logger.info("Embedding backfill thread started in background")
+
+
 def _write_preflight_result(result: dict) -> Path:
     """Write preflight result JSON to ~/.claudia/daemon-preflight.json."""
     out_path = Path.home() / ".claudia" / "daemon-preflight.json"
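The backfill worker's missing-embedding detection rests on the `LEFT JOIN ... IS NULL` pattern in the hunk above. A minimal, self-contained sketch of how that query isolates only live, unembedded rows (simplified schema, plain TEXT embeddings, no vec0 extension involved):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT, invalidated_at TEXT);
    CREATE TABLE memory_embeddings (memory_id INTEGER PRIMARY KEY, embedding TEXT);
    INSERT INTO memories (id, content, invalidated_at) VALUES
        (1, 'has embedding', NULL),
        (2, 'missing embedding', NULL),
        (3, 'invalidated', '2026-01-01');
    INSERT INTO memory_embeddings VALUES (1, '[0.1]');
""")

# Same shape as the query in _backfill_worker: a NULL on the right side
# of the LEFT JOIN means no embedding row exists for that memory.
missing = conn.execute(
    "SELECT m.id FROM memories m "
    "LEFT JOIN memory_embeddings me ON m.id = me.memory_id "
    "WHERE me.memory_id IS NULL AND m.invalidated_at IS NULL"
).fetchall()
# Only memory 2 qualifies: 1 is already embedded, 3 is invalidated.
```

Because the query itself defines "missing", rerunning the worker after a partial failure picks up exactly the leftover rows, which is what makes the backfill idempotent.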
@@ -790,6 +980,10 @@ def run_daemon(mcp_mode: bool = True, debug: bool = False, project_id: str = Non
     # Auto-consolidate hash-named databases into unified claudia.db
     _auto_consolidate()
 
+    # Repair FTS5 and embeddings if they're out of sync with memories.
+    # Handles already-affected users (v1.55.0-1.55.6) and fresh consolidations.
+    _check_and_repair_indexes(Path(config.db_path))
+
     # Start health server and scheduler - ONLY in standalone mode.
     # MCP server processes are ephemeral and session-bound; the standalone
     # daemon (LaunchAgent/systemd) owns port 3848 and handles scheduling.
@@ -25,6 +25,73 @@ from .config import get_config
 logger = logging.getLogger(__name__)
 
 
+def load_sqlite_vec(conn: sqlite3.Connection) -> bool:
+    """Load the sqlite-vec extension on a connection.
+
+    Tries two methods:
+    1. sqlite_vec Python package (recommended, works everywhere including Python 3.13+)
+    2. Native extension loading (for systems with pre-installed sqlite-vec)
+
+    Returns True if vec0 is available, False otherwise. Never raises.
+    """
+    # Method 1: Try sqlite_vec Python package
+    try:
+        import sqlite_vec
+        if hasattr(conn, "enable_load_extension"):
+            conn.enable_load_extension(True)
+        sqlite_vec.load(conn)
+        if hasattr(conn, "enable_load_extension"):
+            conn.enable_load_extension(False)
+        logger.debug("Loaded sqlite-vec via Python package")
+        return True
+    except ImportError:
+        logger.debug("sqlite_vec package not installed")
+    except Exception as e:
+        logger.warning(f"sqlite_vec package installed but load() failed: {e}")
+
+    # Method 2: Try native extension loading
+    try:
+        conn.enable_load_extension(True)
+        sqlite_vec_paths = ["vec0"]  # System-wide
+
+        if sys.platform == "win32":
+            try:
+                import sqlite_vec as _sv
+                pkg_dir = Path(_sv.__file__).parent
+                for dll in pkg_dir.rglob("vec0*"):
+                    if dll.suffix in (".dll", ".so"):
+                        sqlite_vec_paths.append(str(dll.with_suffix("")))
+            except ImportError:
+                pass
+            sqlite_vec_paths.extend([
+                str(Path(sys.executable).parent / "DLLs" / "vec0"),
+                str(Path.home() / ".local" / "lib" / "sqlite-vec" / "vec0"),
+            ])
+        else:
+            sqlite_vec_paths.extend([
+                "/usr/local/lib/sqlite-vec/vec0",
+                "/opt/homebrew/lib/sqlite-vec/vec0",
+                str(Path.home() / ".local" / "lib" / "sqlite-vec" / "vec0"),
+            ])
+
+        for path in sqlite_vec_paths:
+            try:
+                conn.load_extension(path)
+                logger.debug(f"Loaded sqlite-vec from {path}")
+                conn.enable_load_extension(False)
+                return True
+            except sqlite3.OperationalError:
+                continue
+
+        conn.enable_load_extension(False)
+    except AttributeError:
+        logger.debug("enable_load_extension not available (Python 3.13+)")
+    except Exception as e:
+        logger.debug(f"Extension loading failed: {e}")
+
+    return False
+
+
 class Database:
     """Thread-safe SQLite database with sqlite-vec support"""
 
@@ -51,72 +118,8 @@ class Database:
         # Recover any uncommitted WAL writes from a previous crashed daemon
         conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
 
-        # Try to load sqlite-vec for vector search
-        # Priority: sqlite_vec Python package first (works on Python 3.13+),
-        # then fall back to native extension loading
-        loaded = False
-
-        # Method 1: Try sqlite_vec Python package (recommended, works everywhere)
-        # Python 3.14+ requires explicit enable_load_extension() before any
-        # extension loading, even via the sqlite_vec helper package.
-        try:
-            import sqlite_vec
-            if hasattr(conn, "enable_load_extension"):
-                conn.enable_load_extension(True)
-            sqlite_vec.load(conn)
-            if hasattr(conn, "enable_load_extension"):
-                conn.enable_load_extension(False)
-            loaded = True
-            logger.debug("Loaded sqlite-vec via Python package")
-        except ImportError:
-            logger.debug("sqlite_vec package not installed")
-        except Exception as e:
-            logger.warning(f"sqlite_vec package installed but load() failed: {e}")
-
-        # Method 2: Try native extension loading (for systems with pre-installed sqlite-vec)
-        if not loaded:
-            try:
-                conn.enable_load_extension(True)
-                sqlite_vec_paths = ["vec0"]  # System-wide
-
-                if sys.platform == "win32":
-                    # Try to find vec0.dll in the sqlite-vec package directory
-                    try:
-                        import sqlite_vec as _sv
-                        pkg_dir = Path(_sv.__file__).parent
-                        for dll in pkg_dir.rglob("vec0*"):
-                            if dll.suffix in (".dll", ".so"):
-                                sqlite_vec_paths.append(str(dll.with_suffix("")))
-                    except ImportError:
-                        pass
-                    sqlite_vec_paths.extend([
-                        str(Path(sys.executable).parent / "DLLs" / "vec0"),
-                        str(Path.home() / ".local" / "lib" / "sqlite-vec" / "vec0"),
-                    ])
-                else:
-                    sqlite_vec_paths.extend([
-                        "/usr/local/lib/sqlite-vec/vec0",
-                        "/opt/homebrew/lib/sqlite-vec/vec0",
-                        str(Path.home() / ".local" / "lib" / "sqlite-vec" / "vec0"),
-                    ])
-
-                for path in sqlite_vec_paths:
-                    try:
-                        conn.load_extension(path)
-                        loaded = True
-                        logger.debug(f"Loaded sqlite-vec from {path}")
-                        break
-                    except sqlite3.OperationalError:
-                        continue
-
-                conn.enable_load_extension(False)
-            except AttributeError:
-                # Python 3.13+ may not have enable_load_extension
-                logger.debug("enable_load_extension not available (Python 3.13+)")
-            except Exception as e:
-                logger.debug(f"Extension loading failed: {e}")
-
-        if not loaded:
+        # Load sqlite-vec for vector search
+        if not load_sqlite_vec(conn):
             if sys.platform == "win32":
                 logger.warning(
                     "sqlite-vec not available. Vector search will be disabled. "
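The key design choice in the extracted `load_sqlite_vec()` is its never-raises, bool-returning contract: callers branch on availability instead of wrapping every call site in try/except. A reduced sketch of the same pattern (package method only; the native-path probing and the hypothetical `try_load_vec` name are illustrative):

```python
import sqlite3

def try_load_vec(conn):
    # Attempt to load the optional sqlite-vec extension via its Python
    # package. Returns availability as a bool and never raises, so callers
    # can degrade to keyword search instead of crashing.
    try:
        import sqlite_vec  # optional dependency; absent on many installs
    except ImportError:
        return False
    try:
        # Some builds lack enable_load_extension entirely, hence the guard.
        if hasattr(conn, "enable_load_extension"):
            conn.enable_load_extension(True)
        sqlite_vec.load(conn)
        if hasattr(conn, "enable_load_extension"):
            conn.enable_load_extension(False)
        return True
    except Exception:
        return False

conn = sqlite3.connect(":memory:")
ok = try_load_vec(conn)
```

Every raw `sqlite3.connect()` in the diff that touches vec0 tables is followed by exactly this check, which is what closes the "forgot to load extension" bug class.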
@@ -3336,6 +3336,40 @@ def _build_briefing() -> str:
     except Exception as e:
         logger.debug(f"Briefing recent failed: {e}")
 
+    # 6. Embedding health check
+    try:
+        mem_total = db.execute("SELECT COUNT(*) as c FROM memories WHERE invalidated_at IS NULL", fetch=True)
+        emb_total = db.execute("SELECT COUNT(*) as c FROM memory_embeddings", fetch=True)
+        mem_c = mem_total[0]["c"] if mem_total else 0
+        emb_c = emb_total[0]["c"] if emb_total else 0
+        if mem_c > 0:
+            coverage = (emb_c / mem_c) * 100
+            if coverage < 90:
+                gap = mem_c - emb_c
+                # Check _meta for backfill status to give accurate guidance
+                status_hint = "Start Ollama and restart daemon to generate embeddings."
+                try:
+                    repair_row = db.execute(
+                        "SELECT value FROM _meta WHERE key = 'indexes_repaired'",
+                        fetch=True,
+                    )
+                    if repair_row and repair_row[0]["value"]:
+                        repair_info = repair_row[0]["value"]
+                        if "backfill started" in repair_info:
+                            status_hint = "Backfill was started on this run. Check daemon logs for progress."
+                        elif coverage > 0:
+                            status_hint = "Partial embeddings exist. Restart daemon to trigger backfill for the rest."
+                except Exception:
+                    pass
+                lines.append(
+                    f"**Embedding coverage:** {emb_c}/{mem_c} ({coverage:.0f}%). "
+                    f"{gap} memories lack vector embeddings. "
+                    f"{status_hint} "
+                    f"Recall uses keyword fallback for unembedded memories."
+                )
+    except Exception as e:
+        logger.debug(f"Briefing embedding health failed: {e}")
+
     if len(lines) <= 1:
         lines.append("No context available yet. This appears to be a fresh workspace.")
 
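The coverage-and-hint logic added to `_build_briefing()` is easiest to reason about as a pure function over the two counts plus the `_meta` value. The sketch below mirrors the three messages from the hunk above; the function name is illustrative, not from the codebase:

```python
def embedding_coverage_line(mem_c, emb_c, repair_info=""):
    # Returns a briefing line when coverage is below 90%, else None.
    if mem_c == 0:
        return None
    coverage = (emb_c / mem_c) * 100
    if coverage >= 90:
        return None
    gap = mem_c - emb_c
    # Default guidance assumes Ollama was simply never available.
    hint = "Start Ollama and restart daemon to generate embeddings."
    if repair_info:
        # _meta['indexes_repaired'] tells us a repair pass actually ran.
        if "backfill started" in repair_info:
            hint = "Backfill was started on this run. Check daemon logs for progress."
        elif coverage > 0:
            hint = "Partial embeddings exist. Restart daemon to trigger backfill for the rest."
    return (
        f"Embedding coverage: {emb_c}/{mem_c} ({coverage:.0f}%). "
        f"{gap} memories lack vector embeddings. {hint}"
    )
```

Reading the repair marker is what stops the briefing from blaming Ollama when the daemon itself already kicked off a backfill.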
@@ -1255,9 +1255,42 @@ def merge_all_databases(
             logger.error(f"Failed to merge {source_path.name}: {e}")
             # Non-fatal: continue with other sources
 
+    # Rebuild FTS5 index after all merges (triggers don't fire on the
+    # separate connections used by migrate_legacy_database)
+    if not dry_run and totals["sources_merged"] > 0:
+        rebuild_fts_index(target_path)
+
     return totals
 
 
+def rebuild_fts_index(db_path: Path) -> int:
+    """Rebuild FTS5 index from memories table.
+
+    After consolidation, FTS5 triggers may not have fired for migrated rows
+    (the migration uses a separate connection that bypasses triggers).
+    This rebuilds the entire FTS index from scratch.
+
+    Returns the number of rows indexed.
+    """
+    try:
+        conn = sqlite3.connect(str(db_path), timeout=30)
+        # Clear existing FTS data
+        conn.execute("INSERT INTO memories_fts(memories_fts) VALUES('delete-all')")
+        # Repopulate from memories table
+        conn.execute(
+            "INSERT INTO memories_fts(rowid, content) "
+            "SELECT id, content FROM memories WHERE invalidated_at IS NULL"
+        )
+        conn.commit()
+        count = conn.execute("SELECT COUNT(*) FROM memories_fts").fetchone()[0]
+        conn.close()
+        logger.info(f"Rebuilt FTS5 index: {count} rows indexed")
+        return count
+    except Exception as e:
+        logger.warning(f"Could not rebuild FTS5 index: {e}")
+        return 0
+
+
 def cleanup_old_databases(memory_dir: Path, source_dbs: List[Dict]) -> int:
     """Delete hash-named databases and their WAL/SHM files after successful merge.
 
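The `'delete-all'` special command that `rebuild_fts_index()` uses is only valid for external-content (or contentless) FTS5 tables, which is exactly the setup where a separate connection can leave the index stale. A runnable sketch against a throwaway in-memory database (schema simplified; assumes the local Python build ships FTS5):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT, invalidated_at TEXT)")
# External-content FTS5 table: index entries are maintained explicitly,
# normally by triggers that a migration connection can bypass.
conn.execute(
    "CREATE VIRTUAL TABLE memories_fts USING fts5(content, content='memories', content_rowid='id')"
)
conn.executemany(
    "INSERT INTO memories (content, invalidated_at) VALUES (?, NULL)",
    [("alpha notes",), ("beta notes",)],
)

def match_count(term):
    return conn.execute(
        "SELECT COUNT(*) FROM memories_fts WHERE memories_fts MATCH ?", (term,)
    ).fetchone()[0]

empty_hits = match_count("alpha")  # rows exist, but the index was never populated

# Same two statements the migration fix runs:
conn.execute("INSERT INTO memories_fts(memories_fts) VALUES('delete-all')")
conn.execute(
    "INSERT INTO memories_fts(rowid, content) "
    "SELECT id, content FROM memories WHERE invalidated_at IS NULL"
)
conn.commit()
rebuilt_hits = match_count("alpha")
```

Before the rebuild, MATCH finds nothing even though the rows are present in `memories`; after it, queries hit normally.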
@@ -145,9 +145,11 @@ class RecallService:
                 LEFT JOIN memory_entities me2 ON m.id = me2.memory_id
                 LEFT JOIN entities e ON me2.entity_id = e.id
                 WHERE me.embedding MATCH ?
+                AND k = ?
             """
         )
         params.append(json.dumps(query_embedding))
+        params.append(limit * 2)
 
         self._apply_filters(sql_parts, params, memory_types, min_importance, date_after, date_before, about_entity, include_archived)
         sql_parts.append("GROUP BY m.id ORDER BY vector_score DESC LIMIT ?")
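The pattern in this hunk generalizes: each time `AND k = ?` is added to an `embedding MATCH` clause, a matching parameter must be appended in the same position. A hypothetical builder sketching the query shape (names mirror the diff, but this does not run against vec0 here, and the exact column list is illustrative):

```python
import json

def build_knn_query(query_embedding, limit):
    # vec0 KNN scans need an explicit `k = ?` when JOINs are present;
    # SQLite's planner does not push the outer LIMIT into the virtual
    # table scan, so without `k` the scan has no candidate count.
    sql = (
        "SELECT m.id, distance AS vector_score "
        "FROM memory_embeddings me "
        "JOIN memories m ON m.id = me.memory_id "
        "WHERE me.embedding MATCH ? AND k = ? "
        "ORDER BY vector_score LIMIT ?"
    )
    # Over-fetch (limit * 2) so post-JOIN filters still leave enough rows,
    # mirroring the `params.append(limit * 2)` line in the diff.
    params = [json.dumps(query_embedding), limit * 2, limit]
    return sql, params

sql, params = build_knn_query([0.1, 0.2], 5)
```

Keeping the SQL and the parameter list built side by side is what makes the positional `?` placeholders hard to misalign.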
@@ -1368,11 +1370,12 @@ class RecallService:
                 FROM episode_embeddings ee
                 JOIN episodes e ON e.id = ee.episode_id
                 WHERE ee.embedding MATCH ?
+                AND k = ?
                 AND e.is_summarized = 1
                 ORDER BY relevance DESC
                 LIMIT ?
                 """,
-                (json.dumps(query_embedding), limit),
+                (json.dumps(query_embedding), limit, limit),
                 fetch=True,
             ) or []
         except Exception as e:
@@ -2286,8 +2289,9 @@ class RecallService:
             JOIN reflections r ON r.id = re.reflection_id
             LEFT JOIN entities e ON r.about_entity_id = e.id
             WHERE re.embedding MATCH ?
+            AND k = ?
         """
-        params: list = [json.dumps(query_embedding)]
+        params: list = [json.dumps(query_embedding), limit]
 
         if reflection_types:
             placeholders = ", ".join(["?" for _ in reflection_types])
@@ -2429,6 +2433,8 @@ class RecallService:
             rows = self.db.execute(sql, tuple(params), fetch=True) or []
             if rows:
                 return rows
+            # FTS5 succeeded but returned 0 rows (empty index after migration).
+            # Fall through to LIKE instead of returning empty.
         except Exception:
             pass  # FTS5 not available, fall through to LIKE
 
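The fall-through fix above distinguishes "FTS5 raised" from "FTS5 answered with zero rows", and takes the LIKE path in both cases. A self-contained sketch of that control flow with plain sqlite3 (no FTS table exists in this toy database, so MATCH raises and the LIKE path answers):

```python
import sqlite3

def keyword_search(conn, term):
    # Try FTS5 first; fall through to LIKE on error OR on zero rows,
    # so an empty-but-functional index no longer swallows results.
    try:
        rows = conn.execute(
            "SELECT content FROM memories_fts WHERE memories_fts MATCH ?",
            (term,),
        ).fetchall()
        if rows:
            return rows
        # Zero rows: fall through rather than returning empty.
    except sqlite3.OperationalError:
        pass  # FTS5 table or module unavailable
    return conn.execute(
        "SELECT content FROM memories WHERE content LIKE ?",
        (f"%{term}%",),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT)")
conn.execute("INSERT INTO memories (content) VALUES ('vector search broke')")
hits = keyword_search(conn, "vector")
```

The pre-1.55.7 bug was returning the empty FTS result directly; moving the `return` behind an `if rows:` check is the whole fix.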
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "get-claudia",
-  "version": "1.55.6",
+  "version": "1.55.8",
   "description": "An AI assistant who learns how you work.",
   "keywords": [
     "claudia",