get-claudia 1.55.19 → 1.55.20

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,18 @@
 
 All notable changes to Claudia will be documented in this file.
 
+## 1.55.20 (2026-03-19)
+
+### Community Fixes
+
+Four fixes from GitHub issues #24, #26, #28, #31.
+
+- **Fixed MCP double-spawn crash (#24)** -- Removed `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` from the template settings. This env var caused Claude Code to spawn the memory daemon twice on Linux, crashing both with BrokenPipeError. The installer now strips it from existing user settings on upgrade.
+- **Alias overlap false positives (#26)** -- Single-token aliases (common first names like "Joel") no longer flag unrelated entities as 95% similar. The filter checks whether the full entity names diverge beyond the shared alias. Multi-token aliases work normally.
+- **Stale dedupe predictions (#28)** -- After merging or deleting entities, dedupe predictions that reference them are now expired immediately instead of lingering in briefings for up to 14 days.
+- **Memory-health skill schema reference (#31)** -- Added a complete column-name reference to the skill file. Documents `sacred_reason` (not `sacred`), `invalid_at` (not `invalidated_at` on relationships), and that embeddings live in separate tables.
+- 717 tests pass, 0 regressions, 11 new tests across 2 new test files.
+
 ## 1.55.19 (2026-03-19)
 
 ### The Self-Healer
package/bin/index.js CHANGED
@@ -667,6 +667,20 @@ async function main() {
 console.log(` ${colors.cyan}✓${colors.reset} Framework updated (data preserved)`);
 }
 
+// Self-heal: strip CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS from settings (#24).
+// This env var causes double-spawn crashes on Linux and some macOS setups.
+try {
+  const settingsPath = join(targetPath, '.claude', 'settings.local.json');
+  if (existsSync(settingsPath)) {
+    const raw = readFileSync(settingsPath, 'utf8');
+    const settings = JSON.parse(raw);
+    if (settings.env && settings.env.CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS) {
+      delete settings.env.CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS;
+      writeFileSync(settingsPath, JSON.stringify(settings, null, 2) + '\n');
+    }
+  }
+} catch { /* non-fatal */ }
+
 // Restore MCP servers that earlier versions incorrectly disabled.
 restoreMcpServers(targetPath);
 
@@ -2496,6 +2496,22 @@ class ConsolidateService:
 e1 = self.db.get_one("entities", where="id = ?", where_params=(row["eid1"],))
 e2 = self.db.get_one("entities", where="id = ?", where_params=(row["eid2"],))
 if e1 and e2:
+    shared = row["alias"].strip()
+
+    # Single-token alias filter (#26): a shared first name
+    # like "joel" is weak evidence when full names diverge.
+    if " " not in shared:
+        tokens1 = set(e1["name"].lower().split())
+        tokens2 = set(e2["name"].lower().split())
+        overlap = tokens1 & tokens2
+        non_shared = (tokens1 | tokens2) - overlap
+        if len(non_shared) >= 2 and len(overlap) <= 1:
+            logger.debug(
+                f"Alias overlap skipped: '{e1['name']}' / '{e2['name']}' "
+                f"share only single-token alias '{shared}'"
+            )
+            continue
+
     candidates.append({
         "entity_1": {"id": e1["id"], "name": e1["name"], "type": e1["type"]},
         "entity_2": {"id": e2["id"], "name": e2["name"], "type": e2["type"]},
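The heuristic in this hunk can be distilled into a pure function for illustration. `weak_single_token_alias` is our name for it (not from the codebase); the thresholds (at least 2 divergent tokens, at most 1 shared) are copied from the diff.

```python
def weak_single_token_alias(alias: str, name1: str, name2: str) -> bool:
    """Sketch of the #26 filter: a shared single-token alias is weak
    evidence of a duplicate when the full names otherwise diverge.

    Returns True when the candidate pair should be skipped.
    """
    if " " in alias.strip():
        return False  # multi-token aliases are handled normally
    tokens1 = set(name1.lower().split())
    tokens2 = set(name2.lower().split())
    overlap = tokens1 & tokens2
    non_shared = (tokens1 | tokens2) - overlap
    # Skip only when the names share at most one token and at least
    # two tokens differ -- e.g. two unrelated people named Joel.
    return len(non_shared) >= 2 and len(overlap) <= 1
```

Note that a pair like "Joel" vs. "Joel Smith" is still kept as a candidate, since only one token diverges.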
@@ -996,6 +996,9 @@ class RememberService:
 result["success"] = True
 logger.info(f"Merged entity {source_id} ({source['name']}) into {target_id} ({target['name']})")
 
+# Expire dedupe predictions referencing the merged entity (#28)
+self._expire_dedupe_predictions(source_id)
+
 # Audit log
 _audit_log(
     "entity_merge",
@@ -1048,6 +1051,9 @@ class RememberService:
 
 logger.info(f"Soft-deleted entity {entity_id} ({entity['name']}): {reason}")
 
+# Expire dedupe predictions referencing the deleted entity (#28)
+self._expire_dedupe_predictions(entity_id)
+
 # Audit log
 _audit_log(
     "entity_delete",
@@ -1063,6 +1069,28 @@ class RememberService:
 "deleted_at": now,
 }
 
+def _expire_dedupe_predictions(self, entity_id: int) -> None:
+    """Expire dedupe predictions that reference the given entity ID.
+
+    After merging or deleting an entity, stale dedupe suggestions should
+    not keep appearing in briefings. This expires (not deletes) any
+    prediction where the entity_id appears in the dedupe_pair metadata.
+    """
+    now = datetime.utcnow().isoformat()
+    try:
+        self.db.execute(
+            """
+            UPDATE predictions
+            SET expires_at = ?
+            WHERE prediction_type = 'suggestion'
+              AND expires_at > ?
+              AND (metadata LIKE ? OR metadata LIKE ?)
+            """,
+            (now, now, f'%"dedupe_pair": [{entity_id},%', f'%, {entity_id}]%'),
+        )
+    except Exception as e:
+        logger.debug(f"Failed to expire dedupe predictions for entity {entity_id}: {e}")
+
 def correct_memory(
     self,
     memory_id: int,
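The two LIKE patterns above are easiest to sanity-check against a toy table. This sketch re-runs the same UPDATE against an in-memory SQLite database; the table is reduced to just the columns the query touches, and a `now` parameter is added for determinism (the real method computes it internally).

```python
import sqlite3

def expire_dedupe_predictions(conn: sqlite3.Connection, entity_id: int, now: str) -> None:
    """Same UPDATE as _expire_dedupe_predictions in the diff above.

    The first LIKE pattern matches the entity ID in the first slot of
    `"dedupe_pair": [a, b]`; the second matches the second slot. The
    trailing `,` / leading `, ` keep ID 7 from matching ID 77.
    """
    conn.execute(
        """
        UPDATE predictions
        SET expires_at = ?
        WHERE prediction_type = 'suggestion'
          AND expires_at > ?
          AND (metadata LIKE ? OR metadata LIKE ?)
        """,
        (now, now, f'%"dedupe_pair": [{entity_id},%', f'%, {entity_id}]%'),
    )
```

Setting `expires_at` to `now` (rather than deleting) means the row fails any `expires_at > ?` freshness check immediately, which is how the briefing query stops showing it.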
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "get-claudia",
-  "version": "1.55.19",
+  "version": "1.55.20",
   "description": "An AI assistant who learns how you work.",
   "keywords": [
     "claudia",
@@ -1,7 +1,5 @@
 {
-  "env": {
-    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
-  },
+  "env": {},
   "hooks": {
     "SessionStart": [
       {
@@ -15,11 +15,32 @@ Provide a dashboard view of the memory system's health, including entity counts,
 - User says "how much do you remember?", "what's in your brain?"
 - Periodic self-check (weekly review, morning brief)
 
+## Schema Reference
+
+**Use these exact column names in all SQLite queries. Do NOT guess column names.**
+
+**entities** table: `id`, `name`, `type`, `canonical_name`, `description`, `importance` (REAL), `created_at`, `updated_at`, `metadata`, `last_contact_at`, `contact_frequency_days`, `contact_trend`, `attention_tier`, `close_circle` (BOOLEAN), `close_circle_reason`, `deleted_at`, `deleted_reason`
+
+**memories** table: `id`, `content`, `content_hash`, `type`, `importance` (REAL), `confidence` (REAL), `source`, `source_id`, `source_context`, `created_at`, `updated_at`, `last_accessed_at`, `access_count`, `verified_at`, `verification_status`, `metadata`, `source_channel`, `deadline_at`, `temporal_markers`, `lifecycle_tier`, `sacred_reason` (NOT `sacred`), `archived_at`, `fact_id`, `hash`, `prev_hash`, `workspace_id`, `corrected_at`, `corrected_from`, `invalidated_at`, `invalidated_reason`, `origin_type`
+
+**relationships** table: `id`, `source_entity_id`, `target_entity_id`, `relationship_type`, `strength` (REAL), `origin_type`, `direction`, `valid_at`, `invalid_at` (NOT `invalidated_at`), `created_at`, `updated_at`, `metadata`, `lifecycle_tier`
+
+**predictions** table: `id`, `content`, `prediction_type`, `priority` (REAL), `expires_at`, `is_shown`, `is_acted_on`, `created_at`, `shown_at`, `prediction_pattern_name`, `metadata`
+
+**Important distinctions:**
+- Embeddings are in SEPARATE tables (`entity_embeddings`, `memory_embeddings`), NOT columns on the main tables
+- `memories.sacred_reason` exists, but there is no column called `sacred` or `critical`
+- `relationships.invalid_at` (not `invalidated_at`) marks invalid relationships
+- `memories.invalidated_at` marks invalidated memories (a different column name than relationships use)
+- Always filter with `deleted_at IS NULL` on entities and `invalidated_at IS NULL` on memories
+
 ## Workflow
 
 ### Step 1: Gather Statistics
 
-Use the Claudia CLI to get current system state:
+Use the `memory.system_health` MCP tool or direct SQLite queries with the schema above.
+
+Alternatively, use the Claudia CLI to get current system state:
 
 ```bash
 claudia memory session context --scope full --project-dir "$PWD"