get-claudia 1.55.14 → 1.55.16
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +18 -0
- package/README.md +4 -4
- package/memory-daemon/claudia_memory/daemon/scheduler.py +3 -1
- package/memory-daemon/claudia_memory/extraction/entity_extractor.py +32 -46
- package/memory-daemon/claudia_memory/mcp/server.py +5 -4
- package/memory-daemon/claudia_memory/services/canvas_generator.py +2 -1
- package/memory-daemon/claudia_memory/services/consolidate.py +6 -5
- package/memory-daemon/claudia_memory/services/guards.py +2 -2
- package/memory-daemon/claudia_memory/services/recall.py +9 -11
- package/memory-daemon/claudia_memory/services/remember.py +40 -0
- package/memory-daemon/claudia_memory/services/vault_sync.py +3 -2
- package/memory-daemon/claudia_memory/utils.py +22 -0
- package/package.json +2 -2
package/CHANGELOG.md
CHANGED

@@ -2,6 +2,24 @@
 
 All notable changes to Claudia will be documented in this file.
 
+## 1.55.16 (2026-03-18)
+
+### Reliability Fixes
+
+Three fixes for issues surfaced from daemon logs. All backward-compatible, no schema changes.
+
+- **Overnight jobs now fire after sleep** -- APScheduler's `BackgroundScheduler` now has `misfire_grace_time=14400` (4 hours) and `coalesce=True`. Previously, the default 1-second grace time meant every scheduled job (decay, backup, consolidation, vault sync) was silently skipped when a Mac slept through the 2am-3:15am window. Now jobs fire immediately on wake if missed within the last 4 hours, with multiple missed runs collapsed into one execution.
+- **Reduced log noise from summary memories** -- The content length warning threshold was raised from 500 to 800 chars. Legitimate summary-type memories (550-850 chars) no longer trigger "Long content" warnings. Hard truncation at 1000 chars is unchanged.
+- **Fuzzy entity dedup on write** -- `_ensure_entity()` and `_find_or_create_entity()` now perform a fuzzy pre-check (SequenceMatcher > 0.90) before creating new entities. Name variants like "Kris Krisko" vs "Kris Krisco" (ratio ~0.91) match the existing entity instead of creating a duplicate. Only entities of the same type are compared; deleted entities are skipped.
+- **Expanded STOP_WORDS** -- Added ~55 common English words that spaCy misidentifies as entities ("drawn", "overall", "recently", "several", etc.). Prevents ghost entities from cluttering the graph.
+- **Person entities require 2+ words** -- Regex-extracted person entities must have at least two words (e.g., "First Last"). Single-word extractions like "Metal" or "Drawn" are rejected. spaCy-identified entities are unaffected.
+- 637 tests pass, 0 regressions, 22 new tests across 4 new test files.
+
+## 1.55.15 (2026-03-18)
+
+- **Fix mixed-timezone datetime crash** -- The memory daemon could crash with `can't subtract offset-naive and offset-aware datetimes` when recall or consolidation queries hit records with timezone suffixes (e.g., `+00:00` from email or transcript timestamps). Added a shared `parse_naive()` utility that strips timezone info on parse, applied across 14 locations in 5 files (recall.py, consolidate.py, server.py, vault_sync.py, canvas_generator.py). Replaces the older `[:19]` string truncation workaround. 615 tests pass.
+- **License updated to PolyForm Noncommercial 1.0.0** -- README, package.json, and ARCHITECTURE.md now reflect the license change from Apache 2.0 to PolyForm NC. Free for personal, research, educational, and nonprofit use. Commercial licensing available via mail@kbanc.com.
+
 ## 1.55.14 (2026-03-16)
 
 - **LaunchAgent no longer bakes in --project-dir** -- The standalone background daemon now starts without a `--project-dir` argument. This forces a plist content change for all existing installs, which triggers an automatic LaunchAgent reload on next `claudia setup`, picking up the current Python daemon code. Previously, the plist could be identical across updates, leaving old daemon code running indefinitely even after `pip install --upgrade`.
package/README.md
CHANGED

@@ -11,7 +11,7 @@ Remembers your people. Catches your commitments. Learns how you work.
 <p align="center">
   <a href="https://github.com/kbanc85/claudia/stargazers"><img src="https://img.shields.io/github/stars/kbanc85/claudia?style=flat-square" alt="GitHub stars"></a>
   <a href="https://www.npmjs.com/package/get-claudia"><img src="https://img.shields.io/npm/v/get-claudia?style=flat-square" alt="npm version"></a>
-  <a href="https://github.com/kbanc85/claudia/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-…" alt="License"></a>
+  <a href="https://github.com/kbanc85/claudia/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-PolyForm%20NC%201.0.0-purple?style=flat-square" alt="License"></a>
 </p>
 
 <p align="center">

@@ -479,7 +479,7 @@ This updates daemon code, skills, and rules while preserving your databases and
 
 ## Contributing
 
-Claudia is open source under the Apache 2.0 License.
+Claudia is source-available under the PolyForm Noncommercial License 1.0.0.
 
 - **Template (skills, rules, identity):** `template-v2/`
 - **Memory daemon (Python):** `memory-daemon/` (tests: `cd memory-daemon && pytest tests/`)

@@ -491,9 +491,9 @@ Claudia is open source under the Apache 2.0 License.
 
 ## License
 
-[…
+[PolyForm Noncommercial 1.0.0](LICENSE)
 
-…
+Free for personal, research, educational, and nonprofit use. Commercial licensing: mail@kbanc.com
 
 ---
 
package/memory-daemon/claudia_memory/daemon/scheduler.py
CHANGED

@@ -27,7 +27,9 @@ class MemoryScheduler:
     """Manages scheduled memory maintenance tasks"""
 
     def __init__(self):
-        self.scheduler = BackgroundScheduler()
+        self.scheduler = BackgroundScheduler(
+            job_defaults={"misfire_grace_time": 14400, "coalesce": True}
+        )
         self.config = get_config()
         self._started = False
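The misfire window the change above configures can be sketched with stdlib datetime arithmetic. This is a simplified model of APScheduler's behavior, not its API; `fires_on_wake` is a hypothetical helper for illustration:

```python
from datetime import datetime, timedelta

GRACE = timedelta(seconds=14400)  # mirrors misfire_grace_time=14400 (4 hours)

def fires_on_wake(scheduled_at: datetime, woke_at: datetime) -> bool:
    # A job missed while the machine slept still fires if the wake time
    # falls within the grace window after its scheduled run time.
    return timedelta(0) <= woke_at - scheduled_at <= GRACE

# Slept through the 2am decay job, woke at 5:30am: inside the 4h window.
assert fires_on_wake(datetime(2026, 3, 18, 2, 0), datetime(2026, 3, 18, 5, 30))
# Woke at 7am: more than 4 hours late, the run is skipped.
assert not fires_on_wake(datetime(2026, 3, 18, 2, 0), datetime(2026, 3, 18, 7, 0))
```

With `coalesce=True`, several missed runs of the same job inside that window collapse into a single execution on wake.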
package/memory-daemon/claudia_memory/extraction/entity_extractor.py
CHANGED

@@ -188,50 +188,33 @@ class EntityExtractor:
 
     # Common non-entity words to filter out
     STOP_WORDS = {
-        "tuesday",
-        …
-        "this",
-        "that",
-        "these",
-        "those",
-        "here",
-        "there",
-        "where",
-        "when",
-        "what",
-        "which",
-        "who",
-        "how",
-        "just",
-        "only",
-        "also",
-        "even",
-        "still",
+        # Days and months
+        "monday", "tuesday", "wednesday", "thursday", "friday",
+        "saturday", "sunday",
+        "january", "february", "march", "april", "may", "june",
+        "july", "august", "september", "october", "november", "december",
+        # Temporal
+        "today", "tomorrow", "yesterday",
+        "morning", "afternoon", "evening", "night",
+        # Pronouns and determiners
+        "the", "this", "that", "these", "those",
+        "here", "there", "where", "when", "what", "which", "who", "how",
+        # Adverbs
+        "just", "only", "also", "even", "still",
+        "recently", "nearly", "almost", "already", "rather",
+        "somewhat", "perhaps", "quite", "likely", "enough",
+        # Quantifiers and adjectives
+        "several", "various", "another", "certain",
+        "much", "many", "some", "most", "both",
+        "each", "every", "other", "such", "same",
+        "new", "old", "big", "long", "last", "next",
+        "good", "well", "nice", "overall", "drawn",
+        # Common verbs (past tense / short forms spaCy misidentifies)
+        "done", "made", "said", "went", "got",
+        "set", "put", "run", "let", "get",
+        # Common nouns too generic to be entities
+        "work", "part", "plan", "team", "data",
+        "note", "time", "home", "call", "open",
     }
 
     def __init__(self):
@@ -296,12 +279,15 @@ class EntityExtractor:
         """Extract entities using regex patterns"""
         entities = []
 
-        # Extract persons
+        # Extract persons (require at least 2 words to avoid ghost entities)
         for pattern in self.PERSON_PATTERNS:
             for match in re.finditer(pattern, text):
                 name = match.group(1) if match.lastindex else match.group(0)
                 canonical = self.canonical_name(name)
-                if canonical and len(canonical) > 1 and canonical not in self.STOP_WORDS:
+                if (canonical
+                        and len(canonical) > 1
+                        and canonical not in self.STOP_WORDS
+                        and len(canonical.split()) >= 2):
                     entities.append(
                         ExtractedEntity(
                             name=name,
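The four-clause guard in the hunk above can be exercised in isolation. A minimal sketch, with a trimmed STOP_WORDS excerpt and a hypothetical `accept_person` helper standing in for the inline condition:

```python
STOP_WORDS = {"drawn", "overall", "recently", "several"}  # small excerpt of the real set

def accept_person(canonical: str) -> bool:
    # Same four clauses as the guard in the regex extraction path:
    # non-empty, longer than one char, not a stop word, and >= 2 words.
    return (bool(canonical)
            and len(canonical) > 1
            and canonical not in STOP_WORDS
            and len(canonical.split()) >= 2)

assert accept_person("kris krisko")   # "First Last" shape: accepted
assert not accept_person("metal")     # single-word extraction: rejected
assert not accept_person("drawn")     # stop word (and single word): rejected
```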
package/memory-daemon/claudia_memory/mcp/server.py
CHANGED

@@ -22,6 +22,7 @@ from mcp.types import (
 )
 
 from ..database import get_db
+from ..utils import parse_naive
 from ..services.consolidate import (
     get_consolidate_service,
     get_predictions,

@@ -3190,9 +3191,9 @@ def _build_briefing() -> str:
         "SELECT updated_at FROM _meta WHERE key = 'unified_db'", fetch=True
     )
     if ts_row and ts_row[0]["updated_at"]:
-        from datetime import …
-        consolidated_at = …
-        if (…
+        from datetime import timedelta as _td
+        consolidated_at = parse_naive(ts_row[0]["updated_at"])
+        if (datetime.utcnow() - consolidated_at) < _td(minutes=5):
             # Just consolidated, include stats
             mem_row = db.execute("SELECT COUNT(*) as c FROM memories", fetch=True)
             ent_row = db.execute("SELECT COUNT(*) as c FROM entities WHERE deleted_at IS NULL", fetch=True)

@@ -3660,7 +3661,7 @@ def _build_morning_context() -> str:
     if stale:
         sections.append(f"## Stale Commitments ({len(stale)})\n")
         for c in stale:
-            days_old = (datetime.utcnow() - …
+            days_old = (datetime.utcnow() - parse_naive(c["created_at"])).days
             entities = c["entity_names"] or ""
             prefix = f"[{entities}] " if entities else ""
             sections.append(f"- {prefix}{c['content'][:100]} ({days_old}d old, importance: {c['importance']:.1f})")
package/memory-daemon/claudia_memory/services/canvas_generator.py
CHANGED

@@ -28,6 +28,7 @@ from pathlib import Path
 from typing import Any, Dict, List, Optional, Tuple
 
 from ..database import get_db
+from ..utils import parse_naive
 
 logger = logging.getLogger(__name__)
 

@@ -418,7 +419,7 @@ class CanvasGenerator:
             last = r["last_contact_at"]
             if last:
                 try:
-                    days_ago = (datetime.utcnow() - …
+                    days_ago = (datetime.utcnow() - parse_naive(last)).days
                     reconnect_lines.append(f"- [[{r['name']}]] ({trend}, {days_ago}d ago)")
                 except (ValueError, TypeError):
                     reconnect_lines.append(f"- [[{r['name']}]] ({trend})")
package/memory-daemon/claudia_memory/services/consolidate.py
CHANGED

@@ -14,6 +14,7 @@ from typing import Any, Dict, List, Optional, Tuple
 
 from ..config import get_config
 from ..database import get_db
+from ..utils import parse_naive
 
 logger = logging.getLogger(__name__)
 

@@ -366,7 +367,7 @@ class ConsolidateService:
         timestamps = []
         for r in rows:
             try:
-                timestamps.append(…
+                timestamps.append(parse_naive(r["created_at"]))
             except (ValueError, TypeError):
                 continue
 

@@ -538,7 +539,7 @@ class ConsolidateService:
             days_since = 0
             if entity["last_contact_at"]:
                 try:
-                    last_dt = …
+                    last_dt = parse_naive(entity["last_contact_at"])
                     days_since = int((now - last_dt).total_seconds() / 86400)
                 except (ValueError, TypeError):
                     pass

@@ -671,7 +672,7 @@ class ConsolidateService:
         for row in rows:
             days_since = None
             if row["last_mention"]:
-                last_dt = …
+                last_dt = parse_naive(row["last_mention"])
                 days_since = (datetime.utcnow() - last_dt).days
 
             severity = "warning" if days_since and days_since > 60 else "observation"

@@ -1377,7 +1378,7 @@ class ConsolidateService:
         )
 
         for commitment in old_commitments:
-            created = …
+            created = parse_naive(commitment["created_at"])
             days_old = (datetime.utcnow() - created).days
 
             if days_old > 3:

@@ -2302,7 +2303,7 @@ class ConsolidateService:
             velocity_parts.append(f"tier: {entity['attention_tier']}")
             if entity["last_contact_at"]:
                 try:
-                    last_dt = …
+                    last_dt = parse_naive(entity["last_contact_at"])
                     days_since = (datetime.utcnow() - last_dt).days
                     velocity_parts.append(f"last contact: {days_since} days ago")
                 except (ValueError, TypeError):
package/memory-daemon/claudia_memory/services/guards.py
CHANGED

@@ -42,7 +42,7 @@ def validate_memory(
     Validate a memory before storage.
 
     Checks:
-    - Content length (warn >500, truncate >1000)
+    - Content length (warn >800, truncate >1000)
    - Commitment deadline detection via regex
    - Importance clamped to [0, 1]
    """

@@ -52,7 +52,7 @@ def validate_memory(
    if len(content) > 1000:
        result.warnings.append(f"Content truncated from {len(content)} to 1000 characters")
        result.adjustments["content"] = content[:1000]
-    elif len(content) > 500:
+    elif len(content) > 800:
        result.warnings.append(f"Long content ({len(content)} chars) -- consider breaking into multiple memories")
 
    # Importance clamping
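The warn/truncate thresholds above can be sketched standalone. `validate_content` is a hypothetical helper mirroring the guard's logic, not the package's `validate_memory` signature:

```python
def validate_content(content: str):
    """Truncate above 1000 chars; warn above 800 (raised from 500 in 1.55.16)."""
    warnings = []
    if len(content) > 1000:
        warnings.append(f"Content truncated from {len(content)} to 1000 characters")
        content = content[:1000]
    elif len(content) > 800:
        warnings.append(f"Long content ({len(content)} chars) -- consider breaking into multiple memories")
    return content, warnings

body, warns = validate_content("x" * 850)   # summary-sized memory: warning only
assert len(body) == 850 and len(warns) == 1
body, warns = validate_content("x" * 1200)  # hard truncation unchanged
assert len(body) == 1000
```

A 550-850 char summary now passes with at most a warning, which is exactly the log-noise reduction the changelog describes.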
package/memory-daemon/claudia_memory/services/recall.py
CHANGED

@@ -18,6 +18,7 @@ from typing import Any, Dict, List, Optional, Tuple
 from ..config import get_config
 from ..database import get_db
 from ..embeddings import embed_sync, get_embedding_service
+from ..utils import parse_naive
 from ..extraction.entity_extractor import get_extractor
 
 logger = logging.getLogger(__name__)

@@ -240,7 +241,7 @@ class RecallService:
             row = vector_rows.get(mid)
             if row:
                 try:
-                    created = …
+                    created = parse_naive(row["created_at"])
                     recency_data[mid] = (now - created).total_seconds()
                 except (ValueError, TypeError):
                     recency_data[mid] = float("inf")

@@ -333,7 +334,7 @@ class RecallService:
             importance_score = row["importance"]
 
             # Recency score (configurable half-life decay)
-            created = …
+            created = parse_naive(row["created_at"])
             days_old = (now - created).days
             recency_score = math.exp(-days_old / self.config.recency_half_life_days)
 

@@ -2122,8 +2123,8 @@ class RecallService:
         results = []
         now = datetime.utcnow()
         for row in rows:
-            source_last = …
-            target_last = …
+            source_last = parse_naive(row["source_last_memory"])
+            target_last = parse_naive(row["target_last_memory"])
             most_recent = max(source_last, target_last)
             days_dormant = (now - most_recent).days
 

@@ -2513,7 +2514,7 @@ class RecallService:
         urgency = "later"
         if deadline_str:
             try:
-                deadline_dt = …
+                deadline_dt = parse_naive(deadline_str)
                 if deadline_dt < now:
                     urgency = "overdue"
                 elif deadline_dt < now + timedelta(days=1):

@@ -2705,12 +2706,9 @@ class RecallService:
         }
 
         try:
-            last_dt = …
-        except …
-            …
-                last_dt = datetime.fromisoformat(last_contact.replace("Z", "+00:00")).replace(tzinfo=None)
-            except Exception:
-                return {"entity": entity["name"], "status": "parse_error"}
+            last_dt = parse_naive(last_contact.replace("Z", "+00:00"))
+        except Exception:
+            return {"entity": entity["name"], "status": "parse_error"}
 
         now = datetime.utcnow()
         days_since = (now - last_dt).days
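The recency term in the scoring hunk above is a plain exponential decay. A sketch with an illustrative 30-day constant (the real value comes from `config.recency_half_life_days`):

```python
import math

def recency_score(days_old: float, half_life_days: float = 30.0) -> float:
    # Same formula as recall.py's scoring path:
    # math.exp(-days_old / self.config.recency_half_life_days)
    return math.exp(-days_old / half_life_days)

print(round(recency_score(0), 2))    # 1.0  -- brand-new memory scores highest
print(round(recency_score(30), 2))   # 0.37 -- one decay constant later (1/e)
```

Note the name: with `exp(-d/h)` the score at `d == h` is 1/e (~0.37), not 0.5; the constant sets the e-folding time of the decay.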
package/memory-daemon/claudia_memory/services/remember.py
CHANGED

@@ -1697,6 +1697,11 @@ class RememberService:
         if alias_match:
             return alias_match["entity_id"]
 
+        # Fuzzy pre-check: find near-matches of the same type
+        fuzzy_match = self._fuzzy_find_entity(extracted.canonical_name, extracted.type)
+        if fuzzy_match:
+            return fuzzy_match
+
         # Create new entity
         return self.remember_entity(
             name=extracted.name,

@@ -1725,9 +1730,44 @@ class RememberService:
         if alias_match:
             return alias_match["entity_id"]
 
+        # Fuzzy pre-check: find near-matches of the same type
+        fuzzy_match = self._fuzzy_find_entity(canonical, entity_type)
+        if fuzzy_match:
+            return fuzzy_match
+
         # Create new
         return self.remember_entity(name=name, entity_type=entity_type)
 
+    def _fuzzy_find_entity(self, canonical: str, entity_type: str) -> Optional[int]:
+        """Find a near-match entity of the same type using fuzzy string matching.
+
+        Queries entities of the given type and returns the ID of the best match
+        if similarity > 0.90 (SequenceMatcher ratio). Returns None if no match.
+        """
+        from difflib import SequenceMatcher
+
+        candidates = self.db.execute(
+            "SELECT id, canonical_name FROM entities WHERE type = ? AND deleted_at IS NULL",
+            (entity_type,),
+            fetch=True,
+        ) or []
+
+        best_id = None
+        best_ratio = 0.0
+        for row in candidates:
+            ratio = SequenceMatcher(None, canonical, row["canonical_name"]).ratio()
+            if ratio > 0.90 and ratio > best_ratio:
+                best_ratio = ratio
+                best_id = row["id"]
+
+        if best_id is not None:
+            logger.info(
+                f"Fuzzy entity match: '{canonical}' matched existing entity id={best_id} "
+                f"(type={entity_type}, similarity={best_ratio:.2f})"
+            )
+
+        return best_id
+
     def _get_or_create_episode(self, source: Optional[str] = None) -> int:
         """Get current episode or create a new one"""
         # For now, create a new episode each time
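The threshold behavior of `_fuzzy_find_entity` above can be reproduced with just `difflib`, stubbing the candidate rows as plain dicts instead of database results:

```python
from difflib import SequenceMatcher

candidates = [
    {"id": 7, "canonical_name": "kris krisko"},
    {"id": 9, "canonical_name": "karol novak"},
]

def fuzzy_find(canonical, rows, threshold=0.90):
    # Best-match scan, same shape as _fuzzy_find_entity's loop.
    best_id, best_ratio = None, 0.0
    for row in rows:
        ratio = SequenceMatcher(None, canonical, row["canonical_name"]).ratio()
        if ratio > threshold and ratio > best_ratio:
            best_id, best_ratio = row["id"], ratio
    return best_id

assert fuzzy_find("kris krisco", candidates) == 7    # one-letter variant resolves
assert fuzzy_find("karl nowak", candidates) is None  # below 0.90: new entity
```

For the changelog's example pair, `SequenceMatcher(None, "kris krisko", "kris krisco").ratio()` works out to 20/22 ≈ 0.91, just above the 0.90 cutoff.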
package/memory-daemon/claudia_memory/services/vault_sync.py
CHANGED

@@ -42,6 +42,7 @@ from typing import Any, Dict, List, Optional, Tuple
 
 from ..config import get_config
 from ..database import get_db
+from ..utils import parse_naive
 
 logger = logging.getLogger(__name__)
 

@@ -1234,7 +1235,7 @@ class VaultSyncService:
             last = w["last_contact_at"]
             if last:
                 try:
-                    days_ago = (datetime.utcnow() - …
+                    days_ago = (datetime.utcnow() - parse_naive(last)).days
                     lines.append(f"- [[{w['name']}]] - {trend} ({days_ago}d)")
                 except (ValueError, TypeError):
                     lines.append(f"- [[{w['name']}]] - {trend}")

@@ -1599,7 +1600,7 @@ class VaultSyncService:
             last_contact = p["last_contact_at"]
             if last_contact:
                 try:
-                    dt = …
+                    dt = parse_naive(last_contact)
                     days_ago = (now - dt).days
                     last_str = f"{days_ago}d ago"
                 except (ValueError, TypeError):
package/memory-daemon/claudia_memory/utils.py
ADDED

@@ -0,0 +1,22 @@
+"""
+Shared utilities for Claudia Memory System.
+"""
+
+from datetime import datetime
+
+
+def parse_naive(dt_string: str) -> datetime:
+    """Parse an ISO datetime string and strip timezone info.
+
+    The database stores a mix of naive and offset-aware datetimes.
+    External sources (emails, transcripts, calendar events) often include
+    timezone suffixes like +00:00 or Z. Since all timestamps are treated
+    as UTC internally, we strip tzinfo to avoid:
+
+        TypeError: can't subtract offset-naive and offset-aware datetimes
+
+    This is used everywhere a parsed timestamp participates in arithmetic
+    with datetime.utcnow() (which returns a naive datetime).
+    """
+    dt = datetime.fromisoformat(dt_string)
+    return dt.replace(tzinfo=None) if dt.tzinfo else dt
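The helper's effect on the crash described in 1.55.15 can be seen directly (the `parse_naive` body here is copied from the new utils.py; everything else is a demo):

```python
from datetime import datetime

def parse_naive(dt_string: str) -> datetime:
    # Same two-line body as claudia_memory/utils.py.
    dt = datetime.fromisoformat(dt_string)
    return dt.replace(tzinfo=None) if dt.tzinfo else dt

aware = datetime.fromisoformat("2026-03-18T10:00:00+00:00")
try:
    datetime.utcnow() - aware  # the mixed-timezone crash from the daemon logs
except TypeError as exc:
    print(exc)  # can't subtract offset-naive and offset-aware datetimes

# With the helper, the same timestamp subtracts cleanly:
age = datetime.utcnow() - parse_naive("2026-03-18T10:00:00+00:00")
```

Note that callers which may receive a trailing `Z` (as in recall.py's `last_contact.replace("Z", "+00:00")`) still normalize it first, since `fromisoformat` only accepts `Z` on Python 3.11+.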
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "get-claudia",
-  "version": "1.55.14",
+  "version": "1.55.16",
   "description": "An AI assistant who learns how you work.",
   "keywords": [
     "claudia",

@@ -16,7 +16,7 @@
     "adaptive"
   ],
   "author": "Kamil Banc",
-  "license": "Apache-2.0",
+  "license": "SEE LICENSE IN LICENSE",
   "repository": {
     "type": "git",
     "url": "git+https://github.com/kbanc85/claudia.git"