nexo-brain 0.6.0 → 0.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,12 +1,12 @@
  # NEXO Brain — Your AI Gets a Brain
 
- [![npm v0.5.0](https://img.shields.io/npm/v/nexo-brain?label=npm&color=purple)](https://www.npmjs.com/package/nexo-brain)
+ [![npm v0.6.0](https://img.shields.io/npm/v/nexo-brain?label=npm&color=purple)](https://www.npmjs.com/package/nexo-brain)
  [![F1 0.588 on LoCoMo](https://img.shields.io/badge/LoCoMo_F1-0.588-brightgreen)](https://github.com/wazionapps/nexo/blob/main/benchmarks/locomo/results/)
  [![+55% vs GPT-4](https://img.shields.io/badge/vs_GPT--4-%2B55%25-blue)](https://github.com/snap-research/locomo/issues/33)
  [![GitHub stars](https://img.shields.io/github/stars/wazionapps/nexo?style=social)](https://github.com/wazionapps/nexo/stargazers)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
 
- > **v0.5.0** — Highest published score on [LoCoMo benchmark](https://github.com/snap-research/locomo) (ACL 2024). F1 **0.588** — outperforms GPT-4 (0.379) by 55%. Runs on CPU. No GPU required. [Full results](benchmarks/locomo/results/)
+ > **v0.6.0** — Now ships with **full orchestration**: 5 automated hooks, mandatory post-mortem with self-critique, pre-compaction context preservation, reflection engine, and auto-migration. Plus: F1 **0.588** on [LoCoMo](https://github.com/snap-research/locomo) (ACL 2024) — outperforms GPT-4 by 55%. Runs on CPU. [Full results](benchmarks/locomo/results/)
 
  **NEXO Brain transforms any MCP-compatible AI agent from a stateless assistant into a cognitive partner that remembers, learns, forgets, adapts, and builds a relationship with you over time.**
 
@@ -130,9 +130,9 @@ Like a human brain, NEXO Brain has automated processes that run while you're not
 
  If your Mac was asleep during any scheduled process, NEXO Brain catches up in order when it wakes.
 
- ## Cognitive Features (v0.3.1)
+ ## Cognitive Features
 
- NEXO Brain v0.3.1 adds 21 cognitive tools on top of the 76 base tools, bringing the total to **97+ MCP tools**. These features implement cognitive science concepts that go beyond basic memory:
+ NEXO Brain provides 21 cognitive tools on top of the 76 base tools, totaling **97+ MCP tools**. These features implement cognitive science concepts that go beyond basic memory:
 
  ### Input Pipeline
 
@@ -198,6 +198,65 @@ NEXO Brain was evaluated on [LoCoMo](https://github.com/snap-research/locomo) (A
 
  Full results in [`benchmarks/locomo/results/`](benchmarks/locomo/results/).
 
+ ## Full Orchestration System (v0.6.0)
+
+ Memory alone doesn't make a co-operator. What makes the difference is the **behavioral loop** — the automated discipline that ensures every session starts informed, runs with guardrails, and ends with self-reflection.
+
+ ### 5 Automated Hooks
+
+ These fire automatically at key moments in every Claude Code session:
+
+ | Hook | When | What It Does |
+ |------|------|-------------|
+ | **SessionStart** | Session opens | Generates a briefing from SQLite: overdue reminders, today's tasks, pending followups, active sessions |
+ | **Stop** | Session ends | Mandatory post-mortem: self-critique (5 questions), session buffer entry, followup creation, proactive seeds for next session |
+ | **PostToolUse** | After each tool call | Captures meaningful mutations to the Sensory Register |
+ | **PreCompact** | Before context compression | Saves checkpoint, reminds operator to write diary — prevents losing the thread |
+ | **Caffeinate** | Always (optional) | Keeps Mac awake for nocturnal cognitive processes |
+
+ ### The Session Lifecycle
+
+ ```
+ Session starts
+
+ SessionStart hook generates briefing
+
+ Operator reads diary, reminders, followups
+
+ Heartbeat on every interaction (sentiment, context shifts)
+
+ Guard check before every code edit
+
+ PreCompact hook saves context if conversation is compressed
+
+ Stop hook triggers mandatory post-mortem:
+   - Self-critique: 5 questions about what could be better
+   - Session buffer: structured entry for the reflection engine
+   - Followups: anything promised gets scheduled
+   - Proactive seeds: what can the next session do without being asked?
+
+ Reflection engine processes buffer (after 3+ sessions)
+
+ Nocturnal processes: decay, consolidation, self-audit, dreaming
+ ```
+
+ ### Reflection Engine
+
+ After 3+ sessions accumulate, the stop hook triggers `nexo-reflection.py`:
+ - Extracts recurring tasks, error patterns, mood trends
+ - Updates `user_model.json` with observed behavior
+ - No LLM required — runs as pure Python
+
+ ### Auto-Migration
+
+ Existing users upgrading from v0.5.0:
+ ```bash
+ npx nexo-brain  # detects v0.5.0, migrates automatically
+ ```
+ - Updates hooks, core files, plugins, scripts
+ - **Never touches your data** (memories, learnings, preferences)
+ - Saves updated CLAUDE.md as reference (doesn't overwrite customizations)
+
  ## Quick Start
 
  ### Claude Code (Primary)
@@ -248,10 +307,12 @@ That's it. No need to run `claude` manually. Atlas will greet you immediately
  | Cognitive engine | Python: fastembed, numpy, vector search | pip packages |
  | MCP server | 97+ tools for memory, cognition, learning, guard | ~/.nexo/ |
  | Plugins | Guard, episodic memory, cognitive memory, entities, preferences | ~/.nexo/plugins/ |
- | Hooks | Session capture, briefing, stop detection | ~/.nexo/hooks/ |
+ | Hooks (5) | SessionStart briefing, Stop post-mortem, PostToolUse capture, PreCompact checkpoint, Caffeinate | ~/.nexo/hooks/ |
+ | Reflection engine | Processes session buffer, extracts patterns, updates user model | ~/.nexo/scripts/ |
+ | CLAUDE.md | Complete operator instructions (Codex, hooks, guard, trust, memory) | ~/.claude/CLAUDE.md |
  | LaunchAgents | Decay, sleep, audit, postmortem, catch-up | ~/Library/LaunchAgents/ |
  | Auto-update | Checks for new versions at boot | Built into catch-up |
- | Claude Code config | MCP server + hooks registered | ~/.claude/settings.json |
+ | Claude Code config | MCP server + 5 hooks registered | ~/.claude/settings.json |
 
  ### Requirements
 
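The Reflection Engine described above (JSONL session buffer in, pattern summary out, no LLM) can be sketched in pure Python. This is a hedged illustration, not the package's actual `nexo-reflection.py`: the buffer path, the field names (`tasks`, `mood`), and the `reflect` function are assumptions based on the README's description.

```python
import json
from collections import Counter

def reflect(buffer_path: str, min_sessions: int = 3) -> dict:
    """Sketch of a reflection pass: read a JSONL session buffer and
    extract recurring tasks and the dominant mood. Hypothetical schema."""
    entries = []
    with open(buffer_path) as f:
        for line in f:
            line = line.strip()
            if line:
                entries.append(json.loads(line))
    if len(entries) < min_sessions:
        return {}  # mirror the "after 3+ sessions" threshold
    tasks = Counter(t for e in entries for t in e.get("tasks", []))
    moods = Counter(e.get("mood", "unknown") for e in entries)
    return {
        "recurring_tasks": [t for t, n in tasks.most_common() if n >= 2],
        "dominant_mood": moods.most_common(1)[0][0],
        "sessions_processed": len(entries),
    }
```

A real implementation would also fold the result into `user_model.json`; the sketch stops at extraction.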
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "nexo-brain",
-   "version": "0.6.0",
+   "version": "0.7.0",
    "mcpName": "io.github.wazionapps/nexo",
    "description": "NEXO — Cognitive co-operator for Claude Code. Atkinson-Shiffrin memory, semantic RAG, trust scoring, and metacognitive error prevention.",
    "bin": {
package/src/cognitive.py CHANGED
@@ -30,13 +30,13 @@ DISCRIMINATING_ENTITIES = {
      # OS / Environment
      "linux", "mac", "macos", "windows", "darwin", "ubuntu", "debian", "alpine",
      # Platforms
-     "nexo", "other", "whatsapp", "chrome", "firefox",
+     "shopify", "wazion", "whatsapp", "chrome", "firefox",
      # Languages / Runtimes
      "python", "php", "javascript", "typescript", "node", "deno", "ruby",
      # Versions
      "v1", "v2", "v3", "v4", "v5", "5.6", "7.4", "8.0", "8.1", "8.2",
      # Infrastructure
-     "vps", "local", "production", "staging",
+     "cloudrun", "gcloud", "vps", "local", "production", "staging",
      # DB
      "mysql", "sqlite", "postgresql", "postgres", "redis",
  }
@@ -468,13 +468,32 @@ def _init_tables(conn: sqlite3.Connection):
468
468
  SELECT id, content, source_type, source_id, domain FROM {store}_memories
469
469
  """)
470
470
 
471
- # Temporal indexing columns
471
+ # Temporal indexing columns (Task C)
472
472
  for table in ("stm_memories", "ltm_memories"):
473
473
  try:
474
474
  conn.execute(f"ALTER TABLE {table} ADD COLUMN temporal_date TEXT DEFAULT ''")
475
475
  except Exception:
476
476
  pass # Column already exists
477
477
 
478
+ # Somatic markers — emotional risk memory for files and areas
479
+ conn.execute("""
480
+ CREATE TABLE IF NOT EXISTS somatic_markers (
481
+ id INTEGER PRIMARY KEY AUTOINCREMENT,
482
+ target TEXT NOT NULL,
483
+ target_type TEXT NOT NULL,
484
+ risk_score REAL DEFAULT 0.0,
485
+ incident_count INTEGER DEFAULT 0,
486
+ last_incident TEXT DEFAULT NULL,
487
+ last_decay TEXT DEFAULT NULL,
488
+ last_guard_decay_date TEXT DEFAULT NULL,
489
+ last_validated_at TEXT DEFAULT NULL,
490
+ created_at TEXT DEFAULT (datetime('now')),
491
+ updated_at TEXT DEFAULT (datetime('now')),
492
+ UNIQUE(target, target_type)
493
+ )
494
+ """)
495
+ conn.execute("CREATE INDEX IF NOT EXISTS idx_somatic_target ON somatic_markers(target)")
496
+
478
497
  conn.commit()
479
498
 
480
499
 
@@ -574,28 +593,37 @@ def extract_temporal_date(text: str) -> str:
      text_lower = text.lower()
 
      # Pattern 1: "DD Month YYYY" or "Month DD, YYYY" or "D Month, YYYY"
+     # e.g., "8 May, 2023", "May 8, 2023", "25 May, 2023"
      for month_name, month_num in _MONTH_MAP.items():
          # "8 May, 2023" or "8 May 2023"
          match = re.search(rf'(\d{{1,2}})\s+{month_name}[,]?\s+(\d{{4}})', text_lower)
          if match:
-             day, year = match.group(1).zfill(2), match.group(2)
-             return f"{year}-{month_num}-{day}"
+             day = int(match.group(1))
+             year = match.group(2)
+             return f"{year}-{month_num}-{day:02d}"
+
          # "May 8, 2023" or "May 8 2023"
          match = re.search(rf'{month_name}\s+(\d{{1,2}})[,]?\s+(\d{{4}})', text_lower)
          if match:
-             day, year = match.group(1).zfill(2), match.group(2)
-             return f"{year}-{month_num}-{day}"
+             day = int(match.group(1))
+             year = match.group(2)
+             return f"{year}-{month_num}-{day:02d}"
 
-     # Pattern 2: ISO format YYYY-MM-DD
+     # Pattern 2: ISO format "2023-05-08"
      match = re.search(r'(\d{4})-(\d{2})-(\d{2})', text)
      if match:
          return match.group(0)
 
-     # Pattern 3: DD/MM/YYYY or MM/DD/YYYY (assume DD/MM/YYYY)
+     # Pattern 3: "DD/MM/YYYY" or "MM/DD/YYYY" (ambiguous, try DD/MM first)
      match = re.search(r'(\d{1,2})/(\d{1,2})/(\d{4})', text)
      if match:
-         d, m, y = match.group(1).zfill(2), match.group(2).zfill(2), match.group(3)
-         return f"{y}-{m}-{d}"
+         a, b, year = int(match.group(1)), int(match.group(2)), match.group(3)
+         if a > 12:  # Must be DD/MM
+             return f"{year}-{b:02d}-{a:02d}"
+         elif b > 12:  # Must be MM/DD
+             return f"{year}-{a:02d}-{b:02d}"
+         # Ambiguous — default to DD/MM (European)
+         return f"{year}-{b:02d}-{a:02d}"
 
      return ""
 
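The slash-date disambiguation added in the hunk above is easy to exercise in isolation. The following standalone restatement of that branch logic (function name `parse_slash_date` is ours, not the package's) shows how a component greater than 12 forces the interpretation, with DD/MM as the ambiguous default:

```python
import re

def parse_slash_date(text: str) -> str:
    """Disambiguate DD/MM/YYYY vs MM/DD/YYYY, as in the diff above:
    a component > 12 cannot be a month and forces the interpretation;
    genuinely ambiguous dates default to DD/MM (European)."""
    m = re.search(r'(\d{1,2})/(\d{1,2})/(\d{4})', text)
    if not m:
        return ""
    a, b, year = int(m.group(1)), int(m.group(2)), m.group(3)
    if a > 12:   # first number can't be a month -> DD/MM
        return f"{year}-{b:02d}-{a:02d}"
    if b > 12:   # second number can't be a month -> MM/DD
        return f"{year}-{a:02d}-{b:02d}"
    return f"{year}-{b:02d}-{a:02d}"  # ambiguous -> DD/MM
```

So "25/05/2023" and "05/25/2023" both normalize to 2023-05-25, while "08/05/2023" is read as 8 May.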
@@ -670,45 +698,37 @@ def bm25_search(query_text: str, stores: str = "both", top_k: int = 20,
 
 
  def _rrf_fuse(vector_results: list[dict], bm25_results: list[dict],
-               k: int = 60, alpha: float = 0.5) -> list[dict]:
-     """Reciprocal Rank Fusion: combine vector and BM25 results.
+               k: int = 60, alpha: float = 0.7) -> list[dict]:
+     """Reciprocal Rank Fusion: boost vector results with BM25 keyword matches.
 
-     RRF score = alpha * 1/(k + vector_rank) + (1-alpha) * 1/(k + bm25_rank)
-     Higher alpha = more weight on vector search.
-     """
-     # Build score maps by (store, id)
-     scores = {}
-     metadata = {}
-
-     for rank, r in enumerate(vector_results):
-         key = (r["store"], r["id"])
-         scores[key] = alpha * (1.0 / (k + rank + 1))
-         metadata[key] = r
+     BM25 only BOOSTS existing vector results; it never adds new ones.
+     This preserves vector search recall while improving precision for keyword-heavy queries.
 
+     RRF score = vector_score + (1-alpha) * 1/(k + bm25_rank) for items found by both.
+     Items only in vector results keep their original score.
+     """
+     # Build BM25 lookup by (store, id)
+     bm25_lookup = {}
      for rank, r in enumerate(bm25_results):
          key = (r["store"], r["id"])
-         bm25_score = (1 - alpha) * (1.0 / (k + rank + 1))
-         if key in scores:
-             scores[key] += bm25_score
-         else:
-             scores[key] = bm25_score
-             metadata[key] = r
-
-     # Sort by fused score descending
-     sorted_keys = sorted(scores.keys(), key=lambda x: scores[x], reverse=True)
+         bm25_lookup[key] = rank + 1  # 1-based rank
 
+     # Boost vector results that also appear in BM25
      fused = []
-     for key in sorted_keys:
-         result = metadata[key].copy()
-         result["rrf_score"] = scores[key]
-         # Preserve original vector score if available
-         if "score" not in result:
-             result["score"] = scores[key]
+     for r in vector_results:
+         result = r.copy()
+         key = (r["store"], r["id"])
+         if key in bm25_lookup:
+             bm25_rank = bm25_lookup[key]
+             boost = (1 - alpha) * (1.0 / (k + bm25_rank))
+             result["score"] = r["score"] + boost
+             result["bm25_boosted"] = True
          fused.append(result)
 
      return fused
 
 
+
  # ============================================================================
  # FEATURE 1: HyDE Query Expansion (adapted from Vestige hyde.rs)
  # Template-based Hypothetical Document Embeddings for improved search recall.
@@ -1091,6 +1111,13 @@ def search(
          return merged
 
      db = _get_db()
+
+     # Detect temporal queries — boost results with temporal_date
+     _temporal_keywords = {"when", "date", "time", "first", "last", "before", "after",
+                           "cuándo", "cuando", "fecha", "primero", "último", "antes", "después"}
+     query_lower = query_text.lower().split()
+     is_temporal_query = bool(_temporal_keywords & set(query_lower))
+
      if use_hyde:
          query_vec = hyde_expand_query(query_text)
      else:
@@ -1126,6 +1153,11 @@ def search(
          if lifecycle == "pinned":
              score = min(1.0, score + 0.2)
          if score >= min_score:
+             temporal = ""
+             try:
+                 temporal = row["temporal_date"] or ""
+             except (IndexError, KeyError):
+                 pass
              results.append({
                  "store": "stm",
                  "id": row["id"],
@@ -1139,6 +1171,7 @@ def search(
                  "access_count": row["access_count"],
                  "score": score,
                  "lifecycle_state": lifecycle,
+                 "temporal_date": temporal,
              })
 
      # Search LTM (active)
@@ -1204,14 +1237,18 @@ def search(
      if reactivated_ids:
          db.commit()
 
-     # Hybrid search: fuse vector results with BM25 keyword results
+     # Hybrid search: boost vector results with BM25 keyword matches
      if hybrid and query_text:
-         bm25_results = bm25_search(query_text, stores=stores, top_k=top_k * 2,
+         bm25_results = bm25_search(query_text, stores=stores, top_k=top_k * 4,
                                     source_type_filter=source_type_filter)
          if bm25_results:
              results = _rrf_fuse(results, bm25_results, alpha=hybrid_alpha)
-         # Re-apply min_score filter on fused results (use original vector score or rrf_score)
-         results = [r for r in results if r.get("score", 0) >= min_score or r.get("rrf_score", 0) > 0]
+
+     # Temporal boost: for "when" queries, boost results that have temporal_date
+     if is_temporal_query:
+         for r in results:
+             if r.get("temporal_date"):
+                 r["score"] = min(1.0, r["score"] + 0.05)
 
      # Sort by score descending, take top-20 for reranking
      results.sort(key=lambda x: x.get("score", 0), reverse=True)
@@ -1296,16 +1333,7 @@ def search(
 
      # Rehearsal: update strength and access_count for returned results
      if rehearse and results:
-         now = datetime.utcnow().isoformat()
-         for r in results:
-             if (r["store"], r["id"]) in reactivated_ids:
-                 continue
-             table = "stm_memories" if r["store"] == "stm" else "ltm_memories"
-             db.execute(
-                 f"UPDATE {table} SET strength = 1.0, access_count = access_count + 1, last_accessed = ? WHERE id = ?",
-                 (now, r["id"])
-             )
-         db.commit()
+         _rehearse_results(results, skip_ids=reactivated_ids)
 
      # Record co-activation for future spreading (Feature 2)
      if results and len(results) >= 2:
@@ -2528,7 +2556,7 @@ def resolve_dissonance(memory_id: int, resolution: str, context: str = "") -> st
      Args:
          memory_id: The LTM memory that conflicts with the new instruction
          resolution: One of:
-             - 'paradigm_shift': the user changed their mind permanently. Decay old memory,
+             - 'paradigm_shift': the user changed his mind permanently. Decay old memory,
                new instruction becomes the standard.
              - 'exception': This is a one-time override. Keep old memory as standard.
              - 'override': Old memory was wrong. Mark as corrupted and decay to dormant.
@@ -2717,6 +2745,25 @@ def get_trust_score() -> float:
      return row[0]
 
 
+ def _annotate_adaptive_log(event: str, delta: float):
+     """Retroactively annotate the most recent adaptive_log entry with trust feedback."""
+     try:
+         from db import get_db
+         conn = get_db()
+         conn.execute(
+             "UPDATE adaptive_log SET feedback_event = ?, feedback_delta = ?, "
+             "feedback_ts = datetime('now') "
+             "WHERE id = (SELECT id FROM adaptive_log "
+             "WHERE feedback_event IS NULL "
+             "AND timestamp >= datetime('now', '-5 minutes') "
+             "ORDER BY id DESC LIMIT 1)",
+             (event, int(delta))
+         )
+         conn.commit()
+     except Exception:
+         pass
+
+
  def adjust_trust(event: str, context: str = "", custom_delta: float = None) -> dict:
      """Adjust trust score based on an event.
 
@@ -2744,6 +2791,22 @@ def adjust_trust(event: str, context: str = "", custom_delta: float = None) -> d
      )
      db.commit()
 
+     # Annotate adaptive log for learned weights
+     _annotate_adaptive_log(event, delta)
+
+     # Somatic event logging for repeated_error events (append-only in nexo.db)
+     if event == "repeated_error" and context:
+         try:
+             from db import get_db as get_nexo_db
+             area = context.split(":")[0].strip() if ":" in context else "unknown"
+             get_nexo_db().execute(
+                 "INSERT INTO somatic_events (target, target_type, event_type, delta, source) VALUES (?, ?, ?, ?, ?)",
+                 (area, "area", "repeated_error", 0.20, f"trust:{event}")
+             )
+             get_nexo_db().commit()
+         except Exception:
+             pass
+
      return {
          "old_score": round(old_score, 1),
          "delta": delta,
@@ -3321,3 +3384,125 @@ def security_scan(content: str) -> dict:
          "risk_score": round(risk_score, 3),
      }
 
+
+ # ─── Somatic Markers ────────────────────────────────────────────────
+
+ def somatic_accumulate(target: str, target_type: str, delta: float):
+     """Increase risk_score for a target (file or area). Capped at 1.0."""
+     db = _get_db()
+     now = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S")
+     existing = db.execute(
+         "SELECT id, risk_score, incident_count FROM somatic_markers WHERE target = ? AND target_type = ?",
+         (target, target_type)
+     ).fetchone()
+     if existing:
+         new_score = min(1.0, existing["risk_score"] + delta)
+         db.execute(
+             "UPDATE somatic_markers SET risk_score = ?, incident_count = incident_count + 1, "
+             "last_incident = ?, updated_at = ? WHERE id = ?",
+             (new_score, now, now, existing["id"])
+         )
+     else:
+         db.execute(
+             "INSERT INTO somatic_markers (target, target_type, risk_score, incident_count, last_incident, updated_at) "
+             "VALUES (?, ?, ?, 1, ?, ?)",
+             (target, target_type, min(1.0, delta), now, now)
+         )
+     db.commit()
+
+
+ def somatic_guard_decay(target: str, target_type: str):
+     """Validated recovery: multiplicative x0.7 on successful guard check. Max once/day/target."""
+     db = _get_db()
+     today = datetime.utcnow().strftime("%Y-%m-%d")
+     now = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S")
+     row = db.execute(
+         "SELECT id, risk_score, last_guard_decay_date FROM somatic_markers WHERE target = ? AND target_type = ?",
+         (target, target_type)
+     ).fetchone()
+     if not row or row["risk_score"] <= 0:
+         return
+     if row["last_guard_decay_date"] == today:
+         return
+     new_score = max(0.0, row["risk_score"] * 0.7)
+     if new_score < 0.01:
+         new_score = 0.0
+     db.execute(
+         "UPDATE somatic_markers SET risk_score = ?, last_guard_decay_date = ?, "
+         "last_validated_at = ?, updated_at = datetime('now') WHERE id = ?",
+         (new_score, today, now, row["id"])
+     )
+     db.commit()
+
+
+ def somatic_nightly_decay(gamma: float = 0.95):
+     """Apply nightly decay to all somatic markers. Called from cognitive-decay cron."""
+     db = _get_db()
+     rows = db.execute("SELECT id, risk_score FROM somatic_markers WHERE risk_score > 0").fetchall()
+     updated = 0
+     for row in rows:
+         new_score = row["risk_score"] * gamma
+         if new_score < 0.01:
+             new_score = 0.0
+         db.execute(
+             "UPDATE somatic_markers SET risk_score = ?, last_decay = datetime('now'), updated_at = datetime('now') WHERE id = ?",
+             (new_score, row["id"])
+         )
+         updated += 1
+     db.commit()
+     return updated
+
+
+ def somatic_project_events():
+     """Project unprojected somatic_events from nexo.db into cognitive.db somatic_markers.
+     Called during nightly cron. Idempotent — each event processed exactly once.
+     """
+     try:
+         from db import get_db
+         conn = get_db()
+         rows = conn.execute(
+             "SELECT id, target, target_type, delta FROM somatic_events WHERE projected = 0 ORDER BY id"
+         ).fetchall()
+         for row in rows:
+             somatic_accumulate(row["target"], row["target_type"], row["delta"])
+             conn.execute("UPDATE somatic_events SET projected = 1 WHERE id = ?", (row["id"],))
+         conn.commit()
+         return len(rows)
+     except Exception:
+         return 0
+
+
+ def somatic_get_risk(targets: list, area: str = "") -> dict:
+     """Get risk scores for targets (files) and optional area."""
+     db = _get_db()
+     scores = {}
+     for t in targets:
+         row = db.execute(
+             "SELECT risk_score, incident_count, last_incident FROM somatic_markers WHERE target = ? AND target_type = 'file'",
+             (t,)
+         ).fetchone()
+         if row and row["risk_score"] > 0:
+             scores[t] = {"risk": round(row["risk_score"], 3), "incidents": row["incident_count"],
+                          "last": row["last_incident"] or "unknown"}
+     if area:
+         row = db.execute(
+             "SELECT risk_score, incident_count, last_incident FROM somatic_markers WHERE target = ? AND target_type = 'area'",
+             (area,)
+         ).fetchone()
+         if row and row["risk_score"] > 0:
+             scores[f"area:{area}"] = {"risk": round(row["risk_score"], 3), "incidents": row["incident_count"],
+                                       "last": row["last_incident"] or "unknown"}
+     all_risks = [s["risk"] for s in scores.values()]
+     return {"max_risk": max(all_risks) if all_risks else 0.0, "scores": scores}
+
+
+ def somatic_top_risks(limit: int = 10) -> list:
+     """Get top N riskiest targets across all types."""
+     db = _get_db()
+     rows = db.execute(
+         "SELECT target, target_type, risk_score, incident_count, last_incident "
+         "FROM somatic_markers WHERE risk_score > 0 ORDER BY risk_score DESC LIMIT ?",
+         (limit,)
+     ).fetchall()
+     return [dict(r) for r in rows]
+
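The headline change in `cognitive.py` is that `_rrf_fuse` now boosts rather than fuses: BM25 can raise the score of a hit vector search already found, but can no longer introduce candidates of its own. Stripped of the database plumbing, the logic reduces to the following standalone sketch (function name `boost_fuse` and the toy result dicts are ours, not the package's):

```python
def boost_fuse(vector_results, bm25_results, k=60, alpha=0.7):
    """Boost-only fusion: a BM25 match adds (1-alpha)/(k + rank) to the
    vector score of an existing candidate; BM25-only hits are dropped,
    which preserves vector recall while rewarding keyword overlap."""
    bm25_rank = {(r["store"], r["id"]): i + 1 for i, r in enumerate(bm25_results)}
    fused = []
    for r in vector_results:
        out = dict(r)
        rank = bm25_rank.get((r["store"], r["id"]))
        if rank is not None:
            out["score"] = r["score"] + (1 - alpha) * (1.0 / (k + rank))
            out["bm25_boosted"] = True
        fused.append(out)
    return sorted(fused, key=lambda x: x["score"], reverse=True)
```

With the defaults (k=60, alpha=0.7) the top BM25 rank contributes about 0.005, so the boost reorders near-ties without letting keyword matches swamp semantic similarity.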
package/src/db.py CHANGED
@@ -862,6 +862,42 @@ def _m7_diary_source_and_draft(conn):
      """)
 
 
+ def _m8_adaptive_log_and_somatic(conn):
+     conn.execute("""
+         CREATE TABLE IF NOT EXISTS adaptive_log (
+             id INTEGER PRIMARY KEY AUTOINCREMENT,
+             timestamp TEXT DEFAULT (datetime('now')),
+             mode TEXT NOT NULL,
+             tension_score REAL NOT NULL,
+             sig_vibe REAL DEFAULT 0,
+             sig_corrections REAL DEFAULT 0,
+             sig_brevity REAL DEFAULT 0,
+             sig_topic REAL DEFAULT 0,
+             sig_tool_errors REAL DEFAULT 0,
+             sig_git_diff REAL DEFAULT 0,
+             context_hint TEXT DEFAULT '',
+             feedback_event TEXT DEFAULT NULL,
+             feedback_delta INTEGER DEFAULT NULL,
+             feedback_ts TEXT DEFAULT NULL
+         )
+     """)
+     conn.execute("CREATE INDEX IF NOT EXISTS idx_adaptive_log_ts ON adaptive_log(timestamp)")
+     conn.execute("""
+         CREATE TABLE IF NOT EXISTS somatic_events (
+             id INTEGER PRIMARY KEY AUTOINCREMENT,
+             timestamp TEXT DEFAULT (datetime('now')),
+             target TEXT NOT NULL,
+             target_type TEXT NOT NULL,
+             event_type TEXT NOT NULL,
+             delta REAL NOT NULL,
+             source TEXT DEFAULT '',
+             projected INTEGER DEFAULT 0
+         )
+     """)
+     conn.execute("CREATE INDEX IF NOT EXISTS idx_somatic_events_target ON somatic_events(target)")
+     conn.execute("CREATE INDEX IF NOT EXISTS idx_somatic_events_projected ON somatic_events(projected)")
+
+
  # Migration registry — APPEND ONLY, never reorder or delete
  MIGRATIONS = [
      (1, "learnings_columns", _m1_learnings_columns),
@@ -871,6 +907,7 @@ MIGRATIONS = [
      (5, "change_log_indexes", _m5_change_log_indexes),
      (6, "error_guard_tables", _m6_error_guard_tables),
      (7, "diary_source_and_draft", _m7_diary_source_and_draft),
+     (8, "adaptive_log_and_somatic", _m8_adaptive_log_and_somatic),
  ]
 
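Taken together, the `somatic_events` table above and the `somatic_*` functions in `cognitive.py` define a simple risk dynamic: incidents accumulate additively (capped at 1.0), a validated guard pass decays the score multiplicatively by 0.7, the nightly cron multiplies by 0.95, and anything below 0.01 snaps to zero. A pure-arithmetic sketch of that state machine (our own `somatic_step` helper, not package code) makes the behavior easy to check:

```python
def somatic_step(risk: float, event: str, delta: float = 0.2) -> float:
    """One transition of the somatic risk dynamics described in the diff:
    additive incidents capped at 1.0, x0.7 validated guard decay,
    x0.95 nightly decay, and a 0.01 floor that snaps to 0."""
    if event == "incident":
        risk = min(1.0, risk + delta)
    elif event == "guard_pass":
        risk *= 0.7
    elif event == "nightly":
        risk *= 0.95
    return 0.0 if risk < 0.01 else risk
```

The asymmetry is deliberate: risk builds in fixed steps but only drains proportionally, so a frequently-touched risky file needs several clean guard passes (or many quiet nights) before its marker fades.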
@@ -79,7 +79,20 @@ fi
 
  if [ "$SKIP_FALLBACK" = false ]; then
      mkdir -p "$(dirname "$BUFFER")"
-     echo "{\"ts\":\"$TIMESTAMP\",\"tasks\":[\"session ended\"],\"decisions\":[],\"user_patterns\":[],\"files_modified\":[],\"errors_resolved\":[],\"self_critique\":\"hook-fallback, no self-critique captured\",\"mood\":\"unknown\",\"source\":\"hook-fallback\"}" >> "$BUFFER" 2>/dev/null
+     # Read current adaptive mode for the buffer entry
+     ADAPTIVE_MODE="unknown"
+     ADAPTIVE_FILE="$NEXO_HOME/brain/adaptive_state.json"
+     if [ -f "$ADAPTIVE_FILE" ]; then
+         ADAPTIVE_MODE=$(python3 -c "
+ import json
+ try:
+     d = json.load(open('$ADAPTIVE_FILE'))
+     print(d.get('current_mode', 'unknown').lower())
+ except:
+     print('unknown')
+ " 2>/dev/null || echo "unknown")
+     fi
+     echo "{\"ts\":\"$TIMESTAMP\",\"tasks\":[\"session ended\"],\"decisions\":[],\"user_patterns\":[],\"files_modified\":[],\"errors_resolved\":[],\"self_critique\":\"hook-fallback, no self-critique captured\",\"mood\":\"unknown\",\"session_end_mode\":\"$ADAPTIVE_MODE\",\"source\":\"hook-fallback\"}" >> "$BUFFER" 2>/dev/null
  fi
 
  # 3. Intra-day reflection trigger