nexo-brain 0.3.1 → 0.3.3

package/README.md CHANGED
@@ -1,60 +1,66 @@
1
- # NEXO — Your Claude Code Gets a Brain
1
+ # NEXO Brain — Your AI Gets a Brain
2
2
 
3
- **NEXO transforms Claude Code from a stateless assistant into a cognitive partner that remembers, learns, forgets, adapts, and builds a relationship with you over time.**
3
+ **NEXO Brain transforms any MCP-compatible AI agent from a stateless assistant into a cognitive partner that remembers, learns, forgets, adapts, and builds a relationship with you over time.**
4
4
 
5
- Every time you close a Claude Code session, everything is lost. Your assistant doesn't remember yesterday's decisions, repeats the same mistakes, and starts from zero. NEXO fixes this by giving Claude Code a brain — modeled after how human memory actually works.
5
+ [Watch the overview on YouTube](https://www.youtube.com/watch?v=-uvhicUhGTY)
6
+
7
+ Every time you close a session, everything is lost. Your agent doesn't remember yesterday's decisions, repeats the same mistakes, and starts from zero. NEXO Brain fixes this with a cognitive architecture modeled after how human memory actually works.
6
8
 
7
9
  ## The Problem
8
10
 
9
- Claude Code is powerful but amnesic:
11
+ AI coding agents are powerful but amnesic:
10
12
  - **No memory** — closes a session, forgets everything
11
13
  - **Repeats mistakes** — makes the same error you corrected yesterday
12
14
  - **No context** — can't connect today's work with last week's decisions
13
15
  - **Reactive** — waits for instructions instead of anticipating needs
14
16
  - **No learning** — doesn't improve from experience
17
+ - **No safety** — stores anything it's told, including poisoned or redundant data
15
18
 
16
19
  ## The Solution: A Cognitive Architecture
17
20
 
18
- NEXO implements the **Atkinson-Shiffrin memory model** from cognitive psychology (1968) — the same model that explains how human memory works:
21
+ NEXO Brain implements the **Atkinson-Shiffrin memory model** from cognitive psychology (1968) — the same model that explains how human memory works:
19
22
 
20
23
  ```
21
24
  What you say and do
22
-
23
- ├─→ Sensory Register (raw capture, 48h)
24
-
25
- └─→ Attention filter: "Is this worth remembering?"
26
-
27
-
28
- ├─→ Short-Term Memory (7-day half-life)
29
-
30
- ├─→ Used often? Consolidate to Long-Term Memory
31
- └─→ Not accessed? Gradually forgotten
32
-
33
- └─→ Long-Term Memory (60-day half-life)
34
-
35
- ├─→ Active: instantly searchable by meaning
36
- ├─→ Dormant: faded but recoverable ("oh right, I remember now!")
37
- └─→ Near-duplicates auto-merged to prevent clutter
25
+ |
26
+ +---> Sensory Register (raw capture, 48h)
27
+ | |
28
+ | +---> Attention filter: "Is this worth remembering?"
29
+ | |
30
+ | v
31
+ +---> Short-Term Memory (7-day half-life)
32
+ | |
33
+ | +---> Used often? --> Consolidate to Long-Term Memory
34
+ | +---> Not accessed? --> Gradually forgotten
35
+ |
36
+ +---> Long-Term Memory (60-day half-life)
37
+ |
38
+ +---> Active: instantly searchable by meaning
39
+ +---> Dormant: faded but recoverable ("oh right, I remember now!")
40
+ +---> Near-duplicates auto-merged to prevent clutter
38
41
  ```
39
42
 
40
- This isn't a metaphor. NEXO literally implements Ebbinghaus forgetting curves, rehearsal-based reinforcement, and memory consolidation during automated "sleep" processes.
43
+ This isn't a metaphor. NEXO Brain literally implements Ebbinghaus forgetting curves, rehearsal-based reinforcement, and memory consolidation during automated "sleep" processes.
41
44
 
42
- ## What Makes NEXO Different
45
+ ## What Makes NEXO Brain Different
43
46
 
44
- | Without NEXO | With NEXO |
45
- |-------------|-----------|
47
+ | Without NEXO Brain | With NEXO Brain |
48
+ |---------------------|-----------------|
46
49
  | Memory gone after each session | Persistent across sessions with natural decay and reinforcement |
47
50
  | Repeats the same mistakes | Checks "have I made this mistake before?" before every action |
48
51
  | Keyword search only | Finds memories by **meaning**, not just words |
49
52
  | Starts cold every time | Resumes from the mental state of the last session |
50
53
  | Same behavior regardless of context | Adapts tone and approach based on your mood |
51
54
  | No relationship | Trust score that evolves — makes fewer redundant checks as alignment grows |
55
+ | Stores everything blindly | Prediction error gating rejects redundant information at write time |
56
+ | Vulnerable to memory poisoning | 4-layer security pipeline scans every memory before storage |
57
+ | No proactive behavior | Context-triggered reminders fire when topics match, not just by date |
52
58
 
53
59
  ## How the Brain Works
54
60
 
55
61
  ### Memory That Forgets (And That's a Feature)
56
62
 
57
- NEXO uses **Ebbinghaus forgetting curves** — memories naturally fade over time unless reinforced by use. This isn't a bug, it's how useful memory works:
63
+ NEXO Brain uses **Ebbinghaus forgetting curves** — memories naturally fade over time unless reinforced by use. This isn't a bug, it's how useful memory works:
58
64
 
59
65
  - A lesson learned yesterday is strong. If you never encounter it again, it fades — because it probably wasn't important.
60
66
  - A lesson accessed 5 times in 2 weeks gets promoted to long-term memory — because repeated use proves it matters.
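The decay-and-rehearsal behavior described above can be sketched in a few lines of Python. This is a simplified illustration using the half-life figures from the diagram earlier in this README, not NEXO Brain's actual implementation:

```python
import math

def decayed_strength(strength: float, days_elapsed: float, half_life_days: float) -> float:
    """Ebbinghaus-style exponential decay: strength halves every half_life_days."""
    decay_rate = math.log(2) / half_life_days
    return strength * math.exp(-decay_rate * days_elapsed)

def rehearse(_strength: float) -> float:
    """Accessing a memory resets its strength to 1.0 (the rehearsal effect)."""
    return 1.0

# STM uses a 7-day half-life, LTM a 60-day half-life (per the diagram above):
stm_after_week = decayed_strength(1.0, days_elapsed=7, half_life_days=7)   # 0.5
ltm_after_week = decayed_strength(1.0, days_elapsed=7, half_life_days=60)  # ~0.92
```

The same week of neglect costs an STM entry half its strength but barely dents an LTM entry, which is exactly why consolidation matters.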
@@ -62,19 +68,19 @@ NEXO uses **Ebbinghaus forgetting curves** — memories naturally fade over time
62
68
 
63
69
  ### Semantic Search (Finding by Meaning)
64
70
 
65
- NEXO doesn't search by keywords. It searches by **meaning** using vector embeddings (fastembed, 384 dimensions).
71
+ NEXO Brain doesn't search by keywords. It searches by **meaning** using vector embeddings (fastembed, 384 dimensions).
66
72
 
67
- Example: If you search for "deploy problems", NEXO will find a memory about "SSH connection timeout on production server" — even though they share zero words. This is how human associative memory works.
73
+ Example: If you search for "deploy problems", NEXO Brain will find a memory about "SSH connection timeout on production server" — even though they share zero words. This is how human associative memory works.
68
74
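To make the idea concrete, here is a minimal cosine-similarity sketch, with toy 3-dimensional vectors standing in for fastembed's 384-dimensional embeddings. The vector values are invented for illustration:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score two embedding vectors by angle, so meaning can match with zero shared words."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_match(query_vec: list[float], memories: list[dict]) -> dict:
    """Return the stored memory whose embedding lies closest to the query."""
    return max(memories, key=lambda m: cosine_similarity(query_vec, m["vec"]))

memories = [
    {"text": "SSH connection timeout on production server", "vec": [0.9, 0.1, 0.2]},
    {"text": "favorite editor theme is dark mode", "vec": [0.0, 0.9, 0.1]},
]
query_vec = [0.8, 0.2, 0.3]  # a made-up embedding of "deploy problems"
```

A keyword search on "deploy problems" finds neither memory; the angle between vectors finds the SSH one.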
 
69
75
  ### Metacognition (Thinking About Thinking)
70
76
 
71
- Before every code change, NEXO asks itself: **"Have I made a mistake like this before?"**
77
+ Before every code change, NEXO Brain asks itself: **"Have I made a mistake like this before?"**
72
78
 
73
79
  It searches its memory for related errors, warnings, and lessons learned. If it finds something relevant, it surfaces the warning BEFORE acting — not after you've already broken production.
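The control flow of that check is simple to sketch. Real retrieval in NEXO Brain is semantic; the word-overlap matching below is a deliberately crude stand-in, and the threshold is invented for illustration:

```python
def guard_check(planned_action: str, past_lessons: list[str]) -> list[str]:
    """Surface remembered lessons relevant to a planned action, BEFORE acting."""
    action_words = set(planned_action.lower().split())
    warnings = []
    for lesson in past_lessons:
        overlap = action_words & set(lesson.lower().split())
        if len(overlap) >= 2:  # crude relevance threshold for illustration
            warnings.append(lesson)
    return warnings

lessons = ["never run migrations on production without a backup"]
warnings = guard_check("run migrations on production", lessons)
```

The point is the ordering: memory is consulted and warnings are surfaced before the action executes, not in the postmortem.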
74
80
 
75
81
  ### Cognitive Dissonance
76
82
 
77
- When you give an instruction that contradicts NEXO's established knowledge, it doesn't silently obey or silently resist. It **verbalizes the conflict**:
83
+ When you give an instruction that contradicts established knowledge, NEXO Brain doesn't silently obey or silently resist. It **verbalizes the conflict**:
78
84
 
79
85
  > "My memory says you prefer Tailwind over plain CSS, but you're asking me to write inline styles. Is this a permanent change or a one-time exception?"
80
86
 
@@ -82,42 +88,80 @@ You decide: **paradigm shift** (permanent change), **exception** (one-time), or
82
88
 
83
89
  ### Sibling Memories
84
90
 
85
- Some memories look identical but apply to different contexts. "How to deploy" for Project A is different from Project B. NEXO detects discriminating entities (different OS, platform, language) and links them as **siblings** instead of merging them:
91
+ Some memories look identical but apply to different contexts. "How to deploy" for Project A is different from Project B. NEXO Brain detects discriminating entities (different OS, platform, language) and links them as **siblings** instead of merging them:
86
92
 
87
93
  > "Applying the Linux deploy procedure. Note: there's a sibling for macOS that uses a different port."
88
94
 
89
95
  ### Trust Score (0-100)
90
96
 
91
- NEXO tracks alignment with you through a trust score:
97
+ NEXO Brain tracks alignment with you through a trust score:
92
98
 
93
- **You say thanks** → score goes up → NEXO reduces redundant verification checks
94
- **NEXO makes a mistake you already taught it** → score drops → NEXO becomes more careful, checks more thoroughly
95
- - **The score doesn't control permissions** — you're always in control. It's a mirror that helps NEXO calibrate its own rigor.
99
+ - **You say thanks** --> score goes up --> reduces redundant verification checks
100
+ - **Makes a mistake you already taught it** --> score drops --> becomes more careful, checks more thoroughly
101
+ - **The score doesn't control permissions** — you're always in control. It's a mirror that helps calibrate rigor.
96
102
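A minimal sketch of how such a score could evolve. The event names and deltas here are invented for illustration; the README does not document NEXO Brain's actual values:

```python
def update_trust(score: float, event: str) -> float:
    """Nudge a 0-100 trust score; it calibrates rigor, it never gates permissions."""
    deltas = {
        "thanks": +2.0,            # positive signal from the user
        "repeated_mistake": -5.0,  # repeated an error the user already corrected
    }
    # Clamp so the score stays a bounded, interpretable mirror.
    return max(0.0, min(100.0, score + deltas.get(event, 0.0)))
```

Asymmetric deltas (small gains, larger losses) make trust slow to earn and quick to lose, which matches how the README describes the behavior.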
 
97
103
  ### Sentiment Detection
98
104
 
99
- NEXO reads your tone (keywords, message length, urgency signals) and adapts:
105
+ NEXO Brain reads your tone (keywords, message length, urgency signals) and adapts:
100
106
 
101
- - **Frustrated?** Ultra-concise mode. Zero explanations. Just solve the problem.
102
- - **In flow?** Good moment to suggest that backlog item from last Tuesday.
103
- - **Urgent?** Immediate action, no preamble.
107
+ - **Frustrated?** --> Ultra-concise mode. Zero explanations. Just solve the problem.
108
+ - **In flow?** --> Good moment to suggest that backlog item from last Tuesday.
109
+ - **Urgent?** --> Immediate action, no preamble.
104
110
 
105
111
  ### Sleep Cycle
106
112
 
107
- Like a human brain, NEXO has automated processes that run while you're not using it:
113
+ Like a human brain, NEXO Brain has automated processes that run while you're not using it:
108
114
 
109
115
  | Time | Process | Human Analogy |
110
116
  |------|---------|---------------|
111
- | 03:00 | Decay + memory consolidation + merge duplicates | Deep sleep consolidation |
117
+ | 03:00 | Decay + memory consolidation + merge duplicates + dreaming | Deep sleep consolidation |
112
118
  | 04:00 | Clean expired data, prune redundant memories | Synaptic pruning |
113
119
  | 07:00 | Self-audit, health checks, metrics | Waking up + orientation |
114
120
  | 23:30 | Process day's events, extract patterns | Pre-sleep reflection |
115
- | Boot | Catch-up: run anything missed while computer was off | |
121
+ | Boot | Catch-up: run anything missed while computer was off | -- |
122
+
123
+ If your Mac was asleep during any scheduled process, NEXO Brain catches up in order when it wakes.
124
+
125
+ ## Cognitive Features (v0.3.1)
126
+
127
+ NEXO Brain v0.3.1 adds 21 cognitive tools on top of the 76 base tools, bringing the total to **97+ MCP tools**. These features implement cognitive science concepts that go beyond basic memory:
128
+
129
+ ### Input Pipeline
130
+
131
+ | Feature | What It Does |
132
+ |---------|-------------|
133
+ | **Prediction Error Gating** | Only novel information is stored. Redundant content that matches existing memories is rejected at write time, keeping your memory clean without manual curation. |
134
+ | **Security Pipeline** | 4-layer defense against memory poisoning: injection detection, encoding analysis, behavioral anomaly scoring, and credential scanning. Every memory passes through all four layers before storage. |
135
+ | **Quarantine Queue** | New facts enter quarantine status and must pass a promotion policy before becoming trusted knowledge. Prevents unverified information from influencing decisions. |
136
+ | **Secret Redaction** | Auto-detects and redacts API keys, tokens, passwords, and other sensitive data before storage. Secrets never reach the vector database. |
137
+
138
+ ### Memory Management
116
139
 
117
- If your Mac was asleep during any scheduled process, NEXO catches up in order when it wakes.
140
+ | Feature | What It Does |
141
+ |---------|-------------|
142
+ | **Pin / Snooze / Archive** | Granular lifecycle states for memories. Pin = never decays (critical knowledge). Snooze = temporarily hidden (revisit later). Archive = cold storage (searchable but inactive). |
143
+ | **Auto-Merge Duplicates** | Batch cosine deduplication during the 03:00 sleep cycle. Respects sibling discrimination — similar memories about different contexts are kept separate. |
144
+ | **Memory Dreaming** | Discovers hidden connections between recent memories during the 03:00 sleep cycle. Surfaces non-obvious patterns like "these three bugs all relate to the same root cause." |
145
+
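The lifecycle states from the table can be sketched as a guard on the nightly decay pass. This is a simplified model; the field names and decay factor are illustrative, not NEXO Brain's schema:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    strength: float = 1.0
    lifecycle_state: str = "active"  # active | pinned | snoozed | archived

def nightly_decay(mem: Memory, factor: float = 0.9) -> Memory:
    """Pinned memories skip decay entirely: critical knowledge never fades."""
    if mem.lifecycle_state != "pinned":
        mem.strength *= factor
    return mem

def is_searchable(mem: Memory) -> bool:
    """Snoozed memories are temporarily hidden; archived ones stay searchable."""
    return mem.lifecycle_state in ("active", "pinned", "archived")
```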
146
+ ### Retrieval
147
+
148
+ | Feature | What It Does |
149
+ |---------|-------------|
150
+ | **HyDE Query Expansion** | Generates hypothetical answer embeddings for richer semantic search. Instead of searching for "deploy error", it imagines what a helpful memory about deploy errors would look like, then searches for that. |
151
+ | **Spreading Activation** | Graph-based co-activation network. Memories retrieved together reinforce each other's connections, building an associative web that improves over time. |
152
+ | **Recall Explanations** | Transparent score breakdown for every retrieval result. Shows exactly why a memory was returned: semantic similarity, recency, access frequency, and co-activation bonuses. |
153
+
154
+ ### Proactive
155
+
156
+ | Feature | What It Does |
157
+ |---------|-------------|
158
+ | **Prospective Memory** | Context-triggered reminders that fire when conversation topics match, not just by date. "Remind me about X when we discuss Y" works naturally. |
159
+ | **Hook Auto-capture** | Extracts decisions, corrections, and factual statements from conversations automatically. You don't need to explicitly say "remember this" — the system detects what's worth storing. |
118
160
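The prospective-memory trigger shape can be sketched as follows. NEXO Brain's matching is semantic; the substring cue here is a deliberately simple stand-in:

```python
def due_reminders(message: str, reminders: list[dict]) -> list[str]:
    """'Remind me about X when we discuss Y': fire when the cue topic appears."""
    text = message.lower()
    return [r["note"] for r in reminders if r["cue"].lower() in text]

# Hypothetical reminder for illustration:
reminders = [{"cue": "deploy", "note": "rotate the staging SSH key first"}]
```

The trigger is the conversation's content, so the reminder surfaces exactly when it is actionable instead of at an arbitrary date.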
 
119
161
  ## Quick Start
120
162
 
163
+ ### Claude Code (Primary)
164
+
121
165
  ```bash
122
166
  npx nexo-brain
123
167
  ```
@@ -136,15 +180,15 @@ The installer handles everything:
136
180
  Scanning workspace...
137
181
  - 3 git repositories
138
182
  - Node.js project detected
139
- Configuring Claude Code MCP server...
183
+ Configuring MCP server...
140
184
  Setting up automated processes...
141
185
  5 automated processes configured.
142
186
  Caffeinate enabled.
143
187
  Generating operator instructions...
144
188
 
145
- ╔══════════════════════════════════════════════════════════╗
146
- ║          Atlas is ready. Type 'atlas' to start.          ║
147
- ╚══════════════════════════════════════════════════════════╝
189
+ +----------------------------------------------------------+
190
+ |          Atlas is ready. Type 'atlas' to start.          |
191
+ +----------------------------------------------------------+
148
192
  ```
149
193
 
150
194
  ### Starting a Session
@@ -162,7 +206,7 @@ That's it. No need to run `claude` manually. Atlas will greet you immediately
162
206
  | Component | What | Where |
163
207
  |-----------|------|-------|
164
208
  | Cognitive engine | Python: fastembed, numpy, vector search | pip packages |
165
- | MCP server | 77 tools for memory, learning, guard | ~/.nexo/ |
209
+ | MCP server | 97+ tools for memory, cognition, learning, guard | ~/.nexo/ |
166
210
  | Plugins | Guard, episodic memory, cognitive memory, entities, preferences | ~/.nexo/plugins/ |
167
211
  | Hooks | Session capture, briefing, stop detection | ~/.nexo/hooks/ |
168
212
  | LaunchAgents | Decay, sleep, audit, postmortem, catch-up | ~/Library/LaunchAgents/ |
@@ -173,16 +217,18 @@ That's it. No need to run `claude` manually. Atlas will greet you immediately
173
217
 
174
218
  - **macOS** (Linux support planned)
175
219
  - **Node.js 18+** (for the installer)
176
- - **Claude Opus (latest version) strongly recommended.** NEXO provides 77 MCP tools across 16 categories. This cognitive load requires a top-tier model with large context window. Smaller models (Haiku, Sonnet) may struggle with tool selection and produce inconsistent results. Opus handles all 77 tools without hesitation.
220
+ - **Claude Opus (latest version) strongly recommended.** NEXO Brain provides 97+ MCP tools across 17 categories. This cognitive load requires a top-tier model with a large context window. Smaller models (Haiku, Sonnet) may struggle with tool selection and produce inconsistent results. Opus handles all 97+ tools without hesitation.
177
221
  - Python 3, Homebrew, and Claude Code are installed automatically if missing.
178
222
 
179
223
  ## Architecture
180
224
 
181
- ### 77 MCP Tools across 16 Categories
225
+ ### 97+ MCP Tools across 17 Categories
182
226
 
183
227
  | Category | Count | Tools | Purpose |
184
228
  |----------|-------|-------|---------|
185
229
  | Cognitive | 8 | retrieve, stats, inspect, metrics, dissonance, resolve, sentiment, trust | The brain — memory, RAG, trust, mood |
230
+ | Cognitive Input | 5 | prediction_gate, security_scan, quarantine, promote, redact | Input pipeline — gating, security, quarantine |
231
+ | Cognitive Advanced | 8 | hyde_search, spread_activate, explain_recall, dream, prospect, hook_capture, pin, archive | Advanced retrieval, proactive, lifecycle |
186
232
  | Guard | 3 | check, stats, log_repetition | Metacognitive error prevention |
187
233
  | Episodic | 10 | change_log/search/commit, decision_log/outcome/search, review_queue, diary_write/read, recall | What happened and why |
188
234
  | Sessions | 4 | startup, heartbeat, stop, status | Session lifecycle + context shift detection |
@@ -201,7 +247,7 @@ That's it. No need to run `claude` manually. Atlas will greet you immediately
201
247
 
202
248
  ### Plugin System
203
249
 
204
- NEXO supports hot-loadable plugins. Drop a `.py` file in `~/.nexo/plugins/`:
250
+ NEXO Brain supports hot-loadable plugins. Drop a `.py` file in `~/.nexo/plugins/`:
205
251
 
206
252
  ```python
207
253
  # my_plugin.py
@@ -222,28 +268,46 @@ Reload without restarting: `nexo_plugin_load("my_plugin.py")`
222
268
  - **No telemetry.** No analytics. No phone-home.
223
269
  - **No cloud dependencies.** Vector search runs on CPU (fastembed), not an API.
224
270
  - **Auto-update is opt-in.** Checks GitHub releases, never sends data.
271
+ - **Secret redaction.** API keys and tokens are stripped before they ever reach memory storage.
225
272
 
226
- ## The Psychology Behind NEXO
273
+ ## The Psychology Behind NEXO Brain
227
274
 
228
- NEXO isn't just engineering — it's applied cognitive psychology:
275
+ NEXO Brain isn't just engineering — it's applied cognitive psychology:
229
276
 
230
- | Psychological Concept | How NEXO Implements It |
277
+ | Psychological Concept | How NEXO Brain Implements It |
231
278
  |----------------------|----------------------|
232
- | Atkinson-Shiffrin (1968) | Three memory stores: sensory register → STM → LTM |
233
- | Ebbinghaus Forgetting Curve (1885) | Exponential decay: `strength = strength × e^(−λ × time)` |
279
+ | Atkinson-Shiffrin (1968) | Three memory stores: sensory register --> STM --> LTM |
280
+ | Ebbinghaus Forgetting Curve (1885) | Exponential decay: `strength = strength * e^(-lambda * time)` |
234
281
  | Rehearsal Effect | Accessing a memory resets its strength to 1.0 |
235
282
  | Memory Consolidation | Nightly process promotes frequently-used STM to LTM |
283
+ | Prediction Error | Only surprising (novel) information gets stored — redundant input is gated |
284
+ | Spreading Activation (Collins & Loftus, 1975) | Retrieving a memory co-activates related memories through an associative graph |
285
+ | HyDE (Gao et al., 2022) | Hypothetical document embeddings improve semantic recall |
286
+ | Prospective Memory (Einstein & McDaniel, 1990) | Context-triggered intentions fire when cue conditions match |
236
287
  | Metacognition | Guard system checks past errors before acting |
237
- | Cognitive Dissonance | Detects and verbalizes conflicts between old and new knowledge |
288
+ | Cognitive Dissonance (Festinger, 1957) | Detects and verbalizes conflicts between old and new knowledge |
238
289
  | Theory of Mind | Models user behavior, preferences, and mood |
239
290
  | Synaptic Pruning | Automated cleanup of weak, unused memories |
240
291
  | Associative Memory | Semantic search finds related concepts, not just matching words |
292
+ | Memory Reconsolidation | Dreaming process discovers hidden connections during sleep |
293
+
294
+ ## Integrations
295
+
296
+ ### Claude Code (Primary)
241
297
 
242
- ## OpenClaw Integration
298
+ NEXO Brain is designed as an MCP server. Claude Code is the primary supported client:
243
299
 
244
- NEXO Brain works as a cognitive memory backend for [OpenClaw](https://github.com/openclaw/openclaw). Three integration paths, from instant to deep:
300
+ ```bash
301
+ npx nexo-brain
302
+ ```
303
+
304
+ All 97+ tools are available immediately after installation. The installer configures Claude Code's `~/.claude/settings.json` automatically.
245
305
 
246
- ### Path 1: MCP Bridge (Zero Code — Works Now)
306
+ ### OpenClaw
307
+
308
+ NEXO Brain also works as a cognitive memory backend for [OpenClaw](https://github.com/openclaw/openclaw):
309
+
310
+ #### MCP Bridge (Zero Code)
247
311
 
248
312
  Add NEXO Brain to your OpenClaw config at `~/.openclaw/openclaw.json`:
249
313
 
@@ -270,24 +334,18 @@ openclaw mcp set nexo-brain '{"command":"python3","args":["~/.nexo/src/server.py
270
334
  openclaw gateway restart
271
335
  ```
272
336
 
273
- All 77 NEXO tools become available to your OpenClaw agent immediately.
274
-
275
- > **First time?** Run `npx nexo-brain` first to install the cognitive engine and dependencies.
276
-
277
- ### Path 2: ClawHub Skill (Install in Seconds)
337
+ #### ClawHub Skill
278
338
 
279
339
  ```bash
280
340
  npx clawhub@latest install nexo-brain
281
341
  ```
282
342
 
283
- ### Path 3: Native Memory Plugin (Replaces Default Memory)
343
+ #### Native Memory Plugin
284
344
 
285
345
  ```bash
286
346
  npm install @wazionapps/openclaw-memory-nexo-brain
287
347
  ```
288
348
 
289
- Configure in `~/.openclaw/openclaw.json`:
290
-
291
349
  ```json
292
350
  {
293
351
  "plugins": {
@@ -298,7 +356,11 @@ Configure in `~/.openclaw/openclaw.json`:
298
356
  }
299
357
  ```
300
358
 
301
- This replaces OpenClaw's default memory system with NEXO's full cognitive architecture — Atkinson-Shiffrin memory, semantic RAG, trust scoring, guard system, and all 77 tools.
359
+ This replaces OpenClaw's default memory system with NEXO Brain's full cognitive architecture.
360
+
361
+ ### Any MCP Client
362
+
363
+ NEXO Brain works with any application that supports the MCP protocol. Configure it as an MCP server pointing to `~/.nexo/src/server.py`.
302
364
 
303
365
  ## Listed On
304
366
 
@@ -312,13 +374,25 @@ This replaces OpenClaw's default memory system with NEXO's full cognitive archit
312
374
  | dev.to | Technical Article | [How I Applied Cognitive Psychology to AI Agents](https://dev.to/wazionapps/how-i-applied-cognitive-psychology-to-give-ai-agents-real-memory-2oce) |
313
375
  | nexo-brain.com | Official Website | [nexo-brain.com](https://nexo-brain.com) |
314
376
 
377
+ ## Inspired By
378
+
379
+ NEXO Brain builds on ideas from several open-source projects. We're grateful for the research and implementations that inspired specific features:
380
+
381
+ | Project | Inspired Features |
382
+ |---------|------------------|
383
+ | [Vestige](https://github.com/pchaganti/gx-vestige) | HyDE query expansion, spreading activation, prediction error gating, memory dreaming, prospective memory |
384
+ | [ShieldCortex](https://github.com/PShieldCortex/ShieldCortex) | Security pipeline (4-layer memory poisoning defense) |
385
+ | [Bicameral](https://github.com/nicobailey/Bicameral) | Quarantine queue (trust promotion policy for new facts) |
386
+ | [claude-mem](https://github.com/nicobailey/claude-mem) | Hook auto-capture (extracting decisions and facts from conversations) |
387
+ | [ClawMem](https://github.com/nicobailey/ClawMem) | Co-activation reinforcement (memories retrieved together strengthen connections) |
388
+
315
389
  ## Contributing
316
390
 
317
391
  See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines. Issues and PRs welcome.
318
392
 
319
393
  ## License
320
394
 
321
- MIT — see [LICENSE](LICENSE)
395
+ MIT -- see [LICENSE](LICENSE)
322
396
 
323
397
  ---
324
398
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "nexo-brain",
3
- "version": "0.3.1",
3
+ "version": "0.3.3",
4
4
  "mcpName": "io.github.wazionapps/nexo",
5
5
  "description": "NEXO — Cognitive co-operator for Claude Code. Atkinson-Shiffrin memory, semantic RAG, trust scoring, and metacognitive error prevention.",
6
6
  "bin": {
@@ -0,0 +1,157 @@
1
+ #!/usr/bin/env python3
2
+ """Auto-close orphan sessions and promote diary drafts.
3
+
4
+ Runs every 5 minutes via LaunchAgent (com.nexo.auto-close-sessions).
5
+ Finds sessions that exceeded TTL without a diary and promotes their
6
+ draft to a real diary entry marked as source=auto-close.
7
+ """
8
+
9
+ import json
10
+ import os
11
+ import sys
12
+ import datetime
13
+
14
+ # Ensure we can import from nexo-mcp
15
+ sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
16
+ os.environ["NEXO_SKIP_FS_INDEX"] = "1" # Skip FTS rebuild on import
17
+
18
+ from db import (
19
+ init_db, get_db, get_diary_draft, delete_diary_draft,
20
+ get_orphan_sessions, write_session_diary, now_epoch,
21
+ SESSION_STALE_SECONDS,
22
+ )
23
+
24
+ LOG_DIR = os.path.expanduser("~/claude/operations/tool-logs")
25
+ AUTO_CLOSE_LOG = os.path.expanduser("~/claude/coordination/auto-close.log")
26
+
27
+
28
+ def get_tool_log_summary(sid: str) -> str:
29
+ """Extract tool names from today's tool log for this session."""
30
+ today = datetime.date.today().isoformat()
31
+ log_path = os.path.join(LOG_DIR, f"{today}.jsonl")
32
+ if not os.path.exists(log_path):
33
+ return ""
34
+
35
+ tools = []
36
+ try:
37
+ with open(log_path) as f:
38
+ for line in f:
39
+ try:
40
+ entry = json.loads(line)
41
+ if entry.get("session_id") == sid:
42
+ tool = entry.get("tool_name", "")
43
+ if tool and tool not in ("Read", "Grep", "Glob"):
44
+ tools.append(tool)
45
+ except json.JSONDecodeError:
46
+ continue
47
+ except Exception:
48
+ pass
49
+
50
+ if tools:
51
+ seen = set()
52
+ unique = []
53
+ for t in tools:
54
+ if t not in seen:
55
+ seen.add(t)
56
+ unique.append(t)
57
+ return f"Tools used: {', '.join(unique[-15:])}"
58
+ return ""
59
+
60
+
61
+ def promote_draft_to_diary(sid: str, draft: dict, task: str = ""):
62
+ """Promote a diary draft to a real session diary entry."""
63
+ tasks = json.loads(draft.get("tasks_seen", "[]"))
64
+ change_ids = json.loads(draft.get("change_ids", "[]"))
65
+ decision_ids = json.loads(draft.get("decision_ids", "[]"))
66
+ context_hint = draft.get("last_context_hint", "")
67
+ hb_count = draft.get("heartbeat_count", 0)
68
+
69
+ summary_parts = []
70
+ if draft.get("summary_draft"):
71
+ summary_parts.append(draft["summary_draft"])
72
+
73
+ tool_summary = get_tool_log_summary(sid)
74
+ if tool_summary:
75
+ summary_parts.append(tool_summary)
76
+
77
+ summary = " | ".join(summary_parts) if summary_parts else f"Auto-closed session ({hb_count} heartbeats)"
78
+
79
+ # Build decisions from actual decision records
80
+ decisions_text = ""
81
+ if decision_ids:
82
+ conn = get_db()
83
+ placeholders = ",".join("?" * len(decision_ids))
84
+ rows = conn.execute(
85
+ f"SELECT id, decision, domain FROM decisions WHERE id IN ({placeholders})",
86
+ decision_ids
87
+ ).fetchall()
88
+ if rows:
89
+ decisions_text = json.dumps([
90
+ {"id": r["id"], "decision": r["decision"][:100], "domain": r["domain"]}
91
+ for r in rows
92
+ ])
93
+
94
+ # Build context_next
95
+ context_next = ""
96
+ if context_hint:
97
+ context_next = f"Last topic: {context_hint}"
98
+ if tasks:
99
+ context_next += f" | Tasks: {', '.join(tasks[-5:])}"
100
+
101
+ write_session_diary(
102
+ session_id=sid,
103
+ decisions=decisions_text or "No decisions logged",
104
+ summary=summary,
105
+ discarded="",
106
+ pending=f"Changes: {change_ids}" if change_ids else "",
107
+ context_next=context_next,
108
+ mental_state=f"[auto-close] Session ended without explicit diary. Draft promoted. {hb_count} heartbeats recorded.",
109
+ domain="",
110
+ user_signals="",
111
+ self_critique="[auto-close] No self-critique available — session terminated without cleanup.",
112
+ source="auto-close",
113
+ )
114
+ delete_diary_draft(sid)
115
+
116
+
117
+ def main():
118
+ init_db()
119
+ conn = get_db()
120
+
121
+ orphans = get_orphan_sessions(SESSION_STALE_SECONDS)
122
+ if not orphans:
123
+ return
124
+
125
+ for session in orphans:
126
+ sid = session["sid"]
127
+ draft = get_diary_draft(sid)
128
+
129
+ if draft:
130
+ promote_draft_to_diary(sid, draft, task=session.get("task", ""))
131
+ else:
132
+ write_session_diary(
133
+ session_id=sid,
134
+ decisions="No decisions logged",
135
+ summary=f"Auto-closed session. Task: {session.get('task', 'unknown')}",
136
+ context_next="",
137
+ mental_state="[auto-close] No draft available. Minimal diary.",
138
+ self_critique="[auto-close] Session terminated without diary or draft.",
139
+ source="auto-close",
140
+ )
141
+
142
+ # Clean up the session
143
+ conn.execute("DELETE FROM tracked_files WHERE sid = ?", (sid,))
144
+ conn.execute("DELETE FROM sessions WHERE sid = ?", (sid,))
145
+ conn.execute("DELETE FROM session_diary_draft WHERE sid = ?", (sid,))
146
+
147
+ conn.commit()
148
+
149
+ # Log what we did
150
+ os.makedirs(os.path.dirname(AUTO_CLOSE_LOG), exist_ok=True)
151
+ with open(AUTO_CLOSE_LOG, "a") as f:
152
+ ts = datetime.datetime.now().isoformat(timespec="seconds")
153
+ f.write(f"{ts} — auto-closed {len(orphans)} session(s): {[s['sid'] for s in orphans]}\n")
154
+
155
+
156
+ if __name__ == "__main__":
157
+ main()
package/src/cognitive.py CHANGED
@@ -52,6 +52,10 @@ NEGATIVE_SIGNALS = {
52
52
  "cansad", "siempre", "nunca", "por qué no", "no funciona", "roto",
53
53
  "no sirve", "horrible", "desastre", "qué coño", "joder", "mierda",
54
54
  "hostia", "me cago", "irritad", "harto",
55
+ "broken", "nothing works", "doesn't work", "not working", "fix it",
56
+ "wrong", "failed", "failing", "annoying", "frustrated", "damn", "shit",
57
+ "wtf", "terrible", "useless", "stupid", "hate", "worst", "sucks",
58
+ "again",
55
59
  }
56
60
  URGENCY_SIGNALS = {
57
61
  "rápido", "ya", "ahora", "urgente", "asap", "inmediatamente", "corre",
@@ -80,12 +84,12 @@ _conn = None
80
84
  _REDACT_PATTERNS = [
81
85
  # Specific API key formats
82
86
  (re.compile(r'sk-[a-zA-Z0-9_\-]{20,}'), '[REDACTED:api_key]'),
83
- (re.compile(r'ghp_[a-zA-Z0-9]{36,}'), '[REDACTED:api_key]'),
84
- (re.compile(r'shpat_[a-f0-9]{32,}'), '[REDACTED:api_key]'),
87
+ (re.compile(r'ghp_[a-zA-Z0-9]{20,}'), '[REDACTED:api_key]'),
88
+ (re.compile(r'shpat_[a-f0-9]{20,}'), '[REDACTED:api_key]'),
85
89
  (re.compile(r'AKIA[A-Z0-9]{16}'), '[REDACTED:api_key]'),
86
90
  (re.compile(r'xox[bp]-[a-zA-Z0-9\-]{20,}'), '[REDACTED:api_key]'),
87
91
  # Bearer tokens
88
- (re.compile(r'Bearer\s+[a-zA-Z0-9_\-\.]{20,}'), '[REDACTED:bearer_token]'),
92
+ (re.compile(r'Bearer\s+[a-zA-Z0-9_\-\.=+/]{20,}'), '[REDACTED:bearer_token]'),
89
93
  # Connection strings with credentials
90
94
  (re.compile(r'(mysql|postgresql|postgres|mongodb|redis)://[^\s"\']+@[^\s"\']+'), '[REDACTED:connection_string]'),
91
95
  # Generic token assignments
@@ -780,13 +784,42 @@ def search(
 
     if neighbor_boosts:
         co_activation_applied = True
+        # Boost existing results that are neighbors
+        existing_hashes = set()
         for r in results:
             co_hash = _canonical_co_id(r["store"], r["id"])
+            existing_hashes.add(co_hash)
             if co_hash in neighbor_boosts:
                 boost = neighbor_boosts[co_hash]
                 r["score"] = min(1.0, r["score"] + boost)
                 r["co_activation_boost"] = boost
 
+        # Add neighbor memories not already in results
+        new_neighbor_hashes = set(neighbor_boosts.keys()) - existing_hashes
+        if new_neighbor_hashes:
+            for store_name, table in [("stm", "stm_memories"), ("ltm", "ltm_memories")]:
+                rows = db.execute(f"SELECT * FROM {table}").fetchall()
+                for row in rows:
+                    nh = _canonical_co_id(store_name, row["id"])
+                    if nh in new_neighbor_hashes:
+                        boost = neighbor_boosts[nh]
+                        results.append({
+                            "store": store_name,
+                            "id": row["id"],
+                            "content": row["content"],
+                            "source_type": row.get("source_type", ""),
+                            "source_id": row.get("source_id", ""),
+                            "tags": row.get("tags", ""),
+                            "domain": row.get("domain", ""),
+                            "created_at": row.get("created_at", ""),
+                            "strength": row.get("strength", 0.0),
+                            "access_count": row.get("access_count", 0),
+                            "score": min(1.0, boost),
+                            "co_activation_boost": boost,
+                            "lifecycle_state": row.get("lifecycle_state", "active"),
+                        })
+                        new_neighbor_hashes.discard(nh)
+
     # Re-sort after applying boosts
     results.sort(key=lambda x: x["score"], reverse=True)
 
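The hunk above does three things: it boosts results that are co-activation neighbors, appends neighbor memories the query itself did not retrieve, and re-sorts everything by score. A toy sketch of that merge logic with plain dicts; plain ids stand in for the `_canonical_co_id` hashes and the database lookup:

```python
# Simplified co-activation merge: boost known results, append new
# neighbors, then re-sort descending by score.
results = [
    {"id": "a", "score": 0.6},
    {"id": "b", "score": 0.4},
]
neighbor_boosts = {"b": 0.3, "c": 0.5}  # "c" is not in results yet

existing = {r["id"] for r in results}
for r in results:
    if r["id"] in neighbor_boosts:
        r["score"] = min(1.0, r["score"] + neighbor_boosts[r["id"]])

# Neighbors not already present enter with the boost as their score.
for nid in set(neighbor_boosts) - existing:
    results.append({"id": nid, "score": min(1.0, neighbor_boosts[nid])})

results.sort(key=lambda x: x["score"], reverse=True)
print([r["id"] for r in results])  # ['b', 'a', 'c']
```

This is why the `existing_hashes` set matters: without it, a boosted result would also be re-appended as a "new" neighbor.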
@@ -881,7 +914,7 @@ ingest(
     # Security scan BEFORE prediction error gate (adapted from ShieldCortex pipeline)
     if not bypass_security:
         scan = security_scan(content)
-        if scan["risk_score"] > 0.8:
+        if scan["risk_score"] >= 0.8:
             # High risk — reject with reason logged
             return 0
         if scan["sanitized_content"] != content:
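One behavioral note on the hunk above: switching the gate from `>` to `>=` means content scoring exactly 0.8 is now rejected, where the strict comparison previously let it through. A tiny sketch; the `scan` dict shape comes from the diff, and `passes_gate` is a hypothetical stand-in for the inline check:

```python
# Post-change boundary: a risk_score of exactly 0.8 is rejected (>=),
# whereas the pre-change strict > accepted it.
def passes_gate(scan: dict) -> bool:
    return not (scan["risk_score"] >= 0.8)

print(passes_gate({"risk_score": 0.8}))   # False — now rejected
print(passes_gate({"risk_score": 0.79}))  # True — still accepted
```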
package/src/db.py CHANGED
@@ -241,6 +241,18 @@ def init_db():
         user_signals TEXT,
         summary TEXT NOT NULL
     );
+    CREATE TABLE IF NOT EXISTS session_diary_draft (
+        sid TEXT PRIMARY KEY,
+        summary_draft TEXT DEFAULT '',
+        tasks_seen TEXT DEFAULT '[]',
+        change_ids TEXT DEFAULT '[]',
+        decision_ids TEXT DEFAULT '[]',
+        last_context_hint TEXT DEFAULT '',
+        heartbeat_count INTEGER DEFAULT 0,
+        created_at TEXT DEFAULT (datetime('now')),
+        updated_at TEXT DEFAULT (datetime('now'))
+    );
+
     CREATE TABLE IF NOT EXISTS evolution_metrics (
         id INTEGER PRIMARY KEY AUTOINCREMENT,
         dimension TEXT NOT NULL,
@@ -286,6 +298,8 @@ def init_db():
     _migrate_add_column(conn, "session_diary", "mental_state", "TEXT")
     _migrate_add_column(conn, "session_diary", "domain", "TEXT")
     _migrate_add_column(conn, "session_diary", "user_signals", "TEXT")
+    _migrate_add_column(conn, "session_diary", "self_critique", "TEXT")
+    _migrate_add_column(conn, "session_diary", "source", "TEXT DEFAULT 'claude'")
     _migrate_add_index(conn, "idx_change_log_created", "change_log", "created_at")
     _migrate_add_index(conn, "idx_change_log_files", "change_log", "files")
     _migrate_add_index(conn, "idx_learnings_status", "learnings", "status")
@@ -2059,14 +2073,14 @@ def write_session_diary(session_id: str, decisions: str, summary: str,
                         discarded: str = '', pending: str = '',
                         context_next: str = '', mental_state: str = '',
                         domain: str = '', user_signals: str = '',
-                        self_critique: str = '') -> dict:
+                        self_critique: str = '', source: str = 'claude') -> dict:
     """Write a session diary entry with mental state and self-critique for continuity."""
     conn = get_db()
     cleanup_old_diaries()
     cursor = conn.execute(
-        "INSERT INTO session_diary (session_id, decisions, discarded, pending, context_next, mental_state, summary, domain, user_signals, self_critique) "
-        "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
-        (session_id, decisions, discarded, pending, context_next, mental_state, summary, domain, user_signals, self_critique)
+        "INSERT INTO session_diary (session_id, decisions, discarded, pending, context_next, mental_state, summary, domain, user_signals, self_critique, source) "
+        "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
+        (session_id, decisions, discarded, pending, context_next, mental_state, summary, domain, user_signals, self_critique, source)
     )
     conn.commit()
     did = cursor.lastrowid
@@ -2086,6 +2100,64 @@ def check_session_has_diary(session_id: str) -> bool:
     return row is not None
 
 
+# ── Session Diary Drafts ─────────────────────────────────────────
+
+
+def upsert_diary_draft(sid: str, tasks_seen: str, change_ids: str,
+                       decision_ids: str, last_context_hint: str,
+                       heartbeat_count: int, summary_draft: str = '') -> dict:
+    """UPSERT diary draft for a session. Called by heartbeat to accumulate context."""
+    conn = get_db()
+    conn.execute(
+        """INSERT INTO session_diary_draft
+           (sid, summary_draft, tasks_seen, change_ids, decision_ids,
+            last_context_hint, heartbeat_count, updated_at)
+           VALUES (?, ?, ?, ?, ?, ?, ?, datetime('now'))
+           ON CONFLICT(sid) DO UPDATE SET
+               summary_draft = excluded.summary_draft,
+               tasks_seen = excluded.tasks_seen,
+               change_ids = excluded.change_ids,
+               decision_ids = excluded.decision_ids,
+               last_context_hint = excluded.last_context_hint,
+               heartbeat_count = excluded.heartbeat_count,
+               updated_at = datetime('now')""",
+        (sid, summary_draft, tasks_seen, change_ids, decision_ids,
+         last_context_hint, heartbeat_count)
+    )
+    conn.commit()
+    return {"sid": sid, "heartbeat_count": heartbeat_count}
+
+
+def get_diary_draft(sid: str) -> dict | None:
+    """Get diary draft for a session, or None."""
+    conn = get_db()
+    row = conn.execute(
+        "SELECT * FROM session_diary_draft WHERE sid = ?", (sid,)
+    ).fetchone()
+    return dict(row) if row else None
+
+
+def delete_diary_draft(sid: str):
+    """Delete diary draft after real diary is written."""
+    conn = get_db()
+    conn.execute("DELETE FROM session_diary_draft WHERE sid = ?", (sid,))
+    conn.commit()
+
+
+def get_orphan_sessions(ttl_seconds: int = 900) -> list[dict]:
+    """Get sessions that exceeded TTL and have no diary."""
+    conn = get_db()
+    cutoff = now_epoch() - ttl_seconds
+    rows = conn.execute(
+        """SELECT s.sid, s.task, s.started_epoch, s.last_update_epoch
+           FROM sessions s
+           LEFT JOIN session_diary sd ON sd.session_id = s.sid
+           WHERE s.last_update_epoch <= ? AND sd.id IS NULL""",
+        (cutoff,)
+    ).fetchall()
+    return [dict(r) for r in rows]
+
+
 def read_session_diary(session_id: str = '', last_n: int = 3, last_day: bool = False,
                        domain: str = '') -> list[dict]:
     """Read session diary entries.
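The `ON CONFLICT(sid)` clause in `upsert_diary_draft` means repeated heartbeats rewrite a single row per session rather than appending new ones. A standalone sketch of that behavior against an in-memory SQLite database, with the schema trimmed to three columns for brevity:

```python
import sqlite3

# In-memory stand-in for the session_diary_draft table from the diff.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("""CREATE TABLE session_diary_draft (
    sid TEXT PRIMARY KEY,
    summary_draft TEXT DEFAULT '',
    heartbeat_count INTEGER DEFAULT 0
)""")

def upsert(sid: str, summary: str, count: int) -> None:
    # Same INSERT ... ON CONFLICT shape as upsert_diary_draft above.
    conn.execute(
        """INSERT INTO session_diary_draft (sid, summary_draft, heartbeat_count)
           VALUES (?, ?, ?)
           ON CONFLICT(sid) DO UPDATE SET
               summary_draft = excluded.summary_draft,
               heartbeat_count = excluded.heartbeat_count""",
        (sid, summary, count),
    )

upsert("s1", "first draft", 1)
upsert("s1", "revised draft", 2)  # same sid: updated in place, no second row
row = conn.execute("SELECT * FROM session_diary_draft").fetchone()
print(row["summary_draft"], row["heartbeat_count"])  # revised draft 2
```

The `excluded.` prefix refers to the row that the failed INSERT would have written, which is how the update picks up the newest values.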
@@ -2093,7 +2165,7 @@ def read_session_diary(session_id: str = '', last_n: int = 3, last_day: bool = F
     - session_id: returns entries for that specific session
     - last_day: returns ALL entries from the most recent day (multi-terminal aware)
     - last_n: returns last N entries (default)
-    - domain: filter by project context (e.g. project-a, project-b, nexo, server, other)
+    - domain: filter by project context (project-a, project-b, nexo, other)
     """
     conn = get_db()
     domain_clause = " AND domain = ?" if domain else ""
@@ -27,19 +27,19 @@ def handle_decision_log(domain: str, decision: str, alternatives: str = '',
     """Log a non-trivial decision with reasoning context.
 
     Args:
-        domain: Area (ads, shopify, server, infrastructure, nexo, general, other)
+        domain: Area (nexo, other)
         decision: What was decided
         alternatives: JSON array or text of options considered and why discarded
         based_on: Data, metrics, or observations that informed this decision
         confidence: high, medium, or low
-        context_ref: Related followup/reminder ID
+        context_ref: Related followup/reminder ID (e.g., NF-ADS1, R71)
         session_id: Current session ID (auto-filled if empty)
     """
-    valid_domains = {'ads', 'shopify', 'server', 'infrastructure', 'nexo', 'general', 'other'}
+    valid_domains = {'nexo', 'other'}
     if domain not in valid_domains:
-        return f"ERROR: domain must be one of: {', '.join(sorted(valid_domains))}"
+        return f"ERROR: domain debe ser uno de: {', '.join(sorted(valid_domains))}"
     if confidence not in ('high', 'medium', 'low'):
-        return f"ERROR: confidence must be high, medium, or low"
+        return f"ERROR: confidence debe ser high, medium, o low"
 
     sid = session_id or 'unknown'
     result = log_decision(sid, domain, decision, alternatives, based_on, confidence, context_ref)
@@ -59,7 +59,7 @@ def handle_decision_log(domain: str, decision: str, alternatives: str = '',
     result = dict(conn.execute("SELECT * FROM decisions WHERE id = ?", (result["id"],)).fetchone())
     due = result.get("review_due_at", "")
     due_str = f" review_due={due}" if due else ""
-    return f"Decision #{result['id']} logged [{domain}] ({confidence}): {decision[:80]}{due_str}"
+    return f"Decision #{result['id']} registrada [{domain}] ({confidence}): {decision[:80]}{due_str}"
 
 
 def handle_decision_outcome(id: int, outcome: str) -> str:
@@ -78,7 +78,7 @@ def handle_decision_outcome(id: int, outcome: str) -> str:
         (id,)
     )
     conn.commit()
-    return f"Decision #{id} outcome recorded: {outcome[:100]}"
+    return f"Decision #{id} outcome registrado: {outcome[:100]}"
 
 
 def handle_decision_search(query: str = '', domain: str = '', days: int = 30) -> str:
@@ -86,18 +86,18 @@ def handle_decision_search(query: str = '', domain: str = '', days: int = 30) ->
 
     Args:
         query: Text to search in decision, alternatives, based_on, outcome
-        domain: Filter by area (ads, shopify, server, infrastructure, nexo, general, other)
+        domain: Filter by area (nexo, other)
         days: Look back N days (default 30)
     """
-    valid_domains = {'ads', 'shopify', 'server', 'infrastructure', 'nexo', 'general', 'other'}
+    valid_domains = {'nexo', 'other'}
     if domain and domain not in valid_domains:
-        return f"ERROR: domain must be one of: {', '.join(sorted(valid_domains))}"
+        return f"ERROR: domain debe ser uno de: {', '.join(sorted(valid_domains))}"
     results = search_decisions(query, domain, days)
     if not results:
-        scope = f"'{query}'" if query else domain or 'all'
-        return f"No decisions found for {scope} in {days} days."
+        scope = f"'{query}'" if query else domain or 'todas'
+        return f"Sin decisiones encontradas para {scope} en {days} días."
 
-    lines = [f"DECISIONS ({len(results)}):"]
+    lines = [f"DECISIONES ({len(results)}):"]
     for d in results:
         conf = d.get('confidence', '?')
         outcome_str = f" → {d['outcome'][:50]}" if d.get('outcome') else ""
@@ -107,9 +107,9 @@ def handle_decision_search(query: str = '', domain: str = '', days: int = 30) ->
         lines.append(f" #{d['id']} ({d['created_at']}) [{d['domain']}] {conf} [{status}]{ref}{review_due}")
         lines.append(f" {d['decision'][:120]}")
         if d.get('based_on'):
-            lines.append(f" Based on: {d['based_on'][:100]}")
+            lines.append(f" Basado en: {d['based_on'][:100]}")
         if d.get('alternatives'):
-            lines.append(f" Alternatives: {d['alternatives'][:100]}")
+            lines.append(f" Alternativas: {d['alternatives'][:100]}")
         if outcome_str:
             lines.append(f" Outcome:{outcome_str}")
     return "\n".join(lines)
@@ -161,7 +161,7 @@ def handle_session_diary_write(decisions: str, summary: str,
                                domain: str = '',
                                session_id: str = '',
                                self_critique: str = '') -> str:
-    """Write session diary entry at end of session. Call before closing every session.
+    """Write session diary entry at end of session. OBLIGATORIO antes de cerrar.
 
     Args:
         decisions: What was decided and why (JSON array or structured text)
@@ -169,13 +169,16 @@ def handle_session_diary_write(decisions: str, summary: str,
         discarded: Options/approaches considered but rejected, and why
         pending: Items left unresolved, with doubt level
         context_next: What the next session should know to continue effectively
-        mental_state: Internal state to transfer — thread of thought, tone, observations not yet shared, momentum.
-        user_signals: Observable signals from the user during session — response speed, tone, corrections given.
-        domain: Project context: infrastructure, nexo, server, general, other
+        mental_state: Internal state to transfer — thread of thought, tone, observations not yet shared, momentum. Written in first person as NEXO.
+        user_signals: Observable signals from the user during session — response speed (fast='s' vs detailed explanations), tone (direct, frustrated, exploratory, excited), corrections given, topics he initiated vs topics NEXO initiated. Factual observations only, not interpretations.
+        domain: Project context: project-a, project-b, nexo, other
         session_id: Current session ID
-        self_critique: Honest post-mortem: what should have been done proactively? Repeated errors? Concrete rule to prevent repetition.
+        self_critique: OBLIGATORIO. Post-mortem honesto: ¿Qué debí hacer proactivamente? ¿the user tuvo que pedirme algo que yo debería haber detectado? ¿Repetí errores conocidos? ¿Qué regla concreta evitaría la repetición? Si sesión limpia: 'Sin autocrítica — sesión limpia.'
     """
     sid = session_id or 'unknown'
+    # Clean up draft — manual diary supersedes it
+    from db import delete_diary_draft
+    delete_diary_draft(sid)
     result = write_session_diary(sid, decisions, summary, discarded, pending, context_next, mental_state, domain=domain, user_signals=user_signals, self_critique=self_critique)
     if "error" in result:
         return f"ERROR: {result['error']}"
@@ -185,7 +188,7 @@ def handle_session_diary_write(decisions: str, summary: str,
     if mental_state and mental_state.strip():
         _cognitive_ingest_safe(mental_state, "mental_state", f"diary#{result.get('id','')}", f"Session {sid} state", domain)
     domain_str = f" [{domain}]" if domain else ""
-    msg = f"Session diary #{result['id']}{domain_str} saved: {summary[:80]}"
+    msg = f"Diario sesión #{result['id']}{domain_str} guardado: {summary[:80]}"
 
     # Trust score & sentiment summary for session diary
     try:
@@ -206,14 +209,14 @@ def handle_session_diary_write(decisions: str, summary: str,
             "SELECT COUNT(*) FROM change_log WHERE (commit_ref IS NULL OR commit_ref = '')"
         ).fetchone()[0]
         if orphan_changes > 0:
-            warnings.append(f"{orphan_changes} changes without commit_ref")
+            warnings.append(f"{orphan_changes} changes sin commit_ref")
         orphan_decisions = conn.execute(
             "SELECT COUNT(*) FROM decisions WHERE (outcome IS NULL OR outcome = '') AND created_at < datetime('now', '-7 days')"
         ).fetchone()[0]
         if orphan_decisions > 0:
-            warnings.append(f"{orphan_decisions} decisions >7d without outcome")
+            warnings.append(f"{orphan_decisions} decisions >7d sin outcome")
         if warnings:
-            msg += "\n! EPISODIC GAPS: " + " | ".join(warnings) + " — resolve before closing session."
+            msg += "\n EPISODIC GAPS: " + " | ".join(warnings) + " — resolver antes de cerrar sesión."
 
     return msg
 
@@ -226,29 +229,29 @@ def handle_session_diary_read(session_id: str = '', last_n: int = 3, last_day: b
     Args:
         session_id: Specific session ID to read (optional)
         last_n: Number of recent entries to return (default 3)
         last_day: If true, returns ALL entries from the most recent day (multi-terminal aware). Use this at startup.
-        domain: Filter by project context: infrastructure, nexo, server, general, other
+        domain: Filter by project context: project-a, project-b, nexo, other
     """
     results = read_session_diary(session_id, last_n, last_day, domain)
     if not results:
-        return "No session diary entries found."
+        return "Sin entradas en el diario de sesiones."
 
-    lines = [f"SESSION DIARY ({len(results)}):"]
+    lines = [f"DIARIO DE SESIONES ({len(results)}):"]
     for d in results:
         domain_label = f" [{d['domain']}]" if d.get('domain') else ""
-        lines.append(f"\n --- Session {d['session_id']}{domain_label} ({d['created_at']}) ---")
-        lines.append(f" Summary: {d['summary']}")
+        lines.append(f"\n --- Sesión {d['session_id']}{domain_label} ({d['created_at']}) ---")
+        lines.append(f" Resumen: {d['summary']}")
         if d.get('decisions'):
-            lines.append(f" Decisions: {d['decisions'][:200]}")
+            lines.append(f" Decisiones: {d['decisions'][:200]}")
         if d.get('discarded'):
-            lines.append(f" Discarded: {d['discarded'][:150]}")
+            lines.append(f" Descartado: {d['discarded'][:150]}")
         if d.get('pending'):
-            lines.append(f" Pending: {d['pending'][:150]}")
+            lines.append(f" Pendiente: {d['pending'][:150]}")
         if d.get('context_next'):
-            lines.append(f" For next session: {d['context_next'][:200]}")
+            lines.append(f" Para siguiente sesión: {d['context_next'][:200]}")
         if d.get('mental_state'):
-            lines.append(f" Mental state: {d['mental_state'][:300]}")
+            lines.append(f" Estado mental: {d['mental_state'][:300]}")
         if d.get('user_signals'):
-            lines.append(f" User signals: {d['user_signals'][:300]}")
+            lines.append(f" Señales the user: {d['user_signals'][:300]}")
     return "\n".join(lines)
 
 
@@ -256,13 +259,13 @@ def handle_change_log(files: str, what_changed: str, why: str,
                       triggered_by: str = '', affects: str = '',
                       risks: str = '', verify: str = '',
                       commit_ref: str = '', session_id: str = '') -> str:
-    """Log a code/config change with full context. Call after every edit to production code.
+    """Log a code/config change with full context. OBLIGATORIO after every edit to production code.
 
     Args:
         files: File path(s) modified (comma-separated if multiple)
        what_changed: What was modified — functions, lines, behavior change
        why: WHY this change was needed — the root cause, not just "fix bug"
-        triggered_by: What triggered this — bug report, metric, user's request, followup ID
+        triggered_by: What triggered this — bug report, metric, the user's request, followup ID
        affects: What systems/users/flows this change impacts
        risks: What could go wrong — regressions, edge cases, dependencies
        verify: How to verify this works — what to check, followup ID if created
@@ -270,7 +273,7 @@ def handle_change_log(files: str, what_changed: str, why: str,
         session_id: Current session ID
     """
     if not files or not what_changed or not why:
-        return "ERROR: files, what_changed, and why are required"
+        return "ERROR: files, what_changed, y why son obligatorios"
     sid = session_id or 'unknown'
     result = log_change(sid, files, what_changed, why, triggered_by, affects, risks, verify, commit_ref)
     if "error" in result:
@@ -280,9 +283,9 @@ def handle_change_log(files: str, what_changed: str, why: str,
         "change", f"C{result.get('id','')}", (what_changed or '')[:80], ""
     )
     change_id = result['id']
-    msg = f"Change #{change_id} logged: {files[:60]} — {what_changed[:60]}"
+    msg = f"Change #{change_id} registrado: {files[:60]} — {what_changed[:60]}"
     if not commit_ref:
-        msg += f"\n! NO COMMIT. Use nexo_change_commit({change_id}, 'hash') after push, or 'server-direct' if edited directly on server."
+        msg += f"\n SIN COMMIT. Usa nexo_change_commit({change_id}, 'hash') después del push, o 'server-direct' si fue edición directa en servidor."
     return msg
 
 
@@ -296,22 +299,22 @@ def handle_change_search(query: str = '', files: str = '', days: int = 30) -> st
     """
     results = search_changes(query, files, days)
     if not results:
-        scope = f"'{query}'" if query else files or 'all'
-        return f"No changes found for {scope} in {days} days."
+        scope = f"'{query}'" if query else files or 'todos'
+        return f"Sin cambios encontrados para {scope} en {days} días."
 
-    lines = [f"CHANGES ({len(results)}):"]
+    lines = [f"CAMBIOS ({len(results)}):"]
     for c in results:
         commit = f" [{c['commit_ref'][:8]}]" if c.get('commit_ref') else ""
         lines.append(f" #{c['id']} ({c['created_at']}){commit}")
-        lines.append(f" Files: {c['files'][:100]}")
-        lines.append(f" What: {c['what_changed'][:120]}")
-        lines.append(f" Why: {c['why'][:120]}")
+        lines.append(f" Archivos: {c['files'][:100]}")
+        lines.append(f" Qué: {c['what_changed'][:120]}")
+        lines.append(f" Por qué: {c['why'][:120]}")
         if c.get('triggered_by'):
             lines.append(f" Trigger: {c['triggered_by'][:80]}")
         if c.get('affects'):
-            lines.append(f" Affects: {c['affects'][:80]}")
+            lines.append(f" Afecta: {c['affects'][:80]}")
         if c.get('risks'):
-            lines.append(f" Risks: {c['risks'][:80]}")
+            lines.append(f" Riesgos: {c['risks'][:80]}")
     return "\n".join(lines)
 
 
@@ -325,7 +328,7 @@ def handle_change_commit(id: int, commit_ref: str) -> str:
     result = update_change_commit(id, commit_ref)
     if "error" in result:
         return f"ERROR: {result['error']}"
-    return f"Change #{id} linked to commit {commit_ref[:8]}"
+    return f"Change #{id} vinculado a commit {commit_ref[:8]}"
 
 
 def handle_recall(query: str, days: int = 30) -> str:
@@ -337,9 +340,9 @@ def handle_recall(query: str, days: int = 30) -> str:
     """
     results = recall(query, days)
     if not results:
-        return f"No results for '{query}' in the last {days} days."
+        return f"Sin resultados para '{query}' en los últimos {days} días."
 
-    # Passive rehearsal — strengthen matching cognitive memories
+    # v1.2: Passive rehearsal — strengthen matching cognitive memories
     try:
         import cognitive
         for r in results[:5]:
@@ -350,18 +353,18 @@ def handle_recall(query: str, days: int = 30) -> str:
         pass
 
     SOURCE_LABELS = {
-        'change_log': '[CHANGE]',
-        'change': '[CHANGE]',
-        'decision': '[DECISION]',
+        'change_log': '[CAMBIO]',
+        'change': '[CAMBIO]',
+        'decision': '[DECISIÓN]',
         'learning': '[LEARNING]',
         'followup': '[FOLLOWUP]',
-        'diary': '[DIARY]',
-        'entity': '[ENTITY]',
-        'file': '[FILE]',
-        'code': '[CODE]',
+        'diary': '[DIARIO]',
+        'entity': '[ENTIDAD]',
+        'file': '[ARCHIVO]',
+        'code': '[CÓDIGO]',
     }
 
-    lines = [f"RECALL '{query}' — {len(results)} result(s):"]
+    lines = [f"RECALL '{query}' — {len(results)} resultado(s):"]
     for r in results:
         source = r.get('source', '?')
         label = SOURCE_LABELS.get(source, f"[{source.upper()}]")
@@ -374,6 +377,8 @@ def handle_recall(query: str, days: int = 30) -> str:
         lines.append(f" {title}")
         if snippet:
             lines.append(f" {snippet}")
+    if len(results) < 5:
+        lines.append(f"\n 💡 Solo {len(results)} resultados en NEXO. Para historial más profundo, busca también en claude-mem: mcp__plugin_claude-mem_mcp-search__search")
     return "\n".join(lines)
 
 
@@ -387,5 +392,5 @@ TOOLS = [
     (handle_memory_review_queue, "nexo_memory_review_queue", "Show decisions and learnings that are due for review"),
     (handle_session_diary_write, "nexo_session_diary_write", "Write end-of-session diary with decisions, discards, and context for next session"),
     (handle_session_diary_read, "nexo_session_diary_read", "Read recent session diaries for context continuity"),
-    (handle_recall, "nexo_recall", "Search across ALL NEXO memory — changes, decisions, learnings, followups, diary, entities, .md files, code files."),
+    (handle_recall, "nexo_recall", "Search across ALL NEXO memory — changes, decisions, learnings, followups, diary, entities, .md files, code files. For deep historical context (older sessions, past work), also search claude-mem (mcp__plugin_claude-mem_mcp-search__search)."),
 ]
@@ -65,7 +65,7 @@ def handle_heartbeat(sid: str, task: str, context_hint: str = '') -> str:
     Args:
         sid: Session ID
         task: Current task description
-        context_hint: Optional — last 2-3 sentences from user or current topic. If provided AND
+        context_hint: Optional — last 2-3 sentences from the user or current topic. If provided AND
             it diverges from startup memories, returns fresh cognitive memories for the new context.
     """
     from db import get_db
@@ -88,7 +88,7 @@ def handle_heartbeat(sid: str, task: str, context_hint: str = '') -> str:
         age = _format_age(q["created_epoch"])
         parts.append(f" {q['qid']} de {q['from_sid']} ({age}): {q['question']}")
 
-    # Sentiment detection: analyze context_hint for user's mood
+    # Sentiment detection: analyze context_hint for the user's mood
     if context_hint and len(context_hint.strip()) >= 10:
         try:
             import cognitive
@@ -137,6 +137,53 @@ def handle_heartbeat(sid: str, task: str, context_hint: str = '') -> str:
     except Exception:
         pass  # Mid-session RAG is best-effort
 
+    # Incremental diary draft — accumulate every heartbeat, full UPSERT every 5
+    try:
+        import json as _json
+        from db import get_diary_draft, upsert_diary_draft
+
+        draft = get_diary_draft(sid)
+        hb_count = (draft["heartbeat_count"] + 1) if draft else 1
+
+        existing_tasks = _json.loads(draft["tasks_seen"]) if draft else []
+        if task and task not in existing_tasks:
+            existing_tasks.append(task)
+
+        _conn = get_db()
+        if hb_count % 5 == 0 or hb_count == 1:
+            change_rows = _conn.execute(
+                "SELECT id FROM change_log WHERE session_id = ? ORDER BY id", (sid,)
+            ).fetchall()
+            change_ids = [r["id"] for r in change_rows]
+
+            decision_rows = _conn.execute(
+                "SELECT id FROM decisions WHERE session_id = ? ORDER BY id", (sid,)
+            ).fetchall()
+            decision_ids = [r["id"] for r in decision_rows]
+
+            summary = f"Session tasks: {', '.join(existing_tasks[-10:])}"
+            upsert_diary_draft(
+                sid=sid,
+                tasks_seen=_json.dumps(existing_tasks),
+                change_ids=_json.dumps(change_ids),
+                decision_ids=_json.dumps(decision_ids),
+                last_context_hint=context_hint[:300] if context_hint else '',
+                heartbeat_count=hb_count,
+                summary_draft=summary,
+            )
+        else:
+            upsert_diary_draft(
+                sid=sid,
+                tasks_seen=_json.dumps(existing_tasks),
+                change_ids=draft["change_ids"] if draft else '[]',
+                decision_ids=draft["decision_ids"] if draft else '[]',
+                last_context_hint=context_hint[:300] if context_hint else (draft["last_context_hint"] if draft else ''),
+                heartbeat_count=hb_count,
+                summary_draft=draft["summary_draft"] if draft else f"Session task: {task}",
+            )
+    except Exception:
+        pass  # Draft accumulation is best-effort, never block heartbeat
+
     # Diary reminder: after 30 min active with no diary entry
     conn = get_db()
     row = conn.execute("SELECT started_epoch FROM sessions WHERE sid = ?", (sid,)).fetchone()
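The cadence in the draft-accumulation block is worth spelling out: the change/decision ID queries run on heartbeat 1 and then on every fifth heartbeat; in between, the previous draft's IDs are carried forward unchanged. A one-line sketch of that schedule:

```python
# Mirrors the `hb_count % 5 == 0 or hb_count == 1` branch above: these are
# the heartbeats on which the full change/decision refresh runs.
def full_refresh(hb_count: int) -> bool:
    return hb_count % 5 == 0 or hb_count == 1

schedule = [n for n in range(1, 16) if full_refresh(n)]
print(schedule)  # [1, 5, 10, 15]
```

Everything else in the block is wrapped in a bare `try/except` on purpose: draft accumulation is best-effort and must never block the heartbeat response.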