superlocalmemory 3.0.13 → 3.0.15

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE CHANGED
@@ -14,8 +14,9 @@ copies or substantial portions of the Software.
14
14
 
15
15
  ATTRIBUTION NOTICE:
16
16
  This software contains novel mathematical methods described in:
17
+ - arXiv:2603.14588 (SuperLocalMemory V3 — Information-Geometric Foundations)
17
18
  - arXiv:2603.02240 (SuperLocalMemory V2 — Bayesian Trust Defense)
18
- - Zenodo DOI: 10.5281/zenodo.19038659 (SuperLocalMemory V3 — Information-Geometric Foundations)
19
+ - Zenodo DOI: 10.5281/zenodo.19038659 (SuperLocalMemory V3)
19
20
 
20
21
  Academic use MUST cite the relevant paper.
21
22
  Commercial use MUST retain this notice, the NOTICE file, and visible attribution.
package/NOTICE CHANGED
@@ -42,7 +42,7 @@ PUBLICATIONS
42
42
  - V3 Paper: "SuperLocalMemory V3: Information-Geometric Foundations for
43
43
  Zero-LLM Enterprise Agent Memory"
44
44
  Zenodo DOI: 10.5281/zenodo.19038659
45
- arXiv: Submitted March 2026 (cs.AI, cs.IR, cs.LG)
45
+ arXiv: 2603.14588 (cs.AI, cs.IR, cs.LG)
46
46
 
47
47
  - V2 Paper: "SuperLocalMemory: Privacy-Preserving Multi-Agent Memory with
48
48
  Bayesian Trust Defense Against Memory Poisoning"
package/README.md CHANGED
@@ -3,57 +3,104 @@
3
3
  </p>
4
4
 
5
5
  <h1 align="center">SuperLocalMemory V3</h1>
6
- <p align="center"><strong>Information-Geometric Agent Memory with Mathematical Guarantees</strong></p>
6
+ <p align="center"><strong>The first local-only AI memory to break 74% retrieval on LoCoMo.<br/>No cloud. No APIs. No data leaves your machine.</strong></p>
7
7
 
8
8
  <p align="center">
9
- The first agent memory system with mathematically grounded retrieval, lifecycle management, and consistency verification. Four-channel hybrid retrieval. Three operating modes. EU AI Act compliant.
9
+ <code>+16pp vs Mem0 (zero cloud)</code> &nbsp;·&nbsp; <code>85% Open-Domain (best of any system)</code> &nbsp;·&nbsp; <code>EU AI Act Ready</code>
10
10
  </p>
11
11
 
12
12
  <p align="center">
13
- <a href="https://superlocalmemory.com"><img src="https://img.shields.io/badge/Website-superlocalmemory.com-ff6b35?style=for-the-badge" alt="Website"/></a>
14
- <a href="https://arxiv.org/abs/2603.02240"><img src="https://img.shields.io/badge/arXiv-2603.02240-b31b1b?style=for-the-badge&logo=arxiv&logoColor=white" alt="arXiv Paper"/></a>
15
- <a href="https://zenodo.org/records/19038659"><img src="https://img.shields.io/badge/DOI-10.5281%2Fzenodo.19038659-blue?style=for-the-badge&logo=doi&logoColor=white" alt="V3 DOI"/></a>
13
+ <a href="https://arxiv.org/abs/2603.14588"><img src="https://img.shields.io/badge/arXiv-2603.14588-b31b1b?style=for-the-badge&logo=arxiv&logoColor=white" alt="arXiv Paper"/></a>
14
+ <a href="https://pypi.org/project/superlocalmemory/"><img src="https://img.shields.io/pypi/v/superlocalmemory?style=for-the-badge&logo=pypi&logoColor=white" alt="PyPI"/></a>
15
+ <a href="https://www.npmjs.com/package/superlocalmemory"><img src="https://img.shields.io/npm/v/superlocalmemory?style=for-the-badge&logo=npm&logoColor=white" alt="npm"/></a>
16
+ <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green?style=for-the-badge" alt="MIT License"/></a>
17
+ <a href="#eu-ai-act-compliance"><img src="https://img.shields.io/badge/EU_AI_Act-Compliant-brightgreen?style=for-the-badge" alt="EU AI Act"/></a>
18
+ <a href="https://superlocalmemory.com"><img src="https://img.shields.io/badge/Web-superlocalmemory.com-ff6b35?style=for-the-badge" alt="Website"/></a>
16
19
  </p>
17
20
 
18
- <p align="center">
19
- <a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.11+-3776AB?style=flat-square&logo=python&logoColor=white" alt="Python 3.11+"/></a>
20
- <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green?style=flat-square" alt="MIT License"/></a>
21
- <a href="#three-operating-modes"><img src="https://img.shields.io/badge/EU_AI_Act-Compliant-brightgreen?style=flat-square" alt="EU AI Act"/></a>
22
- <a href="#"><img src="https://img.shields.io/badge/tests-1400+-brightgreen?style=flat-square" alt="1400+ Tests"/></a>
23
- <a href="#"><img src="https://img.shields.io/badge/platform-Mac_|_Linux_|_Windows-blue?style=flat-square" alt="Cross Platform"/></a>
24
- <a href="https://github.com/qualixar/superlocalmemory/wiki"><img src="https://img.shields.io/badge/Wiki-Documentation-blue?style=flat-square" alt="Wiki"/></a>
25
- </p>
21
+ ---
22
+
23
+ ## Why SuperLocalMemory?
24
+
25
+ Every major AI memory system — Mem0, Zep, Letta, EverMemOS — sends your data to cloud LLMs for core operations. That means latency on every query, cost on every interaction, and after **August 2, 2026**, a compliance problem under the EU AI Act.
26
+
27
+ SuperLocalMemory V3 takes a different approach: **mathematics instead of cloud compute.** Three techniques from differential geometry, algebraic topology, and stochastic analysis replace the work that other systems need LLMs to do — similarity scoring, contradiction detection, and lifecycle management. The result is an agent memory that runs entirely on your machine, on CPU, with no API keys, and still outperforms funded alternatives.
28
+
29
+ **The numbers** (evaluated on [LoCoMo](https://arxiv.org/abs/2402.09714), the standard long-conversation memory benchmark):
30
+
31
+ | System | Score | Cloud Required | Open Source | Funding |
32
+ |:-------|:-----:|:--------------:|:-----------:|:-------:|
33
+ | EverMemOS | 92.3% | Yes | No | — |
34
+ | Hindsight | 89.6% | Yes | No | — |
35
+ | **SLM V3 Mode C** | **87.7%** | Optional | **Yes (MIT)** | $0 |
36
+ | Zep v3 | 85.2% | Yes | Deprecated | $35M |
37
+ | **SLM V3 Mode A** | **74.8%** | **No** | **Yes (MIT)** | $0 |
38
+ | Mem0 | 64.2% | Yes | Partial | $24M |
39
+
40
+ Mode A scores **74.8% with zero cloud dependency** — outperforming Mem0 by 16 percentage points without a single API call. On open-domain questions, Mode A scores **85.0% — the highest of any system in the evaluation**, including cloud-powered ones. Mode C reaches **87.7%**, matching enterprise cloud systems.
41
+
42
+ Mathematical layers contribute **+12.7 percentage points** on average across 6 conversations (n=832 questions), with up to **+19.9pp on the most challenging dialogues**. This isn't more compute — it's better math.
43
+
44
+ > **Upgrading from V2 (2.8.6)?** V3 is a complete architectural reinvention — new mathematical engine, new retrieval pipeline, new storage schema. Your existing data is preserved but requires migration. After installing V3, run `slm migrate` to upgrade your data. Read the [Migration Guide](https://github.com/qualixar/superlocalmemory/wiki/Migration-from-V2) before upgrading. Backup is created automatically.
26
45
 
27
46
  ---
28
47
 
29
- ## What is SuperLocalMemory?
48
+ ## Quick Start
30
49
 
31
- SuperLocalMemory gives AI assistants persistent, structured memory that survives across sessions. Unlike simple vector stores, V3 uses **information geometry** to provide mathematically grounded retrieval, automatic contradiction detection, and self-organizing memory lifecycle management.
50
+ ### Install via npm (recommended)
32
51
 
33
- **Works with:** Claude, Cursor, Windsurf, VS Code Copilot, Continue, Cody, ChatGPT Desktop, Gemini CLI, JetBrains, Zed, and 17+ AI tools via MCP.
52
+ ```bash
53
+ npm install -g superlocalmemory
54
+ slm setup # Choose mode (A/B/C)
55
+ slm warmup # Pre-download embedding model (~500MB, optional)
56
+ ```
57
+
58
+ ### Install via pip
59
+
60
+ ```bash
61
+ pip install superlocalmemory
62
+ ```
34
63
 
35
- > **Upgrading from V2 (2.8.6)?** V3 is a complete architectural reinvention — new mathematical engine, new retrieval pipeline, new storage schema. Your existing data is preserved but requires migration. After installing V3, run `slm migrate` to upgrade your data. Read the [Migration Guide](docs/migration-from-v2.md) before upgrading. Backup is created automatically.
64
+ ### First Use
36
65
 
37
- ### Key Results
66
+ ```bash
67
+ slm remember "Alice works at Google as a Staff Engineer"
68
+ slm recall "What does Alice do?"
69
+ slm status
70
+ ```
38
71
 
39
- | Metric | Score | Context |
40
- |:-------|:-----:|:--------|
41
- | LoCoMo (Mode C, full power) | **87.7%** | On conv-30, 81 scored questions |
42
- | LoCoMo (Mode A, zero-LLM) | **62.3%** | Highest zero-LLM score. No cloud dependency. |
43
- | Math layer improvement | **+12.7pp** | Average gain from mathematical foundations |
44
- | Multi-hop improvement | **+12pp** | 50% vs 38% (math on vs off) |
72
+ ### MCP Integration (Claude, Cursor, Windsurf, VS Code, etc.)
73
+
74
+ ```json
75
+ {
76
+ "mcpServers": {
77
+ "superlocalmemory": {
78
+ "command": "slm",
79
+ "args": ["mcp"]
80
+ }
81
+ }
82
+ }
83
+ ```
84
+
85
+ 24 MCP tools available. Works with Claude Code, Cursor, Windsurf, VS Code Copilot, Continue, Cody, ChatGPT Desktop, Gemini CLI, JetBrains, Zed, and 17+ AI tools.
45
86
 
46
87
  ---
47
88
 
48
89
  ## Three Operating Modes
49
90
 
50
- | Mode | What | LLM Required? | EU AI Act | Best For |
51
- |:----:|:-----|:-------------:|:---------:|:---------|
52
- | **A** | Local Guardian | No | Compliant | Privacy-first, air-gapped, enterprise |
53
- | **B** | Smart Local | Local only (Ollama) | Compliant | Enhanced quality, data stays local |
54
- | **C** | Full Power | Cloud (optional) | Partial | Maximum accuracy, research |
91
+ | Mode | What | Cloud? | EU AI Act | Best For |
92
+ |:----:|:-----|:------:|:---------:|:---------|
93
+ | **A** | Local Guardian | **None** | **Compliant** | Privacy-first, air-gapped, enterprise |
94
+ | **B** | Smart Local | Local only (Ollama) | Compliant | Better answers, data stays local |
95
+ | **C** | Full Power | Cloud LLM | Partial | Maximum accuracy, research |
96
+
97
+ ```bash
98
+ slm mode a # Zero-cloud (default)
99
+ slm mode b # Local Ollama
100
+ slm mode c # Cloud LLM
101
+ ```
55
102
 
56
- **Mode A** is the only agent memory that operates with **zero cloud dependency** while achieving competitive retrieval accuracy. All data stays on your device. No API keys required.
103
+ **Mode A** is the only agent memory that operates with **zero cloud dependency** while achieving competitive retrieval accuracy on a standard benchmark. All data stays on your device. No API keys. No GPU. Runs on 2 vCPUs + 4GB RAM.
57
104
 
58
105
  ---
59
106
 
@@ -61,8 +108,8 @@ SuperLocalMemory gives AI assistants persistent, structured memory that survives
61
108
 
62
109
  ```
63
110
  Query ──► Strategy Classifier ──► 4 Parallel Channels:
64
- ├── Semantic (Fisher-Rao graduated similarity)
65
- ├── BM25 (keyword matching, k1=1.2, b=0.75)
111
+ ├── Semantic (Fisher-Rao geodesic distance)
112
+ ├── BM25 (keyword matching)
66
113
  ├── Entity Graph (spreading activation, 3 hops)
67
114
  └── Temporal (date-aware retrieval)
68
115
 
@@ -75,104 +122,79 @@ Query ──► Strategy Classifier ──► 4 Parallel Channels:
75
122
  ◄── Top-K Results with channel scores
76
123
  ```
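The fusion stage of the pipeline above can be sketched in a few lines. This is a minimal, illustrative implementation of Reciprocal Rank Fusion with k=60 (the value the README cites for RRF); `rrf_fuse` and the channel lists are hypothetical names, not the package's API:

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over channels of 1 / (k + rank_d)."""
    scores = defaultdict(float)
    for ranked_ids in rankings:                      # one ranked ID list per channel
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Four channels each return a ranked list of memory IDs.
semantic = ["m1", "m3", "m2"]
bm25     = ["m2", "m1", "m4"]
entity   = ["m3", "m1"]
temporal = ["m4", "m2"]
fused = rrf_fuse([semantic, bm25, entity, temporal])  # "m1" ranks first: it appears in 3 channels
```

RRF needs only ranks, not calibrated scores, which is why four heterogeneous channels (geodesic distances, BM25 scores, activation levels, dates) can be fused without normalization.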
77
124
 
78
- ### Mathematical Foundations (Novel Contributions)
79
-
80
- 1. **Fisher-Rao Retrieval Metric** — Similarity scoring derived from the Fisher information structure of diagonal Gaussian families. Graduated ramp from cosine to Fisher-information-weighted scoring over the first 10 accesses per memory.
81
-
82
- 2. **Sheaf Cohomology for Consistency** — Algebraic topology detects contradictions between facts by computing coboundary norms on the knowledge graph. Non-trivial restriction maps amplify disagreements along discriminative subspaces.
83
-
84
- 3. **Riemannian Langevin Lifecycle** — Memory positions evolve on the Poincare ball via a discretized Langevin SDE. Frequently accessed memories stay near the origin (ACTIVE); neglected memories diffuse toward the boundary (ARCHIVED). The potential is modulated by access frequency, age, and importance.
85
-
86
- ---
87
-
88
- ## Prerequisites
125
+ ### Mathematical Foundations
89
126
 
90
- | Requirement | Version | Why |
91
- |:-----------|:--------|:----|
92
- | **Node.js** | 14+ | npm package manager |
93
- | **Python** | 3.11+ | V3 engine runtime |
94
- | **pip** | Latest | Python dependency installer |
127
+ Three novel contributions replace cloud LLM dependency with mathematical guarantees:
95
128
 
96
- > All Python dependencies are installed automatically during `npm install`. You don't need to run pip manually. If any dependency fails, the installer shows clear instructions.
129
+ 1. **Fisher-Rao Retrieval Metric** — Similarity scoring derived from the Fisher information structure of diagonal Gaussian families. Graduated ramp from cosine to geodesic distance over the first 10 accesses. The first application of information geometry to agent memory retrieval.
97
130
 
98
- ### What Gets Installed Automatically
131
+ 2. **Sheaf Cohomology for Consistency** — Algebraic topology detects contradictions by computing coboundary norms on the knowledge graph. The first algebraic guarantee for contradiction detection in agent memory.
99
132
 
100
- | Component | Size | When |
101
- |:----------|:-----|:-----|
102
- | Core math libraries (numpy, scipy, networkx) | ~50MB | During `npm install` |
103
- | Search engine (sentence-transformers, einops, torch) | ~200MB | During `npm install` |
104
- | Embedding model (nomic-ai/nomic-embed-text-v1.5) | ~500MB | On first use OR `slm warmup` |
133
+ 3. **Riemannian Langevin Lifecycle** — Memory positions evolve on the Poincare ball via discretized Langevin SDE. Frequently accessed memories stay active; neglected memories self-archive. No hardcoded thresholds.
105
134
 
106
- **If any dependency fails during install**, the installer prints the exact `pip install` command to fix it. BM25 keyword search works even without embeddings — you're never fully blocked.
135
+ These three layers collectively yield **+12.7pp average improvement** over the engineering-only baseline, with the Fisher metric alone contributing **+10.8pp** on the hardest conversations.
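For intuition on the Fisher-Rao metric: the univariate Gaussian family under the Fisher metric is a hyperbolic space, so the geodesic distance has a closed form, and diagonal Gaussians factor coordinate-wise as a product manifold. A minimal sketch under those textbook formulas — the function names are illustrative, not the package's API:

```python
import math

def fisher_rao_1d(mu1, s1, mu2, s2):
    """Closed-form Fisher-Rao geodesic distance between two univariate Gaussians.
    Mapping (mu, sigma) -> (mu / sqrt(2), sigma) lands in the Poincare half-plane,
    where the hyperbolic distance is arccosh-based; the metric carries a factor sqrt(2)."""
    num = (mu1 - mu2) ** 2 / 2.0 + (s1 - s2) ** 2
    return math.sqrt(2) * math.acosh(1.0 + num / (2.0 * s1 * s2))

def fisher_rao_diag(mu_a, sig_a, mu_b, sig_b):
    """Coordinate-wise extension to diagonal Gaussians (product manifold)."""
    return math.sqrt(sum(
        fisher_rao_1d(m1, s1, m2, s2) ** 2
        for m1, s1, m2, s2 in zip(mu_a, sig_a, mu_b, sig_b)))
```

Unlike cosine similarity, this distance grows without bound as distributions separate and is sensitive to uncertainty (sigma), which is what makes the graduated ramp from cosine meaningful.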
107
136
 
108
137
  ---
109
138
 
110
- ## Quick Start
139
+ ## Benchmarks
111
140
 
112
- ### Install via npm (recommended — one command, everything included)
141
+ Evaluated on [LoCoMo](https://arxiv.org/abs/2402.09714) — 10 multi-session conversations, 1,986 total questions, 4 scored categories.
113
142
 
114
- ```bash
115
- npm install -g superlocalmemory
116
- ```
143
+ ### Mode A (Zero-Cloud, 10 Conversations, 1,276 Questions)
117
144
 
118
- This single command:
119
- - Installs the V3 engine and CLI
120
- - Auto-installs all Python dependencies (numpy, scipy, networkx, sentence-transformers, einops, torch, etc.)
121
- - Creates the data directory at `~/.superlocalmemory/`
122
- - Detects and guides V2 migration if applicable
145
+ | Category | Score | vs. Mem0 (64.2%) |
146
+ |:---------|:-----:|:-----------------:|
147
+ | Single-Hop | 72.0% | +3.0pp |
148
+ | Multi-Hop | 70.3% | +8.6pp |
149
+ | Temporal | 80.0% | +21.7pp |
150
+ | **Open-Domain** | **85.0%** | **+35.0pp** |
151
+ | **Aggregate** | **74.8%** | **+10.6pp** |
123
152
 
124
- Then configure and pre-download the embedding model:
125
- ```bash
126
- slm setup # Choose mode, configure provider
127
- slm warmup # Pre-download embedding model (~500MB, optional)
128
- ```
153
+ Mode A achieves **85.0% on open-domain questions — the highest of any system in the evaluation**, including cloud-powered ones.
129
154
 
130
- > **First time?** If you skip `slm warmup`, the model downloads automatically on first `slm remember` or `slm recall`. Either way works.
155
+ ### Math Layer Impact (6 Conversations, n=832)
131
156
 
132
- ### Install via pip
157
+ | Conversation | With Math | Without | Delta |
158
+ |:-------------|:---------:|:-------:|:-----:|
159
+ | Easiest | 78.5% | 71.2% | +7.3pp |
160
+ | Hardest | 64.2% | 44.3% | **+19.9pp** |
161
+ | **Average** | **71.7%** | **58.9%** | **+12.7pp** |
133
162
 
134
- ```bash
135
- pip install superlocalmemory
136
- # or with all features:
137
- pip install "superlocalmemory[full]"
138
- ```
163
+ Mathematical layers help most where heuristic methods struggle — the harder the conversation, the bigger the improvement.
139
164
 
140
- ### First Use
165
+ ### Ablation (What Each Component Contributes)
141
166
 
142
- ```bash
143
- # Store a memory
144
- slm remember "Alice works at Google as a Staff Engineer"
167
+ | Removed | Impact |
168
+ |:--------|:------:|
169
+ | Cross-encoder reranking | **-30.7pp** |
170
+ | Fisher-Rao metric | **-10.8pp** |
171
+ | All math layers | **-7.6pp** |
172
+ | BM25 channel | **-6.5pp** |
173
+ | Sheaf consistency | -1.7pp |
174
+ | Entity graph | -1.0pp |
145
175
 
146
- # Recall
147
- slm recall "What does Alice do?"
176
+ Full ablation details in the [Wiki](https://github.com/qualixar/superlocalmemory/wiki/Benchmarks).
148
177
 
149
- # Check status
150
- slm status
178
+ ---
151
179
 
152
- # Switch modes
153
- slm mode a # Zero-LLM (default)
154
- slm mode b # Local Ollama
155
- slm mode c # Full power
156
- ```
180
+ ## EU AI Act Compliance
157
181
 
158
- ### MCP Integration (Claude, Cursor, etc.)
182
+ The EU AI Act (Regulation 2024/1689) takes full effect **August 2, 2026**. Every AI memory system that sends personal data to cloud LLMs for core operations has a compliance question to answer.
159
183
 
160
- Add to your IDE's MCP config:
184
+ | Requirement | Mode A | Mode B | Mode C |
185
+ |:------------|:------:|:------:|:------:|
186
+ | Data sovereignty (Art. 10) | **Pass** | **Pass** | Requires DPA |
187
+ | Right to erasure (GDPR Art. 17) | **Pass** | **Pass** | **Pass** |
188
+ | Transparency (Art. 13) | **Pass** | **Pass** | **Pass** |
189
+ | No network calls during memory ops | **Yes** | **Yes** | No |
161
190
 
162
- ```json
163
- {
164
- "mcpServers": {
165
- "superlocalmemory": {
166
- "command": "slm",
167
- "args": ["mcp"]
168
- }
169
- }
170
- }
171
- ```
191
+ To the best of our knowledge, **no other agent memory system addresses EU AI Act compliance**. Modes A and B pass all checks by architectural design — no personal data leaves the device during any memory operation.
172
192
 
173
- 24 MCP tools available: `remember`, `recall`, `search`, `fetch`, `list_recent`, `get_status`, `build_graph`, `switch_profile`, `health`, `consistency_check`, `recall_trace`, and more.
193
+ Built-in compliance tools: GDPR Article 15/17 export + complete erasure, tamper-proof SHA-256 audit chain, data provenance tracking, ABAC policy enforcement.
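The audit-chain idea can be illustrated in a few lines: each entry's SHA-256 hash covers the previous entry's hash, so any retroactive edit invalidates every later entry. A minimal sketch — the field names and helpers here are hypothetical, not the package's actual schema:

```python
import hashlib
import json
import time

def _digest(event, prev, ts):
    payload = json.dumps({"event": event, "prev": prev, "ts": ts}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(chain, event):
    """Append a tamper-evident audit entry linked to the previous one."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    ts = time.time()
    chain.append({"event": event, "prev": prev, "ts": ts,
                  "hash": _digest(event, prev, ts)})

def verify(chain):
    """Recompute every hash and check the prev-links; False on any tampering."""
    prev = "0" * 64
    for e in chain:
        if e["prev"] != prev or e["hash"] != _digest(e["event"], e["prev"], e["ts"]):
            return False
        prev = e["hash"]
    return True
```

Editing any earlier entry changes its recomputed digest, so verification fails from that point forward — the property "tamper-proof" refers to.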
194
+
195
+ ---
174
196
 
175
- ### Web Dashboard (17 tabs)
197
+ ## Web Dashboard
176
198
 
177
199
  ```bash
178
200
  slm dashboard # Opens at http://localhost:8765
@@ -193,82 +215,58 @@ slm dashboard # Opens at http://localhost:8765
193
215
  </p>
194
216
  </details>
195
217
 
196
- The V3 dashboard provides real-time visibility into your memory system:
197
-
198
- - **Dashboard** — Mode switcher, health score, quick store/recall
199
- - **Recall Lab** — Search with per-channel score breakdown (Semantic, BM25, Entity, Temporal)
200
- - **Knowledge Graph** — Interactive entity relationship visualization
201
- - **Memories** — Browse, search, and manage stored memories
202
- - **Trust Dashboard** — Bayesian trust scores per agent with Beta distribution visualization
203
- - **Math Health** — Fisher-Rao confidence, Sheaf consistency, Langevin lifecycle state
204
- - **Compliance** — GDPR export/erasure, EU AI Act status, audit trail
205
- - **Learning** — Adaptive ranking progress, behavioral patterns, outcome tracking
206
- - **IDE Connections** — Connected AI tools status and configuration
207
- - **Settings** — Mode, provider, auto-capture/recall configuration
208
-
209
- > The dashboard runs locally at `http://localhost:8765`. No data leaves your machine.
218
+ 17 tabs: Dashboard, Recall Lab, Knowledge Graph, Memories, Trust Scores, Math Health, Compliance, Learning, IDE Connections, Settings, and more. Runs locally — no data leaves your machine.
210
219
 
211
220
  ---
212
221
 
213
- ## V3 Engine Features
222
+ ## Features
214
223
 
215
- ### Retrieval (4-Channel Hybrid)
216
- - Semantic similarity with Fisher-Rao information geometry
217
- - BM25 keyword matching (persisted tokens, survives restart)
218
- - Entity graph with spreading activation (3-hop, decay=0.7)
219
- - Temporal date-aware retrieval with interval support
220
- - RRF fusion (k=60) + cross-encoder reranking
224
+ ### Retrieval
225
+ - 4-channel hybrid: Semantic (Fisher-Rao) + BM25 + Entity Graph + Temporal
226
+ - RRF fusion + cross-encoder reranking
227
+ - Agentic sufficiency verification (auto-retry on weak results)
228
+ - Adaptive ranking with LightGBM (learns from usage)
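The entity-graph channel's spreading activation (3 hops with decay 0.7, per the earlier README) can be sketched as a decayed breadth-first propagation. Illustrative only — graph representation and function name are assumptions, not the package's internals:

```python
from collections import defaultdict

def spread_activation(graph, seeds, hops=3, decay=0.7):
    """Seed entities start at activation 1.0; each hop pushes decayed
    activation to neighbours, so closely linked entities score higher."""
    activation = defaultdict(float)
    frontier = {s: 1.0 for s in seeds}
    for node, a in frontier.items():
        activation[node] = a
    for _ in range(hops):
        nxt = defaultdict(float)
        for node, a in frontier.items():
            for nb in graph.get(node, ()):
                nxt[nb] += a * decay
        for node, a in nxt.items():
            activation[node] = max(activation[node], a)
        frontier = nxt
    return dict(activation)

# Toy adjacency map: a query mentioning Alice also activates related entities.
graph = {"Alice": ["Google"], "Google": ["Mountain View"], "Mountain View": []}
act = spread_activation(graph, ["Alice"])  # Alice 1.0, Google 0.7, Mountain View 0.49
```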
221
229
 
222
230
  ### Intelligence
223
- - 11-step ingestion pipeline (entity resolution, fact extraction, emotional tagging, scene building, sheaf consistency)
224
- - Adaptive learning with LightGBM-based ranking (3-phase bootstrap)
225
- - Behavioral pattern detection (query habits, entity preferences, active hours)
226
- - Outcome tracking for retrieval feedback loops
231
+ - 11-step ingestion pipeline (entity resolution, fact extraction, emotional tagging, scene building)
232
+ - Automatic contradiction detection via sheaf cohomology
233
+ - Self-organizing memory lifecycle (no hardcoded thresholds)
234
+ - Behavioral pattern detection and outcome tracking
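The self-organizing lifecycle rests on a discretized Langevin update: drift pulls frequently accessed memories toward the origin of the Poincare ball while noise lets neglected ones diffuse outward. A toy 2-D Euler-Maruyama sketch — the quadratic potential, step size, temperature, and clamping are illustrative assumptions, not the package's implementation:

```python
import math
import random

def langevin_step(pos, pull, eta=0.05, temp=0.1):
    """One discretized Langevin update: drift toward the origin with strength
    `pull` (higher for frequently accessed memories) plus Gaussian noise,
    then clamp the point back strictly inside the unit (Poincare) ball."""
    new = []
    for x in pos:
        drift = -pull * x                              # gradient of U(x) = pull * |x|^2 / 2
        noise = math.sqrt(2 * eta * temp) * random.gauss(0, 1)
        new.append(x + eta * drift + noise)
    norm = math.hypot(*new)
    if norm >= 1.0:                                    # stay inside the ball
        new = [x * 0.999 / norm for x in new]
    return new

random.seed(0)
pos = [0.5, 0.5]
for _ in range(200):
    pos = langevin_step(pos, pull=5.0)                 # strong pull: memory stays ACTIVE
```

With `pull` near zero the drift vanishes and the point random-walks toward the boundary — the "self-archive" behaviour, with no hardcoded threshold anywhere.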
227
235
 
228
- ### Trust & Compliance
229
- - Bayesian Beta distribution trust scoring (per-agent, per-fact)
236
+ ### Trust & Security
237
+ - Bayesian Beta-distribution trust scoring (per-agent, per-fact)
230
238
  - Trust gates (block low-trust agents from writing/deleting)
231
239
  - ABAC (Attribute-Based Access Control) with DB-persisted policies
232
- - GDPR Article 15/17 compliance (full export + complete erasure)
233
- - EU AI Act data sovereignty (Mode A: zero cloud, data stays local)
234
240
  - Tamper-proof hash-chain audit trail (SHA-256 linked entries)
235
- - Data provenance tracking (who created what, when, from where)
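The Beta-distribution trust score above is just the posterior mean of a Beta updated from observed agent outcomes: start at the uniform prior and shift with evidence. A minimal sketch — the gate threshold of 0.3 is an illustrative assumption, not the package's default:

```python
def beta_trust(successes, failures, prior_a=1.0, prior_b=1.0):
    """Posterior mean of Beta(prior_a + successes, prior_b + failures):
    0.5 under the uniform prior, moving toward 1.0 or 0.0 with evidence."""
    a, b = prior_a + successes, prior_b + failures
    return a / (a + b)

def can_write(successes, failures, threshold=0.3):
    """Illustrative trust gate: block writes from low-trust agents."""
    return beta_trust(successes, failures) >= threshold
```

Because the prior counts act as pseudo-observations, a brand-new agent is neither fully trusted nor fully blocked, and a long history is hard to overturn with a few poisoned interactions.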
236
241
 
237
242
  ### Infrastructure
238
- - 17-tab web dashboard (trust visualization, math health, recall lab)
239
- - 17+ IDE integrations with pre-built configs
240
- - Profile isolation (16+ independent memory spaces)
241
- - V2 to V3 migration tool (zero data loss, rollback support)
242
- - Auto-capture and auto-recall hooks for Claude Code
243
+ - 17-tab web dashboard with real-time visualization
244
+ - 17+ IDE integrations (Claude, Cursor, Windsurf, VS Code, JetBrains, Zed, etc.)
245
+ - 24 MCP tools + 6 MCP resources
246
+ - Profile isolation (independent memory spaces)
247
+ - 1400+ tests, MIT license, cross-platform (Mac/Linux/Windows)
248
+ - CPU-only — no GPU required
243
249
 
244
250
  ---
245
251
 
246
- ## Benchmarks
247
-
248
- Evaluated on the [LoCoMo benchmark](https://arxiv.org/abs/2402.09714) (Long Conversation Memory):
249
-
250
- ### Mode A Ablation (conv-30, 81 questions, zero-LLM)
251
-
252
- | Configuration | Micro Avg | Multi-Hop | Open Domain |
253
- |:-------------|:---------:|:---------:|:-----------:|
254
- | Full (all layers) | **62.3%** | **50%** | **78%** |
255
- | Math layers off | 59.3% | 38% | 70% |
256
- | Entity channel off | 56.8% | 38% | 73% |
257
- | BM25 channel off | 53.2% | 23% | 71% |
258
- | Cross-encoder off | 31.8% | 17% | — |
259
-
260
- ### Competitive Landscape
261
-
262
- | System | Score | LLM Required | Open Source | EU AI Act |
263
- |:-------|:-----:|:------------:|:-----------:|:---------:|
264
- | EverMemOS | 92.3% | Yes | No | No |
265
- | MemMachine | 91.7% | Yes | No | No |
266
- | Hindsight | 89.6% | Yes | No | No |
267
- | **SLM V3 Mode C** | **87.7%** | Optional | **Yes** | Partial |
268
- | **SLM V3 Mode A** | **62.3%** | **No** | **Yes** | **Yes** |
269
- | Mem0 ($24M) | 34.2% F1 | Yes | Partial | No |
270
-
271
- *SLM V3 is the only system offering a fully local mode with mathematical guarantees and EU AI Act compliance.*
252
+ ## CLI Reference
253
+
254
+ | Command | What It Does |
255
+ |:--------|:-------------|
256
+ | `slm remember "..."` | Store a memory |
257
+ | `slm recall "..."` | Search memories |
258
+ | `slm forget "..."` | Delete matching memories |
259
+ | `slm trace "..."` | Recall with per-channel score breakdown |
260
+ | `slm status` | System status |
261
+ | `slm health` | Math layer health (Fisher, Sheaf, Langevin) |
262
+ | `slm mode a/b/c` | Switch operating mode |
263
+ | `slm setup` | Interactive first-time wizard |
264
+ | `slm warmup` | Pre-download embedding model |
265
+ | `slm migrate` | V2 to V3 migration |
266
+ | `slm dashboard` | Launch web dashboard |
267
+ | `slm mcp` | Start MCP server (for IDE integration) |
268
+ | `slm connect` | Configure IDE integrations |
269
+ | `slm profile list/create/switch` | Profile management |
272
270
 
273
271
  ---
274
272
 
@@ -277,43 +275,47 @@ Evaluated on the [LoCoMo benchmark](https://arxiv.org/abs/2402.09714) (Long Conv
277
275
  ### V3: Information-Geometric Foundations
278
276
  > **SuperLocalMemory V3: Information-Geometric Foundations for Zero-LLM Enterprise Agent Memory**
279
277
  > Varun Pratap Bhardwaj (2026)
280
- > [Zenodo DOI: 10.5281/zenodo.19038659](https://zenodo.org/records/19038659)
278
+ > [arXiv:2603.14588](https://arxiv.org/abs/2603.14588) · [Zenodo DOI: 10.5281/zenodo.19038659](https://zenodo.org/records/19038659)
281
279
 
282
280
  ### V2: Architecture & Engineering
283
281
  > **SuperLocalMemory: A Structured Local Memory Architecture for Persistent AI Agent Context**
284
282
  > Varun Pratap Bhardwaj (2026)
285
- > [arXiv:2603.02240](https://arxiv.org/abs/2603.02240) | [Zenodo DOI: 10.5281/zenodo.18709670](https://zenodo.org/records/18709670)
283
+ > [arXiv:2603.02240](https://arxiv.org/abs/2603.02240) · [Zenodo DOI: 10.5281/zenodo.18709670](https://zenodo.org/records/18709670)
284
+
285
+ ### Cite This Work
286
+
287
+ ```bibtex
288
+ @article{bhardwaj2026slmv3,
289
+ title={Information-Geometric Foundations for Zero-LLM Enterprise Agent Memory},
290
+ author={Bhardwaj, Varun Pratap},
291
+ journal={arXiv preprint arXiv:2603.14588},
292
+ year={2026},
293
+ url={https://arxiv.org/abs/2603.14588}
294
+ }
295
+ ```
286
296
 
287
297
  ---
288
298
 
289
- ## Project Structure
299
+ ## Prerequisites
290
300
 
291
- ```
292
- superlocalmemory/
293
- ├── src/superlocalmemory/ # Python package (17 sub-packages)
294
- │ ├── core/ # Engine, config, modes, profiles
295
- │ ├── retrieval/ # 4-channel retrieval + fusion + reranking
296
- │ ├── math/ # Fisher-Rao, Sheaf, Langevin
297
- │ ├── encoding/ # 11-step ingestion pipeline
298
- │ ├── storage/ # SQLite with WAL, FTS5, migrations
299
- │ ├── trust/ # Bayesian scoring, gates, provenance
300
- │ ├── compliance/ # GDPR, EU AI Act, ABAC, audit chain
301
- │ ├── learning/ # Adaptive ranking, behavioral patterns
302
- │ ├── mcp/ # MCP server (24 tools, 6 resources)
303
- │ ├── cli/ # CLI with setup wizard
304
- │ └── server/ # Dashboard API + UI server
305
- ├── tests/ # 1400+ tests
306
- ├── ui/ # 17-tab web dashboard
307
- ├── ide/ # IDE configs for 17+ tools
308
- ├── docs/ # Documentation
309
- └── pyproject.toml # Modern Python packaging
310
- ```
301
+ | Requirement | Version | Why |
302
+ |:-----------|:--------|:----|
303
+ | **Node.js** | 14+ | npm package manager |
304
+ | **Python** | 3.11+ | V3 engine runtime |
305
+
306
+ All Python dependencies install automatically during `npm install`. If anything fails, the installer shows exact fix commands. BM25 keyword search works even without embeddings — you're never fully blocked.
307
+
308
+ | Component | Size | When |
309
+ |:----------|:-----|:-----|
310
+ | Core libraries (numpy, scipy, networkx) | ~50MB | During install |
311
+ | Search engine (sentence-transformers, torch) | ~200MB | During install |
312
+ | Embedding model (nomic-embed-text-v1.5, 768d) | ~500MB | First use or `slm warmup` |
311
313
 
312
314
  ---
313
315
 
314
316
  ## Contributing
315
317
 
316
- See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
318
+ See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines and the [Wiki](https://github.com/qualixar/superlocalmemory/wiki) for detailed documentation.
317
319
 
318
320
  ## License
319
321
 
@@ -321,7 +323,7 @@ MIT License. See [LICENSE](LICENSE).
321
323
 
322
324
  ## Attribution
323
325
 
324
- Part of [Qualixar](https://qualixar.com) | Author: [Varun Pratap Bhardwaj](https://varunpratap.com)
326
+ Part of [Qualixar](https://qualixar.com) · Author: [Varun Pratap Bhardwaj](https://varunpratap.com)
325
327
 
326
328
  ---
327
329
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "superlocalmemory",
3
- "version": "3.0.13",
3
+ "version": "3.0.15",
4
4
  "description": "Information-geometric agent memory with mathematical guarantees. 4-channel retrieval, Fisher-Rao similarity, zero-LLM mode, EU AI Act compliant. Works with Claude, Cursor, Windsurf, and 17+ AI tools.",
5
5
  "keywords": [
6
6
  "ai-memory",
package/pyproject.toml CHANGED
@@ -1,6 +1,6 @@
1
1
  [project]
2
2
  name = "superlocalmemory"
3
- version = "3.0.13"
3
+ version = "3.0.15"
4
4
  description = "Information-geometric agent memory with mathematical guarantees"
5
5
  readme = "README.md"
6
6
  license = {text = "MIT"}
@@ -151,6 +151,7 @@ def cmd_forget(args: Namespace) -> None:
151
151
 
152
152
  config = SLMConfig.load()
153
153
  engine = MemoryEngine(config)
154
+ engine.initialize()
154
155
  facts = engine._db.get_all_facts(engine.profile_id)
155
156
  query_lower = args.query.lower()
156
157
  matches = [f for f in facts if query_lower in f.content.lower()]
@@ -191,6 +192,7 @@ def cmd_health(_args: Namespace) -> None:
191
192
 
192
193
  config = SLMConfig.load()
193
194
  engine = MemoryEngine(config)
195
+ engine.initialize()
194
196
  facts = engine._db.get_all_facts(engine.profile_id)
195
197
  fisher_count = sum(1 for f in facts if f.fisher_mean is not None)
196
198
  langevin_count = sum(1 for f in facts if f.langevin_position is not None)