superlocalmemory 3.0.13 → 3.0.15

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: superlocalmemory
- Version: 3.0.13
+ Version: 3.0.15
  Summary: Information-geometric agent memory with mathematical guarantees
  Author-email: Varun Pratap Bhardwaj <admin@superlocalmemory.com>
  License: MIT
@@ -50,57 +50,104 @@ Dynamic: license-file
  </p>

  <h1 align="center">SuperLocalMemory V3</h1>
- <p align="center"><strong>Information-Geometric Agent Memory with Mathematical Guarantees</strong></p>
+ <p align="center"><strong>The first local-only AI memory to break 74% retrieval on LoCoMo.<br/>No cloud. No APIs. No data leaves your machine.</strong></p>

  <p align="center">
- The first agent memory system with mathematically grounded retrieval, lifecycle management, and consistency verification. Four-channel hybrid retrieval. Three operating modes. EU AI Act compliant.
+ <code>+10.6pp vs Mem0 (zero cloud)</code> &nbsp;·&nbsp; <code>85% Open-Domain (best of any system)</code> &nbsp;·&nbsp; <code>EU AI Act Ready</code>
  </p>

  <p align="center">
- <a href="https://superlocalmemory.com"><img src="https://img.shields.io/badge/Website-superlocalmemory.com-ff6b35?style=for-the-badge" alt="Website"/></a>
- <a href="https://arxiv.org/abs/2603.02240"><img src="https://img.shields.io/badge/arXiv-2603.02240-b31b1b?style=for-the-badge&logo=arxiv&logoColor=white" alt="arXiv Paper"/></a>
- <a href="https://zenodo.org/records/19038659"><img src="https://img.shields.io/badge/DOI-10.5281%2Fzenodo.19038659-blue?style=for-the-badge&logo=doi&logoColor=white" alt="V3 DOI"/></a>
+ <a href="https://arxiv.org/abs/2603.14588"><img src="https://img.shields.io/badge/arXiv-2603.14588-b31b1b?style=for-the-badge&logo=arxiv&logoColor=white" alt="arXiv Paper"/></a>
+ <a href="https://pypi.org/project/superlocalmemory/"><img src="https://img.shields.io/pypi/v/superlocalmemory?style=for-the-badge&logo=pypi&logoColor=white" alt="PyPI"/></a>
+ <a href="https://www.npmjs.com/package/superlocalmemory"><img src="https://img.shields.io/npm/v/superlocalmemory?style=for-the-badge&logo=npm&logoColor=white" alt="npm"/></a>
+ <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green?style=for-the-badge" alt="MIT License"/></a>
+ <a href="#eu-ai-act-compliance"><img src="https://img.shields.io/badge/EU_AI_Act-Compliant-brightgreen?style=for-the-badge" alt="EU AI Act"/></a>
+ <a href="https://superlocalmemory.com"><img src="https://img.shields.io/badge/Web-superlocalmemory.com-ff6b35?style=for-the-badge" alt="Website"/></a>
  </p>

- <p align="center">
- <a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.11+-3776AB?style=flat-square&logo=python&logoColor=white" alt="Python 3.11+"/></a>
- <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green?style=flat-square" alt="MIT License"/></a>
- <a href="#three-operating-modes"><img src="https://img.shields.io/badge/EU_AI_Act-Compliant-brightgreen?style=flat-square" alt="EU AI Act"/></a>
- <a href="#"><img src="https://img.shields.io/badge/tests-1400+-brightgreen?style=flat-square" alt="1400+ Tests"/></a>
- <a href="#"><img src="https://img.shields.io/badge/platform-Mac_|_Linux_|_Windows-blue?style=flat-square" alt="Cross Platform"/></a>
- <a href="https://github.com/qualixar/superlocalmemory/wiki"><img src="https://img.shields.io/badge/Wiki-Documentation-blue?style=flat-square" alt="Wiki"/></a>
- </p>
+ ---
+
+ ## Why SuperLocalMemory?
+
+ Every major AI memory system — Mem0, Zep, Letta, EverMemOS — sends your data to cloud LLMs for core operations. That means latency on every query, cost on every interaction, and after **August 2, 2026**, a compliance problem under the EU AI Act.
+
+ SuperLocalMemory V3 takes a different approach: **mathematics instead of cloud compute.** Three techniques from differential geometry, algebraic topology, and stochastic analysis replace the work that other systems need LLMs to do — similarity scoring, contradiction detection, and lifecycle management. The result is an agent memory that runs entirely on your machine, on CPU, with no API keys, and still outperforms funded alternatives.
+
+ **The numbers** (evaluated on [LoCoMo](https://arxiv.org/abs/2402.09714), the standard long-conversation memory benchmark):
+
+ | System | Score | Cloud Required | Open Source | Funding |
+ |:-------|:-----:|:--------------:|:-----------:|:-------:|
+ | EverMemOS | 92.3% | Yes | No | — |
+ | Hindsight | 89.6% | Yes | No | — |
+ | **SLM V3 Mode C** | **87.7%** | Optional | **Yes (MIT)** | $0 |
+ | Zep v3 | 85.2% | Yes | Deprecated | $35M |
+ | **SLM V3 Mode A** | **74.8%** | **No** | **Yes (MIT)** | $0 |
+ | Mem0 | 64.2% | Yes | Partial | $24M |
+
+ Mode A scores **74.8% with zero cloud dependency** — outperforming Mem0 by 10.6 percentage points without a single API call. On open-domain questions, Mode A scores **85.0% — the highest of any system in the evaluation**, including cloud-powered ones. Mode C reaches **87.7%**, matching enterprise cloud systems.
+
+ Mathematical layers contribute **+12.7 percentage points** on average across 6 conversations (n=832 questions), with up to **+19.9pp on the most challenging dialogues**. This isn't more compute — it's better math.
+
+ > **Upgrading from V2 (2.8.6)?** V3 is a complete architectural reinvention — new mathematical engine, new retrieval pipeline, new storage schema. Your existing data is preserved but requires migration. After installing V3, run `slm migrate` to upgrade your data. Read the [Migration Guide](https://github.com/qualixar/superlocalmemory/wiki/Migration-from-V2) before upgrading. A backup is created automatically.

  ---

- ## What is SuperLocalMemory?
+ ## Quick Start

- SuperLocalMemory gives AI assistants persistent, structured memory that survives across sessions. Unlike simple vector stores, V3 uses **information geometry** to provide mathematically grounded retrieval, automatic contradiction detection, and self-organizing memory lifecycle management.
+ ### Install via npm (recommended)

- **Works with:** Claude, Cursor, Windsurf, VS Code Copilot, Continue, Cody, ChatGPT Desktop, Gemini CLI, JetBrains, Zed, and 17+ AI tools via MCP.
+ ```bash
+ npm install -g superlocalmemory
+ slm setup   # Choose mode (A/B/C)
+ slm warmup  # Pre-download embedding model (~500MB, optional)
+ ```
+
+ ### Install via pip
+
+ ```bash
+ pip install superlocalmemory
+ ```

- > **Upgrading from V2 (2.8.6)?** V3 is a complete architectural reinvention — new mathematical engine, new retrieval pipeline, new storage schema. Your existing data is preserved but requires migration. After installing V3, run `slm migrate` to upgrade your data. Read the [Migration Guide](docs/migration-from-v2.md) before upgrading. Backup is created automatically.
+ ### First Use

- ### Key Results
+ ```bash
+ slm remember "Alice works at Google as a Staff Engineer"
+ slm recall "What does Alice do?"
+ slm status
+ ```

- | Metric | Score | Context |
- |:-------|:-----:|:--------|
- | LoCoMo (Mode C, full power) | **87.7%** | On conv-30, 81 scored questions |
- | LoCoMo (Mode A, zero-LLM) | **62.3%** | Highest zero-LLM score. No cloud dependency. |
- | Math layer improvement | **+12.7pp** | Average gain from mathematical foundations |
- | Multi-hop improvement | **+12pp** | 50% vs 38% (math on vs off) |
+ ### MCP Integration (Claude, Cursor, Windsurf, VS Code, etc.)
+
+ ```json
+ {
+   "mcpServers": {
+     "superlocalmemory": {
+       "command": "slm",
+       "args": ["mcp"]
+     }
+   }
+ }
+ ```
+
+ 24 MCP tools available. Works with Claude Code, Cursor, Windsurf, VS Code Copilot, Continue, Cody, ChatGPT Desktop, Gemini CLI, JetBrains, Zed, and 17+ AI tools.

  ---

  ## Three Operating Modes

- | Mode | What | LLM Required? | EU AI Act | Best For |
- |:----:|:-----|:-------------:|:---------:|:---------|
- | **A** | Local Guardian | No | Compliant | Privacy-first, air-gapped, enterprise |
- | **B** | Smart Local | Local only (Ollama) | Compliant | Enhanced quality, data stays local |
- | **C** | Full Power | Cloud (optional) | Partial | Maximum accuracy, research |
+ | Mode | What | Cloud? | EU AI Act | Best For |
+ |:----:|:-----|:------:|:---------:|:---------|
+ | **A** | Local Guardian | **None** | **Compliant** | Privacy-first, air-gapped, enterprise |
+ | **B** | Smart Local | Local only (Ollama) | Compliant | Better answers, data stays local |
+ | **C** | Full Power | Cloud LLM | Partial | Maximum accuracy, research |
+
+ ```bash
+ slm mode a  # Zero-cloud (default)
+ slm mode b  # Local Ollama
+ slm mode c  # Cloud LLM
+ ```

- **Mode A** is the only agent memory that operates with **zero cloud dependency** while achieving competitive retrieval accuracy. All data stays on your device. No API keys required.
+ **Mode A** is the only agent memory that operates with **zero cloud dependency** while achieving competitive retrieval accuracy on a standard benchmark. All data stays on your device. No API keys. No GPU. Runs on 2 vCPUs + 4GB RAM.

  ---

@@ -108,8 +155,8 @@ SuperLocalMemory gives AI assistants persistent, structured memory that survives

  ```
  Query ──► Strategy Classifier ──► 4 Parallel Channels:
- ├── Semantic (Fisher-Rao graduated similarity)
- ├── BM25 (keyword matching, k1=1.2, b=0.75)
+ ├── Semantic (Fisher-Rao geodesic distance)
+ ├── BM25 (keyword matching)
  ├── Entity Graph (spreading activation, 3 hops)
  └── Temporal (date-aware retrieval)

@@ -122,104 +169,79 @@ Query ──► Strategy Classifier ──► 4 Parallel Channels:
  ◄── Top-K Results with channel scores
  ```
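+
+ As a concrete illustration of the fusion step in the diagram, here is a minimal sketch of reciprocal-rank fusion, assuming k=60 as noted in the V3 feature list; the function and channel names are hypothetical, not the package's internals:
+
+ ```python
+ # Hypothetical RRF sketch: illustrates the fusion step, not SLM's code.
+ from collections import defaultdict
+
+ def rrf_fuse(channel_rankings: dict[str, list[str]], k: int = 60) -> list[str]:
+     """Fuse best-first per-channel rankings with Reciprocal Rank Fusion.
+
+     Each channel contributes 1 / (k + rank) per memory ID; summing favors
+     items that rank well in several channels over a single-channel spike.
+     """
+     scores: dict[str, float] = defaultdict(float)
+     for ranking in channel_rankings.values():
+         for rank, mem_id in enumerate(ranking, start=1):
+             scores[mem_id] += 1.0 / (k + rank)
+     return sorted(scores, key=scores.get, reverse=True)
+
+ fused = rrf_fuse({
+     "semantic": ["m3", "m1", "m7"],
+     "bm25":     ["m1", "m3", "m9"],
+     "entity":   ["m7", "m1"],
+     "temporal": ["m9", "m3"],
+ })  # m1 and m3 lead: multiple channels rank them highly
+ ```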

- ### Mathematical Foundations (Novel Contributions)
-
- 1. **Fisher-Rao Retrieval Metric** — Similarity scoring derived from the Fisher information structure of diagonal Gaussian families. Graduated ramp from cosine to Fisher-information-weighted scoring over the first 10 accesses per memory.
-
- 2. **Sheaf Cohomology for Consistency** — Algebraic topology detects contradictions between facts by computing coboundary norms on the knowledge graph. Non-trivial restriction maps amplify disagreements along discriminative subspaces.
-
- 3. **Riemannian Langevin Lifecycle** — Memory positions evolve on the Poincare ball via a discretized Langevin SDE. Frequently accessed memories stay near the origin (ACTIVE); neglected memories diffuse toward the boundary (ARCHIVED). The potential is modulated by access frequency, age, and importance.
-
- ---
-
- ## Prerequisites
+ ### Mathematical Foundations

- | Requirement | Version | Why |
- |:-----------|:--------|:----|
- | **Node.js** | 14+ | npm package manager |
- | **Python** | 3.11+ | V3 engine runtime |
- | **pip** | Latest | Python dependency installer |
+ Three novel contributions replace cloud LLM dependency with mathematical guarantees:

- > All Python dependencies are installed automatically during `npm install`. You don't need to run pip manually. If any dependency fails, the installer shows clear instructions.
+ 1. **Fisher-Rao Retrieval Metric** — Similarity scoring derived from the Fisher information structure of diagonal Gaussian families. Graduated ramp from cosine to geodesic distance over the first 10 accesses. The first application of information geometry to agent memory retrieval (closed-form sketch after this list).

- ### What Gets Installed Automatically
+ 2. **Sheaf Cohomology for Consistency** — Algebraic topology detects contradictions by computing coboundary norms on the knowledge graph. The first algebraic guarantee for contradiction detection in agent memory.

- | Component | Size | When |
- |:----------|:-----|:-----|
- | Core math libraries (numpy, scipy, networkx) | ~50MB | During `npm install` |
- | Search engine (sentence-transformers, einops, torch) | ~200MB | During `npm install` |
- | Embedding model (nomic-ai/nomic-embed-text-v1.5) | ~500MB | On first use OR `slm warmup` |
+ 3. **Riemannian Langevin Lifecycle** — Memory positions evolve on the Poincaré ball via a discretized Langevin SDE. Frequently accessed memories stay active; neglected memories self-archive. No hardcoded thresholds.

- **If any dependency fails during install**, the installer prints the exact `pip install` command to fix it. BM25 keyword search works even without embeddings — you're never fully blocked.
+ These three layers collectively yield **+12.7pp average improvement** over the engineering-only baseline, with the Fisher metric alone contributing **+10.8pp** on the hardest conversations.
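+
+ A minimal sketch of the closed-form distance behind item 1, using the textbook result that each (mean, std) coordinate of a diagonal Gaussian lives on a scaled hyperbolic half-plane; the package's graduated cosine-to-geodesic ramp is not reproduced here:
+
+ ```python
+ # Standard Fisher-Rao distance for diagonal Gaussians; an illustration,
+ # not SLM's implementation.
+ import numpy as np
+
+ def fisher_rao_distance(mu1, sig1, mu2, sig2) -> float:
+     """Geodesic distance between N(mu1, diag(sig1^2)) and N(mu2, diag(sig2^2)).
+
+     Per coordinate: d = sqrt(2) * arccosh(1 + (dmu^2/2 + dsig^2) / (2*s1*s2)),
+     the hyperbolic half-plane distance scaled by sqrt(2). Coordinates combine
+     as a product manifold (root sum of squares).
+     """
+     mu1, sig1, mu2, sig2 = map(np.asarray, (mu1, sig1, mu2, sig2))
+     arg = 1.0 + ((mu1 - mu2) ** 2 / 2.0 + (sig1 - sig2) ** 2) / (2.0 * sig1 * sig2)
+     per_dim = np.sqrt(2.0) * np.arccosh(arg)
+     return float(np.sqrt(np.sum(per_dim ** 2)))
+
+ # Unlike cosine similarity, uncertainty matters: same mean but different
+ # spread is still "far" (~3.26 here).
+ print(fisher_rao_distance([0.0], [0.1], [0.0], [1.0]))
+ ```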

  ---

- ## Quick Start
+ ## Benchmarks

- ### Install via npm (recommended — one command, everything included)
+ Evaluated on [LoCoMo](https://arxiv.org/abs/2402.09714) — 10 multi-session conversations, 1,986 total questions, 4 scored categories.

- ```bash
- npm install -g superlocalmemory
- ```
+ ### Mode A (Zero-Cloud, 10 Conversations, 1,276 Questions)

- This single command:
- - Installs the V3 engine and CLI
- - Auto-installs all Python dependencies (numpy, scipy, networkx, sentence-transformers, einops, torch, etc.)
- - Creates the data directory at `~/.superlocalmemory/`
- - Detects and guides V2 migration if applicable
+ | Category | Score | vs. Mem0 (same category) |
+ |:---------|:-----:|:-----------------:|
+ | Single-Hop | 72.0% | +3.0pp |
+ | Multi-Hop | 70.3% | +8.6pp |
+ | Temporal | 80.0% | +21.7pp |
+ | **Open-Domain** | **85.0%** | **+35.0pp** |
+ | **Aggregate** | **74.8%** | **+10.6pp** |

- Then configure and pre-download the embedding model:
- ```bash
- slm setup # Choose mode, configure provider
- slm warmup # Pre-download embedding model (~500MB, optional)
- ```
+ Mode A achieves **85.0% on open-domain questions — the highest of any system in the evaluation**, including cloud-powered ones.

- > **First time?** If you skip `slm warmup`, the model downloads automatically on first `slm remember` or `slm recall`. Either way works.
+ ### Math Layer Impact (6 Conversations, n=832)

- ### Install via pip
+ | Conversation | With Math | Without | Delta |
+ |:-------------|:---------:|:-------:|:-----:|
+ | Easiest | 78.5% | 71.2% | +7.3pp |
+ | Hardest | 64.2% | 44.3% | **+19.9pp** |
+ | **Average** | **71.7%** | **58.9%** | **+12.7pp** |

- ```bash
- pip install superlocalmemory
- # or with all features:
- pip install "superlocalmemory[full]"
- ```
+ Mathematical layers help most where heuristic methods struggle — the harder the conversation, the bigger the improvement.

- ### First Use
+ ### Ablation (What Each Component Contributes)

- ```bash
- # Store a memory
- slm remember "Alice works at Google as a Staff Engineer"
+ | Removed | Impact |
+ |:--------|:------:|
+ | Cross-encoder reranking | **-30.7pp** |
+ | Fisher-Rao metric | **-10.8pp** |
+ | All math layers | **-7.6pp** |
+ | BM25 channel | **-6.5pp** |
+ | Sheaf consistency | -1.7pp |
+ | Entity graph | -1.0pp |

- # Recall
- slm recall "What does Alice do?"
+ Full ablation details in the [Wiki](https://github.com/qualixar/superlocalmemory/wiki/Benchmarks).
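+
+ Since the cross-encoder is the largest single factor above, here is a hedged sketch of that reranking stage using the sentence-transformers dependency the package already installs; the model name is an assumption, not SLM's configured one:
+
+ ```python
+ # Illustrative reranking stage; the model choice is hypothetical.
+ from sentence_transformers import CrossEncoder
+
+ def rerank(query: str, candidates: list[str], top_k: int = 5) -> list[str]:
+     # A cross-encoder reads each (query, passage) pair jointly, which is far
+     # more precise than bi-encoder similarity, so it runs only on the short
+     # fused candidate list, never the whole memory store.
+     model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
+     scores = model.predict([(query, c) for c in candidates])
+     ranked = sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)
+     return [c for c, _ in ranked[:top_k]]
+ ```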

- # Check status
- slm status
+ ---

- # Switch modes
- slm mode a # Zero-LLM (default)
- slm mode b # Local Ollama
- slm mode c # Full power
- ```
+ ## EU AI Act Compliance

- ### MCP Integration (Claude, Cursor, etc.)
+ The EU AI Act (Regulation 2024/1689) takes full effect **August 2, 2026**. Every AI memory system that sends personal data to cloud LLMs for core operations has a compliance question to answer.

- Add to your IDE's MCP config:
+ | Requirement | Mode A | Mode B | Mode C |
+ |:------------|:------:|:------:|:------:|
+ | Data sovereignty (Art. 10) | **Pass** | **Pass** | Requires DPA |
+ | Right to erasure (GDPR Art. 17) | **Pass** | **Pass** | **Pass** |
+ | Transparency (Art. 13) | **Pass** | **Pass** | **Pass** |
+ | No network calls during memory ops | **Yes** | **Yes** | No |

- ```json
- {
- "mcpServers": {
- "superlocalmemory": {
- "command": "slm",
- "args": ["mcp"]
- }
- }
- }
- ```
+ To the best of our knowledge, **no existing agent memory system addresses EU AI Act compliance**. Modes A and B pass all checks by architectural design — no personal data leaves the device during any memory operation.

- 24 MCP tools available: `remember`, `recall`, `search`, `fetch`, `list_recent`, `get_status`, `build_graph`, `switch_profile`, `health`, `consistency_check`, `recall_trace`, and more.
+ Built-in compliance tools: GDPR Article 15/17 export + complete erasure, tamper-proof SHA-256 audit chain, data provenance tracking, ABAC policy enforcement.
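+
+ The audit chain works like a miniature blockchain: each entry hashes its payload together with the previous entry's hash, so any edit to history is detectable. A self-contained sketch, with a hypothetical field layout rather than SLM's actual schema:
+
+ ```python
+ # Toy SHA-256 hash-chain audit log; field names are illustrative.
+ import hashlib, json, time
+
+ GENESIS = "0" * 64
+
+ def append_entry(chain: list[dict], action: str, agent: str) -> dict:
+     entry = {"ts": time.time(), "action": action, "agent": agent,
+              "prev": chain[-1]["hash"] if chain else GENESIS}
+     # Linking via prev-hash means altering any past entry changes its hash
+     # and breaks every later link in the chain.
+     body = json.dumps(entry, sort_keys=True).encode()
+     entry["hash"] = hashlib.sha256(body).hexdigest()
+     chain.append(entry)
+     return entry
+
+ def verify(chain: list[dict]) -> bool:
+     prev = GENESIS
+     for e in chain:
+         body = {k: v for k, v in e.items() if k != "hash"}
+         digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
+         if e["prev"] != prev or e["hash"] != digest:
+             return False
+         prev = e["hash"]
+     return True
+
+ log: list[dict] = []
+ append_entry(log, "remember", "claude")
+ append_entry(log, "recall", "cursor")
+ assert verify(log)
+ log[0]["action"] = "forget"   # tamper with history...
+ assert not verify(log)        # ...and the chain breaks
+ ```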
+
+ ---

- ### Web Dashboard (17 tabs)
+ ## Web Dashboard

  ```bash
  slm dashboard # Opens at http://localhost:8765
@@ -240,82 +262,58 @@ slm dashboard # Opens at http://localhost:8765
  </p>
  </details>

- The V3 dashboard provides real-time visibility into your memory system:
-
- - **Dashboard** — Mode switcher, health score, quick store/recall
- - **Recall Lab** — Search with per-channel score breakdown (Semantic, BM25, Entity, Temporal)
- - **Knowledge Graph** — Interactive entity relationship visualization
- - **Memories** — Browse, search, and manage stored memories
- - **Trust Dashboard** — Bayesian trust scores per agent with Beta distribution visualization
- - **Math Health** — Fisher-Rao confidence, Sheaf consistency, Langevin lifecycle state
- - **Compliance** — GDPR export/erasure, EU AI Act status, audit trail
- - **Learning** — Adaptive ranking progress, behavioral patterns, outcome tracking
- - **IDE Connections** — Connected AI tools status and configuration
- - **Settings** — Mode, provider, auto-capture/recall configuration
-
- > The dashboard runs locally at `http://localhost:8765`. No data leaves your machine.
+ 17 tabs: Dashboard, Recall Lab, Knowledge Graph, Memories, Trust Scores, Math Health, Compliance, Learning, IDE Connections, Settings, and more. Runs locally — no data leaves your machine.

  ---

- ## V3 Engine Features
+ ## Features

- ### Retrieval (4-Channel Hybrid)
- - Semantic similarity with Fisher-Rao information geometry
- - BM25 keyword matching (persisted tokens, survives restart)
- - Entity graph with spreading activation (3-hop, decay=0.7)
- - Temporal date-aware retrieval with interval support
- - RRF fusion (k=60) + cross-encoder reranking
+ ### Retrieval
+ - 4-channel hybrid: Semantic (Fisher-Rao) + BM25 + Entity Graph + Temporal (entity-graph sketch after this list)
+ - RRF fusion + cross-encoder reranking
+ - Agentic sufficiency verification (auto-retry on weak results)
+ - Adaptive ranking with LightGBM (learns from usage)
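+
+ A sketch of the entity-graph channel mentioned in the first bullet, using the networkx dependency; 3 hops and decay 0.7 come from the V3 feature notes, while the max-propagation BFS itself is illustrative:
+
+ ```python
+ # Spreading activation over an entity graph; illustrative, not SLM's code.
+ import networkx as nx
+
+ def spread_activation(g: nx.Graph, seeds: list[str],
+                       hops: int = 3, decay: float = 0.7) -> dict[str, float]:
+     # Seed entities start at 1.0; each hop passes a decayed share to
+     # neighbors, so relevance fades smoothly with graph distance.
+     activation = {s: 1.0 for s in seeds}
+     frontier = dict(activation)
+     for _ in range(hops):
+         nxt: dict[str, float] = {}
+         for node, act in frontier.items():
+             for nb in g.neighbors(node):
+                 nxt[nb] = max(nxt.get(nb, 0.0), act * decay)
+         for node, act in nxt.items():
+             activation[node] = max(activation.get(node, 0.0), act)
+         frontier = nxt
+     return activation
+
+ g = nx.Graph([("Alice", "Google"), ("Google", "DeepMind"), ("Alice", "Bob")])
+ print(spread_activation(g, ["Alice"]))
+ # {'Alice': 1.0, 'Google': 0.7, 'Bob': 0.7, 'DeepMind': 0.49}
+ ```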

  ### Intelligence
- - 11-step ingestion pipeline (entity resolution, fact extraction, emotional tagging, scene building, sheaf consistency)
- - Adaptive learning with LightGBM-based ranking (3-phase bootstrap)
- - Behavioral pattern detection (query habits, entity preferences, active hours)
- - Outcome tracking for retrieval feedback loops
+ - 11-step ingestion pipeline (entity resolution, fact extraction, emotional tagging, scene building)
+ - Automatic contradiction detection via sheaf cohomology (toy sketch after this list)
+ - Self-organizing memory lifecycle (no hardcoded thresholds)
+ - Behavioral pattern detection and outcome tracking
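+
+ To make the sheaf bullet concrete: facts attached to graph nodes are compared along edges through restriction maps, and a nonzero coboundary norm flags disagreement. A toy sketch with hand-picked identity maps; SLM derives its own restriction maps from the knowledge graph:
+
+ ```python
+ # Toy sheaf coboundary check; maps and vectors are illustrative.
+ import numpy as np
+
+ def coboundary_norms(facts, edges, maps):
+     """Per-edge disagreement ||rho_v @ x_v - rho_u @ x_u||.
+
+     Zero on every edge means the local facts glue into a consistent
+     global section; a large norm on an edge flags a contradiction.
+     """
+     return {(u, v): float(np.linalg.norm(maps[(u, v)][1] @ facts[v]
+                                          - maps[(u, v)][0] @ facts[u]))
+             for (u, v) in edges}
+
+ I = np.eye(2)
+ facts = {"a": np.array([1.0, 0.0]),   # "Alice works at Google"
+          "b": np.array([1.0, 0.0]),   # agrees
+          "c": np.array([-1.0, 0.0])}  # contradicts
+ edges = [("a", "b"), ("a", "c")]
+ maps = {e: (I, I) for e in edges}     # identity restriction maps
+ print(coboundary_norms(facts, edges, maps))  # ('a','b'): 0.0, ('a','c'): 2.0
+ ```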

- ### Trust & Compliance
- - Bayesian Beta distribution trust scoring (per-agent, per-fact)
+ ### Trust & Security
+ - Bayesian Beta-distribution trust scoring (per-agent, per-fact)
  - Trust gates (block low-trust agents from writing/deleting)
  - ABAC (Attribute-Based Access Control) with DB-persisted policies
- - GDPR Article 15/17 compliance (full export + complete erasure)
- - EU AI Act data sovereignty (Mode A: zero cloud, data stays local)
  - Tamper-proof hash-chain audit trail (SHA-256 linked entries)
- - Data provenance tracking (who created what, when, from where)

  ### Infrastructure
- - 17-tab web dashboard (trust visualization, math health, recall lab)
- - 17+ IDE integrations with pre-built configs
- - Profile isolation (16+ independent memory spaces)
- - V2 to V3 migration tool (zero data loss, rollback support)
- - Auto-capture and auto-recall hooks for Claude Code
+ - 17-tab web dashboard with real-time visualization
+ - 17+ IDE integrations (Claude, Cursor, Windsurf, VS Code, JetBrains, Zed, etc.)
+ - 24 MCP tools + 6 MCP resources
+ - Profile isolation (independent memory spaces)
+ - 1400+ tests, MIT license, cross-platform (Mac/Linux/Windows)
+ - CPU-only — no GPU required

  ---

- ## Benchmarks
-
- Evaluated on the [LoCoMo benchmark](https://arxiv.org/abs/2402.09714) (Long Conversation Memory):
-
- ### Mode A Ablation (conv-30, 81 questions, zero-LLM)
-
- | Configuration | Micro Avg | Multi-Hop | Open Domain |
- |:-------------|:---------:|:---------:|:-----------:|
- | Full (all layers) | **62.3%** | **50%** | **78%** |
- | Math layers off | 59.3% | 38% | 70% |
- | Entity channel off | 56.8% | 38% | 73% |
- | BM25 channel off | 53.2% | 23% | 71% |
- | Cross-encoder off | 31.8% | 17% | — |
-
- ### Competitive Landscape
-
- | System | Score | LLM Required | Open Source | EU AI Act |
- |:-------|:-----:|:------------:|:-----------:|:---------:|
- | EverMemOS | 92.3% | Yes | No | No |
- | MemMachine | 91.7% | Yes | No | No |
- | Hindsight | 89.6% | Yes | No | No |
- | **SLM V3 Mode C** | **87.7%** | Optional | **Yes** | Partial |
- | **SLM V3 Mode A** | **62.3%** | **No** | **Yes** | **Yes** |
- | Mem0 ($24M) | 34.2% F1 | Yes | Partial | No |
-
- *SLM V3 is the only system offering a fully local mode with mathematical guarantees and EU AI Act compliance.*
+ ## CLI Reference
+
+ | Command | What It Does |
+ |:--------|:-------------|
+ | `slm remember "..."` | Store a memory |
+ | `slm recall "..."` | Search memories |
+ | `slm forget "..."` | Delete matching memories |
+ | `slm trace "..."` | Recall with per-channel score breakdown |
+ | `slm status` | System status |
+ | `slm health` | Math layer health (Fisher, Sheaf, Langevin) |
+ | `slm mode a/b/c` | Switch operating mode |
+ | `slm setup` | Interactive first-time wizard |
+ | `slm warmup` | Pre-download embedding model |
+ | `slm migrate` | V2 to V3 migration |
+ | `slm dashboard` | Launch web dashboard |
+ | `slm mcp` | Start MCP server (for IDE integration) |
+ | `slm connect` | Configure IDE integrations |
+ | `slm profile list/create/switch` | Profile management |

  ---

@@ -324,43 +322,47 @@ Evaluated on the [LoCoMo benchmark](https://arxiv.org/abs/2402.09714) (Long Conv
  ### V3: Information-Geometric Foundations
  > **SuperLocalMemory V3: Information-Geometric Foundations for Zero-LLM Enterprise Agent Memory**
  > Varun Pratap Bhardwaj (2026)
- > [Zenodo DOI: 10.5281/zenodo.19038659](https://zenodo.org/records/19038659)
+ > [arXiv:2603.14588](https://arxiv.org/abs/2603.14588) · [Zenodo DOI: 10.5281/zenodo.19038659](https://zenodo.org/records/19038659)

  ### V2: Architecture & Engineering
  > **SuperLocalMemory: A Structured Local Memory Architecture for Persistent AI Agent Context**
  > Varun Pratap Bhardwaj (2026)
- > [arXiv:2603.02240](https://arxiv.org/abs/2603.02240) | [Zenodo DOI: 10.5281/zenodo.18709670](https://zenodo.org/records/18709670)
+ > [arXiv:2603.02240](https://arxiv.org/abs/2603.02240) · [Zenodo DOI: 10.5281/zenodo.18709670](https://zenodo.org/records/18709670)
+
+ ### Cite This Work
+
+ ```bibtex
+ @article{bhardwaj2026slmv3,
+   title={Information-Geometric Foundations for Zero-LLM Enterprise Agent Memory},
+   author={Bhardwaj, Varun Pratap},
+   journal={arXiv preprint arXiv:2603.14588},
+   year={2026},
+   url={https://arxiv.org/abs/2603.14588}
+ }
+ ```

  ---

- ## Project Structure
+ ## Prerequisites

- ```
- superlocalmemory/
- ├── src/superlocalmemory/ # Python package (17 sub-packages)
- │ ├── core/ # Engine, config, modes, profiles
- │ ├── retrieval/ # 4-channel retrieval + fusion + reranking
- │ ├── math/ # Fisher-Rao, Sheaf, Langevin
- │ ├── encoding/ # 11-step ingestion pipeline
- │ ├── storage/ # SQLite with WAL, FTS5, migrations
- │ ├── trust/ # Bayesian scoring, gates, provenance
- │ ├── compliance/ # GDPR, EU AI Act, ABAC, audit chain
- │ ├── learning/ # Adaptive ranking, behavioral patterns
- │ ├── mcp/ # MCP server (24 tools, 6 resources)
- │ ├── cli/ # CLI with setup wizard
- │ └── server/ # Dashboard API + UI server
- ├── tests/ # 1400+ tests
- ├── ui/ # 17-tab web dashboard
- ├── ide/ # IDE configs for 17+ tools
- ├── docs/ # Documentation
- └── pyproject.toml # Modern Python packaging
- ```
+ | Requirement | Version | Why |
+ |:-----------|:--------|:----|
+ | **Node.js** | 14+ | npm package manager |
+ | **Python** | 3.11+ | V3 engine runtime |
+
+ All Python dependencies install automatically during `npm install`. If anything fails, the installer shows exact fix commands. BM25 keyword search works even without embeddings — you're never fully blocked.
+
+ | Component | Size | When |
+ |:----------|:-----|:-----|
+ | Core libraries (numpy, scipy, networkx) | ~50MB | During install |
+ | Search engine (sentence-transformers, torch) | ~200MB | During install |
+ | Embedding model (nomic-embed-text-v1.5, 768d) | ~500MB | First use or `slm warmup` |

  ---

  ## Contributing

- See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
+ See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines and the [Wiki](https://github.com/qualixar/superlocalmemory/wiki) for detailed documentation.

  ## License

@@ -368,7 +370,7 @@ MIT License. See [LICENSE](LICENSE).

  ## Attribution

- Part of [Qualixar](https://qualixar.com) | Author: [Varun Pratap Bhardwaj](https://varunpratap.com)
+ Part of [Qualixar](https://qualixar.com) · Author: [Varun Pratap Bhardwaj](https://varunpratap.com)

  ---