superlocalmemory 3.3.29 → 3.4.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/ATTRIBUTION.md +1 -1
- package/CHANGELOG.md +3 -0
- package/LICENSE +633 -70
- package/README.md +14 -11
- package/docs/screenshots/01-dashboard-main.png +0 -0
- package/docs/screenshots/02-knowledge-graph.png +0 -0
- package/docs/screenshots/03-patterns-learning.png +0 -0
- package/docs/screenshots/04-learning-dashboard.png +0 -0
- package/docs/screenshots/05-behavioral-analysis.png +0 -0
- package/docs/screenshots/06-graph-communities.png +0 -0
- package/docs/v2-archive/ACCESSIBILITY.md +1 -1
- package/docs/v2-archive/FRAMEWORK-INTEGRATIONS.md +1 -1
- package/docs/v2-archive/MCP-MANUAL-SETUP.md +1 -1
- package/docs/v2-archive/SEARCH-ENGINE-V2.2.0.md +2 -2
- package/docs/v2-archive/SEARCH-INTEGRATION-GUIDE.md +1 -1
- package/docs/v2-archive/UNIVERSAL-INTEGRATION.md +1 -1
- package/docs/v2-archive/V2.2.0-OPTIONAL-SEARCH.md +1 -1
- package/docs/v2-archive/example_graph_usage.py +1 -1
- package/ide/configs/codex-mcp.toml +1 -1
- package/ide/integrations/langchain/README.md +1 -1
- package/ide/integrations/langchain/langchain_superlocalmemory/__init__.py +1 -1
- package/ide/integrations/langchain/langchain_superlocalmemory/chat_message_history.py +1 -1
- package/ide/integrations/langchain/pyproject.toml +2 -2
- package/ide/integrations/langchain/tests/__init__.py +1 -1
- package/ide/integrations/langchain/tests/test_chat_message_history.py +1 -1
- package/ide/integrations/langchain/tests/test_security.py +1 -1
- package/ide/integrations/llamaindex/llama_index/storage/chat_store/superlocalmemory/__init__.py +1 -1
- package/ide/integrations/llamaindex/llama_index/storage/chat_store/superlocalmemory/base.py +1 -1
- package/ide/integrations/llamaindex/pyproject.toml +2 -2
- package/ide/integrations/llamaindex/tests/__init__.py +1 -1
- package/ide/integrations/llamaindex/tests/test_chat_store.py +1 -1
- package/ide/integrations/llamaindex/tests/test_security.py +1 -1
- package/ide/skills/slm-build-graph/SKILL.md +3 -3
- package/ide/skills/slm-list-recent/SKILL.md +3 -3
- package/ide/skills/slm-recall/SKILL.md +3 -3
- package/ide/skills/slm-remember/SKILL.md +3 -3
- package/ide/skills/slm-show-patterns/SKILL.md +3 -3
- package/ide/skills/slm-status/SKILL.md +3 -3
- package/ide/skills/slm-switch-profile/SKILL.md +3 -3
- package/package.json +3 -3
- package/pyproject.toml +3 -3
- package/src/superlocalmemory/core/engine_wiring.py +5 -1
- package/src/superlocalmemory/core/graph_analyzer.py +254 -12
- package/src/superlocalmemory/learning/consolidation_worker.py +240 -52
- package/src/superlocalmemory/retrieval/entity_channel.py +135 -4
- package/src/superlocalmemory/retrieval/spreading_activation.py +45 -0
- package/src/superlocalmemory/server/api.py +9 -1
- package/src/superlocalmemory/server/routes/behavioral.py +8 -4
- package/src/superlocalmemory/server/routes/chat.py +320 -0
- package/src/superlocalmemory/server/routes/insights.py +368 -0
- package/src/superlocalmemory/server/routes/learning.py +106 -6
- package/src/superlocalmemory/server/routes/memories.py +20 -9
- package/src/superlocalmemory/server/routes/stats.py +25 -3
- package/src/superlocalmemory/server/routes/timeline.py +252 -0
- package/src/superlocalmemory/server/routes/v3_api.py +161 -0
- package/src/superlocalmemory/server/ui.py +8 -0
- package/src/superlocalmemory/ui/index.html +168 -58
- package/src/superlocalmemory/ui/js/graph-event-bus.js +83 -0
- package/src/superlocalmemory/ui/js/graph-filters.js +1 -1
- package/src/superlocalmemory/ui/js/knowledge-graph.js +942 -0
- package/src/superlocalmemory/ui/js/memory-chat.js +344 -0
- package/src/superlocalmemory/ui/js/memory-timeline.js +265 -0
- package/src/superlocalmemory/ui/js/quick-actions.js +334 -0
- package/src/superlocalmemory.egg-info/PKG-INFO +597 -0
- package/src/superlocalmemory.egg-info/SOURCES.txt +287 -0
- package/src/superlocalmemory.egg-info/dependency_links.txt +1 -0
- package/src/superlocalmemory.egg-info/entry_points.txt +2 -0
- package/src/superlocalmemory.egg-info/requires.txt +47 -0
- package/src/superlocalmemory.egg-info/top_level.txt +1 -0
@@ -0,0 +1,597 @@
Metadata-Version: 2.4
Name: superlocalmemory
Version: 3.4.1
Summary: Information-geometric agent memory with mathematical guarantees
Author-email: Varun Pratap Bhardwaj <admin@superlocalmemory.com>
License: AGPL-3.0-or-later
Project-URL: Homepage, https://superlocalmemory.com
Project-URL: Repository, https://github.com/qualixar/superlocalmemory
Project-URL: Documentation, https://github.com/qualixar/superlocalmemory/wiki
Project-URL: Issues, https://github.com/qualixar/superlocalmemory/issues
Keywords: ai-memory,mcp-server,local-first,agent-memory,information-geometry,privacy-first,eu-ai-act
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)
Classifier: Operating System :: OS Independent
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: <3.15,>=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: NOTICE
License-File: AUTHORS.md
Requires-Dist: httpx>=0.24.0
Requires-Dist: numpy<3.0.0,>=1.26.0
Requires-Dist: scipy<2.0.0,>=1.12.0
Requires-Dist: networkx>=3.0
Requires-Dist: mcp>=1.0.0
Requires-Dist: python-dateutil>=2.9.0.post0
Requires-Dist: rank-bm25>=0.2.2
Requires-Dist: vadersentiment>=3.3.2
Requires-Dist: einops>=0.8.2
Requires-Dist: fastapi[all]>=0.135.1
Requires-Dist: uvicorn>=0.42.0
Requires-Dist: websockets>=16.0
Requires-Dist: lightgbm>=4.0.0
Requires-Dist: diskcache>=5.6.0
Requires-Dist: orjson>=3.9.0
Requires-Dist: tree-sitter<1,>=0.23.0
Requires-Dist: tree-sitter-language-pack<2,>=0.3
Requires-Dist: rustworkx<1,>=0.15
Requires-Dist: watchdog<6,>=4.0
Provides-Extra: search
Requires-Dist: sentence-transformers[onnx]>=5.0.0; extra == "search"
Requires-Dist: einops>=0.8.2; extra == "search"
Requires-Dist: torch>=2.2.0; extra == "search"
Requires-Dist: scikit-learn<2.0.0,>=1.3.0; extra == "search"
Requires-Dist: geoopt>=0.5.0; extra == "search"
Requires-Dist: onnxruntime>=1.17.0; extra == "search"
Provides-Extra: ui
Requires-Dist: fastapi[all]>=0.135.1; extra == "ui"
Requires-Dist: uvicorn>=0.42.0; extra == "ui"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "ui"
Provides-Extra: learning
Requires-Dist: lightgbm>=4.0.0; extra == "learning"
Provides-Extra: performance
Requires-Dist: diskcache>=5.6.0; extra == "performance"
Requires-Dist: orjson>=3.9.0; extra == "performance"
Provides-Extra: full
Requires-Dist: superlocalmemory[learning,performance,search,ui]; extra == "full"
Provides-Extra: dev
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: pytest-cov>=4.1; extra == "dev"
Requires-Dist: sqlite-vec>=0.1.6; extra == "dev"
Dynamic: license-file

<p align="center">
<img src="https://superlocalmemory.com/assets/logo-mark.png" alt="SuperLocalMemory" width="200"/>
</p>

<h1 align="center">SuperLocalMemory V3.3</h1>
<p align="center"><strong>Every other AI forgets. Yours won't.</strong><br/><em>Infinite memory for Claude Code, Cursor, Windsurf & 17+ AI tools.</em></p>
<p align="center"><code>v3.3.26</code> — Install once. Every session remembers the last. Automatically.</p>
<p align="center"><strong>Backed by 3 peer-reviewed research papers</strong> · <a href="https://arxiv.org/abs/2603.02240">arXiv:2603.02240</a> · <a href="https://arxiv.org/abs/2603.14588">arXiv:2603.14588</a> · <a href="https://arxiv.org/abs/2604.04514">arXiv:2604.04514</a></p>

<p align="center">
<code>+16pp vs Mem0 (zero cloud)</code> · <code>85% Open-Domain (best of any system)</code> · <code>EU AI Act Ready</code>
</p>

<p align="center">
<a href="https://arxiv.org/abs/2603.14588"><img src="https://img.shields.io/badge/arXiv-2603.14588-b31b1b?style=for-the-badge&logo=arxiv&logoColor=white" alt="arXiv Paper"/></a>
<a href="https://pypi.org/project/superlocalmemory/"><img src="https://img.shields.io/pypi/v/superlocalmemory?style=for-the-badge&logo=pypi&logoColor=white" alt="PyPI"/></a>
<a href="https://www.npmjs.com/package/superlocalmemory"><img src="https://img.shields.io/npm/v/superlocalmemory?style=for-the-badge&logo=npm&logoColor=white" alt="npm"/></a>
<a href="https://www.gnu.org/licenses/agpl-3.0"><img src="https://img.shields.io/badge/License-AGPL_v3-blue.svg?style=for-the-badge" alt="AGPL v3"/></a>
<a href="#eu-ai-act-compliance"><img src="https://img.shields.io/badge/EU_AI_Act-Compliant-brightgreen?style=for-the-badge" alt="EU AI Act"/></a>
<a href="https://superlocalmemory.com"><img src="https://img.shields.io/badge/Web-superlocalmemory.com-ff6b35?style=for-the-badge" alt="Website"/></a>
<a href="#dual-interface-mcp--cli"><img src="https://img.shields.io/badge/MCP-Native-blue?style=for-the-badge" alt="MCP Native"/></a>
<a href="#dual-interface-mcp--cli"><img src="https://img.shields.io/badge/CLI-Agent--Native-green?style=for-the-badge" alt="CLI Agent-Native"/></a>
</p>

---

## Why SuperLocalMemory?

Every major AI memory system — Mem0, Zep, Letta, EverMemOS — sends your data to cloud LLMs for core operations. That means latency on every query, cost on every interaction, and after **August 2, 2026**, a compliance problem under the EU AI Act.

SuperLocalMemory V3 takes a different approach: **mathematics instead of cloud compute.** Three techniques from differential geometry, algebraic topology, and stochastic analysis replace the work that other systems need LLMs to do — similarity scoring, contradiction detection, and lifecycle management. The result is an agent memory that runs entirely on your machine, on CPU, with no API keys, and still outperforms funded alternatives.

**The numbers** (evaluated on [LoCoMo](https://arxiv.org/abs/2402.09714), the standard long-conversation memory benchmark):

| System | Score | Cloud Required | Open Source | Funding |
|:-------|:-----:|:--------------:|:-----------:|:-------:|
| EverMemOS | 92.3% | Yes | No | — |
| Hindsight | 89.6% | Yes | No | — |
| **SLM V3 Mode C** | **87.7%** | Optional | **Yes (EL2)** | $0 |
| Zep v3 | 85.2% | Yes | Deprecated | $35M |
| **SLM V3 Mode A** | **74.8%** | **No** | **Yes (EL2)** | $0 |
| Mem0 | 64.2% | Yes | Partial | $24M |

Mode A scores **74.8% with zero cloud dependency** — outperforming Mem0 by 16 percentage points without a single API call. On open-domain questions, Mode A scores **85.0% — the highest of any system in the evaluation**, including cloud-powered ones. Mode C reaches **87.7%**, matching enterprise cloud systems.

Mathematical layers contribute **+12.7 percentage points** on average across 6 conversations (n=832 questions), with up to **+19.9pp on the most challenging dialogues**. This isn't more compute — it's better math.

> **Upgrading from V2 (2.8.6)?** V3 is a complete architectural reinvention — new mathematical engine, new retrieval pipeline, new storage schema. Your existing data is preserved but requires migration. After installing V3, run `slm migrate` to upgrade your data. Read the [Migration Guide](https://github.com/qualixar/superlocalmemory/wiki/Migration-from-V2) before upgrading. A backup is created automatically.

---

## What's New in V3.3 — The Living Brain Evolves

> V3.3 gives your memory a lifecycle. Memories strengthen when used, fade when neglected, compress when idle, and consolidate into reusable patterns — all automatically, all locally. Your agent gets smarter the longer it runs.

### Features at a Glance

- **Adaptive Memory Lifecycle** — memories naturally strengthen with use and fade when neglected. No manual cleanup, no hardcoded TTLs.
- **Smart Compression** — embedding precision adapts to memory importance. Low-priority memories compress up to 32x. High-value memories stay full-resolution.
- **Cognitive Consolidation** — the system automatically extracts patterns from clusters of related memories. One decision referenced 50 times becomes one reusable insight.
- **Pattern Learning** — auto-learned soft prompts injected into your agent's context at session start. The system teaches itself what matters to you.
- **Hopfield Retrieval (6th Channel)** — vague or partial queries now complete themselves. Ask half a question, get the whole answer.
- **Process Health** — orphaned SLM processes detected and cleaned automatically. No more zombie workers eating RAM.
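
The "up to 32x" compression claim can be pictured as a precision ladder: float32 stays for high-value memories, int8 gives 4x savings, and 1-bit sign quantization gives 32x. The sketch below is an illustrative model only; the thresholds, function name, and ladder stages are assumptions, not SLM's actual implementation.

```python
import numpy as np

def quantize_embedding(vec: np.ndarray, importance: float):
    """Illustrative precision ladder: high-importance embeddings stay
    float32; mid-tier drops to int8 (4x smaller); low-tier keeps only
    sign bits packed 8-per-byte (32x smaller than float32)."""
    if importance > 0.7:                       # high value: full resolution
        return "float32", vec.astype(np.float32)
    if importance > 0.3:                       # mid value: int8, 4x savings
        scale = float(np.abs(vec).max()) / 127 or 1.0
        return "int8", (vec / scale).round().astype(np.int8)
    return "1bit", np.packbits(vec > 0)        # low value: 32x savings

vec = np.random.default_rng(0).normal(size=256).astype(np.float32)
kind, q = quantize_embedding(vec, importance=0.1)
print(kind, vec.nbytes // q.nbytes)  # 1bit 32
```

A 256-dim float32 vector is 1024 bytes; its packed sign mask is 32 bytes, which is where the 32x figure comes from.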

### New CLI Commands

```bash
# Run a memory lifecycle review — strengthens active memories, archives neglected ones
slm decay

# Run smart compression — adapts embedding precision to memory importance
slm quantize

# Extract reusable patterns from memory clusters
slm consolidate --cognitive

# View auto-learned patterns that get injected into agent context
slm soft-prompts

# Clean up orphaned SLM processes
slm reap
```

### New MCP Tools

| Tool | Description |
|:-----|:------------|
| `forget` | Programmatic memory archival via lifecycle rules |
| `quantize` | Trigger smart compression on demand |
| `consolidate_cognitive` | Extract and store patterns from memory clusters |
| `get_soft_prompts` | Retrieve auto-learned patterns for context injection |
| `reap_processes` | Clean orphaned SLM processes |
| `get_retention_stats` | Memory lifecycle analytics |

### Mode A/B Memory Improvements

| Metric | V3.2 | V3.3 | Change |
|:-------|:----:|:----:|:------:|
| RAM usage (Mode A/B) | ~4GB | ~40MB | **100x reduction** |
| Retrieval channels | 5 | 6 | +Hopfield completion |
| MCP tools | 29 | 35 | +6 new |
| CLI commands | 21 | 26 | +5 new |
| Dashboard tabs | 20 | 23 | +3 new |
| API endpoints | 9 | 16 | +7 new |

Embedding migration happens automatically when you switch modes — no manual steps needed.

### Dashboard

Three new tabs: **Memory Lifecycle** (retention curves, decay stats), **Compression** (storage savings, precision distribution), and **Patterns** (auto-learned soft prompts, consolidation history). Seven new API endpoints power the new views.

### Enable V3.3 Features

All new features default OFF. Zero breaking changes. Opt in when ready:

```bash
# Turn on adaptive memory lifecycle
slm config set lifecycle.enabled true

# Turn on smart compression
slm config set quantization.enabled true

# Turn on cognitive consolidation
slm config set consolidation.cognitive.enabled true

# Turn on pattern learning (soft prompts)
slm config set soft_prompts.enabled true

# Turn on Hopfield retrieval (6th channel)
slm config set retrieval.hopfield.enabled true

# Or enable everything at once
slm config set v33_features.all true
```

**Fully backward compatible.** All existing MCP tools, CLI commands, and configs work unchanged. New tables are created automatically on first run. No migration needed.

---

<details>
<summary><strong>What's New in V3.2 — The Living Brain</strong> (click to expand)</summary>

100x faster recall (<10ms at 10K facts), automatic memory surfacing, associative retrieval (5th channel), temporal intelligence with bi-temporal validity, sleep-time consolidation, and core memory blocks. All features default OFF, zero breaking changes.

| Metric | V3.0 | V3.2 | Change |
|:-------|:----:|:----:|:------:|
| Recall latency (10K facts) | ~500ms | <10ms | **100x faster** |
| Retrieval channels | 4 | 5 | +spreading activation |
| MCP tools | 24 | 29 | +5 new |
| DB tables | 9 | 18 | +9 new |

Enable with `slm config set v32_features.all true`. See the [V3.2 Overview](https://github.com/qualixar/superlocalmemory/wiki/V3.2-Overview) wiki page for details.

</details>

---

## Quick Start

### Install via npm (recommended)

```bash
npm install -g superlocalmemory
slm setup    # Choose mode (A/B/C)
slm doctor   # Verify everything is working
slm warmup   # Pre-download embedding model (~500MB, optional)
```

### Install via pip

```bash
pip install superlocalmemory
```

### First Use

```bash
slm remember "Alice works at Google as a Staff Engineer"
slm recall "What does Alice do?"
slm status
```

### MCP Integration (Claude, Cursor, Windsurf, VS Code, etc.)

```json
{
  "mcpServers": {
    "superlocalmemory": {
      "command": "slm",
      "args": ["mcp"]
    }
  }
}
```

35 MCP tools + 7 resources available. Works with Claude Code, Cursor, Windsurf, VS Code Copilot, Continue, Cody, ChatGPT Desktop, Gemini CLI, JetBrains, Zed, and 17+ AI tools. **V3.3: Adaptive lifecycle, smart compression, and pattern learning.**

### Dual Interface: MCP + CLI

SLM works everywhere, from IDEs to CI pipelines to Docker containers. It is the only AI memory system with both MCP and an agent-native CLI.

| Need | Use | Example |
|------|-----|---------|
| IDE integration | MCP | Auto-configured for 17+ IDEs via `slm connect` |
| Shell scripts | CLI + `--json` | `slm recall "auth" --json \| jq '.data.results[0]'` |
| CI/CD pipelines | CLI + `--json` | `slm remember "deployed v2.1" --json` in GitHub Actions |
| Agent frameworks | CLI + `--json` | OpenClaw, Codex, Goose, nanobot |
| Human use | CLI | `slm recall "auth"` (readable text output) |

**Agent-native JSON output** on every command:

```bash
# Human-readable (default)
slm recall "database schema"
# 1. [0.87] Database uses PostgreSQL 16 on port 5432...

# Agent-native JSON
slm recall "database schema" --json
# {"success": true, "command": "recall", "version": "3.0.22", "data": {"results": [...]}}
```

All `--json` responses follow a consistent envelope with `success`, `command`, `version`, `data`, and `next_actions` for agent guidance.
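
Consuming that envelope from a script is a few lines in any language; here is a minimal Python sketch. The sample payload below is illustrative, not captured output.

```python
import json

# Illustrative envelope shaped like the documented `--json` output.
raw = """{"success": true, "command": "recall", "version": "3.4.1",
          "data": {"results": [{"score": 0.87, "text": "PostgreSQL 16"}]},
          "next_actions": []}"""

def top_result(envelope_json: str) -> str:
    """Parse an SLM-style envelope and return the best hit's text."""
    envelope = json.loads(envelope_json)
    if not envelope["success"]:          # agents should branch on this flag
        raise RuntimeError(envelope)
    return envelope["data"]["results"][0]["text"]

print(top_result(raw))  # PostgreSQL 16
```

In practice the string would come from `subprocess.run(["slm", "recall", query, "--json"], ...)` rather than a literal.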

---

## Three Operating Modes

| Mode | What | Cloud? | EU AI Act | Best For |
|:----:|:-----|:------:|:---------:|:---------|
| **A** | Local Guardian | **None** | **Compliant** | Privacy-first, air-gapped, enterprise |
| **B** | Smart Local | Local only (Ollama) | Compliant | Better answers, data stays local |
| **C** | Full Power | Cloud LLM | Partial | Maximum accuracy, research |

```bash
slm mode a   # Zero-cloud (default)
slm mode b   # Local Ollama
slm mode c   # Cloud LLM
```

**Mode A** is the only agent memory that operates with **zero cloud dependency** while achieving competitive retrieval accuracy on a standard benchmark. All data stays on your device. No API keys. No GPU. Runs on 2 vCPUs + 4GB RAM.

---

## Architecture

```
Query ──► Strategy Classifier ──► 6 Parallel Channels:
              ├── Semantic (Fisher-Rao geodesic distance)
              ├── BM25 (keyword matching)
              ├── Entity Graph (spreading activation, 3 hops)
              ├── Temporal (date-aware retrieval)
              ├── Associative (multi-hop spreading activation)
              └── Hopfield (partial query completion)
                        │
                 RRF Fusion (k=60)
                        │
         Scene Expansion + Bridge Discovery
                        │
           Cross-Encoder Reranking
                        │
         ◄── Top-K Results with channel scores
```
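
RRF with k=60 is the standard reciprocal-rank rule: each channel contributes 1/(k + rank) for every document it ranks, so memories surfaced by several channels rise to the top. A minimal sketch (channel names and memory IDs are illustrative):

```python
from collections import defaultdict

def rrf_fuse(channel_rankings: dict, k: int = 60) -> list:
    """Reciprocal Rank Fusion: each channel adds 1/(k + rank) for every
    document it returns; documents ranked by many channels win."""
    scores = defaultdict(float)
    for ranking in channel_rankings.values():
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: -kv[1])

fused = rrf_fuse({
    "semantic": ["m7", "m2", "m9"],
    "bm25":     ["m2", "m7", "m4"],
    "temporal": ["m2", "m1"],
})
print([doc for doc, _ in fused][:2])  # ['m2', 'm7']
```

With k=60 the contribution of any single rank is small, so agreement across channels dominates any one channel's top pick.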

### Mathematical Foundations

Three novel contributions replace cloud LLM dependency with mathematical guarantees:

1. **Fisher-Rao Retrieval Metric** — Similarity scoring derived from the Fisher information structure of diagonal Gaussian families. Graduated ramp from cosine to geodesic distance over the first 10 accesses. The first application of information geometry to agent memory retrieval.

2. **Sheaf Cohomology for Consistency** — Algebraic topology detects contradictions by computing coboundary norms on the knowledge graph. The first algebraic guarantee for contradiction detection in agent memory.

3. **Riemannian Langevin Lifecycle** — Memory positions evolve on the Poincaré ball via a discretized Langevin SDE. Frequently accessed memories stay active; neglected memories self-archive. No hardcoded thresholds.

These three layers collectively yield **+12.7pp average improvement** over the engineering-only baseline, with the Fisher metric alone contributing **+10.8pp** on the hardest conversations.
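
For intuition on item 1: between diagonal Gaussians, the Fisher-Rao distance has a textbook closed form, since each dimension is a univariate Gaussian whose Fisher metric ds² = (dμ² + 2dσ²)/σ² is a scaled hyperbolic metric, and per-dimension distances add in quadrature. The sketch below shows that standard formulation, not the paper's exact graduated-ramp metric:

```python
import numpy as np

def fisher_rao_diag_gauss(mu1, sig1, mu2, sig2) -> float:
    """Fisher-Rao distance between N(mu1, diag(sig1^2)) and
    N(mu2, diag(sig2^2)). Per dimension, the univariate Gaussian
    manifold is hyperbolic, giving a closed-form arccosh distance;
    dimensions combine in quadrature."""
    mu1, sig1, mu2, sig2 = map(np.asarray, (mu1, sig1, mu2, sig2))
    num = (mu1 - mu2) ** 2 / 2 + (sig1 - sig2) ** 2
    per_dim = np.sqrt(2) * np.arccosh(1 + num / (2 * sig1 * sig2))
    return float(np.sqrt(np.sum(per_dim ** 2)))

# Identical distributions sit at distance zero on the manifold.
print(fisher_rao_diag_gauss([0.0], [1.0], [0.0], [1.0]))  # 0.0
```

Unlike cosine similarity, this distance grows without bound as variances shrink toward certainty, which is what lets uncertainty participate in ranking.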

---

## Benchmarks

Evaluated on [LoCoMo](https://arxiv.org/abs/2402.09714) — 10 multi-session conversations, 1,986 total questions, 4 scored categories.

### Mode A (Zero-Cloud, 10 Conversations, 1,276 Questions)

| Category | Score | vs. Mem0 (64.2%) |
|:---------|:-----:|:-----------------:|
| Single-Hop | 72.0% | +3.0pp |
| Multi-Hop | 70.3% | +8.6pp |
| Temporal | 80.0% | +21.7pp |
| **Open-Domain** | **85.0%** | **+35.0pp** |
| **Aggregate** | **74.8%** | **+10.6pp** |

Mode A achieves **85.0% on open-domain questions — the highest of any system in the evaluation**, including cloud-powered ones.

### Math Layer Impact (6 Conversations, n=832)

| Conversation | With Math | Without | Delta |
|:-------------|:---------:|:-------:|:-----:|
| Easiest | 78.5% | 71.2% | +7.3pp |
| Hardest | 64.2% | 44.3% | **+19.9pp** |
| **Average** | **71.7%** | **58.9%** | **+12.7pp** |

Mathematical layers help most where heuristic methods struggle — the harder the conversation, the bigger the improvement.

### Ablation (What Each Component Contributes)

| Removed | Impact |
|:--------|:------:|
| Cross-encoder reranking | **-30.7pp** |
| Fisher-Rao metric | **-10.8pp** |
| All math layers | **-7.6pp** |
| BM25 channel | **-6.5pp** |
| Sheaf consistency | -1.7pp |
| Entity graph | -1.0pp |

Full ablation details in the [Wiki](https://github.com/qualixar/superlocalmemory/wiki/Benchmarks).

---

## EU AI Act Compliance

The EU AI Act (Regulation 2024/1689) takes full effect **August 2, 2026**. Every AI memory system that sends personal data to cloud LLMs for core operations has a compliance question to answer.

| Requirement | Mode A | Mode B | Mode C |
|:------------|:------:|:------:|:------:|
| Data sovereignty (Art. 10) | **Pass** | **Pass** | Requires DPA |
| Right to erasure (GDPR Art. 17) | **Pass** | **Pass** | **Pass** |
| Transparency (Art. 13) | **Pass** | **Pass** | **Pass** |
| No network calls during memory ops | **Yes** | **Yes** | No |

To the best of our knowledge, **no existing agent memory system addresses EU AI Act compliance**. Modes A and B pass all checks by architectural design — no personal data leaves the device during any memory operation.

Built-in compliance tools: GDPR Article 15/17 export + complete erasure, tamper-proof SHA-256 audit chain, data provenance tracking, ABAC policy enforcement.

---

## Web Dashboard

```bash
slm dashboard   # Opens at http://localhost:8765
```

<details open>
<summary><strong>Dashboard Screenshots</strong> (click to collapse)</summary>
<p align="center"><img src="docs/screenshots/01-dashboard-main.png" alt="Dashboard Overview — 3,100+ memories, 430K connections" width="600"/></p>
<p align="center">
<img src="docs/screenshots/02-knowledge-graph.png" alt="Knowledge Graph — Sigma.js WebGL with community detection, chat, quick actions, timeline" width="290"/>
<img src="docs/screenshots/06-graph-communities.png" alt="Graph Communities — Louvain clustering with colored nodes" width="290"/>
</p>
<p align="center">
<img src="docs/screenshots/03-patterns-learning.png" alt="Patterns — 50 learned behavioral patterns with confidence bars" width="190"/>
<img src="docs/screenshots/04-learning-dashboard.png" alt="Learning — 722 signals, ML Model phase, tech preferences" width="190"/>
<img src="docs/screenshots/05-behavioral-analysis.png" alt="Behavioral — pattern analysis with confidence distribution" width="190"/>
</p>
</details>

**v3.4.1 Visual Intelligence:** Sigma.js WebGL knowledge graph with community detection (Louvain/Leiden), 5 quick insight actions, D3 memory timeline, graph-enhanced retrieval (PageRank bias + community boost + contradiction suppression), and 56 auto-mined behavioral patterns. 23+ tabs. Runs locally — no data leaves your machine.
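
The "PageRank bias + community boost + contradiction suppression" adjustment can be pictured as a linear re-scoring on top of fused results. This is a hypothetical sketch with a tiny power-iteration PageRank; the 0.5/0.1/0.3 weights and function names are invented for illustration and are not SLM's internals.

```python
def pagerank(nodes, edges, alpha=0.85, iters=50):
    """Tiny power-iteration PageRank; dangling mass spreads uniformly."""
    out = {n: [v for u, v in edges if u == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - alpha) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or list(nodes)       # dangling node case
            for t in targets:
                nxt[t] += alpha * rank[n] / len(targets)
        rank = nxt
    return rank

def graph_boost(base, nodes, edges, community, contradicted):
    """Fused score + centrality bias + community boost - contradiction penalty."""
    pr = pagerank(nodes, edges)
    return {m: s + 0.5 * pr[m]
               + (0.1 if m in community else 0.0)
               - (0.3 if m in contradicted else 0.0)
            for m, s in base.items()}

nodes = ["m1", "m2", "m3"]
edges = [("m1", "m2"), ("m3", "m2"), ("m2", "m1")]
scores = graph_boost({"m1": 0.6, "m2": 0.6}, nodes, edges,
                     community={"m2"}, contradicted=set())
print(scores["m2"] > scores["m1"])  # True
```

Here `m2` wins a tie on fused score because it is both more central in the graph and inside the query's community.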

---

<details>
<summary><strong>Active Memory (V3.1) — Memory That Learns</strong> (click to expand)</summary>

Every recall generates learning signals. Over time, the system adapts to your patterns — from baseline (0-19 signals) → rule-based (20+) → ML model (200+, LightGBM trained on YOUR usage). Zero LLM tokens spent. Four mathematical signals computed locally: co-retrieval, confidence lifecycle, channel performance, and entropy gap.

Auto-capture hooks: `slm hooks install` + `slm observe` + `slm session-context`. MCP tools: `session_init`, `observe`, `report_feedback`.

**No competitor learns at zero token cost.**

</details>

---

## Features

### Retrieval
- 6-channel hybrid: Semantic (Fisher-Rao) + BM25 + Entity Graph + Temporal + Associative + Hopfield
- RRF fusion + cross-encoder reranking
- Agentic sufficiency verification (auto-retry on weak results)
- Adaptive ranking with LightGBM (learns from usage)
- Hopfield completion for vague/partial queries
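
Hopfield completion can be illustrated with the modern-Hopfield update rule: a softmax attention over stored patterns pulls a partial query toward the nearest full memory. The toy pattern store and the beta value below are assumptions for illustration, not SLM's stored representation.

```python
import numpy as np

def hopfield_complete(memories: np.ndarray, query: np.ndarray, beta: float = 8.0):
    """One modern-Hopfield update: mix stored patterns with weights
    softmax(beta * memories @ query); a partial/noisy query converges
    toward the closest stored memory."""
    logits = beta * memories @ query
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ memories

# Toy store: pattern i lights up dimensions i and i+8 (unit norm).
memories = np.zeros((5, 16))
for i in range(5):
    memories[i, i] = memories[i, i + 8] = 1 / np.sqrt(2)

partial = memories[2].copy()
partial[8:] = 0.0                      # ask "half" the question
completed = hopfield_complete(memories, partial)
print(int(np.argmax(memories @ completed)))  # 2
```

Even with half the pattern zeroed out, the update lands on memory 2, which is the "ask half a question, get the whole answer" behavior described above.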

### Intelligence
- 11-step ingestion pipeline (entity resolution, fact extraction, emotional tagging, scene building)
- Automatic contradiction detection via sheaf cohomology
- Adaptive memory lifecycle — memories strengthen with use, fade when neglected
- Smart compression — embedding precision adapts to memory importance (up to 32x savings)
- Cognitive consolidation — automatic pattern extraction from related memories
- Auto-learned soft prompts injected into agent context
- Behavioral pattern detection and outcome tracking

### Trust & Security
- Bayesian Beta-distribution trust scoring (per-agent, per-fact)
- Trust gates (block low-trust agents from writing/deleting)
- ABAC (Attribute-Based Access Control) with DB-persisted policies
- Tamper-proof hash-chain audit trail (SHA-256 linked entries)
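
A SHA-256 hash chain of this kind is simple to sketch: each entry's digest commits to the previous entry's digest, so any retroactive edit invalidates every later link. The entry fields below are assumptions for illustration, not SLM's audit schema.

```python
import hashlib
import json

def append_entry(chain: list, payload: dict) -> list:
    """Append an audit entry whose hash covers the previous entry's hash,
    linking the log into a tamper-evident chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "payload": payload, "hash": digest})
    return chain

def verify(chain: list) -> bool:
    """Recompute every link; any edited payload or hash breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["payload"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append_entry(chain, {"op": "remember", "id": 1})
append_entry(chain, {"op": "forget", "id": 1})
print(verify(chain))  # True
chain[0]["payload"]["op"] = "recall"   # tamper with history
print(verify(chain))  # False
```

Verification is O(n) over the log and needs no secret, which is what makes the trail auditable by a third party.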
|
|
468
|
+
|
|
469
|
+
### Infrastructure
|
|
470
|
+
- 23-tab web dashboard with real-time visualization
|
|
471
|
+
- 17+ IDE integrations (Claude, Cursor, Windsurf, VS Code, JetBrains, Zed, etc.)
|
|
472
|
+
- 35 MCP tools + 7 MCP resources
|
|
473
|
+
- Profile isolation (independent memory spaces)
|
|
474
|
+
- 1400+ tests, AGPL v3, cross-platform (Mac/Linux/Windows)
|
|
475
|
+
- CPU-only — no GPU required
|
|
476
|
+
- Automatic orphaned process cleanup

---

## CLI Reference

| Command | What It Does |
|:--------|:-------------|
| `slm remember "..."` | Store a memory |
| `slm recall "..."` | Search memories |
| `slm forget "..."` | Delete matching memories |
| `slm trace "..."` | Recall with per-channel score breakdown |
| `slm status` | System status |
| `slm health` | Math layer health (Fisher, Sheaf, Langevin) |
| `slm doctor` | Pre-flight check (deps, worker, Ollama, database) |
| `slm mode a/b/c` | Switch operating mode |
| `slm setup` | Interactive first-time wizard |
| `slm warmup` | Pre-download embedding model |
| `slm migrate` | V2 to V3 migration |
| `slm dashboard` | Launch the web dashboard |
| `slm mcp` | Start MCP server (for IDE integration) |
| `slm connect` | Configure IDE integrations |
| `slm hooks install` | Wire auto-memory into Claude Code hooks |
| `slm profile list/create/switch` | Profile management |
| `slm decay` | Run memory lifecycle review |
| `slm quantize` | Run smart compression cycle |
| `slm consolidate --cognitive` | Extract patterns from memory clusters |
| `slm soft-prompts` | View auto-learned patterns |
| `slm reap` | Clean orphaned SLM processes |

---

## Research Papers

SuperLocalMemory is backed by three research papers covering trust, information geometry, and cognitive memory architecture.

### Paper 3: The Living Brain (V3.3)

> **SuperLocalMemory V3.3: The Living Brain — Biologically-Inspired Forgetting, Cognitive Quantization, and Multi-Channel Retrieval for Zero-LLM Agent Memory Systems**
> Varun Pratap Bhardwaj (2026)
> [arXiv:2604.04514](https://arxiv.org/abs/2604.04514) · [Zenodo DOI: 10.5281/zenodo.19435120](https://zenodo.org/records/19435120)

### Paper 2: Information-Geometric Foundations (V3)

> **SuperLocalMemory V3: Information-Geometric Foundations for Zero-LLM Enterprise Agent Memory**
> Varun Pratap Bhardwaj (2026)
> [arXiv:2603.14588](https://arxiv.org/abs/2603.14588) · [Zenodo DOI: 10.5281/zenodo.19038659](https://zenodo.org/records/19038659)

### Paper 1: Trust & Behavioral Foundations (V2)

> **SuperLocalMemory: A Structured Local Memory Architecture for Persistent AI Agent Context**
> Varun Pratap Bhardwaj (2026)
> [arXiv:2603.02240](https://arxiv.org/abs/2603.02240) · [Zenodo DOI: 10.5281/zenodo.18709670](https://zenodo.org/records/18709670)

### Cite This Work

```bibtex
@article{bhardwaj2026slmv33,
  title={SuperLocalMemory V3.3: The Living Brain — Biologically-Inspired
         Forgetting, Cognitive Quantization, and Multi-Channel Retrieval
         for Zero-LLM Agent Memory Systems},
  author={Bhardwaj, Varun Pratap},
  journal={arXiv preprint arXiv:2604.04514},
  year={2026},
  url={https://arxiv.org/abs/2604.04514}
}

@article{bhardwaj2026slmv3,
  title={Information-Geometric Foundations for Zero-LLM Enterprise Agent Memory},
  author={Bhardwaj, Varun Pratap},
  journal={arXiv preprint arXiv:2603.14588},
  year={2026}
}

@article{bhardwaj2026slm,
  title={A Structured Local Memory Architecture for Persistent AI Agent Context},
  author={Bhardwaj, Varun Pratap},
  journal={arXiv preprint arXiv:2603.02240},
  year={2026}
}
```

---

## Prerequisites

| Requirement | Version | Why |
|:-----------|:--------|:----|
| **Node.js** | 14+ | npm package manager |
| **Python** | 3.11+ | V3 engine runtime |

All Python dependencies install automatically during `npm install` — core math, dashboard server, learning engine, and performance optimizations. If anything fails, the installer shows exact fix commands. Run `slm doctor` after install to verify everything works. BM25 keyword search works even without embeddings — you're never fully blocked.

| Component | Size | When |
|:----------|:-----|:-----|
| Core libraries (numpy, scipy, networkx) | ~50MB | During install |
| Dashboard & MCP server (fastapi, uvicorn) | ~20MB | During install |
| Learning engine (lightgbm) | ~10MB | During install |
| Search engine (sentence-transformers, torch) | ~200MB | During install |
| Embedding model (nomic-embed-text-v1.5, 768d) | ~500MB | First use or `slm warmup` |
| [Ollama](https://ollama.com) + LLM (**Mode B** only: `ollama pull llama3.2`) | ~2GB | Manual |

---

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines and the [Wiki](https://github.com/qualixar/superlocalmemory/wiki) for detailed documentation.

## License

GNU Affero General Public License v3.0 (AGPL-3.0). See [LICENSE](LICENSE).

For commercial licensing (closed-source, proprietary, or hosted use), see [COMMERCIAL-LICENSE.md](COMMERCIAL-LICENSE.md) or contact varun.pratap.bhardwaj@gmail.com.

Copyright (c) 2026 Varun Pratap Bhardwaj / Qualixar.

## Attribution

Part of [Qualixar](https://qualixar.com) · Author: [Varun Pratap Bhardwaj](https://varunpratap.com)

---

<p align="center">
  <sub>Built with mathematical rigor. Not in the race — here to help everyone build better AI memory systems.</sub>
</p>