superlocalmemory 2.6.5 → 2.7.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -16,6 +16,63 @@ SuperLocalMemory V2 - Intelligent local memory system for AI coding assistants.
 
  ---
 
+ ## [2.7.1] - 2026-02-16
+
+ ### Added
+ - **Learning Dashboard Tab** — New "Learning" tab in the web dashboard showing ranking phase, tech preferences, workflow patterns, source quality, engagement health, and privacy controls
+ - **Learning API Routes** — `/api/learning/status`, `/api/learning/reset`, `/api/learning/retrain` endpoints for the dashboard
+ - **One-click Reset** — Reset all learning data directly from the dashboard UI
+
+ ---
+
+ ## [2.7.0] - 2026-02-16
+
+ **Release Type:** Major Feature Release — "Your AI Learns You"
+
+ SuperLocalMemory now learns your patterns, adapts to your workflow, and personalizes recall. All processing happens 100% locally — your behavioral data never leaves your machine. GDPR Article 17 compliant by design.
+
+ ### Added
+ - **Adaptive Learning System** — Three-layer learning architecture that detects tech preferences, project context, and workflow patterns across all your projects
+ - **Personalized Recall Ranking** — Search results re-ranked using learned patterns. Three-phase adaptive system: baseline → rule-based → ML (LightGBM LambdaRank)
+ - **Synthetic Bootstrap** — ML model works from day 1 by bootstrapping from existing memory patterns. No cold-start degradation.
+ - **Multi-Channel Feedback** — Tell the system which memories were useful via MCP (`memory_used`), CLI (`slm useful`), or dashboard clicks
+ - **Source Quality Scoring** — Learns which tools produce the most useful memories using Beta-Binomial Bayesian scoring
+ - **Workflow Pattern Detection** — Detects your coding workflow sequences (e.g., docs → architecture → code → test) using time-weighted sliding-window mining
+ - **Local Engagement Metrics** — Track memory system health locally with zero telemetry (`slm engagement`)
+ - **Separate Learning Database** — Behavioral data in `learning.db`, isolated from `memory.db`. One-command erasure: `slm learning reset`
+ - **3 New MCP Tools** — `memory_used` (feedback signal), `get_learned_patterns` (transparency), `correct_pattern` (user control)
+ - **2 New MCP Resources** — `memory://learning/status`, `memory://engagement`
+ - **New CLI Commands** — `slm useful`, `slm learning status/retrain/reset`, `slm engagement`, `slm patterns correct`
+ - **New Skill** — `slm-show-patterns` for viewing learned preferences in Claude Code and compatible tools
+ - **Auto Python Installation** — `install.sh` now auto-installs Python 3 on macOS (Homebrew/Xcode) and Linux (apt/dnf) for new users
+ - **319 Tests** — 229 unit tests + 13 E2E + 14 regression + 19 fresh-install + 42 edge-case tests
+
+ ### Research Foundations
+ - Two-stage BM25 → re-ranker pipeline (eKNOW 2025)
+ - LightGBM LambdaRank pairwise ranking (Burges 2010, MO-LightGBM SIGIR 2025)
+ - Three-phase cold-start mitigation (LREC 2024)
+ - Time-weighted sequence mining (TSW-PrefixSpan, IEEE 2020)
+ - Bayesian temporal confidence (MACLA, arXiv:2512.18950)
+ - Privacy-preserving zero-communication feedback design
+
+ ### Changed
+ - **MCP Tools** — Now 12 tools (was 9), 6 resources (was 4), 2 prompts
+ - **Skills** — Now 7 universal skills (was 6)
+ - **install.sh** — Auto-installs Python if missing, installs learning deps automatically
+ - **DMG Installer** — Updated to v2.7.0 with learning modules
+
+ ### Dependencies (Optional)
+ - `lightgbm>=4.0.0` — ML ranking (auto-installed, graceful fallback if unavailable)
+ - `scipy>=1.9.0` — Statistical functions (auto-installed, graceful fallback if unavailable)
+
+ ### Performance
+ - Re-ranking adds <15ms latency to recall queries
+ - Learning DB typically <1MB for 1,000 memories
+ - Bootstrap model trains in <30 seconds for 1,000 memories
+ - All BM1-BM6 benchmarks: no regression >10%
+
+ ---
+
  ## [2.6.5] - 2026-02-16
 
  ### Added
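The changelog's **Source Quality Scoring** entry names Beta-Binomial Bayesian scoring. The following is an illustrative sketch of how such a score behaves, not the package's actual code; the function name and the Beta(1, 1) prior are assumptions:

```python
# Illustrative Beta-Binomial source quality score (NOT SuperLocalMemory's
# implementation). Each source starts from a weak Beta(alpha, beta) prior;
# every "useful" signal counts as a success, every ignored recall as a
# failure. The posterior mean is the quality score.

def source_quality(useful: int, ignored: int,
                   alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean of a Beta-Binomial model of source usefulness."""
    return (alpha + useful) / (alpha + beta + useful + ignored)

# With no feedback the score sits at the neutral prior mean, 0.5; evidence
# moves it smoothly without ever reaching an overconfident 0 or 1.
```

This shape explains the design choice: a brand-new tool is neither trusted nor distrusted, and a single unlucky recall cannot zero out an otherwise reliable source.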
package/README.md CHANGED
@@ -44,22 +44,89 @@
 
  ---
 
- ## What's New in v2.6.5
+ ## What's New in v2.7 — "Your AI Learns You"
 
- **Interactive Knowledge Graph** Your memories, visually connected. Powered by [Cytoscape.js](https://js.cytoscape.org/) (the same library behind Obsidian's graph plugins):
+ **SuperLocalMemory now learns your patterns, adapts to your workflow, and personalizes recall, all 100% locally on your machine.** No cloud. No LLM. Your behavioral data never leaves your device.
 
- - 🔍 **Zoom, pan, explore** — Mouse wheel to zoom, drag to pan, smooth navigation
- - 👆 **Click nodes** — Opens memory preview with "View Full Memory", "Expand Neighbors", "Filter to Cluster"
- - 🎨 **6 layout algorithms** — Force-directed (physics-based), circular, grid, hierarchical, concentric, breadthfirst
- - 🔗 **Smart filtering** — Click cluster cards or entity badges → graph instantly updates
- - ⚡ **Performance** — 3-tier rendering strategy handles 10,000+ nodes smoothly
+ ### Adaptive Learning System
 
- Launch the dashboard: `python3 ~/.claude-memory/ui_server.py` `http://localhost:8765/graph.html`
+ Your memory system evolves with you through three learning layers:
 
- **[[Complete Interactive Graph Guide →|Using-Interactive-Graph]]**
+ | Layer | What It Learns | How |
+ |-------|---------------|-----|
+ | **Tech Preferences** | "You prefer FastAPI over Django" (83% confidence) | Cross-project frequency analysis with Bayesian scoring |
+ | **Project Context** | Detects your active project from 4 signals | Path analysis, tags, profile, content clustering |
+ | **Workflow Patterns** | "You typically: docs → architecture → code → test" | Time-weighted sliding-window sequence mining |
+
+ ### Three-Phase Adaptive Ranking
+
+ Recall results get smarter over time — automatically:
+
+ 1. **Phase 1 (Baseline):** Standard search — same as v2.6
+ 2. **Phase 2 (Rule-Based):** After ~20 feedback signals — boosts results matching your preferences
+ 3. **Phase 3 (ML Ranking):** After ~200 signals — LightGBM LambdaRank re-ranks with 9 personalized features
+
+ ### Privacy by Design — GDPR Compliant
+
+ | Concern | SuperLocalMemory v2.7 | Cloud-Based Alternatives |
+ |---------|----------------------|--------------------------|
+ | **Where is learning data?** | `~/.claude-memory/learning.db` on YOUR machine | Their servers, their terms |
+ | **Who processes your behavior?** | Local gradient boosting (no LLM, no GPU) | Cloud LLMs process your data |
+ | **Right to erasure (GDPR Art. 17)?** | `slm learning reset` — one command, instant | Submit a request, wait weeks |
+ | **Data portability?** | Copy the SQLite file | Vendor lock-in |
+ | **Telemetry?** | Zero. Absolutely none. | Usage analytics, behavior tracking |
+
+ **Your learning data is stored separately from your memories.** Delete `learning.db` and your memories are untouched. Delete `memory.db` and your learning patterns are untouched. Full data sovereignty.
+
+ ### Research-Backed Architecture
+
+ Every component is grounded in peer-reviewed research, adapted for local-first operation:
+
+ | Component | Research Basis |
+ |-----------|---------------|
+ | Two-stage retrieval pipeline | BM25 → re-ranker (eKNOW 2025) |
+ | Adaptive cold-start ranking | Hierarchical meta-learning (LREC 2024) |
+ | Time-weighted sequence mining | TSW-PrefixSpan (IEEE 2020) |
+ | Bayesian confidence scoring | MACLA (arXiv:2512.18950) |
+ | LightGBM LambdaRank | Pairwise ranking (Burges 2010, MO-LightGBM SIGIR 2025) |
+ | Privacy-preserving feedback | Zero-communication design — stronger than differential privacy (ADPMF, IPM 2024) |
+
+ ### New MCP Tools
+
+ | Tool | Purpose |
+ |------|---------|
+ | `memory_used` | Tell the AI which recalled memories were useful — trains the ranking model |
+ | `get_learned_patterns` | See what the system has learned about your preferences |
+ | `correct_pattern` | Fix a wrong pattern — your correction overrides with maximum confidence |
+
+ ### New CLI Commands
+
+ ```bash
+ slm useful 42 87       # Mark memories as useful (ranking feedback)
+ slm patterns list      # See learned tech preferences
+ slm learning status    # Learning system diagnostics
+ slm learning reset     # Delete all behavioral data (memories preserved)
+ slm engagement         # Local engagement health metrics
+ ```
+
+ **Upgrade:** `npm install -g superlocalmemory@latest` — Learning dependencies install automatically.
+
+ [Learning System Guide →](https://github.com/varun369/SuperLocalMemoryV2/wiki/Learning-System) | [Upgrade Guide →](https://github.com/varun369/SuperLocalMemoryV2/wiki/Upgrading-to-v2.7) | [Full Changelog](CHANGELOG.md)
 
  ---
 
+ <details>
+ <summary><strong>Previous: v2.6.5 — Interactive Knowledge Graph</strong></summary>
+
+ - Fully interactive visualization with zoom, pan, click-to-explore (Cytoscape.js)
+ - 6 layout algorithms, smart cluster filtering, 10,000+ node performance
+ - Mobile & accessibility support: touch gestures, keyboard nav, screen reader
+
+ </details>
+
+ <details>
+ <summary><strong>Previous: v2.6 — Security & Scale</strong></summary>
+
  ## What's New in v2.6
 
  SuperLocalMemory is now **production-hardened** with security, performance, and scale improvements:
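The three-phase progression described above (baseline, then rule-based after roughly 20 feedback signals, then ML after roughly 200) reduces to a simple threshold switch. A hedged sketch: the thresholds come from this README, while the function name is invented for illustration.

```python
# Hypothetical phase selector for the three-phase adaptive ranking.
# The ~20 and ~200 signal thresholds are the ones quoted in the README.

def ranking_phase(feedback_signals: int) -> str:
    if feedback_signals < 20:
        return "baseline"    # Phase 1: standard search, same as v2.6
    if feedback_signals < 200:
        return "rule-based"  # Phase 2: boost results matching learned preferences
    return "ml"              # Phase 3: LightGBM LambdaRank re-ranking
```

Because the selector depends only on a local signal count, the system can always fall back to an earlier phase (for example, when the optional ML dependencies are absent) without any behavioral surprises.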
@@ -76,6 +143,8 @@ SuperLocalMemory is now **production-hardened** with security, performance, and
 
  [Interactive Architecture Diagram](https://superlocalmemory.com/architecture.html) | [Architecture Doc](docs/ARCHITECTURE-V2.5.md) | [Full Changelog](CHANGELOG.md)
 
+ </details>
+
  ---
 
  ## The Problem
@@ -165,14 +234,18 @@ python3 ~/.claude-memory/ui_server.py
 
  ### Built on 2026 Research
 
- Not another simple key-value store. SuperLocalMemory implements **cutting-edge memory architecture**:
+ Not another simple key-value store. SuperLocalMemory implements **cutting-edge memory architecture** backed by peer-reviewed research:
 
  - **PageIndex** (Meta AI) → Hierarchical memory organization
  - **GraphRAG** (Microsoft) → Knowledge graph with auto-clustering
  - **xMemory** (Stanford) → Identity pattern learning
  - **A-RAG** → Multi-level retrieval with context awareness
+ - **LambdaRank** (Burges 2010, MO-LightGBM SIGIR 2025) → Adaptive re-ranking *(v2.7)*
+ - **TSW-PrefixSpan** (IEEE 2020) → Time-weighted workflow pattern mining *(v2.7)*
+ - **MACLA** (arXiv:2512.18950) → Bayesian temporal confidence scoring *(v2.7)*
+ - **FCS** (LREC 2024) → Hierarchical cold-start mitigation *(v2.7)*
 
- **The only open-source implementation combining all four approaches.**
+ **The only open-source implementation combining all eight approaches — entirely locally.**
 
  [See research citations →](https://github.com/varun369/SuperLocalMemoryV2/wiki/Research-Foundations)
 
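The TSW-PrefixSpan entry above refers to time-weighted sequence mining. Below is a toy sketch of the core idea (count adjacent workflow steps, discounting older events by exponential decay); all names are hypothetical and the real algorithm mines longer patterns than bigrams.

```python
import math
from collections import Counter

# Toy time-weighted bigram miner in the spirit of TSW-PrefixSpan: adjacent
# activity pairs are counted with a weight that halves every `half_life`
# time units, so recent workflow steps dominate the learned patterns.
def mine_bigrams(events, half_life=10.0):
    """events: list of (timestamp, activity) tuples; returns weighted counts."""
    if not events:
        return Counter()
    ordered = sorted(events)          # chronological order
    latest = ordered[-1][0]
    weights = Counter()
    for (_, a1), (t2, a2) in zip(ordered, ordered[1:]):
        # weight = 0.5 ** (age / half_life), expressed via exp/log
        weights[(a1, a2)] += math.exp(-math.log(2) * (latest - t2) / half_life)
    return weights
```

A sequence like docs → code repeated over time accumulates more weight than its reverse, which is how a habitual ordering such as "docs → architecture → code → test" can surface from raw activity timestamps.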
@@ -197,11 +270,16 @@ Not another simple key-value store. SuperLocalMemory implements **cutting-edge m
  │ 17+ IDEs with single database │
  ├─────────────────────────────────────────────────────────────┤
  │ Layer 6: MCP INTEGRATION │
- │ Model Context Protocol: 6 tools, 4 resources, 2 prompts │
+ │ Model Context Protocol: 12 tools, 6 resources, 2 prompts │
  │ Auto-configured for Cursor, Windsurf, Claude │
  ├─────────────────────────────────────────────────────────────┤
+ │ Layer 5½: ADAPTIVE LEARNING (v2.7 — NEW) │
+ │ Three-layer learning: tech prefs + project context + flow │
+ │ LightGBM LambdaRank re-ranking (fully local, no cloud) │
+ │ Research: eKNOW 2025, MACLA, TSW-PrefixSpan, LREC 2024 │
+ ├─────────────────────────────────────────────────────────────┤
  │ Layer 5: SKILLS LAYER │
- │ 6 universal slash-commands for AI assistants │
+ │ 7 universal slash-commands for AI assistants │
  │ Compatible with Claude Code, Continue, Cody │
  ├─────────────────────────────────────────────────────────────┤
  │ Layer 4: PATTERN LEARNING + MACLA │
@@ -224,6 +302,7 @@ Not another simple key-value store. SuperLocalMemory implements **cutting-edge m
 
  ### Key Capabilities
 
+ - **[Adaptive Learning System](https://github.com/varun369/SuperLocalMemoryV2/wiki/Learning-System)** — Learns your tech preferences, workflow patterns, and project context. Personalizes recall ranking using local ML (LightGBM). Zero cloud dependency. *New in v2.7*
  - **[Knowledge Graphs](https://github.com/varun369/SuperLocalMemoryV2/wiki/Knowledge-Graph)** — Automatic relationship discovery. Interactive visualization with zoom, pan, click.
  - **[Pattern Learning](https://github.com/varun369/SuperLocalMemoryV2/wiki/Pattern-Learning)** — Learns your coding preferences and style automatically.
  - **[Multi-Profile Support](https://github.com/varun369/SuperLocalMemoryV2/wiki/Using-Memory-Profiles)** — Isolated contexts for work, personal, clients. Zero context bleeding.
@@ -302,14 +381,18 @@ Not another simple key-value store. SuperLocalMemory implements **cutting-edge m
  | **Universal CLI** | ❌ | ❌ | ❌ | ❌ | ✅ |
  | **Multi-Layer Architecture** | ❌ | ❌ | ❌ | ❌ | ✅ |
  | **Pattern Learning** | ❌ | ❌ | ❌ | ❌ | ✅ |
+ | **Adaptive ML Ranking** | Cloud LLM | ❌ | ❌ | ❌ | ✅ **Local ML** |
  | **Knowledge Graphs** | ✅ | ✅ | ❌ | ❌ | ✅ |
  | **100% Local** | ❌ | ❌ | Partial | Partial | ✅ |
+ | **GDPR by Design** | ❌ | ❌ | ❌ | ❌ | ✅ |
  | **Zero Setup** | ❌ | ❌ | ❌ | ❌ | ✅ |
  | **Completely Free** | Limited | Limited | Partial | ✅ | ✅ |
 
  **SuperLocalMemory V2 is the ONLY solution that:**
+ - ✅ **Learns and adapts** locally — no cloud LLM needed for personalization
  - ✅ Works across 17+ IDEs and CLI tools
  - ✅ Remains 100% local (no cloud dependencies)
+ - ✅ GDPR Article 17 compliant — one-command data erasure
  - ✅ Completely free with unlimited memories
 
  [See full competitive analysis →](docs/COMPETITIVE-ANALYSIS.md)
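The erasure and portability claims above rest on a two-file layout: behavioral data in `learning.db`, memories in `memory.db`. A minimal sketch (throwaway temp paths, not the real `~/.claude-memory` layout) showing why deleting one file cannot touch the other:

```python
import sqlite3
import tempfile
from pathlib import Path

# Two independent SQLite files, mirroring the memory.db / learning.db split.
# Paths here are temporary and illustrative only.
base = Path(tempfile.mkdtemp())
memory_db = base / "memory.db"
learning_db = base / "learning.db"
for db in (memory_db, learning_db):
    sqlite3.connect(db).close()  # create an empty database file

# The moral equivalent of `slm learning reset`: remove only learning.db.
learning_db.unlink()
assert memory_db.exists() and not learning_db.exists()
```

Data portability follows from the same property: each store is a single self-contained SQLite file that can be copied or deleted independently.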
package/bin/slm CHANGED
@@ -121,13 +121,178 @@ case "$1" in
  context)
  python3 "$SLM_DIR/pattern_learner.py" context "$@"
  ;;
+ correct)
+ # slm patterns correct <id> <new_value>
+ pattern_id="$1"
+ new_value="$2"
+ if [ -z "$pattern_id" ] || [ -z "$new_value" ]; then
+ echo "Usage: slm patterns correct <pattern_id> <new_value>"
+ exit 1
+ fi
+ python3 -c "
+ import sys
+ sys.path.insert(0, '${SLM_DIR}')
+ sys.path.insert(0, '$(dirname "${SLM_DIR}")/Documents/AGENTIC_Official/SuperLocalMemoryV2-repo/src')
+ try:
+     from learning import get_learning_db
+     ldb = get_learning_db()
+     if ldb:
+         conn = ldb._get_connection()
+         cursor = conn.cursor()
+         cursor.execute('SELECT * FROM transferable_patterns WHERE id = ?', (int(sys.argv[1]),))
+         p = cursor.fetchone()
+         conn.close()
+         if p:
+             ldb.upsert_transferable_pattern(
+                 pattern_type=p['pattern_type'], key=p['key'],
+                 value=sys.argv[2], confidence=1.0,
+                 evidence_count=p['evidence_count']+1,
+                 profiles_seen=p['profiles_seen'],
+                 contradictions=[f\"Corrected from '{p['value']}' to '{sys.argv[2]}'\"],
+             )
+             print(f\"Pattern '{p['key']}' corrected: '{p['value']}' -> '{sys.argv[2]}'\")
+         else:
+             print(f'Pattern #{sys.argv[1]} not found')
+     else:
+         print('Learning database not available')
+ except ImportError:
+     print('Learning features not available')
+ except Exception as e:
+     print(f'Error: {e}')
+ " "$pattern_id" "$new_value"
+ ;;
  *)
- echo "Usage: slm patterns {update|list|context}"
+ echo "Usage: slm patterns {update|list|context|correct}"
  exit 1
  ;;
  esac
  ;;
 
+ useful)
+ shift
+ # Mark memory IDs as useful (v2.7 learning feedback)
+ if [ -z "$1" ]; then
+ echo "Usage: slm useful <memory_id> [memory_id...]"
+ echo " Mark recalled memories as useful to improve future ranking"
+ exit 1
+ fi
+ python3 -c "
+ import sys
+ sys.path.insert(0, '${SLM_DIR}')
+ sys.path.insert(0, '$(dirname \"${SLM_DIR}\")/Documents/AGENTIC_Official/SuperLocalMemoryV2-repo/src')
+ try:
+     from learning.feedback_collector import FeedbackCollector
+     fc = FeedbackCollector()
+     ids = [int(x) for x in sys.argv[1:]]
+     fc.record_cli_useful(ids, query='cli-feedback')
+     print(f'Marked {len(ids)} memor{\"y\" if len(ids)==1 else \"ies\"} as useful')
+ except ImportError:
+     print('Learning features not available. Install: pip3 install lightgbm scipy')
+ except Exception as e:
+     print(f'Error: {e}')
+ " "$@"
+ ;;
+
+ learning)
+ shift
+ subcommand="${1:-status}"
+ shift 2>/dev/null || true
+ case "$subcommand" in
+ status)
+ python3 -c "
+ import sys, json
+ sys.path.insert(0, '${SLM_DIR}')
+ sys.path.insert(0, '$(dirname \"${SLM_DIR}\")/Documents/AGENTIC_Official/SuperLocalMemoryV2-repo/src')
+ try:
+     from learning import get_status
+     status = get_status()
+     print('SuperLocalMemory v2.7 — Learning System Status')
+     print('=' * 50)
+     deps = status['dependencies']
+     print(f'LightGBM: {\"installed (\" + deps[\"lightgbm\"][\"version\"] + \")\" if deps[\"lightgbm\"][\"installed\"] else \"not installed\"}')
+     print(f'SciPy: {\"installed (\" + deps[\"scipy\"][\"version\"] + \")\" if deps[\"scipy\"][\"installed\"] else \"not installed\"}')
+     print(f'ML Ranking: {\"available\" if status[\"ml_ranking_available\"] else \"unavailable\"}')
+     print(f'Full Learning: {\"available\" if status[\"learning_available\"] else \"unavailable\"}')
+     if status.get('learning_db_stats'):
+         s = status['learning_db_stats']
+         print(f'')
+         print(f'Feedback signals: {s[\"feedback_count\"]}')
+         print(f'Unique queries: {s[\"unique_queries\"]}')
+         print(f'Patterns learned: {s[\"transferable_patterns\"]} ({s[\"high_confidence_patterns\"]} high confidence)')
+         print(f'Workflow patterns: {s[\"workflow_patterns\"]}')
+         print(f'Sources tracked: {s[\"tracked_sources\"]}')
+         print(f'Models trained: {s[\"models_trained\"]}')
+         print(f'Learning DB size: {s[\"db_size_kb\"]} KB')
+ except ImportError:
+     print('Learning features not available. Install: pip3 install lightgbm scipy')
+ except Exception as e:
+     print(f'Error: {e}')
+ "
+ ;;
+ retrain)
+ python3 -c "
+ import sys
+ sys.path.insert(0, '${SLM_DIR}')
+ sys.path.insert(0, '$(dirname \"${SLM_DIR}\")/Documents/AGENTIC_Official/SuperLocalMemoryV2-repo/src')
+ try:
+     from learning import get_adaptive_ranker
+     ranker = get_adaptive_ranker()
+     if ranker:
+         result = ranker.train(force=True)
+         if result:
+             print(f'Model retrained successfully')
+         else:
+             print('Insufficient data for training (need 200+ feedback signals)')
+     else:
+         print('Adaptive ranker not available')
+ except ImportError:
+     print('Learning features not available. Install: pip3 install lightgbm scipy')
+ except Exception as e:
+     print(f'Error: {e}')
+ "
+ ;;
+ reset)
+ python3 -c "
+ import sys
+ sys.path.insert(0, '${SLM_DIR}')
+ sys.path.insert(0, '$(dirname \"${SLM_DIR}\")/Documents/AGENTIC_Official/SuperLocalMemoryV2-repo/src')
+ try:
+     from learning import get_learning_db
+     ldb = get_learning_db()
+     if ldb:
+         ldb.reset()
+         print('All learning data reset. Memories in memory.db preserved.')
+     else:
+         print('Learning database not available')
+ except ImportError:
+     print('Learning features not available')
+ except Exception as e:
+     print(f'Error: {e}')
+ "
+ ;;
+ *)
+ echo "Usage: slm learning {status|retrain|reset}"
+ exit 1
+ ;;
+ esac
+ ;;
+
+ engagement)
+ python3 -c "
+ import sys
+ sys.path.insert(0, '${SLM_DIR}')
+ sys.path.insert(0, '$(dirname \"${SLM_DIR}\")/Documents/AGENTIC_Official/SuperLocalMemoryV2-repo/src')
+ try:
+     from learning.engagement_tracker import EngagementTracker
+     tracker = EngagementTracker()
+     print(tracker.format_for_cli())
+ except ImportError:
+     print('Learning features not available. Install: pip3 install lightgbm scipy')
+ except Exception as e:
+     print(f'Error: {e}')
+ "
+ ;;
+
  help|--help|-h)
  cat <<EOF
  SuperLocalMemory V2 - Universal CLI
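Every learning subcommand in the hunk above wraps its Python in the same guard: attempt the import, and degrade gracefully when the optional dependencies are missing. The pattern, extracted as a standalone sketch (function names are invented for illustration; the fallback message is the one the CLI prints):

```python
# Graceful-degradation pattern used by the learning subcommands: optional
# deps are probed at call time, never at startup. Function names here are
# illustrative, not the package's API.
def learning_available() -> bool:
    try:
        import lightgbm  # noqa: F401  (optional dependency)
        import scipy     # noqa: F401  (optional dependency)
        return True
    except ImportError:
        return False

def mark_useful(ids) -> str:
    if not learning_available():
        # Same fallback message the CLI prints when deps are missing
        return "Learning features not available. Install: pip3 install lightgbm scipy"
    return f"Marked {len(ids)} memor{'y' if len(ids) == 1 else 'ies'} as useful"
```

Probing at call time rather than import time is what lets the rest of `slm` keep working when `lightgbm` and `scipy` are absent.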
@@ -154,6 +319,14 @@ PATTERN LEARNING:
  slm patterns update Learn patterns from memories
  slm patterns list [threshold] List learned patterns
  slm patterns context [threshold] Get coding identity context
+ slm patterns correct <id> <value> Correct a learned pattern
+
+ LEARNING (v2.7):
+ slm useful <id> [id...] Mark memories as useful (ranking feedback)
+ slm learning status Learning system status
+ slm learning retrain Force model retrain
+ slm learning reset Delete all learning data (memories preserved)
+ slm engagement Show engagement metrics
 
  WEB DASHBOARD:
  slm ui [PORT] Start web dashboard (default port 8765)
@@ -187,13 +360,13 @@ DOCUMENTATION:
  README: https://github.com/varun369/SuperLocalMemoryV2
  Docs: ~/.claude-memory/docs/
 
- VERSION: 2.4.1
+ VERSION: 2.7.0
  EOF
  ;;
 
  version|--version|-v)
  echo "SuperLocalMemory V2 - Universal CLI"
- echo "Version: 2.4.1"
+ echo "Version: 2.7.0"
  echo "Database: $SLM_DIR/memory.db"
  ;;
 
@@ -213,6 +386,9 @@ EOF
  echo " profile - Manage profiles"
  echo " graph - Knowledge graph operations"
  echo " patterns - Pattern learning"
+ echo " useful - Mark memories as useful"
+ echo " learning - Learning system management"
+ echo " engagement - Show engagement metrics"
  echo " help - Show detailed help"
  echo ""
  echo "Run 'slm help' for more information"
@@ -0,0 +1,4 @@
+ #!/bin/bash
+ # SuperLocalMemory V2 - Learning System Status
+ SLM_DIR="${HOME}/.claude-memory"
+ exec "${SLM_DIR}/bin/slm" learning "${@:-status}"
@@ -0,0 +1,4 @@
+ #!/bin/bash
+ # SuperLocalMemory V2 - Show Learned Patterns
+ SLM_DIR="${HOME}/.claude-memory"
+ exec "${SLM_DIR}/bin/slm" patterns list "$@"
@@ -85,7 +85,7 @@ Simple Storage → Intelligent Organization → Adaptive Learning
 
  The MCP server provides native integration with modern AI tools:
 
- **8 Tools:**
+ **12 Tools:**
  - `remember(content, tags, project)` - Save memories
  - `recall(query, limit)` - Search memories
  - `list_recent(limit)` - Recent memories
@@ -94,12 +94,18 @@ The MCP server provides native integration with modern AI tools:
  - `switch_profile(name)` - Change profile
  - `search(query)` - Search memories (OpenAI MCP spec for ChatGPT Connectors)
  - `fetch(id)` - Fetch memory by ID (OpenAI MCP spec for ChatGPT Connectors)
+ - `backup_status()` - Auto-backup status
+ - `memory_used(memory_id, query, usefulness)` - Feedback for learning (v2.7)
+ - `get_learned_patterns(min_confidence, category)` - Retrieve learned patterns (v2.7)
+ - `correct_pattern(pattern_id, correct_value)` - Correct a learned pattern (v2.7)
 
- **4 Resources:**
- - `memory://recent` - Recent memories feed
+ **6 Resources:**
+ - `memory://recent/{limit}` - Recent memories feed
  - `memory://stats` - System statistics
  - `memory://graph/clusters` - Graph clusters
  - `memory://patterns/identity` - Learned patterns
+ - `memory://learning/status` - Learning system status (v2.7)
+ - `memory://engagement` - Engagement metrics (v2.7)
 
  **2 Prompts:**
  - `coding_identity_prompt` - User's coding style
@@ -221,16 +227,16 @@ SuperLocalMemory V2 uses a hierarchical, additive architecture where each layer
 
  ┌─────────────────────────────────────────────────────────────────┐
  │ Layer 6: MCP Integration (mcp_server.py) │
- │ 6 tools, 4 resources, 2 prompts │
+ │ 12 tools, 6 resources, 2 prompts │
  │ Auto-configured for Claude Desktop, Cursor, Windsurf, etc. │
  │ Output: Native AI tool access via Model Context Protocol │
  └─────────────────────────────────────────────────────────────────┘
 
  ┌─────────────────────────────────────────────────────────────────┐
  │ Layer 5: Skills Layer (skills/*) │
- │ 6 slash-command skills for Claude Code, Continue.dev, Cody │
+ │ 7 slash-command skills for Claude Code, Continue.dev, Cody │
  │ slm-remember, slm-recall, slm-status, slm-list-recent, │
- │ slm-build-graph, slm-switch-profile │
+ │ slm-build-graph, slm-switch-profile, slm-show-patterns │
  │ Output: Familiar /command interface across AI tools │
  └─────────────────────────────────────────────────────────────────┘
 
@@ -683,7 +683,7 @@ python3 ~/.claude-memory/mcp_server.py
  ```
  ============================================================
  SuperLocalMemory V2 - MCP Server
- Version: 2.4.1
+ Version: 2.7.0
  ============================================================
 
  Transport: stdio
@@ -692,10 +692,16 @@ Database: /Users/yourusername/.claude-memory/memory.db
  MCP Tools Available:
  - remember(content, tags, project, importance)
  - recall(query, limit, min_score)
+ - search(query) [ChatGPT Connector]
+ - fetch(id) [ChatGPT Connector]
  - list_recent(limit)
  - get_status()
  - build_graph()
  - switch_profile(name)
+ - backup_status() [Auto-Backup]
+ - memory_used(...) [v2.7 Learning]
+ - get_learned_patterns(...) [v2.7 Learning]
+ - correct_pattern(...) [v2.7 Learning]
 
  ...
  ```
@@ -731,7 +737,7 @@ In your IDE/app, check:
 
  ## Available MCP Tools
 
- Once configured, these 8 tools are available:
+ Once configured, these 12 tools are available:
 
  | Tool | Purpose | Example Usage |
  |------|---------|---------------|
@@ -743,10 +749,14 @@ Once configured, these 8 tools are available:
  | `switch_profile()` | Change profile | "Switch to work profile" |
  | `search()` | Search memories (OpenAI MCP spec) | Used by ChatGPT Connectors and Deep Research |
  | `fetch()` | Fetch memory by ID (OpenAI MCP spec) | Used by ChatGPT Connectors and Deep Research |
+ | `backup_status()` | Auto-backup status | "What's the backup status?" |
+ | `memory_used()` | Feedback for learning (v2.7) | Implicit — called when a recalled memory is used |
+ | `get_learned_patterns()` | Retrieve learned patterns (v2.7) | "What patterns have you learned about me?" |
+ | `correct_pattern()` | Correct a learned pattern (v2.7) | "I actually prefer Vue, not React" |
 
- **Note:** `search()` and `fetch()` are required by OpenAI's MCP specification for ChatGPT Connectors. They are available in all transports but primarily used by ChatGPT.
+ **Note:** `search()` and `fetch()` are required by OpenAI's MCP specification for ChatGPT Connectors. They are available in all transports but primarily used by ChatGPT. The 3 learning tools (`memory_used`, `get_learned_patterns`, `correct_pattern`) require v2.7's optional learning dependencies.
 
- Plus **2 MCP prompts** and **4 MCP resources** for advanced use.
+ Plus **2 MCP prompts** and **6 MCP resources** for advanced use.
 
  ---
 