superlocalmemory 2.3.6 → 2.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -16,6 +16,71 @@ SuperLocalMemory V2 - Intelligent local memory system for AI coding assistants.
 
  ---
 
+ ## [2.4.0] - 2026-02-11
+
+ **Release Type:** Profile System & Intelligence Release
+ **Backward Compatible:** Yes (additive schema changes only)
+
+ ### Added
+ - **Column-based memory profiles**: Single `memory.db` with `profile` column — all memories, clusters, patterns, and graph data are profile-scoped. Switch profiles from any IDE/CLI and it takes effect everywhere instantly via shared `profiles.json`
+ - **Auto-backup system** (`src/auto_backup.py`): SQLite backup API with configurable interval (daily/weekly), retention policy, and one-click backup from UI
+ - **MACLA confidence scorer**: Research-grounded Beta-Binomial Bayesian posterior (arXiv:2512.18950) replaces ad-hoc log2 formula. Pattern-specific priors: preference(1,4), style(1,5), terminology(2,3). Log-scaled competition prevents over-dilution from sparse signals
+ - **UI: Profile Management**: Create, switch, and delete profiles from the web dashboard. "+" button in navbar for quick creation, full management table in Settings tab
+ - **UI: Settings tab**: Auto-backup status, configuration (interval, max backups, enable toggle), backup history, profile management — all in one place
+ - **UI: Column sorting**: Click any column header in the Memories table to sort asc/desc
+ - **UI: Enhanced Patterns view**: DOM-based rendering with confidence bars, color coding, type icons
+ - **API: Profile isolation on all endpoints**: `/api/graph`, `/api/clusters`, `/api/patterns`, `/api/timeline` now filter by active profile (previously showed all profiles)
+ - **API: `get_active_profile()` helper**: Shared function in `ui_server.py` replaces 4 duplicate inline profile-reading blocks
+ - **API: Profile CRUD endpoints**: `POST /api/profiles/create`, `DELETE /api/profiles/{name}` with validation and safety (can't delete the default or active profile)
+
+ ### Fixed
+ - **Profile switching ValueError**: Rewrote from directory-based to column-based profiles — no more file copy errors on switch
+ - **Pattern learner schema validation**: Safe column addition with try/except for `profile` column on `identity_patterns` table
+ - **Graph engine schema validation**: Safe column check before profile-filtered queries
+ - **Research references**: PageIndex correctly attributed to VectifyAI (not Meta AI); removed fabricated xMemory/Stanford citation, replaced with MemoryBank (AAAI 2024) across wiki and website
+ - **Graph tooltip**: Shows project name or Memory ID instead of "Uncategorized" when category is null
+
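The "safe column addition" fix above amounts to an idempotent try/except around `ALTER TABLE`. A minimal sketch under stated assumptions — the function name and placement are illustrative, not the actual pattern_learner.py code:

```python
import sqlite3

def ensure_profile_column(db_path: str, table: str) -> None:
    """Add a 'profile' column to `table` if it is not already present.

    Illustrative sketch only; assumes `table` is a trusted identifier
    (it is interpolated into the SQL string).
    """
    conn = sqlite3.connect(db_path)
    try:
        # SQLite raises OperationalError ("duplicate column name") if the
        # column already exists, so try/except makes the migration idempotent.
        conn.execute(
            f"ALTER TABLE {table} ADD COLUMN profile TEXT DEFAULT 'default'"
        )
        conn.commit()
    except sqlite3.OperationalError:
        pass  # column already present; nothing to do
    finally:
        conn.close()
```

Running it twice is safe, which is the point of the fix: the second call hits the except branch and leaves the schema unchanged.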
+ ### Changed
+ - All 4 core layers (storage, tree, graph, patterns) are now profile-aware
+ - `memory_store_v2.py`: Every query filters by `WHERE profile = ?` from `_get_active_profile()`
+ - `graph_engine.py`: `build_graph()` and `get_stats()` scoped to active profile
+ - `pattern_learner.py`: Pattern learning and retrieval scoped to active profile
+ - `ui_server.py`: Refactored profile code into a shared helper, eliminating 4 duplicate blocks
+
+ ### Technical Details
+ - Schema: `ALTER TABLE memories ADD COLUMN profile TEXT DEFAULT 'default'`
+ - Schema: `ALTER TABLE identity_patterns ADD COLUMN profile TEXT DEFAULT 'default'`
+ - MACLA formula: `posterior = (alpha + evidence) / (alpha + beta + evidence + log2(total_memories))`
+ - Confidence range: 0.0 to 0.95 (capped), with recency and distribution bonuses
+ - Backup: Uses SQLite `backup()` API for safe concurrent backup
+ - 17 API endpoint tests, 5 core module tests — all passing
+
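The MACLA formula and priors above can be read together as one small scoring function. A sketch, assuming `evidence` counts supporting observations; the `max(..., 2)` guard against `log2(0)` is my addition, and the recency/distribution bonuses the changelog mentions are not modeled here:

```python
from math import log2

# Pattern-specific Beta priors (alpha, beta) from the changelog entry above
PRIORS = {"preference": (1, 4), "style": (1, 5), "terminology": (2, 3)}

def macla_confidence(pattern_type: str, evidence: int, total_memories: int) -> float:
    """Beta-Binomial posterior per the formula listed above, capped at 0.95."""
    alpha, beta = PRIORS[pattern_type]
    # log2-scaled competition: dampens the denominator so sparse stores
    # don't over-dilute confidence
    denom = alpha + beta + evidence + log2(max(total_memories, 2))
    posterior = (alpha + evidence) / denom
    return min(posterior, 0.95)  # hard cap from the changelog
```

With the `preference(1,4)` prior and 100 stored memories, 5 supporting observations land around 0.36 and 20 around 0.66, so confidence grows with evidence but never reaches 1.0.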
+ ---
+
+ ## [2.3.7] - 2026-02-09
+
+ ### Added
+ - **--full flag**: Show complete memory content without truncation in search/list/recent/cluster commands
+ - **Smart truncation**: Memories <5000 chars shown in full, ≥5000 chars truncated to 2000 chars (previously always truncated at 200 chars)
+ - **Help text**: Added --full flag documentation to CLI help output
+
+ ### Fixed
+ - **CLI bug**: Fixed `get` command error - `get_memory()` → `get_by_id()` method call
+ - **Content display**: Recall now shows full content for short/medium memories instead of always truncating at 200 chars
+ - **User experience**: Agents and users can now see complete memory content by default for most memories
+
+ ### Changed
+ - **Truncation logic**: 200 char limit → 2000 char preview for memories ≥5000 chars
+ - **Node.js wrappers**: memory-recall-skill.js and memory-list-skill.js updated to pass the --full flag through
+
+ ### Technical Details
+ - Added `format_content()` helper function in memory_store_v2.py (line 918)
+ - Updated search/list/recent/cluster commands to use smart truncation
+ - Backward compatible: same output structure, MCP/API calls unaffected
+ - All 74+ existing memories tested: short memories show full, long memories truncate intelligently
+
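The smart-truncation rule above can be sketched as a single helper. The real `format_content()` lives in memory_store_v2.py; this signature and the preview suffix are assumptions of mine, only the thresholds (5000-char cutoff, 2000-char preview) come from the changelog:

```python
def format_content(content: str, show_full: bool = False) -> str:
    """Return memory content per the 2.3.7 truncation rule:
    full text if under 5000 chars or --full was passed,
    otherwise a 2000-char preview."""
    if show_full or len(content) < 5000:
        return content
    # Long memory: keep a 2000-char preview and say how much was cut
    remaining = len(content) - 2000
    return content[:2000] + f"... [{remaining} more chars, use --full]"
```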
+ ---
+
  ## [2.3.5] - 2026-02-09
 
  ### Added
package/README.md CHANGED
@@ -119,6 +119,31 @@ superlocalmemoryv2:status
 
  **That's it.** No Docker. No API keys. No cloud accounts. No configuration.
 
+ ### Updating to Latest Version
+
+ **npm users:**
+ ```bash
+ # Update to latest version
+ npm update -g superlocalmemory
+
+ # Or force latest
+ npm install -g superlocalmemory@latest
+
+ # Install specific version
+ npm install -g superlocalmemory@2.3.7
+ ```
+
+ **Manual install users:**
+ ```bash
+ cd SuperLocalMemoryV2
+ git pull origin main
+ ./install.sh    # Mac/Linux
+ # or
+ .\install.ps1   # Windows
+ ```
+
+ **Your data is safe:** Updates preserve your database and all memories.
+
  ### Start the Visualization Dashboard
 
  ```bash
@@ -34,6 +34,7 @@ Options:
  • recent: Latest created first (default)
  • accessed: Most recently accessed
  • importance: Highest importance first
+ --full          Show complete content without truncation
 
  Examples:
  memory-list
@@ -42,17 +43,17 @@ Examples:
 
  memory-list --sort importance
 
- memory-list --limit 10 --sort accessed
+ memory-list --limit 10 --sort accessed --full
 
  Output Format:
- • ID, Content (truncated), Tags, Importance
+ • ID, Content (smart truncated), Tags, Importance
  • Creation timestamp
  • Access count and last accessed time
 
  Notes:
  • Default shows last 20 memories
- • Content is truncated to 100 chars for readability
- • Use ID with memory-recall to see full content
+ • Smart truncation: full content if <5000 chars, preview if ≥5000 chars
+ • Use --full flag to always show complete content
  • Sort by 'accessed' to find frequently used memories
  `);
  return;
@@ -61,6 +62,7 @@ Notes:
  // Parse options
  let limit = 20;
  let sortBy = 'recent';
+ let showFull = false;
 
  for (let i = 0; i < args.length; i++) {
  const arg = args[i];
@@ -83,6 +85,8 @@ Notes:
  return;
  }
  i++;
+ } else if (arg === '--full') {
+ showFull = true;
  }
  }
 
@@ -97,6 +101,11 @@ Notes:
  pythonArgs = ['list', limit.toString()];
  }
 
+ // Add --full flag if requested
+ if (showFull) {
+ pythonArgs.push('--full');
+ }
+
  try {
  const { stdout, stderr } = await execFileAsync('python3', [memoryScript, ...pythonArgs]);
 
@@ -31,13 +31,13 @@ async function memoryProfileSkill() {
  ║                                                          ║
  ╚══════════════════════════════════════════════════════════╝
 
- Profiles let you maintain separate memory databases for different contexts:
+ Profiles let you maintain separate memory contexts in ONE database:
  • Work vs Personal projects
  • Different clients or teams
- • Different AI personalities
  • Experimentation vs Production
 
- Each profile has isolated: memories, graph, patterns, archives
+ All profiles share one database. Switching is instant and safe.
+ No data copying, no data loss risk.
 
  Usage: memory-profile <command> [options]
 
@@ -157,18 +157,12 @@ After switching, restart Claude CLI for changes to take effect.
 
  console.log(`
  ╔══════════════════════════════════════════════════════════╗
- ║ Profile Switch Confirmation
+ ║ Profile Switch
  ╚══════════════════════════════════════════════════════════╝
 
- This will:
- ✓ Save current profile state
- ✓ Load profile "${profileName}"
- ✓ Update active profile marker
-
- After switching, you MUST restart Claude CLI for the new profile
- to take effect.
-
- Current memories will be preserved in the old profile.
+ This will switch the active profile to "${profileName}".
+ All profiles share one database — switching is instant and safe.
+ Your current memories are always preserved.
  `);
 
  const answer = await question('Proceed with profile switch? (yes/no): ');
@@ -181,11 +175,6 @@ Current memories will be preserved in the old profile.
  profileName
  ]);
  console.log(stdout);
- console.log(`
- ⚠️ IMPORTANT: Restart Claude CLI now for profile switch to complete.
-
- The new profile will not be active until you restart.
- `);
  } catch (error) {
  console.error('❌ Error:', error.message);
  if (error.stdout) console.log(error.stdout);
@@ -33,13 +33,14 @@ Arguments:
 
  Options:
  --limit <n>     Maximum results to return (default: 10)
+ --full          Show complete content without truncation
 
  Examples:
  memory-recall "authentication bug"
 
  memory-recall "API configuration" --limit 5
 
- memory-recall "security best practices"
+ memory-recall "security best practices" --full
 
  memory-recall "user preferences"
 
@@ -47,6 +48,8 @@ Output Format:
  • Ranked by relevance (TF-IDF cosine similarity)
  • Shows: ID, Content, Tags, Importance, Timestamp
  • Higher scores = better matches
+ • Smart truncation: full content if <5000 chars, preview if ≥5000 chars
+ • Use --full flag to always show complete content
 
  Notes:
  • Uses local TF-IDF search (no external APIs)
@@ -67,6 +70,8 @@ Notes:
  if (arg === '--limit' && i + 1 < args.length) {
  // Note: V1 store doesn't support --limit in search, will truncate output instead
  i++; // Skip but don't add to pythonArgs
+ } else if (arg === '--full') {
+ pythonArgs.push('--full');
  } else if (!arg.startsWith('--') && query === null) {
  query = arg;
  }
package/mcp_server.py CHANGED
@@ -332,7 +332,8 @@ async def switch_profile(name: str) -> dict:
  Switch to a different memory profile.
 
  Profiles allow you to maintain separate memory contexts
- (e.g., work, personal, client projects).
+ (e.g., work, personal, client projects). All profiles share
+ one database — switching is instant and safe (no data copying).
 
  Args:
      name: Profile name to switch to
@@ -345,25 +346,40 @@ async def switch_profile(name: str) -> dict:
      }
  """
  try:
-     # Profile switching logic (calls existing system)
-     profile_path = MEMORY_DIR / "profiles" / name
-
-     if not profile_path.exists():
+     # Import profile manager (uses column-based profiles)
+     sys.path.insert(0, str(MEMORY_DIR))
+     from importlib import import_module
+     # Use direct JSON config update for speed
+     import json
+     config_file = MEMORY_DIR / "profiles.json"
+
+     if config_file.exists():
+         with open(config_file, 'r') as f:
+             config = json.load(f)
+     else:
+         config = {'profiles': {'default': {'name': 'default', 'description': 'Default memory profile'}}, 'active_profile': 'default'}
+
+     if name not in config.get('profiles', {}):
+         available = ', '.join(config.get('profiles', {}).keys())
          return {
              "success": False,
-             "message": f"Profile '{name}' does not exist. Use list_profiles() to see available profiles."
+             "message": f"Profile '{name}' not found. Available: {available}"
          }
 
-     # Update current profile symlink
-     current_link = MEMORY_DIR / "current_profile"
-     if current_link.exists() or current_link.is_symlink():
-         current_link.unlink()
-     current_link.symlink_to(profile_path)
+     old_profile = config.get('active_profile', 'default')
+     config['active_profile'] = name
+
+     from datetime import datetime
+     config['profiles'][name]['last_used'] = datetime.now().isoformat()
+
+     with open(config_file, 'w') as f:
+         json.dump(config, f, indent=2)
 
      return {
          "success": True,
          "profile": name,
-         "message": f"Switched to profile '{name}'. Restart IDE to use new profile."
+         "previous_profile": old_profile,
+         "message": f"Switched to profile '{name}'. Memory operations now use this profile."
      }
 
  except Exception as e:
@@ -374,6 +390,51 @@ async def switch_profile(name: str) -> dict:
      }
 
 
+ @mcp.tool(annotations=ToolAnnotations(
+     readOnlyHint=True,
+     destructiveHint=False,
+     openWorldHint=False,
+ ))
+ async def backup_status() -> dict:
+     """
+     Get auto-backup system status for SuperLocalMemory.
+
+     Returns backup configuration, last backup time, next scheduled backup,
+     total backup count, and storage used. Useful for monitoring data safety.
+
+     Returns:
+         {
+             "enabled": bool,
+             "interval_display": str,
+             "last_backup": str or null,
+             "next_backup": str or null,
+             "backup_count": int,
+             "total_size_mb": float
+         }
+     """
+     try:
+         from auto_backup import AutoBackup
+         backup = AutoBackup()
+         status = backup.get_status()
+         return {
+             "success": True,
+             **status
+         }
+     except ImportError:
+         return {
+             "success": False,
+             "message": "Auto-backup module not installed. Update SuperLocalMemory to v2.4.0+.",
+             "enabled": False,
+             "backup_count": 0
+         }
+     except Exception as e:
+         return {
+             "success": False,
+             "error": str(e),
+             "message": "Failed to get backup status"
+         }
+
+
  # ============================================================================
  # CHATGPT CONNECTOR TOOLS (search + fetch — required by OpenAI MCP spec)
  # These two tools are required for ChatGPT Connectors and Deep Research.
@@ -673,6 +734,7 @@ if __name__ == "__main__":
      print(" - get_status()", file=sys.stderr)
      print(" - build_graph()", file=sys.stderr)
      print(" - switch_profile(name)", file=sys.stderr)
+     print(" - backup_status() [Auto-Backup]", file=sys.stderr)
      print("", file=sys.stderr)
      print("MCP Resources Available:", file=sys.stderr)
      print(" - memory://recent/{limit}", file=sys.stderr)
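The auto-backup feature reported by `backup_status()` builds on SQLite's online backup API, exposed in Python as `sqlite3.Connection.backup()`, which copies a live database page by page without blocking writers. A minimal sketch of a hot backup under stated assumptions — the paths, file naming, and function name are mine, not the actual `auto_backup.py` behavior:

```python
import sqlite3
from datetime import datetime
from pathlib import Path

def backup_database(db_path: str, backup_dir: str) -> Path:
    """Copy the live database to a timestamped file using SQLite's
    online backup API, which yields a consistent snapshot even while
    other connections are writing."""
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest_path = dest_dir / f"memory-{datetime.now():%Y%m%d-%H%M%S}.db"
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(dest_path)
    with dst:
        src.backup(dst)  # page-by-page copy managed by SQLite
    src.close()
    dst.close()
    return dest_path
```

A retention policy like the one the changelog describes would then just sort `dest_dir.glob("memory-*.db")` and delete the oldest files beyond the configured maximum.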
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "superlocalmemory",
-   "version": "2.3.6",
+   "version": "2.4.0",
    "description": "Your AI Finally Remembers You - Local-first intelligent memory system for AI assistants. Works with Claude, Cursor, Windsurf, VS Code/Copilot, Codex, and 16+ AI tools. 100% local, zero cloud dependencies.",
    "keywords": [
      "ai-memory",