superlocalmemory 2.3.5 → 2.3.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -16,6 +16,52 @@ SuperLocalMemory V2 - Intelligent local memory system for AI coding assistants.
 
  ---
 
+ ## [2.3.7] - 2026-02-09
+
+ ### Added
+ - **--full flag**: Show complete memory content without truncation in the search/list/recent/cluster commands
+ - **Smart truncation**: Memories <5000 chars are shown in full; memories ≥5000 chars are truncated to a 2000-char preview (previously always truncated at 200 chars)
+ - **Help text**: Added --full flag documentation to the CLI help output
+
+ ### Fixed
+ - **CLI bug**: Fixed `get` command error - `get_memory()` → `get_by_id()` method call
+ - **Content display**: Recall now shows full content for short/medium memories instead of always truncating at 200 chars
+ - **User experience**: Agents and users now see complete memory content by default for most memories
+
+ ### Changed
+ - **Truncation logic**: 200-char limit → 2000-char preview for memories ≥5000 chars
+ - **Node.js wrappers**: memory-recall-skill.js and memory-list-skill.js updated to pass the --full flag through
+
+ ### Technical Details
+ - Added `format_content()` helper function in memory_store_v2.py (line 918)
+ - Updated the search/list/recent/cluster commands to use smart truncation
+ - Backward compatible: same output structure; MCP/API calls unaffected
+ - All 74+ existing memories tested: short memories show full content, long memories truncate intelligently
+
+ ---
+
+ ## [2.3.5] - 2026-02-09
+
+ ### Added
+ - **ChatGPT Connector Support**: `search(query)` and `fetch(id)` MCP tools per the OpenAI spec
+ - **Streamable HTTP transport**: `slm serve --transport streamable-http` for ChatGPT 2026+
+ - **UI: Memory detail modal**: Click any memory row to see full content, tags, and metadata
+ - **UI: Dark mode toggle**: Sun/moon icon in the navbar, saved to localStorage, respects system preference
+ - **UI: Export buttons**: Export All (JSON/JSONL), Export Search Results, Export individual memory as Markdown
+ - **UI: Search score bars**: Color-coded relevance bars (red/yellow/green) in search results
+ - **UI: Animated stat counters**: Numbers animate up on page load with ease-out cubic easing
+ - **UI: Loading spinners and empty states**: Professional feedback across all tabs
+ - npm keywords: chatgpt, chatgpt-connector, openai, deep-research
+
+ ### Fixed
+ - **XSS vulnerability**: Replaced inline `onclick` handlers that injected JSON into HTML with safe event delegation
+ - **UI: Content preview**: Increased from 80 to 100 characters
+
+ ### Changed
+ - npm package now includes `ui/`, `ui_server.py`, `api_server.py`
+
+ ---
+
  ## [2.3.0] - 2026-02-08
 
  **Release Type:** Universal Integration Release
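
The smart truncation rule described in 2.3.7 can be sketched as a minimal standalone version of the `format_content()` helper this release adds to memory_store_v2.py (defaults taken from the changelog; this is an illustration, not the shipped module):

```python
def format_content(content: str, full: bool = False,
                   threshold: int = 5000, preview_len: int = 2000) -> str:
    """Return short memories in full; preview long ones (v2.3.7 behavior sketch)."""
    if full or len(content) < threshold:
        return content
    return content[:preview_len] + "..."

assert format_content("x" * 4999) == "x" * 4999                  # <5000 chars: full
assert format_content("y" * 5000) == "y" * 2000 + "..."          # >=5000: preview
assert format_content("y" * 5000, full=True) == "y" * 5000       # --full bypasses
```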
package/README.md CHANGED
@@ -268,7 +268,7 @@ slm recall "API design patterns"
  | **OpenCode** | ✅ MCP | Native MCP tools |
  | **Perplexity** | ✅ MCP | Native MCP tools |
  | **Antigravity** | ✅ MCP + Skills | Native MCP tools |
- | **ChatGPT** | ✅ HTTP Transport | `slm serve` + tunnel |
+ | **ChatGPT** | ✅ MCP Connector | `search()` + `fetch()` via HTTP tunnel |
  | **Aider** | ✅ Smart Wrapper | `aider-smart` with context |
  | **Any Terminal** | ✅ Universal CLI | `slm remember "content"` |
 
@@ -85,13 +85,15 @@ Simple Storage → Intelligent Organization → Adaptive Learning
 
  The MCP server provides native integration with modern AI tools:
 
- **6 Tools:**
+ **8 Tools:**
  - `remember(content, tags, project)` - Save memories
  - `recall(query, limit)` - Search memories
  - `list_recent(limit)` - Recent memories
  - `get_status()` - System status
  - `build_graph()` - Build knowledge graph
  - `switch_profile(name)` - Change profile
+ - `search(query)` - Search memories (OpenAI MCP spec for ChatGPT Connectors)
+ - `fetch(id)` - Fetch memory by ID (OpenAI MCP spec for ChatGPT Connectors)
 
  **4 Resources:**
  - `memory://recent` - Recent memories feed
@@ -106,6 +108,7 @@ The MCP server provides native integration with modern AI tools:
  **Transport Support:**
  - stdio (default) - For local IDE integration
  - HTTP - For remote/network access
+ - streamable-http - For ChatGPT 2026+ Connectors
 
  **Key Design:** Zero duplication - calls existing `memory_store_v2.py` functions
 
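
The zero-duplication design can be illustrated with a small sketch: the spec-required `search()`/`fetch()` tools are thin wrappers over the existing store rather than reimplementations. The store class here is a simplified stand-in, not the actual mcp_server.py code; only the `search`/`get_by_id` method names are taken from this diff:

```python
class InMemoryStore:
    """Hypothetical stand-in for the memory_store_v2 store."""
    def __init__(self):
        self._rows = {
            1: {"id": 1, "content": "Use JWT for API auth", "tags": ["auth"]},
            2: {"id": 2, "content": "Postgres chosen over SQLite", "tags": ["db"]},
        }

    def search(self, query, limit=5):
        q = query.lower()
        return [r for r in self._rows.values() if q in r["content"].lower()][:limit]

    def get_by_id(self, mem_id):
        return self._rows.get(mem_id)


# The ChatGPT-facing tools just delegate -- no duplicated search logic.
def search_tool(store, query):
    return {"results": store.search(query)}


def fetch_tool(store, mem_id):
    return store.get_by_id(mem_id)


store = InMemoryStore()
assert search_tool(store, "auth")["results"][0]["id"] == 1
assert fetch_tool(store, 2)["tags"] == ["db"]
```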
@@ -390,34 +390,75 @@ PYTHONPATH = "/Users/yourusername/.claude-memory"
 
  ---
 
- ### ChatGPT Desktop App
+ ### ChatGPT Desktop App / ChatGPT Connectors
 
- **Important:** ChatGPT requires HTTP transport, not stdio. You need to run a local HTTP server and expose it via a tunnel.
+ **Important:** ChatGPT requires HTTP transport, not stdio. You need to run a local HTTP server and expose it via a tunnel. As of v2.3.5, SuperLocalMemory includes the `search(query)` and `fetch(id)` MCP tools required by OpenAI's MCP spec for ChatGPT Connectors and Deep Research.
 
- **Steps:**
+ **Requirements:**
+ - ChatGPT Plus, Team, or Enterprise plan
+ - **Developer Mode** must be enabled in ChatGPT settings
+ - A tunnel tool: `cloudflared` (recommended, free) or `ngrok`
+ - Reference: https://platform.openai.com/docs/mcp
+
+ **Available Tools in ChatGPT:**
+
+ | Tool | Purpose |
+ |------|---------|
+ | `search(query)` | Search memories (required by OpenAI MCP spec) |
+ | `fetch(id)` | Fetch a specific memory by ID (required by OpenAI MCP spec) |
+ | `remember(content, tags, project)` | Save a new memory |
+ | `recall(query, limit)` | Search memories with full options |
+
+ **Step-by-Step Setup:**
 
  1. **Start the MCP HTTP server:**
     ```bash
-    slm serve --port 8001
-    # or: python3 ~/.claude-memory/mcp_server.py --transport http --port 8001
+    slm serve --port 8417
+    # or with streamable-http transport (ChatGPT 2026+):
+    slm serve --port 8417 --transport streamable-http
+    # or using Python directly:
+    python3 ~/.claude-memory/mcp_server.py --transport http --port 8417
     ```
 
- 2. **Expose via tunnel** (in another terminal):
+ 2. **Expose via cloudflared tunnel** (in another terminal):
     ```bash
-    ngrok http 8001
-    # or: cloudflared tunnel --url http://localhost:8001
-    ```
+    # Install cloudflared (if not installed)
+    # macOS: brew install cloudflared
+    # Linux: sudo apt install cloudflared
 
- 3. **Copy the HTTPS URL** from ngrok/cloudflared output
-
- 4. **Add to ChatGPT:**
-    - Open ChatGPT Desktop
-    - Go to **Settings → Apps & Connectors → Developer Mode**
-    - Add the HTTPS URL as an MCP endpoint
+    cloudflared tunnel --url http://localhost:8417
+    ```
+    Cloudflared will output a URL like `https://random-name.trycloudflare.com`
 
- 5. In a new chat, look for MCP tools in the tool selector
+    **Alternative: ngrok**
+    ```bash
+    ngrok http 8417
+    ```
 
- **Note:** 100% local — your MCP server runs on YOUR machine. The tunnel just makes it reachable by ChatGPT.
+ 3. **Copy the HTTPS URL** from the cloudflared/ngrok output
+
+ 4. **Add to ChatGPT as a Connector:**
+    - Open ChatGPT (desktop or web)
+    - Go to **Settings → Connectors** (or **Settings → Apps & Connectors → Developer Mode**)
+    - Click **"Add Connector"**
+    - Paste the HTTPS URL with the `/sse/` suffix:
+      ```
+      https://random-name.trycloudflare.com/sse/
+      ```
+    - Name it: `SuperLocalMemory`
+    - Click **Save**
+
+ 5. **Verify in a new chat:**
+    - Start a new conversation in ChatGPT
+    - Look for the SuperLocalMemory connector in the tool selector
+    - Try: "Search my memories for authentication decisions"
+    - ChatGPT will call `search()` and return your local memories
+
+ **Important Notes:**
+ - The `/sse/` suffix on the URL is **required** by ChatGPT's MCP implementation
+ - 100% local — your MCP server runs on YOUR machine. The tunnel just makes it reachable by ChatGPT. Your data is served on demand and never stored by OpenAI beyond the conversation.
+ - The tunnel URL changes each time you restart cloudflared (unless you set up a named tunnel)
+ - For persistent URLs, configure a cloudflared named tunnel: `cloudflared tunnel create slm`
 
  **Auto-configured by install.sh:** ❌ No (requires HTTP transport + tunnel)
 
@@ -690,7 +731,7 @@ In your IDE/app, check:
 
  ## Available MCP Tools
 
- Once configured, these 6 tools are available:
+ Once configured, these 8 tools are available:
 
  | Tool | Purpose | Example Usage |
  |------|---------|---------------|
@@ -700,6 +741,10 @@ Once configured, these 6 tools are available:
  | `get_status()` | System health | "How many memories do I have?" |
  | `build_graph()` | Build knowledge graph | "Build the knowledge graph" |
  | `switch_profile()` | Change profile | "Switch to work profile" |
+ | `search()` | Search memories (OpenAI MCP spec) | Used by ChatGPT Connectors and Deep Research |
+ | `fetch()` | Fetch memory by ID (OpenAI MCP spec) | Used by ChatGPT Connectors and Deep Research |
+
+ **Note:** `search()` and `fetch()` are required by OpenAI's MCP specification for ChatGPT Connectors. They are available in all transports but primarily used by ChatGPT.
 
  Plus **2 MCP prompts** and **4 MCP resources** for advanced use.
 
package/docs/UI-SERVER.md CHANGED
@@ -22,7 +22,15 @@ Web-based visualization interface for exploring the SuperLocalMemory knowledge g
  - **Cluster Analysis**: Visual breakdown of thematic clusters with top entities
  - **Pattern Viewer**: Display learned preferences and coding styles
  - **Timeline**: Chart showing memory creation over time
- - **Statistics Dashboard**: Real-time system metrics
+ - **Statistics Dashboard**: Real-time system metrics with animated stat counters (ease-out cubic on page load)
+ - **Memory Detail Modal**: Click any memory row to see full content, metadata, and tags in an overlay modal
+ - **Dark Mode Toggle**: Sun/moon icon in the navbar; preference saved to localStorage and respects system preference (`prefers-color-scheme`)
+ - **Export Functionality**:
+   - **Export All**: Download the entire memory database as JSON or JSONL
+   - **Export Search Results**: Export current search results to JSON
+   - **Export as Markdown**: Export an individual memory as a Markdown file
+ - **Search Score Bars**: Color-coded relevance bars in search results (red for low, yellow for medium, green for high relevance)
+ - **Loading Spinners and Empty States**: Professional loading feedback and informative empty states across all tabs
 
  ## Quick Start
 
@@ -242,9 +250,9 @@ This UI server is intended for **local development** only.
 
  - Add authentication for multi-user access
  - Implement real-time updates via WebSockets
- - Add export functionality (JSON, CSV)
  - Create memory editing interface
  - Build cluster visualization with hierarchical layout
+ - Add CSV export format alongside existing JSON/JSONL/Markdown exports
 
  ## Support
 
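
The planned CSV export could build on the existing JSON export; a minimal sketch, assuming memory records shaped like the ones shown elsewhere in this diff (`id`, `content`, `tags`, `created_at` — the exact exported field set is an assumption):

```python
import csv
import io

def memories_to_csv(memories):
    """Flatten exported memory records into CSV rows; tags joined with ';'."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "content", "tags", "created_at"])
    writer.writeheader()
    for m in memories:
        writer.writerow({
            "id": m.get("id"),
            "content": m.get("content", ""),
            "tags": ";".join(m.get("tags", [])),   # lists need flattening for CSV
            "created_at": m.get("created_at", ""),
        })
    return buf.getvalue()

sample = [{"id": 1, "content": "Use JWT", "tags": ["auth", "api"],
           "created_at": "2026-02-09"}]
out = memories_to_csv(sample)
assert out.splitlines()[0] == "id,content,tags,created_at"
assert "auth;api" in out
```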
@@ -231,6 +231,62 @@ aider-smart "Add authentication to the API"
  3. Passes context to Aider
  4. Aider gets relevant memories without you asking
 
+ ### ChatGPT (Connectors / Deep Research) ✅
+
+ **How It Works:** HTTP transport with `search()` and `fetch()` MCP tools per the OpenAI spec. Requires a tunnel to expose the local server to ChatGPT.
+
+ **Requirements:**
+ - ChatGPT Plus, Team, or Enterprise plan
+ - Developer Mode enabled in ChatGPT settings
+ - `cloudflared` (recommended) or `ngrok` for tunneling
+ - Reference: https://platform.openai.com/docs/mcp
+
+ **Setup:**
+
+ ```bash
+ # Terminal 1: Start MCP server
+ slm serve --port 8417
+
+ # Terminal 2: Start tunnel
+ cloudflared tunnel --url http://localhost:8417
+ ```
+
+ Then in ChatGPT:
+ 1. Go to **Settings → Connectors**
+ 2. Click **"Add Connector"**
+ 3. Paste the HTTPS URL from cloudflared with the `/sse/` suffix:
+    ```
+    https://random-name.trycloudflare.com/sse/
+    ```
+ 4. Name it `SuperLocalMemory` and save
+
+ **Available Tools in ChatGPT:**
+
+ | Tool | Purpose |
+ |------|---------|
+ | `search(query)` | Search memories (required by OpenAI MCP spec) |
+ | `fetch(id)` | Fetch a specific memory by ID (required by OpenAI MCP spec) |
+ | `remember(content, tags, project)` | Save a new memory |
+ | `recall(query, limit)` | Search memories with full options |
+
+ **Usage Examples:**
+ ```
+ User: "Search my memories for database decisions"
+ ChatGPT: [calls search("database decisions")]
+ → Returns matching memories from your local database
+
+ User: "What's memory #42 about?"
+ ChatGPT: [calls fetch(42)]
+ → Returns full content, tags, and metadata for memory 42
+ ```
+
+ **Notes:**
+ - 100% local — your data is served on demand and never stored beyond the conversation
+ - The tunnel URL changes on restart unless you configure a named cloudflared tunnel
+ - For streamable-http transport (ChatGPT 2026+): `slm serve --port 8417 --transport streamable-http`
+
+ ---
+
  ### Any Terminal / Script ✅
 
  **How It Works:** Universal CLI wrapper
@@ -0,0 +1,70 @@
+ #!/usr/bin/env node
+ /**
+  * SuperLocalMemory V2 - Session Start Context Hook
+  * Copyright (c) 2026 Varun Pratap Bhardwaj
+  * Licensed under MIT License
+  *
+  * Loads recent memories and learned patterns on Claude Code session start.
+  * Outputs context to stderr (Claude Code reads hook stderr as context).
+  * Fails gracefully — never blocks session start if DB is missing or errors occur.
+  */
+
+ const { execFile } = require('child_process');
+ const { promisify } = require('util');
+ const path = require('path');
+ const fs = require('fs');
+
+ const execFileAsync = promisify(execFile);
+
+ const MEMORY_DIR = path.join(process.env.HOME, '.claude-memory');
+ const DB_PATH = path.join(MEMORY_DIR, 'memory.db');
+ const MEMORY_SCRIPT = path.join(MEMORY_DIR, 'memory_store_v2.py');
+
+ async function loadSessionContext() {
+   // Fail gracefully if not installed
+   if (!fs.existsSync(DB_PATH)) {
+     return;
+   }
+
+   if (!fs.existsSync(MEMORY_SCRIPT)) {
+     return;
+   }
+
+   try {
+     // Get stats (memory_store_v2.py stats → JSON output)
+     const { stdout: statsOutput } = await execFileAsync('python3', [
+       MEMORY_SCRIPT, 'stats'
+     ], { timeout: 5000 });
+
+     // Get recent memories (memory_store_v2.py list <limit>)
+     const { stdout: recentOutput } = await execFileAsync('python3', [
+       MEMORY_SCRIPT, 'list', '5'
+     ], { timeout: 5000 });
+
+     // Build context output
+     let context = '';
+
+     if (statsOutput && statsOutput.trim()) {
+       try {
+         const stats = JSON.parse(statsOutput.trim());
+         const total = stats.total_memories || 0;
+         const clusters = stats.total_clusters || 0;
+         if (total > 0) {
+           context += 'SuperLocalMemory: ' + total + ' memories, ' + clusters + ' clusters loaded.\n';
+         }
+       } catch (e) {
+         // Stats output wasn't JSON — use first line as-is
+         context += 'SuperLocalMemory: ' + statsOutput.trim().split('\n')[0] + '\n';
+       }
+     }
+
+     if (context) {
+       process.stderr.write(context);
+     }
+   } catch (error) {
+     // Never fail — session start must not be blocked
+     // Silently ignore errors (timeout, missing python, etc.)
+   }
+ }
+
+ loadSessionContext();
@@ -34,6 +34,7 @@ Options:
      • recent: Latest created first (default)
      • accessed: Most recently accessed
      • importance: Highest importance first
+   --full          Show complete content without truncation
 
  Examples:
    memory-list
@@ -42,17 +43,17 @@ Examples:
    memory-list --sort importance
 
-   memory-list --limit 10 --sort accessed
+   memory-list --limit 10 --sort accessed --full
 
  Output Format:
-   • ID, Content (truncated), Tags, Importance
+   • ID, Content (smart truncated), Tags, Importance
    • Creation timestamp
    • Access count and last accessed time
 
  Notes:
    • Default shows last 20 memories
-   • Content is truncated to 100 chars for readability
-   • Use ID with memory-recall to see full content
+   • Smart truncation: full content if <5000 chars, preview if ≥5000 chars
+   • Use --full flag to always show complete content
    • Sort by 'accessed' to find frequently used memories
  `);
    return;
@@ -61,6 +62,7 @@ Notes:
  // Parse options
  let limit = 20;
  let sortBy = 'recent';
+ let showFull = false;
 
  for (let i = 0; i < args.length; i++) {
    const arg = args[i];
@@ -83,6 +85,8 @@ Notes:
        return;
      }
      i++;
+   } else if (arg === '--full') {
+     showFull = true;
    }
  }
 
@@ -97,6 +101,11 @@ Notes:
    pythonArgs = ['list', limit.toString()];
  }
 
+ // Add --full flag if requested
+ if (showFull) {
+   pythonArgs.push('--full');
+ }
+
  try {
    const { stdout, stderr } = await execFileAsync('python3', [memoryScript, ...pythonArgs]);
 
@@ -33,13 +33,14 @@ Arguments:
 
  Options:
    --limit <n>     Maximum results to return (default: 10)
+   --full          Show complete content without truncation
 
  Examples:
    memory-recall "authentication bug"
 
    memory-recall "API configuration" --limit 5
 
-   memory-recall "security best practices"
+   memory-recall "security best practices" --full
 
    memory-recall "user preferences"
 
@@ -47,6 +48,8 @@ Output Format:
    • Ranked by relevance (TF-IDF cosine similarity)
    • Shows: ID, Content, Tags, Importance, Timestamp
    • Higher scores = better matches
+   • Smart truncation: full content if <5000 chars, preview if ≥5000 chars
+   • Use --full flag to always show complete content
 
  Notes:
    • Uses local TF-IDF search (no external APIs)
@@ -67,6 +70,8 @@ Notes:
    if (arg === '--limit' && i + 1 < args.length) {
      // Note: V1 store doesn't support --limit in search, will truncate output instead
      i++; // Skip but don't add to pythonArgs
+   } else if (arg === '--full') {
+     pythonArgs.push('--full');
    } else if (!arg.startsWith('--') && query === null) {
      query = arg;
    }
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "superlocalmemory",
-   "version": "2.3.5",
+   "version": "2.3.7",
    "description": "Your AI Finally Remembers You - Local-first intelligent memory system for AI assistants. Works with Claude, Cursor, Windsurf, VS Code/Copilot, Codex, and 16+ AI tools. 100% local, zero cloud dependencies.",
    "keywords": [
      "ai-memory",
  "ai-memory",
@@ -915,6 +915,25 @@ class MemoryStoreV2:
          return ''.join(output)
 
 
+ def format_content(content: str, full: bool = False, threshold: int = 5000, preview_len: int = 2000) -> str:
+     """
+     Smart content formatting with optional truncation.
+
+     Args:
+         content: Content to format
+         full: If True, always show full content
+         threshold: Max length before truncation (default 5000)
+         preview_len: Preview length when truncating (default 2000)
+
+     Returns:
+         Formatted content string
+     """
+     if full or len(content) < threshold:
+         return content
+     else:
+         return f"{content[:preview_len]}..."
+
+
  # CLI interface (V1 compatible + V2 extensions)
  if __name__ == "__main__":
      import sys
@@ -925,16 +944,18 @@ if __name__ == "__main__":
          print("MemoryStore V2 CLI")
          print("\nV1 Compatible Commands:")
          print("  python memory_store_v2.py add <content> [--project <path>] [--tags tag1,tag2]")
-         print("  python memory_store_v2.py search <query>")
-         print("  python memory_store_v2.py list [limit]")
+         print("  python memory_store_v2.py search <query> [--full]")
+         print("  python memory_store_v2.py list [limit] [--full]")
          print("  python memory_store_v2.py get <id>")
-         print("  python memory_store_v2.py recent [limit]")
+         print("  python memory_store_v2.py recent [limit] [--full]")
          print("  python memory_store_v2.py stats")
          print("  python memory_store_v2.py context <query>")
         print("  python memory_store_v2.py delete <id>")
          print("\nV2 Extensions:")
          print("  python memory_store_v2.py tree [parent_id]")
-         print("  python memory_store_v2.py cluster <cluster_id>")
+         print("  python memory_store_v2.py cluster <cluster_id> [--full]")
+         print("\nOptions:")
+         print("  --full    Show complete content (default: smart truncation at 5000 chars)")
          sys.exit(0)
 
      command = sys.argv[1]
@@ -954,6 +975,7 @@ if __name__ == "__main__":
 
      elif command == "cluster" and len(sys.argv) >= 3:
          cluster_id = int(sys.argv[2])
+         show_full = '--full' in sys.argv
          results = store.get_by_cluster(cluster_id)
 
          if not results:
@@ -962,7 +984,7 @@ if __name__ == "__main__":
              print(f"Cluster {cluster_id} - {len(results)} memories:")
              for r in results:
                  print(f"\n[{r['id']}] Importance: {r['importance']}")
-                 print(f"  {r['content'][:200]}...")
+                 print(f"  {format_content(r['content'], full=show_full)}")
 
      elif command == "stats":
          stats = store.get_stats()
@@ -996,10 +1018,11 @@ if __name__ == "__main__":
      elif command == "search":
          if len(sys.argv) < 3:
              print("Error: Search query required")
-             print("Usage: python memory_store_v2.py search <query>")
+             print("Usage: python memory_store_v2.py search <query> [--full]")
              sys.exit(1)
 
          query = sys.argv[2]
+         show_full = '--full' in sys.argv
          results = store.search(query, limit=5)
 
          if not results:
@@ -1011,11 +1034,18 @@ if __name__ == "__main__":
                  print(f"Project: {r['project_name']}")
                  if r.get('tags'):
                      print(f"Tags: {', '.join(r['tags'])}")
-                 print(f"Content: {r['content'][:200]}...")
+                 print(f"Content: {format_content(r['content'], full=show_full)}")
                  print(f"Created: {r['created_at']}")
 
      elif command == "recent":
-         limit = int(sys.argv[2]) if len(sys.argv) > 2 else 10
+         show_full = '--full' in sys.argv
+         # Parse limit (skip --full flag)
+         limit = 10
+         for i, arg in enumerate(sys.argv[2:], start=2):
+             if arg != '--full' and arg.isdigit():
+                 limit = int(arg)
+                 break
+
          results = store.get_recent(limit)
 
          if not results:
@@ -1027,17 +1057,24 @@ if __name__ == "__main__":
                  print(f"Project: {r['project_name']}")
                  if r.get('tags'):
                      print(f"Tags: {', '.join(r['tags'])}")
-                 print(f"Content: {r['content'][:200]}...")
+                 print(f"Content: {format_content(r['content'], full=show_full)}")
 
      elif command == "list":
-         limit = int(sys.argv[2]) if len(sys.argv) > 2 else 10
+         show_full = '--full' in sys.argv
+         # Parse limit (skip --full flag)
+         limit = 10
+         for i, arg in enumerate(sys.argv[2:], start=2):
+             if arg != '--full' and arg.isdigit():
+                 limit = int(arg)
+                 break
+
          results = store.get_recent(limit)
 
          if not results:
             print("No memories found.")
          else:
              for r in results:
-                 print(f"[{r['id']}] {r['content'][:100]}...")
+                 print(f"[{r['id']}] {format_content(r['content'], full=show_full)}")
 
      elif command == "get":
          if len(sys.argv) < 3:
@@ -1046,7 +1083,7 @@ if __name__ == "__main__":
              sys.exit(1)
 
          mem_id = int(sys.argv[2])
-         memory = store.get_memory(mem_id)
+         memory = store.get_by_id(mem_id)
 
          if not memory:
              print(f"Memory {mem_id} not found.")
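
The `--full`-aware limit parsing added to the `recent` and `list` commands can be exercised in isolation. This is a minimal standalone mirror of that loop (the function name is hypothetical, for illustration), showing that the flag may appear before or after the numeric limit:

```python
def parse_limit_and_full(argv_tail, default=10):
    """Mirror of the CLI parsing: first numeric token is the limit,
    and '--full' may appear anywhere among the remaining arguments."""
    show_full = '--full' in argv_tail
    limit = default
    for arg in argv_tail:
        if arg != '--full' and arg.isdigit():
            limit = int(arg)
            break
    return limit, show_full

assert parse_limit_and_full(['5', '--full']) == (5, True)    # flag after limit
assert parse_limit_and_full(['--full', '20']) == (20, True)  # flag before limit
assert parse_limit_and_full([]) == (10, False)               # defaults
```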