claude-self-reflect 2.5.17 → 2.5.19

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -13,6 +13,82 @@ You are a resilient and comprehensive testing specialist for Claude Self-Reflect
  - MCP tools enable reflection and memory storage
  - System must handle sensitive API keys securely
 
+ ## Comprehensive Test Suite
+
+ ### Available Test Categories
+ The project now includes a comprehensive test suite in the `/tests/` directory:
+
+ 1. **MCP Tool Integration** (`test_mcp_tools_comprehensive.py`)
+    - All MCP tools with various parameters
+    - Edge cases and error handling
+    - Cross-project search validation
+
+ 2. **Memory Decay** (`test_memory_decay.py`)
+    - Decay calculations and half-life variations
+    - Score adjustments and ranking changes
+    - Performance impact measurements
+
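The decay behavior these tests cover amounts to exponential down-weighting with a configurable half-life (the README documents a 90-day default). A minimal sketch of that math — the function names are illustrative, not the project's API:

```python
def decay_factor(age_days: float, half_life_days: float = 90.0) -> float:
    """Exponential decay: a result loses half its weight every half-life."""
    return 0.5 ** (age_days / half_life_days)

def adjusted_score(similarity: float, age_days: float,
                   half_life_days: float = 90.0) -> float:
    """Down-weight a raw similarity score by conversation age."""
    return similarity * decay_factor(age_days, half_life_days)

# A 90-day-old result keeps half its weight; a fresh one is untouched.
print(round(adjusted_score(0.8, 90), 3))  # 0.4
print(round(adjusted_score(0.8, 0), 3))   # 0.8
```

Varying `half_life_days` is what the "half-life variations" cases exercise: a shorter half-life makes ranking favor recency more aggressively.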
+ 3. **Multi-Project Support** (`test_multi_project.py`)
+    - Project isolation and collection naming
+    - Cross-project search functionality
+    - Metadata storage and retrieval
+
+ 4. **Embedding Models** (`test_embedding_models.py`)
+    - FastEmbed vs Voyage AI switching
+    - Dimension compatibility (384 vs 1024)
+    - Model performance comparisons
+
+ 5. **Delta Metadata** (`test_delta_metadata.py`)
+    - Tool usage extraction
+    - File reference tracking
+    - Incremental updates without re-embedding
+
+ 6. **Performance & Load** (`test_performance_load.py`)
+    - Large conversation imports (>1000 chunks)
+    - Concurrent operations
+    - Memory and CPU monitoring
+
+ 7. **Data Integrity** (`test_data_integrity.py`)
+    - Duplicate detection
+    - Unicode handling
+    - Chunk ordering preservation
+
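Duplicate detection of the kind these integrity tests exercise is commonly done by fingerprinting normalized chunk text. A sketch under that assumption (not the project's actual implementation):

```python
import hashlib

def chunk_fingerprint(text: str) -> str:
    """Stable fingerprint for a chunk; normalization keeps trivial
    whitespace or case differences from defeating duplicate detection."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def dedupe(chunks: list[str]) -> list[str]:
    """Keep the first occurrence of each distinct chunk, preserving order."""
    seen: set[str] = set()
    unique = []
    for chunk in chunks:
        fp = chunk_fingerprint(chunk)
        if fp not in seen:
            seen.add(fp)
            unique.append(chunk)
    return unique

# Unicode survives intact; whitespace-only variants collapse; order is kept.
print(dedupe(["héllo  world", "héllo world", "goodbye"]))
```

Preserving insertion order in `dedupe` is what keeps this compatible with the chunk-ordering checks in the same suite.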
+ 8. **Recovery Scenarios** (`test_recovery_scenarios.py`)
+    - Partial import recovery
+    - Container restart resilience
+    - State file corruption handling
+
+ 9. **Security** (`test_security.py`)
+    - API key validation
+    - Input sanitization
+    - Path traversal prevention
+
+ ### Running the Test Suite
+ ```bash
+ # Run ALL tests
+ cd ~/projects/claude-self-reflect
+ python tests/run_all_tests.py
+
+ # Run specific categories
+ python tests/run_all_tests.py -c mcp_tools memory_decay multi_project
+
+ # Run with verbose output
+ python tests/run_all_tests.py -v
+
+ # List available test categories
+ python tests/run_all_tests.py --list
+
+ # Run individual test files
+ python tests/test_mcp_tools_comprehensive.py
+ python tests/test_memory_decay.py
+ python tests/test_multi_project.py
+ ```
+
+ ### Test Results Location
+ - JSON results: `tests/test_results.json`
+ - Contains timestamps, durations, pass/fail counts
+ - Useful for tracking test history
+
  ## Key Responsibilities
 
  1. **System State Detection**
@@ -13,6 +13,26 @@ You are a Docker orchestration specialist for the memento-stack project. You man
  - Services run on host network for local development
  - Production uses Railway deployment
 
+ ## CRITICAL GUARDRAILS (from v2.5.17 crisis)
+
+ ### Resource Limit Guidelines
+ ⚠️ **Memory limits must include baseline usage**
+ - Measure the baseline: `docker stats --no-stream`
+ - Add 200MB+ of headroom above the baseline
+ - Default: 600MB minimum (not 400MB)
+
+ ⚠️ **CPU monitoring in containers**
25
+ - Containers see all host CPUs but have cgroup limits
26
+ - 1437% CPU = ~90% of actual allocation
27
+ - Use cgroup-aware monitoring: `/sys/fs/cgroup/cpu/cpu.cfs_quota_us`
28
+
29
+ ### Pre-Deployment Checklist
30
+ ✅ Test with production data volumes (600+ files)
31
+ ✅ Verify STATE_FILE paths match between config and container
32
+ ✅ Check volume mounts are writable
33
+ ✅ Confirm memory/CPU limits are realistic
34
+ ✅ Test graceful shutdown handling
35
+
16
36
  ## Key Responsibilities
17
37
 
18
38
  1. **Service Management**
@@ -16,6 +16,21 @@ You are an import pipeline debugging expert for the memento-stack project. You s
  - Project name must be correctly extracted from the path for proper collection naming
  - Collections are named using an MD5 hash of the project name
 
+ ## CRITICAL GUARDRAILS (from v2.5.17 crisis)
+
+ ### Pre-Release Testing Checklist
+ ✅ **Test with actual Claude JSONL files** - Real ~/.claude/projects/*.jsonl files
+ ✅ **Verify processing metrics** - files_processed and chunks_created must be > 0
+ ✅ **Memory limits = baseline + headroom** - Measure actual usage first (typically a 400MB baseline)
+ ✅ **Run tests to completion** - Don't mark work as done without execution proof
+ ✅ **Handle production backlogs** - Test with 600+ file queues
+
+ ### Common Failure Patterns
+ 🚨 **State updates without progress** - high_water_mark changes but processed_files = 0
+ 🚨 **Memory limit blocking** - "Memory limit exceeded" on every file means the limit is too low
+ 🚨 **CPU misreporting** - 1437% CPU might be 90% of the container limit
+ 🚨 **Wrong file format** - Testing with .json when production uses .jsonl
+
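The first failure pattern — state advancing while nothing is processed — is cheap to guard against with an explicit invariant after each import cycle. A sketch with hypothetical field names:

```python
def import_made_progress(before: dict, after: dict) -> bool:
    """True only if a moved high_water_mark is backed by real work.

    Guards against the v2.5.17 failure mode where the state file's
    high_water_mark advanced while processed_files stayed at zero.
    Field names are illustrative, not the actual state-file schema.
    """
    mark_moved = after["high_water_mark"] > before["high_water_mark"]
    work_done = after["processed_files"] > before["processed_files"]
    # A moved mark without processed files is the bug, not progress.
    return (not mark_moved) or work_done

before = {"high_water_mark": 10, "processed_files": 100}
suspicious = {"high_water_mark": 25, "processed_files": 100}
healthy = {"high_water_mark": 25, "processed_files": 140}
print(import_made_progress(before, suspicious))  # False
print(import_made_progress(before, healthy))     # True
```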
  ## Key Responsibilities
 
  1. **JSONL Processing**
@@ -16,6 +16,51 @@ You are an MCP server development specialist for the memento-stack project. You
  - Supports both local (FastEmbed) and cloud (Voyage AI) embeddings
  - MCP determines the project from the working directory context
 
+ ## Available Test Suites
+
+ ### MCP-Specific Tests
+ 1. **Comprehensive MCP Tool Tests** (`tests/test_mcp_tools_comprehensive.py`)
+    - Tests all MCP tools: reflect_on_past, store_reflection, quick_search, search_summary
+    - Edge-case handling and error scenarios
+    - Parameter validation (limit, min_score, use_decay, response_format)
+    - Cross-project search with project="all"
+    - Run with: `python tests/test_mcp_tools_comprehensive.py`
+
+ 2. **MCP Search Tests** (`scripts/test-mcp-search.py`)
+    - Basic MCP search functionality
+    - Integration with the Qdrant backend
+    - Response parsing and formatting
+    - Run with: `python scripts/test-mcp-search.py`
+
+ 3. **MCP Robustness Tests** (`scripts/test-mcp-robustness.py`)
+    - Error recovery mechanisms
+    - Timeout handling
+    - Connection resilience
+    - Run with: `python scripts/test-mcp-robustness.py`
+
+ ### Running MCP Tests
+ ```bash
+ # Run all MCP tests
+ cd ~/projects/claude-self-reflect
+ python tests/run_all_tests.py -c mcp_tools mcp_search
+
+ # Test the MCP server directly
+ cd mcp-server && python test_server.py
+
+ # Verify MCP registration in Claude Code
+ claude mcp list | grep claude-self-reflect
+
+ # Test MCP tools from Python
+ python -c "from mcp_server.src.server import reflect_on_past; import asyncio; asyncio.run(reflect_on_past({'query': 'test', 'limit': 5}))"
+ ```
+
+ ### MCP Tool Parameters Reference
+ - **reflect_on_past**: query, limit, brief, min_score, project, use_decay, response_format, include_raw
+ - **store_reflection**: content, tags
+ - **quick_search**: query, min_score, project
+ - **search_by_file**: file_path, limit, project
+ - **search_by_concept**: concept, include_files, limit, project
+
64
  ## Key Responsibilities
20
65
 
21
66
  1. **MCP Server Development**
@@ -36,6 +36,66 @@ You are a Qdrant vector database specialist for the memento-stack project. Your
  - Analyze embedding quality
  - Compare different embedding models (Voyage vs OpenAI)
 
+ ## CRITICAL GUARDRAILS (from v2.5.17 crisis)
+
+ ### Testing Requirements
+ - **ALWAYS test with real JSONL files** - Claude uses .jsonl, not .json
+ - **Verify actual processing** - Check files_processed > 0, not just state updates
+ - **Test memory limits against the baseline** - The system uses a 400MB baseline, so set limits accordingly
+ - **Never mark tests complete without execution** - Run them and verify the output
+
+ ### Resource Management
+ - **Default memory to 600MB minimum** - 400MB is too conservative
+ - **Monitor baseline + headroom** - Measure actual usage before setting limits
+ - **Use cgroup-aware CPU monitoring** - Docker shows all host CPUs but enforces cgroup limits
+
+ ## Available Test Suites
+
+ ### Qdrant-Specific Tests
+ 1. **Multi-Project Support Tests** (`tests/test_multi_project.py`)
+    - Collection isolation verification
+    - Cross-project search functionality
+    - Collection naming consistency
+    - Project metadata storage
+    - Run with: `python tests/test_multi_project.py`
+
+ 2. **Data Integrity Tests** (`tests/test_data_integrity.py`)
+    - Duplicate detection
+    - Chunk ordering preservation
+    - Unicode and special-character handling
+    - Collection consistency checks
+
+ 3. **Performance Tests** (`tests/test_performance_load.py`)
+    - Large conversation imports (>1000 chunks)
+    - Concurrent search requests
+    - Memory usage patterns
+    - Collection size limits
+
+ ### How to Run Tests
+ ```bash
+ # Run all Qdrant-related tests
+ cd ~/projects/claude-self-reflect
+ python tests/run_all_tests.py -c multi_project data_integrity performance
+
+ # Check collection health
+ docker exec claude-reflection-qdrant curl -s http://localhost:6333/collections | jq
+
+ # Verify a specific collection
+ python -c "from qdrant_client import QdrantClient; c=QdrantClient('localhost', 6333); print(c.get_collection('conv_HASH_local'))"
+ ```
+
+ ### Common Issues & Solutions
+ - **Dimension mismatch**: Check the embedding model (384 dimensions for local, 1024 for Voyage)
+ - **Empty search results**: Verify the collection exists and has points
+ - **Slow searches**: Check collection size and optimize with filters
+ - **Collection not found**: Verify project name normalization and the MD5 hash
+
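The last two issues can be checked locally: collection names derive from an MD5 hash of the project name, and each embedding model implies a fixed vector size. A sketch — the lower/strip normalization here is an assumption for illustration, so verify it against the importer's actual logic:

```python
import hashlib

# Embedding model -> expected vector dimensions (384 local, 1024 Voyage).
EXPECTED_DIMS = {"fastembed": 384, "voyage": 1024}

def collection_name(project: str, suffix: str = "local") -> str:
    """Derive the Qdrant collection name from an MD5 hash of the project name.

    The strip/lower normalization is assumed, not confirmed; a mismatch
    here is exactly what produces "collection not found" errors.
    """
    digest = hashlib.md5(project.strip().lower().encode("utf-8")).hexdigest()
    return f"conv_{digest}_{suffix}"

def check_dimensions(model: str, actual_dims: int) -> bool:
    """Flag a dimension mismatch before it surfaces as empty searches."""
    return EXPECTED_DIMS.get(model) == actual_dims

name = collection_name("claude-self-reflect")
print(name.startswith("conv_") and name.endswith("_local"))  # True
print(check_dimensions("fastembed", 384))  # True
print(check_dimensions("voyage", 384))     # False
```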
+ ### Quality Gates
+ - **Follow the workflow**: implementation → review → test → docs → release
+ - **Use pre-releases for major changes** - Better to test than to break production
+ - **Document quantitative metrics** - "0 files processed" is clearer than "test failed"
+ - **Roll back immediately on failure** - Don't push through broken releases
+
  ## Essential Commands
 
  ### Collection Operations
@@ -14,11 +14,12 @@ RUN pip install --upgrade pip setuptools wheel
  # Install PyTorch CPU version for smaller size
  RUN pip install --no-cache-dir torch==2.3.0 --index-url https://download.pytorch.org/whl/cpu
 
- # Install other dependencies
+ # Install other dependencies with updated versions for security
+ # Note: fastembed 0.4.0 requires numpy<2, so using a compatible version
  RUN pip install --no-cache-dir \
  qdrant-client==1.15.0 \
- fastembed==0.2.7 \
- numpy==1.26.0 \
+ fastembed==0.4.0 \
+ numpy==1.26.4 \
  psutil==7.0.0 \
  tenacity==8.2.3 \
  python-dotenv==1.0.0 \
package/README.md CHANGED
@@ -6,10 +6,7 @@ Claude forgets everything. This fixes that.
 
  - [What You Get](#what-you-get)
  - [Requirements](#requirements)
- - [Quick Install](#quick-install)
- - [Local Mode (Default)](#local-mode-default---your-data-stays-private)
- - [Cloud Mode](#cloud-mode-better-search-accuracy)
- - [Uninstall Instructions](#uninstall-instructions)
+ - [Quick Install/Uninstall](#quick-installuninstall)
  - [The Magic](#the-magic)
  - [Before & After](#before--after)
  - [Real Examples](#real-examples-that-made-us-build-this)
@@ -18,10 +15,9 @@ Claude forgets everything. This fixes that.
  - [Using It](#using-it)
  - [Key Features](#key-features)
  - [Performance](#performance)
- - [V2.5.16 Critical Updates](#v2516-critical-updates)
  - [Configuration](#configuration)
  - [Technical Stack](#the-technical-stack)
- - [Problems?](#problems)
+ - [Problems](#problems)
  - [What's New](#whats-new)
  - [Advanced Topics](#advanced-topics)
  - [Contributors](#contributors)
@@ -30,12 +26,12 @@ Claude forgets everything. This fixes that.
 
  Ask Claude about past conversations. Get actual answers. **100% local by default** - your conversations never leave your machine. Cloud-enhanced search available when you need it.
 
- **✅ Proven at Scale**: Successfully indexed 682 conversation files with 100% reliability. No data loss, no corruption, just seamless conversation memory that works.
+ **Proven at Scale**: Successfully indexed 682 conversation files with 100% reliability. No data loss, no corruption, just seamless conversation memory that works.
 
  **Before**: "I don't have access to previous conversations"
  **After**:
  ```
- reflection-specialist(Search FastEmbed vs cloud embedding decision)
+ reflection-specialist(Search FastEmbed vs cloud embedding decision)
  ⎿ Done (3 tool uses · 8.2k tokens · 12.4s)
 
  "Found it! Yesterday we decided on FastEmbed for local mode - better privacy,
@@ -52,9 +48,11 @@ Your conversations become searchable. Your decisions stay remembered. Your conte
  - **Node.js** 16+ (for the setup wizard)
  - **Claude Desktop** app
 
- ## Quick Install
+ ## Quick Install/Uninstall
 
- ### Local Mode (Default - Your Data Stays Private)
+ ### Install
+
+ #### Local Mode (Default - Your Data Stays Private)
  ```bash
  # Install and run automatic setup
  npm install -g claude-self-reflect
@@ -69,7 +67,7 @@ claude-self-reflect setup
  # 🔒 Keep all data local - no API keys needed
  ```
 
- ### Cloud Mode (Better Search Accuracy)
+ #### Cloud Mode (Better Search Accuracy)
  ```bash
  # Step 1: Get your free Voyage AI key
  # Sign up at https://www.voyageai.com/ - it takes 30 seconds
@@ -122,7 +120,7 @@ Here's how your conversations get imported and prioritized:
  ![Import Architecture](docs/diagrams/import-architecture.png)
 
  **The system intelligently prioritizes your conversations:**
- - **🔥 HOT** (< 5 minutes): Switches to 2-second intervals for near real-time import
+ - **HOT** (< 5 minutes): Switches to 2-second intervals for near real-time import
  - **🌡️ WARM** (< 24 hours): Normal priority, processed every 60 seconds
  - **❄️ COLD** (> 24 hours): Batch processed, max 5 per cycle to prevent blocking
 
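The prioritization tiers above amount to bucketing files by age against the documented thresholds. A minimal sketch (the function name is illustrative):

```python
def import_tier(age_minutes: float) -> str:
    """Classify a conversation file by age into the documented import tiers."""
    if age_minutes < 5:
        return "HOT"   # polled every 2 seconds for near real-time import
    if age_minutes < 24 * 60:
        return "WARM"  # normal priority, processed every 60 seconds
    return "COLD"      # batch processed, max 5 per cycle

print(import_tier(2))            # HOT
print(import_tier(180))          # WARM
print(import_tier(3 * 24 * 60))  # COLD
```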
@@ -138,7 +136,7 @@ The reflection specialist automatically activates. No special commands needed.
 
  ## Key Features
 
- ### 🎯 Project-Scoped Search
+ ### Project-Scoped Search
  Searches are **project-aware by default**. Claude automatically searches within your current project:
 
  ```
@@ -162,40 +160,6 @@ Recent conversations matter more. Old ones fade. Like your brain, but reliable.
  - **Scale**: 100% indexing success rate across all conversation types
  - **V2 Migration**: 100% complete - all conversations use token-aware chunking
 
- ## V2.5.16 Critical Updates
-
- ### 🚨 CPU Performance Fix - RESOLVED
- **Issue**: Streaming importer was consuming **1437% CPU** causing system overload
- **Solution**: Complete rewrite with production-grade throttling and monitoring
- **Result**: CPU usage reduced to **<1%** (99.93% improvement)
-
- ### ✅ Production-Ready Streaming Importer
- - **Non-blocking CPU monitoring** with cgroup awareness
- - **Queue overflow protection** - data deferred, never dropped
- - **Atomic state persistence** with fsync for crash recovery
- - **Memory management** with 15% GC buffer and automatic cleanup
- - **Proper async signal handling** for clean shutdowns
-
- ### 🎯 100% V2 Token-Aware Chunking
- - **Complete Migration**: All collections now use optimized chunking
- - **Configuration**: 400 tokens/1600 chars with 75 token/300 char overlap
- - **Search Quality**: Improved semantic boundaries and context preservation
- - **Memory Efficiency**: Streaming processing prevents OOM during imports
-
- ### 📊 Performance Metrics (v2.5.16)
- | Metric | Before | After | Improvement |
- |--------|--------|-------|-------------|
- | CPU Usage | 1437% | <1% | 99.93% ↓ |
- | Memory | 8GB peak | 302MB | 96.2% ↓ |
- | Search Latency | Variable | 3.16ms avg | Consistent |
- | Test Success | Unstable | 21/25 passing | Reliable |
-
- ### 🔧 CLI Status Command Fix
- Fixed broken `--status` command in MCP server - now returns:
- - Collection counts and health
- - Real-time CPU and memory usage
- - Search performance metrics
- - Import processing status
 
  ## The Technical Stack
 
@@ -206,15 +170,41 @@ Fixed broken `--status` command in MCP server - now returns:
  - **MCP Server**: Python + FastMCP
  - **Search**: Semantic similarity with time decay
 
- ## Problems?
+ ## Problems
 
  - [Troubleshooting Guide](docs/troubleshooting.md)
  - [GitHub Issues](https://github.com/ramakay/claude-self-reflect/issues)
  - [Discussions](https://github.com/ramakay/claude-self-reflect/discussions)
 
+ ## Upgrading to v2.5.19
+
+ ### 🆕 New Feature: Metadata Enrichment
+ v2.5.19 adds searchable metadata to your conversations - concepts, files, and tools!
+
+ #### For Existing Users
+ ```bash
+ # Update to latest version
+ npm update -g claude-self-reflect
+
+ # Run setup - it will detect your existing installation
+ claude-self-reflect setup
+ # Choose "yes" when asked about metadata enrichment
+
+ # Or manually enrich metadata anytime:
+ docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py
+ ```
+
+ #### What You Get
+ - `search_by_concept("docker")` - Find conversations by topic
+ - `search_by_file("server.py")` - Find conversations that touched specific files
+ - Better search accuracy with metadata-based filtering
+
  ## What's New
 
- - **v2.5.16** - **CRITICAL PERFORMANCE UPDATE** - Fixed 1437% CPU overload, 100% V2 migration complete, production streaming importer
+ - **v2.5.19** - Metadata Enrichment! Search by concepts, files, and tools. [Full release notes](docs/releases/v2.5.19-RELEASE-NOTES.md)
+ - **v2.5.18** - Security dependency updates
+ - **v2.5.17** - Critical CPU fix and memory limit adjustment. [Full release notes](docs/releases/v2.5.17-release-notes.md)
+ - **v2.5.16** - (Pre-release only) Initial streaming importer with CPU throttling
  - **v2.5.15** - Critical bug fixes and collection creation improvements
  - **v2.5.14** - Async importer collection fix - All conversations now searchable
  - **v2.5.11** - Critical cloud mode fix - Environment variables now properly passed to MCP server
@@ -231,6 +221,22 @@ Fixed broken `--status` command in MCP server - now returns:
  - [Architecture details](docs/architecture-details.md)
  - [Contributing](CONTRIBUTING.md)
 
+ ### Uninstall
+
+ For complete uninstall instructions, see [docs/UNINSTALL.md](docs/UNINSTALL.md).
+
+ Quick uninstall:
+ ```bash
+ # Remove the MCP server
+ claude mcp remove claude-self-reflect
+
+ # Stop Docker containers
+ docker compose down
+
+ # Uninstall the npm package
+ npm uninstall -g claude-self-reflect
+ ```
+
  ## Contributors
 
  Special thanks to our contributors:
@@ -240,6 +246,4 @@ Special thanks to our contributors:
 
  ---
 
- Stop reading. Start installing. Your future self will thank you.
-
- MIT License. Built with ❤️ for the Claude community.
+ Built with ❤️ by [ramakay](https://github.com/ramakay) for the Claude community.
@@ -406,6 +406,54 @@ async function importConversations() {
  }
  }
 
+ async function enrichMetadata() {
+   console.log('\n🔍 Metadata Enrichment (NEW in v2.5.19!)...');
+   console.log('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
+   console.log('This feature enhances your conversations with searchable metadata:');
+   console.log('  • Concepts: High-level topics (docker, security, testing, etc.)');
+   console.log('  • Files: Track which files were analyzed or edited');
+   console.log('  • Tools: Record which Claude tools were used');
+   console.log('\nEnables powerful searches like:');
+   console.log('  • search_by_concept("docker")');
+   console.log('  • search_by_file("server.py")');
+
+   const enrichChoice = await question('\nEnrich past conversations with metadata? (recommended) (y/n): ');
+
+   if (enrichChoice.toLowerCase() === 'y') {
+     console.log('\n⏳ Starting metadata enrichment (safe mode)...');
+     console.log('  • Processing last 30 days of conversations');
+     console.log('  • Using conservative rate limiting');
+     console.log('  • This may take 5-10 minutes\n');
+
+     try {
+       // Run the safe delta update script
+       safeExec('docker', [
+         'compose', 'run', '--rm',
+         '-e', 'DAYS_TO_UPDATE=30',
+         '-e', 'BATCH_SIZE=2',
+         '-e', 'RATE_LIMIT_DELAY=0.5',
+         '-e', 'MAX_CONCURRENT_UPDATES=2',
+         'importer',
+         'python', '/app/scripts/delta-metadata-update-safe.py'
+       ], {
+         cwd: projectRoot,
+         stdio: 'inherit'
+       });
+
+       console.log('\n✅ Metadata enrichment completed successfully!');
+       console.log('   Your conversations now have searchable concepts and file tracking.');
+     } catch (error) {
+       console.log('\n⚠️ Metadata enrichment had some issues but continuing setup');
+       console.log('   You can retry later with:');
+       console.log('   docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py');
+     }
+   } else {
+     console.log('\n📝 Skipping metadata enrichment.');
+     console.log('   You can run it later with:');
+     console.log('   docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py');
+   }
+ }
+
  async function showFinalInstructions() {
  console.log('\n✅ Setup complete!');
 
@@ -419,6 +467,7 @@ async function showFinalInstructions() {
  console.log(' • Check status: docker compose ps');
  console.log(' • View logs: docker compose logs -f');
  console.log(' • Import conversations: docker compose run --rm importer');
+ console.log(' • Enrich metadata: docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py');
  console.log(' • Start watcher: docker compose --profile watch up -d');
  console.log(' • Stop all: docker compose down');
 
@@ -450,13 +499,25 @@ async function checkExistingInstallation() {
  console.log(' • 🔍 Mode: ' + (localMode ? 'Local embeddings (privacy mode)' : 'Cloud embeddings (Voyage AI)'));
  console.log(' • ⚡ Memory decay: Enabled (90-day half-life)');
 
+ // Offer metadata enrichment for v2.5.19
+ console.log('\n🆕 NEW in v2.5.19: Metadata Enrichment!');
+ console.log('   Enhance your conversations with searchable concepts and file tracking.');
+
+ const upgradeChoice = await question('\nWould you like to enrich your conversations with metadata? (y/n): ');
+
+ if (upgradeChoice.toLowerCase() === 'y') {
+   await enrichMetadata();
+   console.log('\n✅ Upgrade complete! Your conversations now have enhanced search capabilities.');
+ }
+
  console.log('\n📋 Quick Commands:');
  console.log(' • View status: docker compose ps');
  console.log(' • View logs: docker compose logs -f');
+ console.log(' • Enrich metadata: docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py');
  console.log(' • Restart: docker compose restart');
  console.log(' • Stop: docker compose down');
 
- console.log('\n💡 To re-run setup, first stop services with: docker compose down');
+ console.log('\n💡 To re-run full setup, first stop services with: docker compose down');
  return true;
  }
  }
@@ -504,6 +565,9 @@ async function main() {
  // Import conversations
  await importConversations();
 
+ // Enrich metadata (new in v2.5.19)
+ await enrichMetadata();
+
  // Show final instructions
  await showFinalInstructions();
 
@@ -1,6 +1,6 @@
  [project]
  name = "claude-self-reflect-mcp"
- version = "2.5.17"
+ version = "2.5.19"
  description = "MCP server for Claude self-reflection with memory decay"
  # readme = "README.md"
  requires-python = ">=3.10"