claude-self-reflect 2.5.18 → 2.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -13,6 +13,82 @@ You are a resilient and comprehensive testing specialist for Claude Self-Reflect
  - MCP tools enable reflection and memory storage
  - System must handle sensitive API keys securely
 
+ ## Comprehensive Test Suite
+
+ ### Available Test Categories
+ The project now includes a comprehensive test suite in the `/tests/` directory:
+
+ 1. **MCP Tool Integration** (`test_mcp_tools_comprehensive.py`)
+    - All MCP tools with various parameters
+    - Edge cases and error handling
+    - Cross-project search validation
+
+ 2. **Memory Decay** (`test_memory_decay.py`)
+    - Decay calculations and half-life variations
+    - Score adjustments and ranking changes
+    - Performance impact measurements
+
+ 3. **Multi-Project Support** (`test_multi_project.py`)
+    - Project isolation and collection naming
+    - Cross-project search functionality
+    - Metadata storage and retrieval
+
+ 4. **Embedding Models** (`test_embedding_models.py`)
+    - FastEmbed vs Voyage AI switching
+    - Dimension compatibility (384 vs 1024)
+    - Model performance comparisons
+
+ 5. **Delta Metadata** (`test_delta_metadata.py`)
+    - Tool usage extraction
+    - File reference tracking
+    - Incremental updates without re-embedding
+
+ 6. **Performance & Load** (`test_performance_load.py`)
+    - Large conversation imports (>1000 chunks)
+    - Concurrent operations
+    - Memory and CPU monitoring
+
+ 7. **Data Integrity** (`test_data_integrity.py`)
+    - Duplicate detection
+    - Unicode handling
+    - Chunk ordering preservation
+
+ 8. **Recovery Scenarios** (`test_recovery_scenarios.py`)
+    - Partial import recovery
+    - Container restart resilience
+    - State file corruption handling
+
+ 9. **Security** (`test_security.py`)
+    - API key validation
+    - Input sanitization
+    - Path traversal prevention
+
+ ### Running the Test Suite
+ ```bash
+ # Run ALL tests
+ cd ~/projects/claude-self-reflect
+ python tests/run_all_tests.py
+
+ # Run specific categories
+ python tests/run_all_tests.py -c mcp_tools memory_decay multi_project
+
+ # Run with verbose output
+ python tests/run_all_tests.py -v
+
+ # List available test categories
+ python tests/run_all_tests.py --list
+
+ # Run individual test files
+ python tests/test_mcp_tools_comprehensive.py
+ python tests/test_memory_decay.py
+ python tests/test_multi_project.py
+ ```
+
+ ### Test Results Location
+ - JSON results: `tests/test_results.json`
+ - Contains timestamps, durations, pass/fail counts
+ - Useful for tracking test history
+
  ## Key Responsibilities
 
  1. **System State Detection**
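The `tests/test_results.json` file mentioned above can be consumed programmatically to track test history. A minimal sketch — the field names (`passed`, `failed`) are assumptions for illustration; inspect your own results file for the exact schema:

```python
import json
from pathlib import Path


def summarize_results(path="tests/test_results.json"):
    """Return (passed, failed) totals from the test runner's JSON output.

    Assumes each run record carries 'passed'/'failed' counts; the real
    schema may differ -- adjust the keys after inspecting the file.
    """
    data = json.loads(Path(path).read_text())
    runs = data if isinstance(data, list) else [data]
    passed = sum(r.get("passed", 0) for r in runs)
    failed = sum(r.get("failed", 0) for r in runs)
    return passed, failed
```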
@@ -16,6 +16,51 @@ You are an MCP server development specialist for the memento-stack project. You
  - Supports both local (FastEmbed) and cloud (Voyage AI) embeddings
  - MCP determines project from working directory context
 
+ ## Available Test Suites
+
+ ### MCP-Specific Tests
+ 1. **Comprehensive MCP Tool Tests** (`tests/test_mcp_tools_comprehensive.py`)
+    - Tests all MCP tools: reflect_on_past, store_reflection, quick_search, search_summary
+    - Edge case handling and error scenarios
+    - Parameter validation (limit, min_score, use_decay, response_format)
+    - Cross-project search with project="all"
+    - Run with: `python tests/test_mcp_tools_comprehensive.py`
+
+ 2. **MCP Search Tests** (`scripts/test-mcp-search.py`)
+    - Basic MCP search functionality
+    - Integration with Qdrant backend
+    - Response parsing and formatting
+    - Run with: `python scripts/test-mcp-search.py`
+
+ 3. **MCP Robustness Tests** (`scripts/test-mcp-robustness.py`)
+    - Error recovery mechanisms
+    - Timeout handling
+    - Connection resilience
+    - Run with: `python scripts/test-mcp-robustness.py`
+
+ ### Running MCP Tests
+ ```bash
+ # Run all MCP tests
+ cd ~/projects/claude-self-reflect
+ python tests/run_all_tests.py -c mcp_tools mcp_search
+
+ # Test MCP server directly
+ cd mcp-server && python test_server.py
+
+ # Verify MCP registration in Claude Code
+ claude mcp list | grep claude-self-reflect
+
+ # Test MCP tools from Python
+ python -c "from mcp_server.src.server import reflect_on_past; import asyncio; asyncio.run(reflect_on_past({'query': 'test', 'limit': 5}))"
+ ```
+
+ ### MCP Tool Parameters Reference
+ - **reflect_on_past**: query, limit, brief, min_score, project, use_decay, response_format, include_raw
+ - **store_reflection**: content, tags
+ - **quick_search**: query, min_score, project
+ - **search_by_file**: file_path, limit, project
+ - **search_by_concept**: concept, include_files, limit, project
+
  ## Key Responsibilities
 
  1. **MCP Server Development**
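The parameter reference above can double as a client-side guard when scripting against the tools. A hypothetical helper — the allowed-parameter sets are copied from the list, but the validator itself is not part of the package:

```python
# Allowed parameters per MCP tool, per the reference list above.
TOOL_PARAMS = {
    "reflect_on_past": {"query", "limit", "brief", "min_score", "project",
                        "use_decay", "response_format", "include_raw"},
    "store_reflection": {"content", "tags"},
    "quick_search": {"query", "min_score", "project"},
    "search_by_file": {"file_path", "limit", "project"},
    "search_by_concept": {"concept", "include_files", "limit", "project"},
}


def validate_call(tool: str, args: dict) -> bool:
    """Reject calls that pass parameters a tool does not accept."""
    allowed = TOOL_PARAMS.get(tool)
    if allowed is None:
        raise ValueError(f"unknown tool: {tool}")
    extra = set(args) - allowed
    if extra:
        raise ValueError(f"{tool} does not accept: {sorted(extra)}")
    return True
```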
@@ -49,6 +49,47 @@ You are a Qdrant vector database specialist for the memento-stack project. Your
  - **Monitor baseline + headroom** - Measure actual usage before setting limits
  - **Use cgroup-aware CPU monitoring** - Docker shows all CPUs but has limits
 
+ ## Available Test Suites
+
+ ### Qdrant-Specific Tests
+ 1. **Multi-Project Support Tests** (`tests/test_multi_project.py`)
+    - Collection isolation verification
+    - Cross-project search functionality
+    - Collection naming consistency
+    - Project metadata storage
+    - Run with: `python tests/test_multi_project.py`
+
+ 2. **Data Integrity Tests** (`tests/test_data_integrity.py`)
+    - Duplicate detection
+    - Chunk ordering preservation
+    - Unicode and special character handling
+    - Collection consistency checks
+
+ 3. **Performance Tests** (`tests/test_performance_load.py`)
+    - Large conversation imports (>1000 chunks)
+    - Concurrent search requests
+    - Memory usage patterns
+    - Collection size limits
+
+ ### How to Run Tests
+ ```bash
+ # Run all Qdrant-related tests
+ cd ~/projects/claude-self-reflect
+ python tests/run_all_tests.py -c multi_project data_integrity performance
+
+ # Check collection health
+ docker exec claude-reflection-qdrant curl -s http://localhost:6333/collections | jq
+
+ # Verify specific collection
+ python -c "from qdrant_client import QdrantClient; c=QdrantClient(host='localhost', port=6333); print(c.get_collection('conv_HASH_local'))"
+ ```
+
+ ### Common Issues & Solutions
+ - **Dimension mismatch**: Check embedding model (384 for local, 1024 for voyage)
+ - **Empty search results**: Verify collection exists and has points
+ - **Slow searches**: Check collection size and optimize with filters
+ - **Collection not found**: Verify project name normalization and MD5 hash
+
  ### Quality Gates
  - **Follow the workflow**: implementation → review → test → docs → release
  - **Use pre-releases for major changes** - Better to test than break production
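The "Collection not found" advice above hinges on how collection names are derived. A sketch mirroring the hashing visible elsewhere in this diff (MD5 of the project name, first 8 hex chars, `conv_` prefix); the `_local`/`_voyage` suffix convention and the omitted `normalize_project_name()` step are simplifying assumptions:

```python
import hashlib


def collection_name(project: str, embedding: str = "local") -> str:
    """Derive a Qdrant collection name for a project.

    Mirrors the server's hashing (MD5, first 8 hex chars, 'conv_' prefix).
    The real code normalizes the project name first; this sketch hashes
    the raw string for illustration only.
    """
    project_hash = hashlib.md5(project.encode()).hexdigest()[:8]
    return f"conv_{project_hash}_{embedding}"
```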
package/README.md CHANGED
@@ -1,6 +1,34 @@
  # Claude Self-Reflect
 
- Claude forgets everything. This fixes that.
+ <div align="center">
+
+ [![npm version](https://badge.fury.io/js/claude-self-reflect.svg)](https://www.npmjs.com/package/claude-self-reflect)
+ [![npm downloads](https://img.shields.io/npm/dm/claude-self-reflect.svg)](https://www.npmjs.com/package/claude-self-reflect)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![GitHub CI](https://github.com/ramakay/claude-self-reflect/actions/workflows/ci.yml/badge.svg)](https://github.com/ramakay/claude-self-reflect/actions/workflows/ci.yml)
+
+ [![Claude Code](https://img.shields.io/badge/Claude%20Code-Compatible-6B4FBB)](https://github.com/anthropics/claude-code)
+ [![MCP Protocol](https://img.shields.io/badge/MCP-Enabled-FF6B6B)](https://modelcontextprotocol.io/)
+ [![Docker](https://img.shields.io/badge/Docker-Ready-2496ED?logo=docker&logoColor=white)](https://www.docker.com/)
+ [![Local First](https://img.shields.io/badge/Local%20First-Privacy-4A90E2)](https://github.com/ramakay/claude-self-reflect)
+
+ [![GitHub stars](https://img.shields.io/github/stars/ramakay/claude-self-reflect.svg?style=social)](https://github.com/ramakay/claude-self-reflect/stargazers)
+ [![GitHub issues](https://img.shields.io/github/issues/ramakay/claude-self-reflect.svg)](https://github.com/ramakay/claude-self-reflect/issues)
+ [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/ramakay/claude-self-reflect/pulls)
+
+ </div>
+
+ **Claude forgets everything. This fixes that.**
+
+ Give Claude perfect memory of all your conversations. Search past discussions instantly. Never lose context again.
+
+ **100% Local by Default** - Your conversations never leave your machine. No cloud services required, no API keys needed, complete privacy out of the box.
+
+ **Blazing Fast Search** - Semantic search across thousands of conversations in milliseconds. Find that discussion about database schemas from three weeks ago in seconds.
+
+ **Zero Configuration** - Works immediately after installation. Smart auto-detection handles everything. No manual setup, no environment variables, just install and use.
+
+ **Production Ready** - Battle-tested with 600+ conversations across 24 projects. Handles mixed embedding types automatically. Scales from personal use to team deployments.
 
  ## Table of Contents
 
@@ -176,8 +204,33 @@ Recent conversations matter more. Old ones fade. Like your brain, but reliable.
  - [GitHub Issues](https://github.com/ramakay/claude-self-reflect/issues)
  - [Discussions](https://github.com/ramakay/claude-self-reflect/discussions)
 
+ ## Upgrading to v2.5.19
+
+ ### 🆕 New Feature: Metadata Enrichment
+ v2.5.19 adds searchable metadata to your conversations - concepts, files, and tools!
+
+ #### For Existing Users
+ ```bash
+ # Update to latest version
+ npm update -g claude-self-reflect
+
+ # Run setup - it will detect your existing installation
+ claude-self-reflect setup
+ # Choose "yes" when asked about metadata enrichment
+
+ # Or manually enrich metadata anytime:
+ docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py
+ ```
+
+ #### What You Get
+ - `search_by_concept("docker")` - Find conversations by topic
+ - `search_by_file("server.py")` - Find conversations that touched specific files
+ - Better search accuracy with metadata-based filtering
+
  ## What's New
 
+ - **v2.5.19** - Metadata Enrichment! Search by concepts, files, and tools. [Full release notes](docs/releases/v2.5.19-RELEASE-NOTES.md)
+ - **v2.5.18** - Security dependency updates
  - **v2.5.17** - Critical CPU fix and memory limit adjustment. [Full release notes](docs/releases/v2.5.17-release-notes.md)
  - **v2.5.16** - (Pre-release only) Initial streaming importer with CPU throttling
  - **v2.5.15** - Critical bug fixes and collection creation improvements
@@ -406,6 +406,54 @@ async function importConversations() {
    }
  }
 
+ async function enrichMetadata() {
+   console.log('\n🔍 Metadata Enrichment (NEW in v2.5.19!)...');
+   console.log('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
+   console.log('This feature enhances your conversations with searchable metadata:');
+   console.log('   • Concepts: High-level topics (docker, security, testing, etc.)');
+   console.log('   • Files: Track which files were analyzed or edited');
+   console.log('   • Tools: Record which Claude tools were used');
+   console.log('\nEnables powerful searches like:');
+   console.log('   • search_by_concept("docker")');
+   console.log('   • search_by_file("server.py")');
+
+   const enrichChoice = await question('\nEnrich past conversations with metadata? (recommended) (y/n): ');
+
+   if (enrichChoice.toLowerCase() === 'y') {
+     console.log('\n⏳ Starting metadata enrichment (safe mode)...');
+     console.log('   • Processing last 30 days of conversations');
+     console.log('   • Using conservative rate limiting');
+     console.log('   • This may take 5-10 minutes\n');
+
+     try {
+       // Run the safe delta update script
+       safeExec('docker', [
+         'compose', 'run', '--rm',
+         '-e', 'DAYS_TO_UPDATE=30',
+         '-e', 'BATCH_SIZE=2',
+         '-e', 'RATE_LIMIT_DELAY=0.5',
+         '-e', 'MAX_CONCURRENT_UPDATES=2',
+         'importer',
+         'python', '/app/scripts/delta-metadata-update-safe.py'
+       ], {
+         cwd: projectRoot,
+         stdio: 'inherit'
+       });
+
+       console.log('\n✅ Metadata enrichment completed successfully!');
+       console.log('   Your conversations now have searchable concepts and file tracking.');
+     } catch (error) {
+       console.log('\n⚠️ Metadata enrichment had some issues but continuing setup');
+       console.log('   You can retry later with:');
+       console.log('   docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py');
+     }
+   } else {
+     console.log('\n📝 Skipping metadata enrichment.');
+     console.log('   You can run it later with:');
+     console.log('   docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py');
+   }
+ }
+
  async function showFinalInstructions() {
    console.log('\n✅ Setup complete!');
 
@@ -419,6 +467,7 @@ async function showFinalInstructions() {
    console.log('   • Check status: docker compose ps');
    console.log('   • View logs: docker compose logs -f');
    console.log('   • Import conversations: docker compose run --rm importer');
+   console.log('   • Enrich metadata: docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py');
    console.log('   • Start watcher: docker compose --profile watch up -d');
    console.log('   • Stop all: docker compose down');
 
@@ -450,13 +499,25 @@ async function checkExistingInstallation() {
      console.log('   • 🔍 Mode: ' + (localMode ? 'Local embeddings (privacy mode)' : 'Cloud embeddings (Voyage AI)'));
      console.log('   • ⚡ Memory decay: Enabled (90-day half-life)');
 
+     // Offer metadata enrichment for v2.5.19
+     console.log('\n🆕 NEW in v2.5.19: Metadata Enrichment!');
+     console.log('   Enhance your conversations with searchable concepts and file tracking.');
+
+     const upgradeChoice = await question('\nWould you like to enrich your conversations with metadata? (y/n): ');
+
+     if (upgradeChoice.toLowerCase() === 'y') {
+       await enrichMetadata();
+       console.log('\n✅ Upgrade complete! Your conversations now have enhanced search capabilities.');
+     }
+
      console.log('\n📋 Quick Commands:');
      console.log('   • View status: docker compose ps');
      console.log('   • View logs: docker compose logs -f');
+     console.log('   • Enrich metadata: docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py');
      console.log('   • Restart: docker compose restart');
      console.log('   • Stop: docker compose down');
 
-     console.log('\n💡 To re-run setup, first stop services with: docker compose down');
+     console.log('\n💡 To re-run full setup, first stop services with: docker compose down');
      return true;
    }
  }
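The "90-day half-life" shown in the status output above corresponds to exponential time decay of search scores. A sketch of the standard half-life formula — the server's actual weighting may combine this differently with similarity scores, so treat it as illustrative:

```python
import math


def decayed_score(score: float, age_days: float,
                  half_life_days: float = 90.0) -> float:
    """Apply exponential time decay: the score halves every half-life.

    Standard formula score * 2^(-age/half_life), written with exp/log;
    the 90-day default matches the setup output.
    """
    return score * math.exp(-math.log(2) * age_days / half_life_days)
```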
@@ -504,6 +565,9 @@ async function main() {
    // Import conversations
    await importConversations();
 
+   // Enrich metadata (new in v2.5.19)
+   await enrichMetadata();
+
    // Show final instructions
    await showFinalInstructions();
 
@@ -1,6 +1,6 @@
  [project]
  name = "claude-self-reflect-mcp"
- version = "2.5.18"
+ version = "2.5.19"
  description = "MCP server for Claude self-reflection with memory decay"
  # readme = "README.md"
  requires-python = ">=3.10"
@@ -522,7 +522,7 @@ async def reflect_on_past(
 
  # Handle project matching - check if the target project name appears at the end of the stored project path
  if target_project != 'all' and not project_collections:
-     # The stored project name is like "-Users-ramakrishnanannaswamy-projects-ShopifyMCPMockShop"
+     # The stored project name is like "-Users-username-projects-ShopifyMCPMockShop"
      # We want to match just "ShopifyMCPMockShop"
      if not point_project.endswith(f"-{target_project}") and point_project != target_project:
          continue  # Skip results from other projects
@@ -602,7 +602,7 @@ async def reflect_on_past(
 
  # Handle project matching - check if the target project name appears at the end of the stored project path
  if target_project != 'all' and not project_collections:
-     # The stored project name is like "-Users-ramakrishnanannaswamy-projects-ShopifyMCPMockShop"
+     # The stored project name is like "-Users-username-projects-ShopifyMCPMockShop"
      # We want to match just "ShopifyMCPMockShop"
      if not point_project.endswith(f"-{target_project}") and point_project != target_project:
          continue  # Skip results from other projects
@@ -639,7 +639,7 @@ async def reflect_on_past(
 
  # Handle project matching - check if the target project name appears at the end of the stored project path
  if target_project != 'all' and not project_collections:
-     # The stored project name is like "-Users-ramakrishnanannaswamy-projects-ShopifyMCPMockShop"
+     # The stored project name is like "-Users-username-projects-ShopifyMCPMockShop"
      # We want to match just "ShopifyMCPMockShop"
      if not point_project.endswith(f"-{target_project}") and point_project != target_project:
          continue  # Skip results from other projects
@@ -1169,7 +1169,8 @@ async def search_by_concept(
 
  if project and project != 'all':
      # Filter collections for specific project
-     project_hash = hashlib.md5(project.encode()).hexdigest()[:8]
+     normalized_project = normalize_project_name(project)
+     project_hash = hashlib.md5(normalized_project.encode()).hexdigest()[:8]
      collection_prefix = f"conv_{project_hash}_"
      collections = [c for c in await get_all_collections() if c.startswith(collection_prefix)]
  elif project == 'all':
@@ -1178,49 +1179,101 @@ async def search_by_concept(
  if not collections:
      return "<search_by_concept>\n<error>No collections found to search</error>\n</search_by_concept>"
 
- # Search all collections
- all_results = []
+ # First, check metadata health
+ metadata_found = False
+ total_points_checked = 0
 
- for collection_name in collections:
+ for collection_name in collections[:3]:  # Sample first 3 collections
      try:
-         # Hybrid search: semantic + concept filter
-         results = await qdrant_client.search(
+         sample_points, _ = await qdrant_client.scroll(
              collection_name=collection_name,
-             query_vector=embedding,
-             query_filter=models.Filter(
-                 should=[
-                     models.FieldCondition(
-                         key="concepts",
-                         match=models.MatchAny(any=[concept.lower()])
-                     )
-                 ]
-             ),
-             limit=limit * 2,  # Get more results for better filtering
+             limit=10,
              with_payload=True
          )
-
-         for point in results:
-             payload = point.payload
-             # Boost score if concept is in the concepts list
-             score_boost = 0.2 if concept.lower() in payload.get('concepts', []) else 0.0
-             all_results.append({
-                 'score': float(point.score) + score_boost,
-                 'payload': payload,
-                 'collection': collection_name
-             })
-
-     except Exception as e:
+         total_points_checked += len(sample_points)
+         for point in sample_points:
+             if 'concepts' in point.payload and point.payload['concepts']:
+                 metadata_found = True
+                 break
+         if metadata_found:
+             break
+     except:
          continue
 
+ # Search all collections
+ all_results = []
+
+ # If metadata exists, try metadata-based search first
+ if metadata_found:
+     for collection_name in collections:
+         try:
+             # Hybrid search: semantic + concept filter
+             results = await qdrant_client.search(
+                 collection_name=collection_name,
+                 query_vector=embedding,
+                 query_filter=models.Filter(
+                     should=[
+                         models.FieldCondition(
+                             key="concepts",
+                             match=models.MatchAny(any=[concept.lower()])
+                         )
+                     ]
+                 ),
+                 limit=limit * 2,  # Get more results for better filtering
+                 with_payload=True
+             )
+
+             for point in results:
+                 payload = point.payload
+                 # Boost score if concept is in the concepts list
+                 score_boost = 0.2 if concept.lower() in payload.get('concepts', []) else 0.0
+                 all_results.append({
+                     'score': float(point.score) + score_boost,
+                     'payload': payload,
+                     'collection': collection_name,
+                     'search_type': 'metadata'
+                 })
+
+         except Exception as e:
+             continue
+
+ # If no results from metadata search OR no metadata exists, fall back to semantic search
+ if not all_results:
+     await ctx.debug(f"Falling back to semantic search for concept: {concept}")
+
+     for collection_name in collections:
+         try:
+             # Pure semantic search without filters
+             results = await qdrant_client.search(
+                 collection_name=collection_name,
+                 query_vector=embedding,
+                 limit=limit,
+                 score_threshold=0.5,  # Lower threshold for broader results
+                 with_payload=True
+             )
+
+             for point in results:
+                 all_results.append({
+                     'score': float(point.score),
+                     'payload': point.payload,
+                     'collection': collection_name,
+                     'search_type': 'semantic'
+                 })
+
+         except Exception as e:
+             continue
+
  # Sort by score and limit
  all_results.sort(key=lambda x: x['score'], reverse=True)
  all_results = all_results[:limit]
 
  # Format results
  if not all_results:
+     metadata_status = "with metadata" if metadata_found else "NO METADATA FOUND"
      return f"""<search_by_concept>
  <concept>{concept}</concept>
- <message>No conversations found about this concept</message>
+ <metadata_health>{metadata_status} (checked {total_points_checked} points)</metadata_health>
+ <message>No conversations found about this concept. {'Try running: python scripts/delta-metadata-update.py' if not metadata_found else 'Try different search terms.'}</message>
  </search_by_concept>"""
 
  results_text = []
@@ -1255,8 +1308,14 @@ async def search_by_concept(
  <preview>{text_preview}</preview>
  </result>""")
 
+ # Determine if this was a fallback search
+ used_fallback = any(r.get('search_type') == 'semantic' for r in all_results)
+ metadata_status = "with metadata" if metadata_found else "NO METADATA FOUND"
+
  return f"""<search_by_concept>
  <concept>{concept}</concept>
+ <metadata_health>{metadata_status} (checked {total_points_checked} points)</metadata_health>
+ <search_type>{'fallback_semantic' if used_fallback else 'metadata_based'}</search_type>
  <count>{len(all_results)}</count>
  <results>
  {''.join(results_text)}
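The merge-and-rank step in the hunks above (a flat 0.2 boost for exact concept matches, then sort descending and truncate to `limit`) can be isolated as a small pure function:

```python
def rank_results(results, concept, limit=5):
    """Boost exact concept matches by 0.2, sort by score, keep top `limit`.

    `results` is a list of (score, payload) pairs, matching the shape the
    server builds from Qdrant hits.
    """
    ranked = []
    for score, payload in results:
        # Same boost rule as search_by_concept in the diff above
        boost = 0.2 if concept.lower() in payload.get('concepts', []) else 0.0
        ranked.append({'score': score + boost, 'payload': payload})
    ranked.sort(key=lambda r: r['score'], reverse=True)
    return ranked[:limit]
```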
@@ -13,8 +13,8 @@ def extract_project_name_from_path(file_path: str) -> str:
  """Extract project name from JSONL file path.
 
  Handles paths like:
- - ~/.claude/projects/-Users-ramakrishnanannaswamy-projects-claude-self-reflect/file.jsonl
- - /logs/-Users-ramakrishnanannaswamy-projects-n8n-builder/file.jsonl
+ - ~/.claude/projects/-Users-username-projects-claude-self-reflect/file.jsonl
+ - /logs/-Users-username-projects-n8n-builder/file.jsonl
  """
  # Get the directory name containing the JSONL file
  path_obj = Path(file_path)
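The sanitized directory-name format documented here pairs with the suffix-matching rule used in `reflect_on_past` earlier in this diff. Isolated as a sketch:

```python
def matches_project(stored_name: str, target: str) -> bool:
    """Replicates the suffix-matching rule from reflect_on_past:
    a stored name like '-Users-username-projects-ShopifyMCPMockShop'
    matches the bare target 'ShopifyMCPMockShop'.
    """
    return stored_name == target or stored_name.endswith(f"-{target}")
```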
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "claude-self-reflect",
-   "version": "2.5.18",
+   "version": "2.6.0",
    "description": "Give Claude perfect memory of all your conversations - Installation wizard for Python MCP server",
    "keywords": [
      "claude",
@@ -13,6 +13,12 @@
      "ai-memory",
      "claude-code"
    ],
+   "badges": {
+     "npm": "https://badge.fury.io/js/claude-self-reflect.svg",
+     "license": "https://img.shields.io/badge/License-MIT-yellow.svg",
+     "docker": "https://img.shields.io/badge/Docker-Required-blue.svg",
+     "claude": "https://img.shields.io/badge/Claude%20Code-Compatible-green.svg"
+   },
    "homepage": "https://github.com/ramakay/claude-self-reflect#readme",
    "bugs": {
      "url": "https://github.com/ramakay/claude-self-reflect/issues"
@@ -34,6 +40,8 @@
      "mcp-server/run-mcp.sh",
      "mcp-server/run-mcp-docker.sh",
      "scripts/import-*.py",
+     "scripts/delta-metadata-update-safe.py",
+     "scripts/force-metadata-recovery.py",
      ".claude/agents/*.md",
      "config/qdrant-config.yaml",
      "docker-compose.yaml",