claude-self-reflect 2.8.0 → 2.8.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -4,7 +4,7 @@ description: Docker Compose orchestration expert for container management, servi
  tools: Read, Edit, Bash, Grep, LS
  ---
 
- You are a Docker orchestration specialist for the memento-stack project. You manage multi-container deployments, monitor service health, and troubleshoot container issues.
+ You are a Docker orchestration specialist for the claude-self-reflect project. You manage multi-container deployments, monitor service health, and troubleshoot container issues.
 
  ## Project Context
  - Main stack: Qdrant vector database + MCP server + Python importer
@@ -4,7 +4,7 @@ description: Import pipeline debugging specialist for JSONL processing, Python s
  tools: Read, Edit, Bash, Grep, Glob, LS
  ---
 
- You are an import pipeline debugging expert for the memento-stack project. You specialize in troubleshooting JSONL file processing, Python import scripts, and conversation chunking strategies.
+ You are an import pipeline debugging expert for the claude-self-reflect project. You specialize in troubleshooting JSONL file processing, Python import scripts, and conversation chunking strategies.
 
  ## Project Context
  - Processes Claude Desktop logs from ~/.claude/projects/
@@ -4,7 +4,7 @@ description: MCP (Model Context Protocol) server development expert for Claude D
  tools: Read, Edit, Bash, Grep, Glob, WebFetch
  ---
 
- You are an MCP server development specialist for the memento-stack project. You handle Claude Desktop integration, implement MCP tools, and ensure seamless communication between Claude and the vector database.
+ You are an MCP server development specialist for the claude-self-reflect project. You handle Claude Desktop integration, implement MCP tools, and ensure seamless communication between Claude and the vector database.
 
  ## Project Context
  - MCP server: claude-self-reflection
@@ -4,7 +4,7 @@ description: Qdrant vector database expert for collection management, troublesho
  tools: Read, Bash, Grep, Glob, LS, WebFetch
  ---
 
- You are a Qdrant vector database specialist for the memento-stack project. Your expertise covers collection management, vector search optimization, and embedding strategies.
+ You are a Qdrant vector database specialist for the claude-self-reflect project. Your expertise covers collection management, vector search optimization, and embedding strategies.
 
  ## Project Context
  - The system uses Qdrant for storing conversation embeddings from Claude Desktop logs
@@ -4,7 +4,7 @@ description: Search quality optimization expert for improving semantic search ac
  tools: Read, Edit, Bash, Grep, Glob, WebFetch
  ---
 
- You are a search optimization specialist for the memento-stack project. You improve semantic search quality, tune parameters, and analyze embedding model performance.
+ You are a search optimization specialist for the claude-self-reflect project. You improve semantic search quality, tune parameters, and analyze embedding model performance.
 
  ## Project Context
  - Current baseline: 66.1% search accuracy with Voyage AI
package/README.md CHANGED
@@ -24,67 +24,12 @@
 
  Give Claude perfect memory of all your conversations. Search past discussions instantly. Never lose context again.
 
- **100% Local by Default** - Your conversations never leave your machine. No cloud services required, no API keys needed, complete privacy out of the box.
+ **🔒 100% Local by Default** • **⚡ Blazing Fast Search** • **🚀 Zero Configuration** • **🏭 Production Ready**
 
- **Blazing Fast Search** - Semantic search across thousands of conversations in milliseconds. Find that discussion about database schemas from three weeks ago in seconds.
+ ## 🚀 Quick Install
 
- **Zero Configuration** - Works immediately after installation. Smart auto-detection handles everything. No manual setup, no environment variables, just install and use.
-
- **Production Ready** - Battle-tested with 600+ conversations across 24 projects. Handles mixed embedding types automatically. Scales from personal use to team deployments.
-
- ## Table of Contents
-
- - [What You Get](#what-you-get)
- - [Requirements](#requirements)
- - [Quick Install/Uninstall](#quick-installuninstall)
- - [The Magic](#the-magic)
- - [Before & After](#before--after)
- - [Real Examples](#real-examples-that-made-us-build-this)
- - [How It Works](#how-it-works)
- - [Import Architecture](#import-architecture)
- - [Using It](#using-it)
- - [Key Features](#key-features)
- - [Performance](#performance)
- - [Configuration](#configuration)
- - [Technical Stack](#the-technical-stack)
- - [Problems](#problems)
- - [What's New](#whats-new)
- - [Advanced Topics](#advanced-topics)
- - [Contributors](#contributors)
-
- ## What You Get
-
- Ask Claude about past conversations. Get actual answers. **100% local by default** - your conversations never leave your machine. Cloud-enhanced search available when you need it.
-
- **Proven at Scale**: Successfully indexed 682 conversation files with 100% reliability. No data loss, no corruption, just seamless conversation memory that works.
-
- **Before**: "I don't have access to previous conversations"
- **After**:
- ```
- reflection-specialist(Search FastEmbed vs cloud embedding decision)
- ⎿ Done (3 tool uses · 8.2k tokens · 12.4s)
-
- "Found it! Yesterday we decided on FastEmbed for local mode - better privacy,
- no API calls, 384-dimensional embeddings. Works offline too."
- ```
-
- The reflection specialist is a specialized sub-agent that Claude automatically spawns when you ask about past conversations. It searches your conversation history in its own isolated context, keeping your main chat clean and focused.
-
- Your conversations become searchable. Your decisions stay remembered. Your context persists.
-
- ## Requirements
-
- - **Docker Desktop** (macOS/Windows) or **Docker Engine** (Linux)
- - **Node.js** 16+ (for the setup wizard)
- - **Claude Desktop** app
-
- ## Quick Install/Uninstall
-
- ### Install
-
- #### Local Mode (Default - Your Data Stays Private)
  ```bash
- # Install and run automatic setup
+ # Install and run automatic setup (5 minutes, everything automatic)
  npm install -g claude-self-reflect
  claude-self-reflect setup
 
@@ -93,11 +38,12 @@ claude-self-reflect setup
  # ✅ Configure everything automatically
  # ✅ Install the MCP in Claude Code
  # ✅ Start monitoring for new conversations
- # ✅ Verify the reflection tools work
  # 🔒 Keep all data local - no API keys needed
  ```
 
- #### Cloud Mode (Better Search Accuracy)
+ <details open>
+ <summary>📡 Cloud Mode (Better Search Accuracy)</summary>
+
  ```bash
  # Step 1: Get your free Voyage AI key
  # Sign up at https://www.voyageai.com/ - it takes 30 seconds
@@ -108,17 +54,17 @@ claude-self-reflect setup --voyage-key=YOUR_ACTUAL_KEY_HERE
  ```
  *Note: Cloud mode provides more accurate semantic search but sends conversation data to Voyage AI for processing.*
 
- 5 minutes. Everything automatic. Just works.
+ </details>
 
- ## The Magic
+ ## ✨ The Magic
 
  ![Self Reflection vs The Grind](docs/images/red-reflection.webp)
 
- ## Before & After
+ ## 📊 Before & After
 
  ![Before and After Claude Self-Reflect](docs/diagrams/before-after-combined.webp)
 
- ## Real Examples That Made Us Build This
+ ## 💬 Real Examples
 
  ```
  You: "What was that PostgreSQL optimization we figured out?"
@@ -137,41 +83,7 @@ Claude: "3 conversations found:
  - Nov 20: Added rate limiting per authenticated connection"
  ```
 
- ## How It Works
-
- Your conversations → Vector embeddings → Semantic search → Claude remembers
-
- Technical details exist. You don't need them to start.
-
- ## Import Architecture
-
- Here's how your conversations get imported and prioritized:
-
- ![Import Architecture](docs/diagrams/import-architecture.png)
-
- **The system intelligently processes your conversations:**
- - Runs every 60 seconds checking for new conversations
- - Processes newest conversations first (delta import pattern)
- - Maintains low memory usage (<50MB) through streaming
- - Handles up to 5 files per cycle to prevent blocking
-
- **HOT/WARM/COLD Intelligent Prioritization:**
- - **🔥 HOT** (< 5 minutes): Switches to 2-second intervals for near real-time import
- - **🌡️ WARM** (< 24 hours): Normal priority with starvation prevention (urgent after 30 min wait)
- - **❄️ COLD** (> 24 hours): Batch processed, max 5 per cycle to prevent blocking new content
- - Files are categorized by age and processed with priority queuing to ensure newest content gets imported quickly while preventing older files from being starved
-
- ## Using It
-
- Once installed, just talk naturally:
-
- - "What did we discuss about database optimization?"
- - "Find our debugging session from last week"
- - "Remember this solution for next time"
-
- The reflection specialist automatically activates. No special commands needed.
-
- ## Key Features
+ ## 🎯 Key Features
 
  ### Project-Scoped Search
  Searches are **project-aware by default**. Claude automatically searches within your current project:
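The HOT/WARM/COLD tiers removed from this section (and kept in the new Architecture section) reduce to a simple function of file age. A minimal sketch of that categorization — function and constant names are illustrative, not the importer's actual API:

```python
from datetime import datetime, timedelta

# Thresholds from the README: HOT < 5 minutes, WARM < 24 hours, COLD otherwise.
HOT_WINDOW = timedelta(minutes=5)
WARM_WINDOW = timedelta(hours=24)

def categorize(mtime: datetime, now: datetime) -> str:
    """Map a file's modification time to an import priority tier."""
    age = now - mtime
    if age < HOT_WINDOW:
        return "HOT"    # polled every 2 seconds for near real-time import
    if age < WARM_WINDOW:
        return "WARM"   # normal priority, promoted to urgent after a long wait
    return "COLD"       # batch processed, max 5 files per cycle

now = datetime(2025, 1, 1, 12, 0, 0)
print(categorize(now - timedelta(minutes=1), now))  # HOT
print(categorize(now - timedelta(hours=3), now))    # WARM
print(categorize(now - timedelta(days=2), now))     # COLD
```

The starvation-prevention rule (a WARM file becomes urgent after waiting ~30 minutes) would sit in the queue scheduler, not in this age check.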
@@ -189,16 +101,37 @@ Claude: [Searches across ALL your projects]
  ### ⏱️ Memory Decay
  Recent conversations matter more. Old ones fade. Like your brain, but reliable.
 
- ### 🚀 Performance
- - **Search**: <3ms average response time across 121+ collections (7.55ms max)
- - **Import**: Production streaming importer with 100% reliability
- - **Memory**: 302MB operational (60% of 500MB limit) - 96% reduction from v2.5.15
- - **CPU**: <1% sustained usage (99.93% reduction from 1437% peak)
- - **Scale**: 100% indexing success rate across all conversation types
- - **V2 Migration**: 100% complete - all conversations use token-aware chunking
+ ### ⚡ Performance at Scale
+ - **Search**: <3ms average response time
+ - **Scale**: 600+ conversations across 24 projects
+ - **Reliability**: 100% indexing success rate
+ - **Memory**: 96% reduction from v2.5.15
+
+ ## 🏗️ Architecture
+
+ ![Import Architecture](docs/diagrams/import-architecture.png)
+
+ <details>
+ <summary>🔥 HOT/WARM/COLD Intelligent Prioritization</summary>
+
+ - **🔥 HOT** (< 5 minutes): 2-second intervals for near real-time import
+ - **🌡️ WARM** (< 24 hours): Normal priority with starvation prevention
+ - **❄️ COLD** (> 24 hours): Batch processed to prevent blocking
+
+ Files are categorized by age and processed with priority queuing to ensure newest content gets imported quickly while preventing older files from being starved.
+
+ </details>
+
+ ## 🛠️ Requirements
+
+ - **Docker Desktop** (macOS/Windows) or **Docker Engine** (Linux)
+ - **Node.js** 16+ (for the setup wizard)
+ - **Claude Desktop** app
 
+ ## 📖 Documentation
 
- ## The Technical Stack
+ <details>
+ <summary>🔧 Technical Stack</summary>
 
  - **Vector DB**: Qdrant (local, your data stays yours)
  - **Embeddings**:
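"Semantic similarity with time decay", mentioned under the Technical Stack, can be modeled as an exponential recency bonus added to the raw similarity score. A hedged sketch using the `DECAY_WEIGHT` and `DECAY_SCALE_DAYS` knobs visible in the server's debug output — the default values and the server's exact formula are assumptions here:

```python
import math

def decayed_score(similarity: float, age_days: float,
                  decay_weight: float = 0.3, decay_scale_days: float = 90.0) -> float:
    """Blend raw cosine similarity with an exponential recency bonus.

    score = similarity + decay_weight * exp(-age_days / decay_scale_days)
    Recent conversations get nearly the full bonus; old ones fade toward
    the undecayed similarity.
    """
    return similarity + decay_weight * math.exp(-age_days / decay_scale_days)

fresh = decayed_score(0.80, age_days=1)     # ~1.097
stale = decayed_score(0.80, age_days=365)   # ~0.805
print(f"{fresh:.3f} > {stale:.3f}")
```

An additive bonus like this never reorders two results of the same age; it only promotes recent matches over equally similar old ones.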
@@ -207,18 +140,62 @@ Recent conversations matter more. Old ones fade. Like your brain, but reliable.
  - **MCP Server**: Python + FastMCP
  - **Search**: Semantic similarity with time decay
 
- ## Problems
+ </details>
+
+ <details>
+ <summary>📚 Advanced Topics</summary>
+
+ - [Performance tuning](docs/performance-guide.md)
+ - [Security & privacy](docs/security.md)
+ - [Windows setup](docs/windows-setup.md)
+ - [Architecture details](docs/architecture-details.md)
+ - [Contributing](CONTRIBUTING.md)
+
+ </details>
+
+ <details>
+ <summary>🐛 Troubleshooting</summary>
 
  - [Troubleshooting Guide](docs/troubleshooting.md)
  - [GitHub Issues](https://github.com/ramakay/claude-self-reflect/issues)
  - [Discussions](https://github.com/ramakay/claude-self-reflect/discussions)
 
- ## Upgrading to v2.5.19
+ </details>
+
+ <details>
+ <summary>🗑️ Uninstall</summary>
+
+ For complete uninstall instructions, see [docs/UNINSTALL.md](docs/UNINSTALL.md).
+
+ Quick uninstall:
+ ```bash
+ # Remove MCP server
+ claude mcp remove claude-self-reflect
+
+ # Stop Docker containers
+ docker-compose down
+
+ # Uninstall npm package
+ npm uninstall -g claude-self-reflect
+ ```
+
+ </details>
+
+ ## 📦 What's New
+
+ <details>
+ <summary>🎉 v2.8.0 - Latest Release</summary>
 
- ### 🆕 New Feature: Metadata Enrichment
- v2.5.19 adds searchable metadata to your conversations - concepts, files, and tools!
+ - **🔧 Fixed MCP Indexing**: Now correctly shows 97.1% progress (was showing 0%)
+ - **🔥 HOT/WARM/COLD**: Intelligent file prioritization for near real-time imports
+ - **📊 Enhanced Monitoring**: Real-time status with visual indicators
 
- #### For Existing Users
+ </details>
+
+ <details>
+ <summary>✨ v2.5.19 - Metadata Enrichment</summary>
+
+ ### For Existing Users
  ```bash
  # Update to latest version
  npm update -g claude-self-reflect
@@ -231,50 +208,30 @@ claude-self-reflect setup
  docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py
  ```
 
- #### What You Get
+ ### What You Get
  - `search_by_concept("docker")` - Find conversations by topic
  - `search_by_file("server.py")` - Find conversations that touched specific files
  - Better search accuracy with metadata-based filtering
 
- ## What's New
+ </details>
+
+ <details>
+ <summary>📜 Release History</summary>
 
- - **v2.5.19** - Metadata Enrichment! Search by concepts, files, and tools. [Full release notes](docs/releases/v2.5.19-RELEASE-NOTES.md)
  - **v2.5.18** - Security dependency updates
- - **v2.5.17** - Critical CPU fix and memory limit adjustment. [Full release notes](docs/releases/v2.5.17-release-notes.md)
- - **v2.5.16** - (Pre-release only) Initial streaming importer with CPU throttling
+ - **v2.5.17** - Critical CPU fix and memory limit adjustment
+ - **v2.5.16** - Initial streaming importer with CPU throttling
  - **v2.5.15** - Critical bug fixes and collection creation improvements
- - **v2.5.14** - Async importer collection fix - All conversations now searchable
- - **v2.5.11** - Critical cloud mode fix - Environment variables now properly passed to MCP server
- - **v2.5.10** - Emergency hotfix for MCP server startup failure (dead code removal)
- - **v2.5.6** - Tool Output Extraction - Captures git changes & tool outputs for cross-agent discovery
+ - **v2.5.14** - Async importer collection fix
+ - **v2.5.11** - Critical cloud mode fix
+ - **v2.5.10** - Emergency hotfix for MCP server startup
+ - **v2.5.6** - Tool Output Extraction
 
  [Full changelog](docs/release-history.md)
 
- ## Advanced Topics
-
- - [Performance tuning](docs/performance-guide.md)
- - [Security & privacy](docs/security.md)
- - [Windows setup](docs/windows-setup.md)
- - [Architecture details](docs/architecture-details.md)
- - [Contributing](CONTRIBUTING.md)
-
- ### Uninstall
-
- For complete uninstall instructions, see [docs/UNINSTALL.md](docs/UNINSTALL.md).
-
- Quick uninstall:
- ```bash
- # Remove MCP server
- claude mcp remove claude-self-reflect
-
- # Stop Docker containers
- docker-compose down
-
- # Uninstall npm package
- npm uninstall -g claude-self-reflect
- ```
+ </details>
 
- ## Contributors
+ ## 👥 Contributors
 
  Special thanks to our contributors:
  - **[@TheGordon](https://github.com/TheGordon)** - Fixed timestamp parsing (#10)
@@ -283,4 +240,4 @@ Special thanks to our contributors:
 
  ---
 
- Built with ❤️ by [ramakay](https://github.com/ramakay) for the Claude community.
+ Built with ❤️ by [ramakay](https://github.com/ramakay) for the Claude community.
@@ -54,7 +54,7 @@ class ProjectResolver:
  4. Fuzzy matching on collection names
 
  Args:
- user_project_name: User-provided project name (e.g., "anukruti", "Anukruti", full path)
+ user_project_name: User-provided project name (e.g., "example-project", "Example-Project", full path)
 
  Returns:
  List of collection names that match the project
@@ -362,7 +362,7 @@ class ProjectResolver:
 
  Examples:
  - -Users-name-projects-my-app-src -> ['my', 'app', 'src']
- - -Users-name-Code-freightwise-documents -> ['freightwise', 'documents']
+ - -Users-name-Code-example-project -> ['example', 'project']
 
  Args:
  path: Path in any format
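Judging from the two docstring examples, the resolver keeps the project-specific tail of the dashed directory name and drops generic segments such as the username and the `projects`/`Code` folders. A rough sketch of that idea — the `NOISE` set and the username-dropping rule are guesses that happen to reproduce both examples, and the real implementation's rules may differ:

```python
# Generic path segments that carry no project information (illustrative list).
NOISE = {"users", "home", "projects", "code"}

def path_terms(path: str) -> list[str]:
    """Split a dashed log-directory name into project search terms."""
    parts = [p for p in path.strip("-").split("-") if p]
    kept = [p for p in parts if p.lower() not in NOISE]
    return kept[1:]  # drop the username segment

print(path_terms("-Users-name-projects-my-app-src"))   # ['my', 'app', 'src']
print(path_terms("-Users-name-Code-example-project"))  # ['example', 'project']
```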
@@ -81,15 +81,23 @@ def initialize_embeddings():
  print(f"[ERROR] Failed to initialize embeddings: {e}")
  return False
 
- # Debug environment loading
- print(f"[DEBUG] Environment variables loaded:")
- print(f"[DEBUG] ENABLE_MEMORY_DECAY: {ENABLE_MEMORY_DECAY}")
- print(f"[DEBUG] USE_NATIVE_DECAY: {USE_NATIVE_DECAY}")
- print(f"[DEBUG] DECAY_WEIGHT: {DECAY_WEIGHT}")
- print(f"[DEBUG] DECAY_SCALE_DAYS: {DECAY_SCALE_DAYS}")
- print(f"[DEBUG] PREFER_LOCAL_EMBEDDINGS: {PREFER_LOCAL_EMBEDDINGS}")
- print(f"[DEBUG] EMBEDDING_MODEL: {EMBEDDING_MODEL}")
- print(f"[DEBUG] env_path: {env_path}")
+ # Debug environment loading and startup
+ import sys
+ import datetime as dt
+ startup_time = dt.datetime.now().isoformat()
+ print(f"[STARTUP] MCP Server starting at {startup_time}", file=sys.stderr)
+ print(f"[STARTUP] Python: {sys.version}", file=sys.stderr)
+ print(f"[STARTUP] Working directory: {os.getcwd()}", file=sys.stderr)
+ print(f"[STARTUP] Script location: {__file__}", file=sys.stderr)
+ print(f"[DEBUG] Environment variables loaded:", file=sys.stderr)
+ print(f"[DEBUG] QDRANT_URL: {QDRANT_URL}", file=sys.stderr)
+ print(f"[DEBUG] ENABLE_MEMORY_DECAY: {ENABLE_MEMORY_DECAY}", file=sys.stderr)
+ print(f"[DEBUG] USE_NATIVE_DECAY: {USE_NATIVE_DECAY}", file=sys.stderr)
+ print(f"[DEBUG] DECAY_WEIGHT: {DECAY_WEIGHT}", file=sys.stderr)
+ print(f"[DEBUG] DECAY_SCALE_DAYS: {DECAY_SCALE_DAYS}", file=sys.stderr)
+ print(f"[DEBUG] PREFER_LOCAL_EMBEDDINGS: {PREFER_LOCAL_EMBEDDINGS}", file=sys.stderr)
+ print(f"[DEBUG] EMBEDDING_MODEL: {EMBEDDING_MODEL}", file=sys.stderr)
+ print(f"[DEBUG] env_path: {env_path}", file=sys.stderr)
 
 
  class SearchResult(BaseModel):
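All of the new startup output goes to `file=sys.stderr` for a reason: a stdio MCP server owns stdout for the JSON-RPC channel, so any stray print there corrupts the protocol stream. The pattern in miniature (the `log` helper is illustrative):

```python
import sys

def log(msg: str) -> None:
    """Diagnostics to stderr; stdout stays reserved for MCP JSON-RPC frames."""
    print(f"[STARTUP] {msg}", file=sys.stderr)

log("MCP Server starting")  # shows up in the client's server log, not the protocol
```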
@@ -259,31 +267,31 @@ async def update_indexing_status(cache_ttl: int = 5):
 
  # Convert set to list for compatibility
  imported_files_list = list(all_imported_files)
-
- # Count files that have been imported
- for file_path in jsonl_files:
-     # Normalize the current file path for consistent comparison
-     normalized_file = normalize_path(str(file_path))
-
-     # Try multiple path formats to match Docker's state file
-     file_str = str(file_path).replace(str(Path.home()), "/logs").replace("\\", "/")
-     # Also try without .claude/projects prefix (Docker mounts directly)
-     file_str_alt = file_str.replace("/.claude/projects", "")
-
-     # Normalize alternative paths as well
-     normalized_alt = normalize_path(file_str)
-     normalized_alt2 = normalize_path(file_str_alt)
-
-     # Check if file is in imported_files list (fully imported)
-     if normalized_file in imported_files_list or normalized_alt in imported_files_list or normalized_alt2 in imported_files_list:
-         indexed_files += 1
-     # Or if it has metadata with position > 0 (partially imported)
-     elif normalized_file in file_metadata and file_metadata[normalized_file].get("position", 0) > 0:
-         indexed_files += 1
-     elif normalized_alt in file_metadata and file_metadata[normalized_alt].get("position", 0) > 0:
-         indexed_files += 1
-     elif normalized_alt2 in file_metadata and file_metadata[normalized_alt2].get("position", 0) > 0:
-         indexed_files += 1
+
+ # Count files that have been imported
+ for file_path in jsonl_files:
+     # Normalize the current file path for consistent comparison
+     normalized_file = normalize_path(str(file_path))
+
+     # Try multiple path formats to match Docker's state file
+     file_str = str(file_path).replace(str(Path.home()), "/logs").replace("\\", "/")
+     # Also try without .claude/projects prefix (Docker mounts directly)
+     file_str_alt = file_str.replace("/.claude/projects", "")
+
+     # Normalize alternative paths as well
+     normalized_alt = normalize_path(file_str)
+     normalized_alt2 = normalize_path(file_str_alt)
+
+     # Check if file is in imported_files list (fully imported)
+     if normalized_file in imported_files_list or normalized_alt in imported_files_list or normalized_alt2 in imported_files_list:
+         indexed_files += 1
+     # Or if it has metadata with position > 0 (partially imported)
+     elif normalized_file in file_metadata and file_metadata[normalized_file].get("position", 0) > 0:
+         indexed_files += 1
+     elif normalized_alt in file_metadata and file_metadata[normalized_alt].get("position", 0) > 0:
+         indexed_files += 1
+     elif normalized_alt2 in file_metadata and file_metadata[normalized_alt2].get("position", 0) > 0:
+         indexed_files += 1
 
  # Update status
  indexing_status["last_check"] = current_time
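The counting loop above hinges on generating every spelling of a file path that the Docker importer might have recorded in its state file. A simplified sketch of that candidate generation — the helper name and the hard-coded home directory are illustrative:

```python
def candidate_paths(file_path: str, home: str = "/Users/me") -> set[str]:
    """Spellings a host path may have inside the Docker importer's state file.

    The importer sees ~/.claude/projects mounted at /logs, so the same file
    can be recorded under several prefixes.
    """
    host = file_path.replace("\\", "/")
    docker = host.replace(home, "/logs")                    # host home -> container mount
    docker_flat = docker.replace("/.claude/projects", "")   # mount without the prefix
    return {host, docker, docker_flat}

cands = candidate_paths("/Users/me/.claude/projects/-my-app/chat.jsonl")
print(sorted(cands))
```

A file counts as indexed if any candidate appears in the imported-files list, or has a saved stream position greater than zero (partially imported).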
@@ -1519,4 +1527,8 @@ if __name__ == "__main__":
  sys.exit(0)
 
  # Normal MCP server operation
+ print(f"[STARTUP] Starting FastMCP server in stdio mode...", file=sys.stderr)
+ print(f"[STARTUP] Server name: {mcp.name}", file=sys.stderr)
+ print(f"[STARTUP] Calling mcp.run()...", file=sys.stderr)
  mcp.run()
+ print(f"[STARTUP] Server exited normally", file=sys.stderr)
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "claude-self-reflect",
- "version": "2.8.0",
+ "version": "2.8.1",
  "description": "Give Claude perfect memory of all your conversations - Installation wizard for Python MCP server",
  "keywords": [
  "claude",
@@ -0,0 +1,124 @@
+ #!/usr/bin/env python3
+ """
+ Quick import script for current project's latest conversations.
+ Designed for PreCompact hook integration - targets <10 second imports.
+ """
+
+ import os
+ import sys
+ import json
+ import subprocess
+ from datetime import datetime, timedelta
+ from pathlib import Path
+ import logging
+
+ # Configuration
+ LOGS_DIR = os.getenv("LOGS_DIR", os.path.expanduser("~/.claude/projects"))
+ STATE_FILE = os.getenv("STATE_FILE", os.path.expanduser("~/.claude-self-reflect-state.json"))
+ HOURS_BACK = int(os.getenv("IMPORT_HOURS_BACK", "2"))  # Only import last 2 hours by default
+
+ # Set up logging
+ logging.basicConfig(
+     level=logging.INFO,
+     format='%(asctime)s - %(levelname)s - %(message)s'
+ )
+ logger = logging.getLogger(__name__)
+
+ def load_state():
+     """Load import state from file."""
+     if os.path.exists(STATE_FILE):
+         try:
+             with open(STATE_FILE, 'r') as f:
+                 return json.load(f)
+         except (json.JSONDecodeError, OSError):
+             return {}
+     return {}
+
+ def save_state(state):
+     """Save import state to file."""
+     os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
+     with open(STATE_FILE, 'w') as f:
+         json.dump(state, f, indent=2)
+
+ def get_project_from_cwd():
+     """Detect project from current working directory."""
+     cwd = os.getcwd()
+     # Convert path to project name format used in logs
+     # Claude logs use format: -Users-username-path-to-project
+     project_name = cwd.replace('/', '-')
+     # Keep the leading dash as that's how Claude stores it
+     if not project_name.startswith('-'):
+         project_name = '-' + project_name
+     return project_name
+
+ def get_recent_files(project_path: Path, hours_back: int):
+     """Get JSONL files modified in the last N hours."""
+     cutoff_time = datetime.now() - timedelta(hours=hours_back)
+     recent_files = []
+
+     for jsonl_file in project_path.glob("*.jsonl"):
+         mtime = datetime.fromtimestamp(jsonl_file.stat().st_mtime)
+         if mtime > cutoff_time:
+             recent_files.append(jsonl_file)
+
+     return sorted(recent_files, key=lambda f: f.stat().st_mtime, reverse=True)
+
+ def main():
+     """Main quick import function."""
+     start_time = datetime.now()
+
+     # Detect current project
+     project_name = get_project_from_cwd()
+     project_path = Path(LOGS_DIR) / project_name
+
+     if not project_path.exists():
+         logger.warning(f"Project logs not found: {project_path}")
+         logger.info("Make sure you're in a project directory with Claude conversations.")
+         return
+
+     logger.info(f"Quick importing latest conversations for: {project_name}")
+
+     # Get recent files
+     recent_files = get_recent_files(project_path, HOURS_BACK)
+     logger.info(f"Found {len(recent_files)} files modified in last {HOURS_BACK} hours")
+
+     if not recent_files:
+         logger.info("No recent conversations to import")
+         return
+
+     # For now, just call the unified importer with the specific project
+     # This is a temporary solution until we implement incremental imports
+     script_dir = os.path.dirname(os.path.abspath(__file__))
+     unified_script = os.path.join(script_dir, "import-conversations-unified.py")
+
+     # Set environment to only process this project
+     env = os.environ.copy()
+     env['LOGS_DIR'] = str(project_path.parent)
+     env['IMPORT_PROJECT'] = project_name
+
+     try:
+         # Run the unified importer for just this project
+         result = subprocess.run(
+             [sys.executable, unified_script],
+             env=env,
+             capture_output=True,
+             text=True,
+             timeout=60  # 60 second timeout
+         )
+
+         if result.returncode == 0:
+             logger.info("Quick import completed successfully")
+         else:
+             logger.error(f"Import failed: {result.stderr}")
+
+     except subprocess.TimeoutExpired:
+         logger.warning("Import timed out after 60 seconds")
+     except Exception as e:
+         logger.error(f"Error during import: {e}")
+
+     # Report timing
+     elapsed = (datetime.now() - start_time).total_seconds()
+     logger.info(f"Quick import completed in {elapsed:.1f} seconds")
+
+ if __name__ == "__main__":
+     main()
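The directory-name convention this script depends on is just the absolute project path with `/` replaced by `-`. A round-trip sketch — helper names are illustrative, and the inverse is only exact when the original path contained no dashes:

```python
def cwd_to_log_dir(cwd: str) -> str:
    """Claude stores each project's logs under ~/.claude/projects/<dashed-path>."""
    name = cwd.replace('/', '-')
    return name if name.startswith('-') else '-' + name

def log_dir_to_path(dir_name: str) -> str:
    """Best-effort inverse: ambiguous if the original path itself had dashes."""
    return '/' + dir_name.strip('-').replace('-', '/')

print(cwd_to_log_dir('/Users/me/projects/myapp'))   # -Users-me-projects-myapp
print(log_dir_to_path('-Users-me-projects-myapp'))  # /Users/me/projects/myapp
```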
@@ -0,0 +1,171 @@
+ #!/usr/bin/env python3
+ """
+ Import old format JSONL files from Claude conversations.
+ These files have a different structure with type/summary fields instead of messages.
+ """
+
+ import json
+ import sys
+ from pathlib import Path
+ import hashlib
+ import uuid
+ from datetime import datetime
+ from qdrant_client import QdrantClient
+ from qdrant_client.models import Distance, VectorParams, PointStruct
+ from fastembed import TextEmbedding
+ import logging
+
+ logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
+ logger = logging.getLogger(__name__)
+
+ def import_old_format_project(project_dir: Path, project_path: str = None):
+     """Import old format JSONL files from a project directory."""
+
+     # Initialize
+     client = QdrantClient(url='http://localhost:6333')
+     model = TextEmbedding(model_name='sentence-transformers/all-MiniLM-L6-v2', max_length=512)
+
+     # Determine project path from directory name if not provided
+     if not project_path:
+         # Convert -Users-username-projects-projectname back to path
+         dir_name = project_dir.name
+         project_path = '/' + dir_name.strip('-').replace('-', '/')
+
+     # Create collection name
+     project_hash = hashlib.md5(project_path.encode()).hexdigest()[:8]
+     collection_name = f'conv_{project_hash}_local'
+
+     logger.info(f'Project: {project_path}')
+     logger.info(f'Collection: {collection_name}')
+
+     # Create collection if needed
+     try:
+         client.get_collection(collection_name)
+         logger.info('Collection exists')
+     except Exception:
+         client.create_collection(
+             collection_name=collection_name,
+             vectors_config=VectorParams(size=384, distance=Distance.COSINE)
+         )
+         logger.info('Created collection')
+
+     # Process all JSONL files
+     jsonl_files = list(project_dir.glob('*.jsonl'))
+     logger.info(f'Found {len(jsonl_files)} files to import')
+
+     total_points = 0
+     for file_path in jsonl_files:
+         logger.info(f'Processing {file_path.name}...')
+         points_batch = []
+
+         with open(file_path, 'r', encoding='utf-8') as f:
+             conversation_text = []
+             file_timestamp = file_path.stat().st_mtime
+
+             for line_num, line in enumerate(f, 1):
+                 try:
+                     data = json.loads(line)
+                     msg_type = data.get('type', '')
+
+                     # Extract text content based on type
+                     content = None
+                     if msg_type == 'summary' and data.get('summary'):
+                         content = f"[Conversation Summary] {data['summary']}"
+                     elif msg_type == 'user' and data.get('summary'):
+                         content = f"User: {data['summary']}"
+                     elif msg_type == 'assistant' and data.get('summary'):
+                         content = f"Assistant: {data['summary']}"
+                     elif msg_type in ['user', 'assistant']:
+                         # Try to get content from other fields
+                         if 'content' in data:
+                             content = f"{msg_type.title()}: {data['content']}"
+                         elif 'text' in data:
+                             content = f"{msg_type.title()}: {data['text']}"
+
+                     if content:
+                         conversation_text.append(content)
+
+                     # Create chunks every 5 messages or at end
+                     if len(conversation_text) >= 5:
+                         chunk_text = '\n\n'.join(conversation_text)
+                         if chunk_text.strip():
+                             # Generate embedding (limit input to 2000 chars)
+                             embedding = list(model.embed([chunk_text[:2000]]))[0]
+
+                             point = PointStruct(
+                                 id=str(uuid.uuid4()),
+                                 vector=embedding.tolist(),
+                                 payload={
+                                     'content': chunk_text[:1000],  # Store first 1000 chars
+                                     'full_content': chunk_text[:4000],  # Store more for context
+                                     'project_path': project_path,
+                                     'file_path': str(file_path),
+                                     'file_name': file_path.name,
+                                     'conversation_id': file_path.stem,
+                                     'chunk_index': len(points_batch),
+                                     'timestamp': file_timestamp,
+                                     'type': 'conversation_chunk'
+                                 }
+                             )
+                             points_batch.append(point)
+                         conversation_text = []
+
+                 except json.JSONDecodeError:
+                     logger.warning(f'Invalid JSON at line {line_num} in {file_path.name}')
+                 except Exception as e:
+                     logger.warning(f'Error processing line {line_num}: {e}')
+
+             # Handle remaining text
+             if conversation_text:
+                 chunk_text = '\n\n'.join(conversation_text)
+                 if chunk_text.strip():
+                     embedding = list(model.embed([chunk_text[:2000]]))[0]
+
+                     point = PointStruct(
+                         id=str(uuid.uuid4()),
+                         vector=embedding.tolist(),
+                         payload={
+                             'content': chunk_text[:1000],
+                             'full_content': chunk_text[:4000],
+                             'project_path': project_path,
+                             'file_path': str(file_path),
+                             'file_name': file_path.name,
+                             'conversation_id': file_path.stem,
+                             'chunk_index': len(points_batch),
+                             'timestamp': file_timestamp,
+                             'type': 'conversation_chunk'
+                         }
+                     )
+                     points_batch.append(point)
+
+         # Upload batch
+         if points_batch:
+             client.upsert(collection_name=collection_name, points=points_batch)
+             logger.info(f'  Uploaded {len(points_batch)} chunks from {file_path.name}')
+             total_points += len(points_batch)
+
+     # Verify
+     info = client.get_collection(collection_name)
+     logger.info('\nImport complete!')
+     logger.info(f'Collection {collection_name} now has {info.points_count} points')
+     logger.info(f'Added {total_points} new points in this import')
+
+     return collection_name, total_points
+
+ def main():
+     if len(sys.argv) < 2:
+         print("Usage: python import-old-format.py <project-directory> [project-path]")
+         print("Example: python import-old-format.py ~/.claude/projects/-Users-me-projects-myapp /Users/me/projects/myapp")
+         sys.exit(1)
+
+     project_dir = Path(sys.argv[1]).expanduser()
+     project_path = sys.argv[2] if len(sys.argv) > 2 else None
+
+     if not project_dir.exists():
+         print(f"Error: Directory {project_dir} does not exist")
+         sys.exit(1)
+
+     import_old_format_project(project_dir, project_path)
+
+ if __name__ == "__main__":
+     main()
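The collection name is derived deterministically from the project path, so repeated imports of the same project always land in the same collection. The scheme, reproduced from the script above:

```python
import hashlib

def collection_for(project_path: str, suffix: str = "local") -> str:
    """conv_<first-8-hex-chars-of-md5(path)>_<suffix>, as in import-old-format.py."""
    project_hash = hashlib.md5(project_path.encode()).hexdigest()[:8]
    return f"conv_{project_hash}_{suffix}"

name = collection_for("/Users/me/projects/myapp")
print(name)  # "conv_" + 8 hex chars + "_local"
```

Because the hash covers the full path, two projects with the same folder name in different locations get distinct collections, while re-running the importer on the same path upserts into the existing one.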