claude-self-reflect 2.8.0 → 2.8.2

This diff shows the content of publicly released package versions as they appear in their public registry. It is provided for informational purposes only.
@@ -4,7 +4,7 @@ description: Docker Compose orchestration expert for container management, servi
  tools: Read, Edit, Bash, Grep, LS
  ---
 
- You are a Docker orchestration specialist for the memento-stack project. You manage multi-container deployments, monitor service health, and troubleshoot container issues.
+ You are a Docker orchestration specialist for the claude-self-reflect project. You manage multi-container deployments, monitor service health, and troubleshoot container issues.
 
  ## Project Context
  - Main stack: Qdrant vector database + MCP server + Python importer
@@ -4,7 +4,7 @@ description: Import pipeline debugging specialist for JSONL processing, Python s
  tools: Read, Edit, Bash, Grep, Glob, LS
  ---
 
- You are an import pipeline debugging expert for the memento-stack project. You specialize in troubleshooting JSONL file processing, Python import scripts, and conversation chunking strategies.
+ You are an import pipeline debugging expert for the claude-self-reflect project. You specialize in troubleshooting JSONL file processing, Python import scripts, and conversation chunking strategies.
 
  ## Project Context
  - Processes Claude Desktop logs from ~/.claude/projects/
@@ -4,7 +4,7 @@ description: MCP (Model Context Protocol) server development expert for Claude D
  tools: Read, Edit, Bash, Grep, Glob, WebFetch
  ---
 
- You are an MCP server development specialist for the memento-stack project. You handle Claude Desktop integration, implement MCP tools, and ensure seamless communication between Claude and the vector database.
+ You are an MCP server development specialist for the claude-self-reflect project. You handle Claude Desktop integration, implement MCP tools, and ensure seamless communication between Claude and the vector database.
 
  ## Project Context
  - MCP server: claude-self-reflection
@@ -4,7 +4,7 @@ description: Qdrant vector database expert for collection management, troublesho
  tools: Read, Bash, Grep, Glob, LS, WebFetch
  ---
 
- You are a Qdrant vector database specialist for the memento-stack project. Your expertise covers collection management, vector search optimization, and embedding strategies.
+ You are a Qdrant vector database specialist for the claude-self-reflect project. Your expertise covers collection management, vector search optimization, and embedding strategies.
 
  ## Project Context
  - The system uses Qdrant for storing conversation embeddings from Claude Desktop logs
@@ -4,7 +4,7 @@ description: Search quality optimization expert for improving semantic search ac
  tools: Read, Edit, Bash, Grep, Glob, WebFetch
  ---
 
- You are a search optimization specialist for the memento-stack project. You improve semantic search quality, tune parameters, and analyze embedding model performance.
+ You are a search optimization specialist for the claude-self-reflect project. You improve semantic search quality, tune parameters, and analyze embedding model performance.
 
  ## Project Context
  - Current baseline: 66.1% search accuracy with Voyage AI
package/README.md CHANGED
@@ -24,67 +24,12 @@
 
  Give Claude perfect memory of all your conversations. Search past discussions instantly. Never lose context again.
 
- **100% Local by Default** - Your conversations never leave your machine. No cloud services required, no API keys needed, complete privacy out of the box.
+ **🔒 100% Local by Default** **⚡ Blazing Fast Search** **🚀 Zero Configuration** **🏭 Production Ready**
 
- **Blazing Fast Search** - Semantic search across thousands of conversations in milliseconds. Find that discussion about database schemas from three weeks ago in seconds.
+ ## 🚀 Quick Install
 
- **Zero Configuration** - Works immediately after installation. Smart auto-detection handles everything. No manual setup, no environment variables, just install and use.
-
- **Production Ready** - Battle-tested with 600+ conversations across 24 projects. Handles mixed embedding types automatically. Scales from personal use to team deployments.
-
- ## Table of Contents
-
- - [What You Get](#what-you-get)
- - [Requirements](#requirements)
- - [Quick Install/Uninstall](#quick-installuninstall)
- - [The Magic](#the-magic)
- - [Before & After](#before--after)
- - [Real Examples](#real-examples-that-made-us-build-this)
- - [How It Works](#how-it-works)
- - [Import Architecture](#import-architecture)
- - [Using It](#using-it)
- - [Key Features](#key-features)
- - [Performance](#performance)
- - [Configuration](#configuration)
- - [Technical Stack](#the-technical-stack)
- - [Problems](#problems)
- - [What's New](#whats-new)
- - [Advanced Topics](#advanced-topics)
- - [Contributors](#contributors)
-
- ## What You Get
-
- Ask Claude about past conversations. Get actual answers. **100% local by default** - your conversations never leave your machine. Cloud-enhanced search available when you need it.
-
- **Proven at Scale**: Successfully indexed 682 conversation files with 100% reliability. No data loss, no corruption, just seamless conversation memory that works.
-
- **Before**: "I don't have access to previous conversations"
- **After**:
- ```
- reflection-specialist(Search FastEmbed vs cloud embedding decision)
- ⎿ Done (3 tool uses · 8.2k tokens · 12.4s)
-
- "Found it! Yesterday we decided on FastEmbed for local mode - better privacy,
- no API calls, 384-dimensional embeddings. Works offline too."
- ```
-
- The reflection specialist is a specialized sub-agent that Claude automatically spawns when you ask about past conversations. It searches your conversation history in its own isolated context, keeping your main chat clean and focused.
-
- Your conversations become searchable. Your decisions stay remembered. Your context persists.
-
- ## Requirements
-
- - **Docker Desktop** (macOS/Windows) or **Docker Engine** (Linux)
- - **Node.js** 16+ (for the setup wizard)
- - **Claude Desktop** app
-
- ## Quick Install/Uninstall
-
- ### Install
-
- #### Local Mode (Default - Your Data Stays Private)
  ```bash
- # Install and run automatic setup
+ # Install and run automatic setup (5 minutes, everything automatic)
  npm install -g claude-self-reflect
  claude-self-reflect setup
 
@@ -93,11 +38,12 @@ claude-self-reflect setup
  # ✅ Configure everything automatically
  # ✅ Install the MCP in Claude Code
  # ✅ Start monitoring for new conversations
- # ✅ Verify the reflection tools work
  # 🔒 Keep all data local - no API keys needed
  ```
 
- #### Cloud Mode (Better Search Accuracy)
+ <details open>
+ <summary>📡 Cloud Mode (Better Search Accuracy)</summary>
+
  ```bash
  # Step 1: Get your free Voyage AI key
  # Sign up at https://www.voyageai.com/ - it takes 30 seconds
@@ -108,17 +54,17 @@ claude-self-reflect setup --voyage-key=YOUR_ACTUAL_KEY_HERE
  ```
  *Note: Cloud mode provides more accurate semantic search but sends conversation data to Voyage AI for processing.*
 
- 5 minutes. Everything automatic. Just works.
+ </details>
 
- ## The Magic
+ ## The Magic
 
  ![Self Reflection vs The Grind](docs/images/red-reflection.webp)
 
- ## Before & After
+ ## 📊 Before & After
 
  ![Before and After Claude Self-Reflect](docs/diagrams/before-after-combined.webp)
 
- ## Real Examples That Made Us Build This
+ ## 💬 Real Examples
 
  ```
  You: "What was that PostgreSQL optimization we figured out?"
@@ -137,41 +83,7 @@ Claude: "3 conversations found:
  - Nov 20: Added rate limiting per authenticated connection"
  ```
 
- ## How It Works
-
- Your conversations → Vector embeddings → Semantic search → Claude remembers
-
- Technical details exist. You don't need them to start.
-
- ## Import Architecture
-
- Here's how your conversations get imported and prioritized:
-
- ![Import Architecture](docs/diagrams/import-architecture.png)
-
- **The system intelligently processes your conversations:**
- - Runs every 60 seconds checking for new conversations
- - Processes newest conversations first (delta import pattern)
- - Maintains low memory usage (<50MB) through streaming
- - Handles up to 5 files per cycle to prevent blocking
-
- **HOT/WARM/COLD Intelligent Prioritization:**
- - **🔥 HOT** (< 5 minutes): Switches to 2-second intervals for near real-time import
- - **🌡️ WARM** (< 24 hours): Normal priority with starvation prevention (urgent after 30 min wait)
- - **❄️ COLD** (> 24 hours): Batch processed, max 5 per cycle to prevent blocking new content
- - Files are categorized by age and processed with priority queuing to ensure newest content gets imported quickly while preventing older files from being starved
-
- ## Using It
-
- Once installed, just talk naturally:
-
- - "What did we discuss about database optimization?"
- - "Find our debugging session from last week"
- - "Remember this solution for next time"
-
- The reflection specialist automatically activates. No special commands needed.
-
- ## Key Features
+ ## 🎯 Key Features
 
  ### Project-Scoped Search
  Searches are **project-aware by default**. Claude automatically searches within your current project:
@@ -189,16 +101,37 @@ Claude: [Searches across ALL your projects]
  ### ⏱️ Memory Decay
  Recent conversations matter more. Old ones fade. Like your brain, but reliable.
 
- ### 🚀 Performance
- - **Search**: <3ms average response time across 121+ collections (7.55ms max)
- - **Import**: Production streaming importer with 100% reliability
- - **Memory**: 302MB operational (60% of 500MB limit) - 96% reduction from v2.5.15
- - **CPU**: <1% sustained usage (99.93% reduction from 1437% peak)
- - **Scale**: 100% indexing success rate across all conversation types
- - **V2 Migration**: 100% complete - all conversations use token-aware chunking
+ ### Performance at Scale
+ - **Search**: <3ms average response time
+ - **Scale**: 600+ conversations across 24 projects
+ - **Reliability**: 100% indexing success rate
+ - **Memory**: 96% reduction from v2.5.15
+
+ ## 🏗️ Architecture
+
+ ![Import Architecture](docs/diagrams/import-architecture.png)
+
+ <details>
+ <summary>🔥 HOT/WARM/COLD Intelligent Prioritization</summary>
+
+ - **🔥 HOT** (< 5 minutes): 2-second intervals for near real-time import
+ - **🌡️ WARM** (< 24 hours): Normal priority with starvation prevention
+ - **❄️ COLD** (> 24 hours): Batch processed to prevent blocking
+
+ Files are categorized by age and processed with priority queuing to ensure newest content gets imported quickly while preventing older files from being starved.
+
+ </details>
+
+ ## 🛠️ Requirements
+
+ - **Docker Desktop** (macOS/Windows) or **Docker Engine** (Linux)
+ - **Node.js** 16+ (for the setup wizard)
+ - **Claude Desktop** app
 
+ ## 📖 Documentation
 
- ## The Technical Stack
+ <details>
+ <summary>🔧 Technical Stack</summary>
 
  - **Vector DB**: Qdrant (local, your data stays yours)
  - **Embeddings**:
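The HOT/WARM/COLD scheme the new README section describes — classify files by modification age, process newest first, and cap COLD files per cycle — can be sketched as below. This is a hypothetical illustration, not the package's actual implementation; only the thresholds (5 minutes, 24 hours) and the COLD cap of 5 come from the README text.

```python
# Age thresholds from the README: HOT < 5 min, WARM < 24 h, COLD otherwise.
HOT_SECONDS = 5 * 60
WARM_SECONDS = 24 * 60 * 60

def classify(mtime: float, now: float) -> str:
    """Bucket a file by modification age (hypothetical helper)."""
    age = now - mtime
    if age < HOT_SECONDS:
        return "HOT"
    if age < WARM_SECONDS:
        return "WARM"
    return "COLD"

def plan_cycle(files: dict[str, float], now: float, cold_cap: int = 5) -> list[str]:
    """Order one import cycle: HOT first, then WARM, then at most
    `cold_cap` COLD files so a backlog of old files cannot block new content."""
    buckets: dict[str, list[str]] = {"HOT": [], "WARM": [], "COLD": []}
    for path, mtime in files.items():
        buckets[classify(mtime, now)].append(path)
    return buckets["HOT"] + buckets["WARM"] + buckets["COLD"][:cold_cap]
```

Capping only the COLD bucket is what gives the starvation-prevention property mentioned above: old files still drain a few per cycle while fresh conversations jump the queue.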
@@ -207,18 +140,62 @@ Recent conversations matter more. Old ones fade. Like your brain, but reliable.
  - **MCP Server**: Python + FastMCP
  - **Search**: Semantic similarity with time decay
 
- ## Problems
+ </details>
+
+ <details>
+ <summary>📚 Advanced Topics</summary>
+
+ - [Performance tuning](docs/performance-guide.md)
+ - [Security & privacy](docs/security.md)
+ - [Windows setup](docs/windows-setup.md)
+ - [Architecture details](docs/architecture-details.md)
+ - [Contributing](CONTRIBUTING.md)
+
+ </details>
+
+ <details>
+ <summary>🐛 Troubleshooting</summary>
 
  - [Troubleshooting Guide](docs/troubleshooting.md)
  - [GitHub Issues](https://github.com/ramakay/claude-self-reflect/issues)
  - [Discussions](https://github.com/ramakay/claude-self-reflect/discussions)
 
- ## Upgrading to v2.5.19
+ </details>
+
+ <details>
+ <summary>🗑️ Uninstall</summary>
+
+ For complete uninstall instructions, see [docs/UNINSTALL.md](docs/UNINSTALL.md).
+
+ Quick uninstall:
+ ```bash
+ # Remove MCP server
+ claude mcp remove claude-self-reflect
+
+ # Stop Docker containers
+ docker-compose down
+
+ # Uninstall npm package
+ npm uninstall -g claude-self-reflect
+ ```
+
+ </details>
+
+ ## 📦 What's New
+
+ <details>
+ <summary>🎉 v2.8.0 - Latest Release</summary>
 
- ### 🆕 New Feature: Metadata Enrichment
- v2.5.19 adds searchable metadata to your conversations - concepts, files, and tools!
+ - **🔧 Fixed MCP Indexing**: Now correctly shows 97.1% progress (was showing 0%)
+ - **🔥 HOT/WARM/COLD**: Intelligent file prioritization for near real-time imports
+ - **📊 Enhanced Monitoring**: Real-time status with visual indicators
 
- #### For Existing Users
+ </details>
+
+ <details>
+ <summary>✨ v2.5.19 - Metadata Enrichment</summary>
+
+ ### For Existing Users
  ```bash
  # Update to latest version
  npm update -g claude-self-reflect
@@ -231,50 +208,30 @@ claude-self-reflect setup
  docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.py
  ```
 
- #### What You Get
+ ### What You Get
  - `search_by_concept("docker")` - Find conversations by topic
  - `search_by_file("server.py")` - Find conversations that touched specific files
  - Better search accuracy with metadata-based filtering
 
- ## What's New
+ </details>
+
+ <details>
+ <summary>📜 Release History</summary>
 
- - **v2.5.19** - Metadata Enrichment! Search by concepts, files, and tools. [Full release notes](docs/releases/v2.5.19-RELEASE-NOTES.md)
  - **v2.5.18** - Security dependency updates
- - **v2.5.17** - Critical CPU fix and memory limit adjustment. [Full release notes](docs/releases/v2.5.17-release-notes.md)
- - **v2.5.16** - (Pre-release only) Initial streaming importer with CPU throttling
+ - **v2.5.17** - Critical CPU fix and memory limit adjustment
+ - **v2.5.16** - Initial streaming importer with CPU throttling
  - **v2.5.15** - Critical bug fixes and collection creation improvements
- - **v2.5.14** - Async importer collection fix - All conversations now searchable
- - **v2.5.11** - Critical cloud mode fix - Environment variables now properly passed to MCP server
- - **v2.5.10** - Emergency hotfix for MCP server startup failure (dead code removal)
- - **v2.5.6** - Tool Output Extraction - Captures git changes & tool outputs for cross-agent discovery
+ - **v2.5.14** - Async importer collection fix
+ - **v2.5.11** - Critical cloud mode fix
+ - **v2.5.10** - Emergency hotfix for MCP server startup
+ - **v2.5.6** - Tool Output Extraction
 
  [Full changelog](docs/release-history.md)
 
- ## Advanced Topics
-
- - [Performance tuning](docs/performance-guide.md)
- - [Security & privacy](docs/security.md)
- - [Windows setup](docs/windows-setup.md)
- - [Architecture details](docs/architecture-details.md)
- - [Contributing](CONTRIBUTING.md)
-
- ### Uninstall
-
- For complete uninstall instructions, see [docs/UNINSTALL.md](docs/UNINSTALL.md).
-
- Quick uninstall:
- ```bash
- # Remove MCP server
- claude mcp remove claude-self-reflect
-
- # Stop Docker containers
- docker-compose down
-
- # Uninstall npm package
- npm uninstall -g claude-self-reflect
- ```
+ </details>
 
- ## Contributors
+ ## 👥 Contributors
 
  Special thanks to our contributors:
  - **[@TheGordon](https://github.com/TheGordon)** - Fixed timestamp parsing (#10)
@@ -283,4 +240,4 @@ Special thanks to our contributors:
 
  ---
 
- Built with ❤️ by [ramakay](https://github.com/ramakay) for the Claude community.
+ Built with ❤️ by [ramakay](https://github.com/ramakay) for the Claude community.
@@ -54,7 +54,7 @@ class ProjectResolver:
  4. Fuzzy matching on collection names
 
  Args:
- user_project_name: User-provided project name (e.g., "anukruti", "Anukruti", full path)
+ user_project_name: User-provided project name (e.g., "example-project", "Example-Project", full path)
 
  Returns:
  List of collection names that match the project
@@ -362,7 +362,7 @@ class ProjectResolver:
 
  Examples:
  - -Users-name-projects-my-app-src -> ['my', 'app', 'src']
- - -Users-name-Code-freightwise-documents -> ['freightwise', 'documents']
+ - -Users-name-Code-example-project -> ['example', 'project']
 
  Args:
  path: Path in any format
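The docstring examples in this hunk show dash-flattened paths being reduced to project name components. A minimal sketch consistent with those two examples follows; the `SKIP` set and the username-dropping rule are assumptions for illustration, not the package's actual `ProjectResolver` logic.

```python
# Scaffolding directory names assumed to be skipped (illustrative list only).
SKIP = {"Users", "projects", "Code"}

def extract_components(path: str) -> list[str]:
    """Split a dash-flattened path into project name components.

    Hypothetical sketch of the behavior in the docstring examples above:
    drop the leading '', 'Users', and username segments, then drop any
    remaining scaffolding directory names and return what is left.
    """
    parts = [p for p in path.split("-") if p]  # '' from the leading dash is removed
    # Drop 'Users' plus the username segment that follows it.
    if parts and parts[0] == "Users" and len(parts) > 1:
        parts = parts[2:]
    # Drop any remaining scaffolding directories such as 'projects' or 'Code'.
    return [p for p in parts if p not in SKIP]
```

Under these assumptions, `-Users-name-projects-my-app-src` yields `['my', 'app', 'src']`, matching the first example.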
@@ -81,15 +81,23 @@ def initialize_embeddings():
  print(f"[ERROR] Failed to initialize embeddings: {e}")
  return False
 
- # Debug environment loading
- print(f"[DEBUG] Environment variables loaded:")
- print(f"[DEBUG] ENABLE_MEMORY_DECAY: {ENABLE_MEMORY_DECAY}")
- print(f"[DEBUG] USE_NATIVE_DECAY: {USE_NATIVE_DECAY}")
- print(f"[DEBUG] DECAY_WEIGHT: {DECAY_WEIGHT}")
- print(f"[DEBUG] DECAY_SCALE_DAYS: {DECAY_SCALE_DAYS}")
- print(f"[DEBUG] PREFER_LOCAL_EMBEDDINGS: {PREFER_LOCAL_EMBEDDINGS}")
- print(f"[DEBUG] EMBEDDING_MODEL: {EMBEDDING_MODEL}")
- print(f"[DEBUG] env_path: {env_path}")
+ # Debug environment loading and startup
+ import sys
+ import datetime as dt
+ startup_time = dt.datetime.now().isoformat()
+ print(f"[STARTUP] MCP Server starting at {startup_time}", file=sys.stderr)
+ print(f"[STARTUP] Python: {sys.version}", file=sys.stderr)
+ print(f"[STARTUP] Working directory: {os.getcwd()}", file=sys.stderr)
+ print(f"[STARTUP] Script location: {__file__}", file=sys.stderr)
+ print(f"[DEBUG] Environment variables loaded:", file=sys.stderr)
+ print(f"[DEBUG] QDRANT_URL: {QDRANT_URL}", file=sys.stderr)
+ print(f"[DEBUG] ENABLE_MEMORY_DECAY: {ENABLE_MEMORY_DECAY}", file=sys.stderr)
+ print(f"[DEBUG] USE_NATIVE_DECAY: {USE_NATIVE_DECAY}", file=sys.stderr)
+ print(f"[DEBUG] DECAY_WEIGHT: {DECAY_WEIGHT}", file=sys.stderr)
+ print(f"[DEBUG] DECAY_SCALE_DAYS: {DECAY_SCALE_DAYS}", file=sys.stderr)
+ print(f"[DEBUG] PREFER_LOCAL_EMBEDDINGS: {PREFER_LOCAL_EMBEDDINGS}", file=sys.stderr)
+ print(f"[DEBUG] EMBEDDING_MODEL: {EMBEDDING_MODEL}", file=sys.stderr)
+ print(f"[DEBUG] env_path: {env_path}", file=sys.stderr)
 
 
  class SearchResult(BaseModel):
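The change in this hunk routes every diagnostic line to `sys.stderr`. That matters for a stdio-transport MCP server: stdout carries the JSON-RPC protocol stream, so anything printed to stdout corrupts the client connection. A tiny helper in that spirit is sketched below (hypothetical; the server in the diff calls `print(..., file=sys.stderr)` directly rather than using such a wrapper):

```python
import sys

def log(level: str, message: str) -> None:
    """Write one diagnostic line to stderr.

    In a stdio MCP server, stdout is reserved for the JSON-RPC stream, so
    all human-readable logging must go to stderr to keep the protocol
    channel clean.
    """
    print(f"[{level}] {message}", file=sys.stderr)

log("STARTUP", "MCP server starting")
log("DEBUG", "PREFER_LOCAL_EMBEDDINGS: True")
```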
@@ -259,31 +267,31 @@ async def update_indexing_status(cache_ttl: int = 5):
 
  # Convert set to list for compatibility
  imported_files_list = list(all_imported_files)
-
- # Count files that have been imported
- for file_path in jsonl_files:
- # Normalize the current file path for consistent comparison
- normalized_file = normalize_path(str(file_path))
-
- # Try multiple path formats to match Docker's state file
- file_str = str(file_path).replace(str(Path.home()), "/logs").replace("\\", "/")
- # Also try without .claude/projects prefix (Docker mounts directly)
- file_str_alt = file_str.replace("/.claude/projects", "")
-
- # Normalize alternative paths as well
- normalized_alt = normalize_path(file_str)
- normalized_alt2 = normalize_path(file_str_alt)
-
- # Check if file is in imported_files list (fully imported)
- if normalized_file in imported_files_list or normalized_alt in imported_files_list or normalized_alt2 in imported_files_list:
- indexed_files += 1
- # Or if it has metadata with position > 0 (partially imported)
- elif normalized_file in file_metadata and file_metadata[normalized_file].get("position", 0) > 0:
- indexed_files += 1
- elif normalized_alt in file_metadata and file_metadata[normalized_alt].get("position", 0) > 0:
- indexed_files += 1
- elif normalized_alt2 in file_metadata and file_metadata[normalized_alt2].get("position", 0) > 0:
- indexed_files += 1
+
+ # Count files that have been imported
+ for file_path in jsonl_files:
+ # Normalize the current file path for consistent comparison
+ normalized_file = normalize_path(str(file_path))
+
+ # Try multiple path formats to match Docker's state file
+ file_str = str(file_path).replace(str(Path.home()), "/logs").replace("\\", "/")
+ # Also try without .claude/projects prefix (Docker mounts directly)
+ file_str_alt = file_str.replace("/.claude/projects", "")
+
+ # Normalize alternative paths as well
+ normalized_alt = normalize_path(file_str)
+ normalized_alt2 = normalize_path(file_str_alt)
+
+ # Check if file is in imported_files list (fully imported)
+ if normalized_file in imported_files_list or normalized_alt in imported_files_list or normalized_alt2 in imported_files_list:
+ indexed_files += 1
+ # Or if it has metadata with position > 0 (partially imported)
+ elif normalized_file in file_metadata and file_metadata[normalized_file].get("position", 0) > 0:
+ indexed_files += 1
+ elif normalized_alt in file_metadata and file_metadata[normalized_alt].get("position", 0) > 0:
+ indexed_files += 1
+ elif normalized_alt2 in file_metadata and file_metadata[normalized_alt2].get("position", 0) > 0:
+ indexed_files += 1
 
  # Update status
  indexing_status["last_check"] = current_time
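The counting loop in the hunk above matches each host file against the importer's state file under several spellings, because the Docker container sees the host home directory mounted at `/logs` and may or may not include the `.claude/projects` prefix. The core idea can be sketched as a small standalone pair of helpers (a simplification for illustration: the real code also normalizes paths and checks partial-import positions):

```python
def candidate_paths(file_path: str, home: str) -> list[str]:
    """Return the spellings a host file may have in the container's state file.

    Assumed mount layout, as in the diff above: the host home directory
    appears as /logs inside the container, and the .claude/projects prefix
    may be absent when the projects directory is mounted directly.
    """
    docker = file_path.replace(home, "/logs").replace("\\", "/")
    return [file_path, docker, docker.replace("/.claude/projects", "")]

def is_indexed(file_path: str, imported: set[str], home: str) -> bool:
    """A file counts as indexed if any candidate spelling was imported."""
    return any(p in imported for p in candidate_paths(file_path, home))
```

Checking all spellings is what fixed the "0% indexed" symptom mentioned in this release's notes: the files were imported, but under a path format the status check never tried.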
@@ -1519,4 +1527,8 @@ if __name__ == "__main__":
  sys.exit(0)
 
  # Normal MCP server operation
+ print(f"[STARTUP] Starting FastMCP server in stdio mode...", file=sys.stderr)
+ print(f"[STARTUP] Server name: {mcp.name}", file=sys.stderr)
+ print(f"[STARTUP] Calling mcp.run()...", file=sys.stderr)
  mcp.run()
+ print(f"[STARTUP] Server exited normally", file=sys.stderr)
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "claude-self-reflect",
- "version": "2.8.0",
+ "version": "2.8.2",
  "description": "Give Claude perfect memory of all your conversations - Installation wizard for Python MCP server",
  "keywords": [
  "claude",