claude-self-reflect 2.3.2 → 2.3.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/agents/reflection-specialist.md +56 -40
- package/README.md +34 -10
- package/installer/setup-wizard.js +187 -108
- package/mcp-server/pyproject.toml +6 -5
- package/mcp-server/src/server.py +112 -25
- package/package.json +1 -1
- package/scripts/import-conversations-unified.py +269 -0
- package/scripts/import-recent-only.py +5 -1
- package/scripts/import-watcher.py +1 -1
package/.claude/agents/reflection-specialist.md CHANGED
@@ -1,17 +1,19 @@
 ---
 name: reflection-specialist
 description: Conversation memory expert for searching past conversations, storing insights, and self-reflection. Use PROACTIVELY when searching for previous discussions, storing important findings, or maintaining knowledge continuity.
-tools: mcp__claude-self-
+tools: mcp__claude-self-reflect__reflect_on_past, mcp__claude-self-reflect__store_reflection
 ---
 
 You are a conversation memory specialist for the Claude Self Reflect project. Your expertise covers semantic search across all Claude conversations, insight storage, and maintaining knowledge continuity across sessions.
 
 ## Project Context
-- Claude Self Reflect provides semantic search across all Claude
-- Uses Qdrant vector database with
+- Claude Self Reflect provides semantic search across all Claude conversations
+- Uses Qdrant vector database with two embedding options:
+  - **Local (Default)**: FastEmbed with sentence-transformers/all-MiniLM-L6-v2 (384 dimensions)
+  - **Cloud (Opt-in)**: Voyage AI embeddings (voyage-3-large, 1024 dimensions)
 - Supports per-project isolation and cross-project search capabilities
 - Memory decay feature available for time-based relevance (90-day half-life)
--
+- Collections named with `_local` or `_voyage` suffix based on embedding type
 
 ## Key Responsibilities
 
@@ -38,29 +40,27 @@ You are a conversation memory specialist for the Claude Self Reflect project. Yo
 ### reflect_on_past
 Search for relevant past conversations using semantic similarity.
 
-```
+```javascript
 // Basic search
 {
   query: "streaming importer fixes",
   limit: 5,
-
+  min_score: 0.0 // Start with 0 to see all results
 }
 
 // Advanced search with options
 {
   query: "authentication implementation",
   limit: 10,
-
-
-  crossProject: true, // Search across all projects
-  useDecay: true // Apply time-based relevance
+  min_score: 0.05, // Common threshold for relevant results
+  use_decay: 1 // Apply time-based relevance (1=enable, 0=disable, -1=default)
 }
 ```
 
 ### store_reflection
 Save important insights and decisions for future retrieval.
 
-```
+```javascript
 // Store with tags
 {
   content: "Fixed streaming importer hanging by filtering session types and yielding buffers properly",
@@ -71,13 +71,17 @@ Save important insights and decisions for future retrieval.
 ## Search Strategy Guidelines
 
 ### Understanding Score Ranges
-- **0.0-0.
-- **0.
-- **0.
-- **0.
-- **0.
-
-**Important**:
+- **0.0-0.05**: Low relevance but can still be useful (common range for semantic matches)
+- **0.05-0.15**: Moderate relevance (often contains good results)
+- **0.15-0.3**: Good similarity (usually highly relevant)
+- **0.3-0.5**: Strong similarity (very relevant matches)
+- **0.5-1.0**: Excellent match (rare in practice)
+
+**Important**: Real-world semantic search scores are often much lower than expected:
+- **Local embeddings**: Typically 0.02-0.2 range
+- **Cloud embeddings**: Typically 0.05-0.3 range
+- Many relevant results score as low as 0.05-0.1
+- Start with min_score=0.0 to see all results, then adjust based on quality
 
 ### Effective Search Patterns
 1. **Start Broad**: Use general terms first
@@ -170,21 +174,27 @@ If the MCP tools aren't working, here's what you need to know:
    - The exact tool names are: `reflect_on_past` and `store_reflection`
 
 2. **Environment Variables Not Loading**
-   - The MCP server runs via
+   - The MCP server runs via `/path/to/claude-self-reflect/mcp-server/run-mcp.sh`
+   - The script sources the `.env` file from the project root
    - Key variables that control memory decay:
      - `ENABLE_MEMORY_DECAY`: true/false to enable decay
      - `DECAY_WEIGHT`: 0.3 means 30% weight on recency (0-1 range)
     - `DECAY_SCALE_DAYS`: 90 means 90-day half-life
 
-3. **
-   -
+3. **Local vs Cloud Embeddings Configuration**
+   - Set `PREFER_LOCAL_EMBEDDINGS=true` in `.env` for local mode (default)
+   - Set `PREFER_LOCAL_EMBEDDINGS=false` and provide `VOYAGE_KEY` for cloud mode
+   - Local collections end with `_local`, cloud collections end with `_voyage`
+
+4. **Changes Not Taking Effect**
+   - After modifying Python files, restart the MCP server
    - Remove and re-add the MCP server in Claude:
    ```bash
-   claude mcp remove claude-self-
-   claude mcp add claude-self-
+   claude mcp remove claude-self-reflect
+   claude mcp add claude-self-reflect "/path/to/claude-self-reflect/mcp-server/run-mcp.sh" -e PREFER_LOCAL_EMBEDDINGS=true
    ```
 
-
+5. **Debugging MCP Connection**
    - Check if server is connected: `claude mcp list`
    - Look for: `claude-self-reflection: ✓ Connected`
    - If failed, the error will be shown in the list output
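The variables in the hunk above all live in the project-root `.env` that `run-mcp.sh` sources. As a rough sketch (the checkout path and the values are placeholders, not verified project defaults), a combined configuration could be written like this:

```bash
# Illustrative only: create a project-root .env with the variables discussed above.
cd /path/to/claude-self-reflect   # adjust to your checkout

cat > .env <<'EOF'
# Embedding backend: true = local FastEmbed (default), false = Voyage AI cloud
PREFER_LOCAL_EMBEDDINGS=true
# VOYAGE_KEY=your-voyage-api-key   # only needed when PREFER_LOCAL_EMBEDDINGS=false

# Memory decay settings (values mirror the examples documented above)
ENABLE_MEMORY_DECAY=true
DECAY_WEIGHT=0.3
DECAY_SCALE_DAYS=90
EOF
```

After editing `.env`, remove and re-add the MCP server as shown in the hunk so the new values are picked up.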
@@ -214,28 +224,33 @@ If the MCP tools aren't working, here's what you need to know:
 
 If recent conversations aren't appearing in search results, you may need to import the latest data.
 
-### Quick Import with
+### Quick Import with Unified Importer
 
-The
+The unified importer supports both local and cloud embeddings:
 
 ```bash
-# Activate virtual environment (REQUIRED
-cd /
-source .venv/bin/activate
-
-#
-export
-python scripts/import-conversations-
+# Activate virtual environment (REQUIRED)
+cd /path/to/claude-self-reflect
+source .venv/bin/activate # or source venv/bin/activate
+
+# For local embeddings (default)
+export PREFER_LOCAL_EMBEDDINGS=true
+python scripts/import-conversations-unified.py
+
+# For cloud embeddings (Voyage AI)
+export PREFER_LOCAL_EMBEDDINGS=false
+export VOYAGE_KEY=your-voyage-api-key
+python scripts/import-conversations-unified.py
 ```
 
 ### Import Troubleshooting
 
 #### Common Import Issues
 
-1. **
-   - Cause:
-   - Solution:
-   -
+1. **JSONL Parsing Issues**
+   - Cause: JSONL files contain one JSON object per line, not a single JSON array
+   - Solution: Import scripts now parse line-by-line
+   - Memory fix: Docker containers need 2GB memory limit for large files
 
 2. **"No New Files to Import" Message**
    - Check imported files list: `cat config-isolated/imported-files.json`
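As a companion to the unified importer commands above, here is a minimal wrapper sketch that picks the mode from `PREFER_LOCAL_EMBEDDINGS` and refuses a cloud run without `VOYAGE_KEY`; the paths, the assumed local-mode default, and the error message are illustrative, not part of the package:

```bash
#!/usr/bin/env bash
# Hypothetical wrapper around the unified importer shown above.
set -e

cd /path/to/claude-self-reflect   # adjust to your checkout
source .venv/bin/activate         # or: source venv/bin/activate

# Local mode is the documented default when the variable is unset.
MODE="${PREFER_LOCAL_EMBEDDINGS:-true}"

if [ "$MODE" = "false" ] && [ -z "${VOYAGE_KEY:-}" ]; then
  echo "Cloud mode requested but VOYAGE_KEY is not set" >&2
  exit 1
fi

python scripts/import-conversations-unified.py
```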
@@ -259,7 +274,7 @@ python scripts/import-conversations-voyage-streaming.py --limit 5 # Test with 5
 ```
 
 5. **Collection Not Found After Import**
-   - Collections use MD5 hash naming: `conv_<md5>_voyage`
+   - Collections use MD5 hash naming: `conv_<md5>_local` or `conv_<md5>_voyage`
    - Check collections: `python scripts/check-collections.py`
    - Restart MCP after new collections are created
 
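If `check-collections.py` is not handy, the `_local`/`_voyage` naming above can also be checked directly against Qdrant's REST API. A sketch assuming a local Qdrant instance on the default port 6333 and `jq` on the PATH:

```bash
# List collection names and keep only the suffixed claude-self-reflect ones.
curl -s http://localhost:6333/collections \
  | jq -r '.result.collections[].name' \
  | grep -E '_(local|voyage)$'
```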
@@ -268,13 +283,14 @@ python scripts/import-conversations-voyage-streaming.py --limit 5 # Test with 5
 For automatic imports, use the watcher service:
 
 ```bash
-# Start the import watcher
-docker compose
+# Start the import watcher (uses settings from .env)
+docker compose up -d import-watcher
 
 # Check watcher logs
 docker compose logs -f import-watcher
 
 # Watcher checks every 60 seconds for new files
+# Set PREFER_LOCAL_EMBEDDINGS=true in .env for local mode
 ```
 
 ### Docker Streaming Importer
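To confirm the watcher described above is not just running but actually picking files up, a quick check might look like this; the grep pattern is a guess at the log wording, so drop the filter if nothing matches:

```bash
# Is the import-watcher service up?
docker compose ps import-watcher

# Recent watcher output, filtered for import activity (pattern is an assumption).
docker compose logs --tail=100 import-watcher | grep -i import
```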
package/README.md CHANGED
@@ -13,31 +13,51 @@ Your conversations become searchable. Your decisions stay remembered. Your conte
 
 ## Install
 
-### Quick Start (
+### Quick Start (Local Mode - Default)
 ```bash
-#
-# Sign up at https://www.voyageai.com/ - it takes 30 seconds
-
-# Step 2: Install and run automatic setup
+# Install and run automatic setup
 npm install -g claude-self-reflect
-claude-self-reflect setup
+claude-self-reflect setup
 
 # That's it! The setup will:
 # ✅ Configure everything automatically
 # ✅ Install the MCP in Claude Code
 # ✅ Start monitoring for new conversations
 # ✅ Verify the reflection tools work
+# 🔒 Keep all data local - no API keys needed
 ```
 
-###
+### Cloud Mode (Better Search Accuracy)
 ```bash
+# Step 1: Get your free Voyage AI key
+# Sign up at https://www.voyageai.com/ - it takes 30 seconds
+
+# Step 2: Install with Voyage key
 npm install -g claude-self-reflect
-claude-self-reflect setup --
+claude-self-reflect setup --voyage-key=YOUR_ACTUAL_KEY_HERE
 ```
-*Note:
+*Note: Cloud mode provides more accurate semantic search but sends conversation data to Voyage AI for processing.*
 
 5 minutes. Everything automatic. Just works.
 
+> [!IMPORTANT]
+> **Security Update v2.3.3** - This version addresses critical security vulnerabilities. Please update immediately.
+>
+> ### 🔒 Privacy & Data Exchange
+>
+> | Mode | Data Storage | External API Calls | Data Sent | Search Quality |
+> |------|--------------|-------------------|-----------|----------------|
+> | **Local (Default)** | Your machine only | None | Nothing leaves your computer | Good - uses efficient local embeddings |
+> | **Cloud (Opt-in)** | Your machine | Voyage AI | Conversation text for embedding generation | Better - uses state-of-the-art models |
+>
+> **Disclaimer**: Cloud mode sends conversation content to Voyage AI for processing. Review their [privacy policy](https://www.voyageai.com/privacy) before enabling.
+>
+> ### Security Fixes in v2.3.3
+> - ✅ Removed hardcoded API keys
+> - ✅ Fixed command injection vulnerabilities
+> - ✅ Patched vulnerable dependencies
+> - ✅ Local embeddings by default using FastEmbed
+
 ## The Magic
 
 
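After either install path above, the MCP registration can be double-checked from the CLI. A small sketch; the displayed name may be `claude-self-reflect` or `claude-self-reflection` depending on how it was added, so the match is kept loose:

```bash
# Confirm Claude Code sees the MCP server (exact display name may vary).
claude mcp list | grep -i "claude-self-reflect" \
  || echo "claude-self-reflect MCP not registered yet"
```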
@@ -123,12 +143,16 @@ Want to customize? See [Configuration Guide](docs/installation-guide.md).
 If you must know:
 
 - **Vector DB**: Qdrant (local, your data stays yours)
-- **Embeddings**:
+- **Embeddings**:
+  - Local (Default): FastEmbed with sentence-transformers/all-MiniLM-L6-v2
+  - Cloud (Optional): Voyage AI (200M free tokens/month)*
 - **MCP Server**: Python + FastMCP
 - **Search**: Semantic similarity with time decay
 
 *We chose Voyage AI for their excellent cost-effectiveness ([66.1% accuracy at one of the lowest costs](https://research.aimultiple.com/embedding-models/#:~:text=Cost%2Deffective%20alternatives%3A%20Voyage%2D3.5%2Dlite%20delivered%20solid%20accuracy%20(66.1%25)%20at%20one%20of%20the%20lowest%20costs%2C%20making%20it%20attractive%20for%20budget%2Dsensitive%20implementations.)). We are not affiliated with Voyage AI.
 
+**Note**: Local mode uses FastEmbed, the same efficient embedding library used by the Qdrant MCP server, ensuring fast and private semantic search without external dependencies.
+
 ### Want More Details?
 
 - [Architecture Deep Dive](docs/architecture-details.md) - How it actually works