claude-self-reflect 2.3.4 → 2.3.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +13 -6
- package/package.json +1 -1
package/README.md
CHANGED
@@ -4,10 +4,19 @@ Claude forgets everything. This fixes that.
 
 ## What You Get
 
-Ask Claude about past conversations. Get actual answers.
+Ask Claude about past conversations. Get actual answers. Local-first with no cloud dependencies, but cloud-enhanced search available when you need it.
 
 **Before**: "I don't have access to previous conversations"
-**After**:
+**After**:
+```
+⏺ reflection-specialist(Search FastEmbed vs cloud embedding decision)
+⎿ Done (3 tool uses · 8.2k tokens · 12.4s)
+
+"Found it! Yesterday we decided on FastEmbed for local mode - better privacy,
+no API calls, 384-dimensional embeddings. Works offline too."
+```
+
+The reflection specialist is a specialized sub-agent that Claude automatically spawns when you ask about past conversations. It searches your conversation history in its own isolated context, keeping your main chat clean and focused.
 
 Your conversations become searchable. Your decisions stay remembered. Your context persists.
 
@@ -145,13 +154,11 @@ If you must know:
 - **Vector DB**: Qdrant (local, your data stays yours)
 - **Embeddings**:
   - Local (Default): FastEmbed with sentence-transformers/all-MiniLM-L6-v2
-  - Cloud (Optional): Voyage AI (200M free tokens/month)
+  - Cloud (Optional): Voyage AI (200M free tokens/month)
 - **MCP Server**: Python + FastMCP
 - **Search**: Semantic similarity with time decay
 
-
-
-**Note**: Local mode uses FastEmbed, the same efficient embedding library used by the Qdrant MCP server, ensuring fast and private semantic search without external dependencies.
+Both embedding options work well. Local mode uses FastEmbed for privacy and offline use. Cloud mode uses Voyage AI for enhanced accuracy when internet is available. We are not affiliated with Voyage AI.
 
 ### Want More Details?
 
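The "semantic similarity with time decay" ranking mentioned in the README can be sketched as follows. This is a minimal illustration, not the package's actual implementation: the exponential half-life form, the 30-day default, and the multiplicative combination are all assumptions, since the diff does not show the real formula.

```python
import math

def decayed_score(similarity: float, age_days: float,
                  half_life_days: float = 30.0) -> float:
    """Combine a raw similarity score with an exponential time decay.

    Assumed form: the score halves every `half_life_days`, so a recent
    result outranks an equally similar but older one.
    """
    decay = 0.5 ** (age_days / half_life_days)
    return similarity * decay

# Two results with identical semantic similarity but different ages:
recent = decayed_score(0.80, age_days=1)   # nearly full score
old = decayed_score(0.80, age_days=90)     # heavily decayed
```

With a 30-day half-life, a 30-day-old result scores exactly half its raw similarity (0.80 → 0.40), which is why the decayed ranking favors fresher conversations.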