claude-self-reflect 2.3.4 → 2.3.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +13 -6
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -4,10 +4,19 @@ Claude forgets everything. This fixes that.
 
  ## What You Get
 
- Ask Claude about past conversations. Get actual answers.
+ Ask Claude about past conversations. Get actual answers. Local-first with no cloud dependencies; cloud-enhanced search is available when you need it.
 
  **Before**: "I don't have access to previous conversations"
- **After**: "We discussed JWT auth on Tuesday. You decided on 15-minute tokens."
+ **After**:
+ ```
+ ⏺ reflection-specialist(Search FastEmbed vs cloud embedding decision)
+ ⎿ Done (3 tool uses · 8.2k tokens · 12.4s)
+
+ "Found it! Yesterday we decided on FastEmbed for local mode - better privacy,
+ no API calls, 384-dimensional embeddings. Works offline too."
+ ```
+
+ The reflection specialist is a sub-agent that Claude spawns automatically when you ask about past conversations. It searches your conversation history in its own isolated context, keeping your main chat clean and focused.
 
  Your conversations become searchable. Your decisions stay remembered. Your context persists.
 
@@ -145,13 +154,11 @@ If you must know:
  - **Vector DB**: Qdrant (local, your data stays yours)
  - **Embeddings**:
  - Local (Default): FastEmbed with sentence-transformers/all-MiniLM-L6-v2
- - Cloud (Optional): Voyage AI (200M free tokens/month)*
+ - Cloud (Optional): Voyage AI (200M free tokens/month)
  - **MCP Server**: Python + FastMCP
  - **Search**: Semantic similarity with time decay
 
- *We chose Voyage AI for their excellent cost-effectiveness ([66.1% accuracy at one of the lowest costs](https://research.aimultiple.com/embedding-models/#:~:text=Cost%2Deffective%20alternatives%3A%20Voyage%2D3.5%2Dlite%20delivered%20solid%20accuracy%20(66.1%25)%20at%20one%20of%20the%20lowest%20costs%2C%20making%20it%20attractive%20for%20budget%2Dsensitive%20implementations.)). We are not affiliated with Voyage AI.
-
- **Note**: Local mode uses FastEmbed, the same efficient embedding library used by the Qdrant MCP server, ensuring fast and private semantic search without external dependencies.
+ Both embedding options work well. Local mode uses FastEmbed for privacy and offline use. Cloud mode uses Voyage AI for enhanced accuracy when internet is available. We are not affiliated with Voyage AI.
 
  ### Want More Details?
 
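The stack list above ends with "**Search**: Semantic similarity with time decay". As a rough illustration of what that ranking could look like, here is a minimal, self-contained sketch that weights a raw similarity score by recency; the exponential half-life formula, the 90-day default, and the sample hits are assumptions for illustration, not claude-self-reflect's actual implementation:

```python
from datetime import datetime, timedelta, timezone

def decayed_score(similarity: float, age_days: float, half_life_days: float = 90.0) -> float:
    """Weight a raw similarity score by exponential time decay.

    Hypothetical formula: a result's score halves every `half_life_days` days.
    """
    return similarity * 0.5 ** (age_days / half_life_days)

now = datetime.now(timezone.utc)

# Hypothetical search hits: (snippet, raw cosine similarity, timestamp).
hits = [
    ("decided on FastEmbed for local mode", 0.82, now - timedelta(days=1)),
    ("early brainstorm about embeddings", 0.85, now - timedelta(days=300)),
]

# Rank by decayed score: recent, relevant results float to the top.
ranked = sorted(
    hits,
    key=lambda hit: decayed_score(hit[1], (now - hit[2]).days),
    reverse=True,
)
print(ranked[0][0])  # prints "decided on FastEmbed for local mode"
```

Under this kind of weighting, yesterday's decision (raw similarity 0.82) outranks a ten-month-old snippet with a slightly higher raw score (0.85), matching the "found yesterday's decision" behavior shown in the README excerpt above.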
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "claude-self-reflect",
- "version": "2.3.4",
+ "version": "2.3.6",
  "description": "Give Claude perfect memory of all your conversations - Installation wizard for Python MCP server",
  "keywords": [
  "claude",