claude-self-reflect 2.8.7 → 2.8.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/Dockerfile.safe-watcher +1 -1
- package/Dockerfile.streaming-importer +1 -1
- package/README.md +71 -49
- package/installer/setup-wizard-docker.js +1 -1
- package/mcp-server/pyproject.toml +1 -1
- package/mcp-server/src/server.py +459 -27
- package/package.json +1 -1
package/Dockerfile.safe-watcher CHANGED

@@ -25,7 +25,7 @@ RUN pip install --no-cache-dir torch==2.3.0 --index-url https://download.pytorch
 RUN pip install --no-cache-dir \
     qdrant-client==1.15.0 \
     fastembed==0.4.0 \
-    numpy
+    numpy>=2.1.0 \
     psutil==7.0.0 \
     tenacity==8.2.3 \
     python-dotenv==1.0.0 \
package/README.md CHANGED

@@ -24,22 +24,22 @@

 Give Claude perfect memory of all your conversations. Search past discussions instantly. Never lose context again.

-
-
-##
-
-- [
-- [
-- [
-- [
-- [
-- [
-- [
-- [
-- [
-- [
-- [
-- [
+**100% Local by Default** • **Blazing Fast Search** • **Zero Configuration** • **Production Ready**
+
+## Table of Contents
+
+- [Quick Install](#-quick-install)
+- [The Magic](#the-magic)
+- [Before & After](#before--after)
+- [Real Examples](#real-examples)
+- [NEW: Real-time Indexing Status](#new-real-time-indexing-status-in-your-terminal)
+- [Key Features](#key-features)
+- [Architecture](#architecture)
+- [Requirements](#requirements)
+- [Documentation](#documentation)
+- [What's New](#whats-new)
+- [Troubleshooting](#troubleshooting)
+- [Contributors](#contributors)

 ## 🚀 Quick Install

@@ -71,15 +71,15 @@ claude-self-reflect setup --voyage-key=YOUR_ACTUAL_KEY_HERE

 </details>

-##
+## The Magic

 

-##
+## Before & After

 

-##
+## Real Examples

 ```
 You: "What was that PostgreSQL optimization we figured out?"
@@ -98,7 +98,7 @@ Claude: "3 conversations found:
 - Nov 20: Added rate limiting per authenticated connection"
 ```

-##
+## NEW: Real-time Indexing Status in Your Terminal

 See your conversation indexing progress directly in your statusline:

@@ -110,10 +110,32 @@ See your conversation indexing progress directly in your statusline:

 Works with [Claude Code Statusline](https://github.com/sirmalloc/ccstatusline) - shows progress bars, percentages, and indexing lag in real-time!

-##
+## Key Features

 <details>
-<summary><b
+<summary><b>MCP Tools Available to Claude</b></summary>
+
+**Search & Memory Tools:**
+- `reflect_on_past` - Search past conversations using semantic similarity with time decay
+- `store_reflection` - Store important insights or learnings for future reference
+- `quick_search` - Fast search returning only count and top result
+- `search_summary` - Get aggregated insights without individual details
+- `get_more_results` - Paginate through additional search results
+- `search_by_file` - Find conversations that analyzed specific files
+- `search_by_concept` - Search for conversations about development concepts
+- `get_full_conversation` - Retrieve complete JSONL conversation files (v2.8.8)
+
+**Status & Monitoring Tools:**
+- `get_status` - Real-time import progress and system status
+- `get_health` - Comprehensive system health check
+- `collection_status` - Check Qdrant collection health and stats
+
+All tools are automatically available when the MCP server is connected to Claude Code.
+
+</details>
+
+<details>
+<summary><b>Statusline Integration</b></summary>

 See your indexing progress right in your terminal! Works with [Claude Code Statusline](https://github.com/sirmalloc/ccstatusline):
 - **Progress Bar** - Visual indicator `[████████ ] 91%`
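Since these are ordinary MCP tools on a FastMCP server, they can also be exercised outside Claude Code. A minimal sketch, assuming FastMCP's bundled `Client` API and a local checkout (the server path and arguments here are hypothetical, not from this diff):

```python
import asyncio
from fastmcp import Client  # assumption: FastMCP 2.x client API

async def main():
    # Point the client at the server entry point (path hypothetical).
    async with Client("mcp-server/src/server.py") as client:
        result = await client.call_tool(
            "reflect_on_past",
            {"query": "PostgreSQL optimization", "limit": 3},
        )
        print(result)

asyncio.run(main())
```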
@@ -126,7 +148,7 @@ See your indexing progress right in your terminal! Works with [Claude Code Statu
 </details>

 <details>
-<summary><b
+<summary><b>Project-Scoped Search</b></summary>

 Searches are **project-aware by default**. Claude automatically searches within your current project:

@@ -143,7 +165,7 @@ Claude: [Searches across ALL your projects]
 </details>

 <details>
-<summary><b
+<summary><b>Memory Decay</b></summary>

 Recent conversations matter more. Old ones fade. Like your brain, but reliable.
 - **90-day half-life**: Recent memories stay strong
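For intuition, a 90-day half-life behaves like an exponential weight on age. A minimal sketch of one way such a blend can work (illustrative only; the package's actual scoring lives in server.py, whose `DECAY_WEIGHT=0.3` and `DECAY_SCALE_DAYS=90` defaults appear later in this diff):

```python
HALF_LIFE_DAYS = 90.0   # matches the README's "90-day half-life"
DECAY_WEIGHT = 0.3      # default DECAY_WEIGHT from the server.py diff below

def decayed_score(similarity: float, age_days: float) -> float:
    """Illustrative blend: mostly raw similarity, plus a freshness
    term that halves every 90 days."""
    freshness = 0.5 ** (age_days / HALF_LIFE_DAYS)
    return (1 - DECAY_WEIGHT) * similarity + DECAY_WEIGHT * freshness

print(round(decayed_score(0.8, 0), 2))    # 0.86  (brand new)
print(round(decayed_score(0.8, 180), 2))  # 0.64  (two half-lives old)
```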
@@ -153,7 +175,7 @@ Recent conversations matter more. Old ones fade. Like your brain, but reliable.
 </details>

 <details>
-<summary><b
+<summary><b>Performance at Scale</b></summary>

 - **Search**: <3ms average response time
 - **Scale**: 600+ conversations across 24 projects
@@ -163,18 +185,18 @@

 </details>

-##
+## Architecture

 <details>
 <summary><b>View Architecture Diagram & Details</b></summary>

 

-###
+### HOT/WARM/COLD Intelligent Prioritization

--
--
--
+- **HOT** (< 5 minutes): 2-second intervals for near real-time import
+- **WARM** (< 24 hours): Normal priority with starvation prevention
+- **COLD** (> 24 hours): Batch processed to prevent blocking

 Files are categorized by age and processed with priority queuing to ensure newest content gets imported quickly while preventing older files from being starved.

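The paragraph above describes the queuing policy; here is a compact sketch of the age-based bucketing it implies (thresholds from the bullets above, queue mechanics simplified):

```python
import time

def categorize(mtime: float, now: float | None = None) -> str:
    """HOT under 5 minutes old, WARM under 24 hours, COLD otherwise."""
    age = (now or time.time()) - mtime
    if age < 5 * 60:
        return "HOT"    # polled on 2-second intervals
    if age < 24 * 60 * 60:
        return "WARM"   # normal priority, starvation-protected
    return "COLD"       # batch processed

print(categorize(time.time() - 60))         # HOT
print(categorize(time.time() - 3 * 3600))   # WARM
print(categorize(time.time() - 3 * 86400))  # COLD
```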
@@ -186,7 +208,7 @@ Files are categorized by age and processed with priority queuing to ensure newes

 </details>

-##
+## Requirements

 <details>
 <summary><b>System Requirements</b></summary>
@@ -204,16 +226,16 @@ Files are categorized by age and processed with priority queuing to ensure newes
 - **Docker Desktop 4.0+** for best compatibility

 ### Operating Systems
--
--
--
+- macOS 11+ (Intel & Apple Silicon)
+- Windows 10/11 with WSL2
+- Linux (Ubuntu 20.04+, Debian 11+)

 </details>

-##
+## Documentation

 <details>
-<summary
+<summary>Technical Stack</summary>

 - **Vector DB**: Qdrant (local, your data stays yours)
 - **Embeddings**:
@@ -225,7 +247,7 @@ Files are categorized by age and processed with priority queuing to ensure newes
 </details>

 <details>
-<summary
+<summary>Advanced Topics</summary>

 - [Performance tuning](docs/performance-guide.md)
 - [Security & privacy](docs/security.md)
@@ -236,7 +258,7 @@ Files are categorized by age and processed with priority queuing to ensure newes
 </details>

 <details>
-<summary
+<summary>Troubleshooting</summary>

 - [Troubleshooting Guide](docs/troubleshooting.md)
 - [GitHub Issues](https://github.com/ramakay/claude-self-reflect/issues)
@@ -245,7 +267,7 @@ Files are categorized by age and processed with priority queuing to ensure newes
 </details>

 <details>
-<summary
+<summary>Uninstall</summary>

 For complete uninstall instructions, see [docs/UNINSTALL.md](docs/UNINSTALL.md).

@@ -263,19 +285,19 @@ npm uninstall -g claude-self-reflect

 </details>

-##
+## What's New

 <details>
-<summary
+<summary>v2.8.8 - Latest Release</summary>

--
--
--
+- **Full Conversation Access**: New `get_full_conversation` tool provides complete JSONL files instead of 200-char excerpts
+- **95% Value Increase**: Agents can now access entire conversations with full implementation details
+- **Direct File Access**: Returns absolute paths for efficient reading with standard tools

 </details>

 <details>
-<summary
+<summary>v2.5.19 - Metadata Enrichment</summary>

 ### For Existing Users
 ```bash
@@ -298,7 +320,7 @@ docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.
 </details>

 <details>
-<summary
+<summary>Release History</summary>

 - **v2.5.18** - Security dependency updates
 - **v2.5.17** - Critical CPU fix and memory limit adjustment
@@ -313,7 +335,7 @@ docker compose run --rm importer python /app/scripts/delta-metadata-update-safe.

 </details>

-##
+## Troubleshooting

 <details>
 <summary><b>Common Issues and Solutions</b></summary>
@@ -432,7 +454,7 @@ claude-self-reflect doctor > diagnostic.txt

 </details>

-##
+## Contributors

 Special thanks to our contributors:
 - **[@TheGordon](https://github.com/TheGordon)** - Fixed timestamp parsing (#10)
@@ -441,4 +463,4 @@ Special thanks to our contributors:

 ---

-Built with
+Built with care by [ramakay](https://github.com/ramakay) for the Claude community.
package/installer/setup-wizard-docker.js CHANGED

@@ -306,7 +306,7 @@ async function configureClaude() {
   const isDockerMode = process.env.USE_DOCKER_MCP === 'true';
   const mcpScript = isDockerMode
     ? join(projectRoot, 'mcp-server', 'run-mcp-docker.sh')
-    : join(projectRoot, 'mcp-server', 'run-mcp
+    : join(projectRoot, 'mcp-server', 'run-mcp.sh');

   if (isDockerMode) {
     // Create a script that runs the MCP server in Docker
package/mcp-server/src/server.py CHANGED

@@ -10,6 +10,7 @@ import numpy as np
 import hashlib
 import time
 import logging
+from xml.sax.saxutils import escape

 from fastmcp import FastMCP, Context
 from .utils import normalize_project_name
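The new `escape` import is the standard-library XML escaper, used later in this diff to sanitize pattern strings before they are embedded in the XML-shaped search output:

```python
from xml.sax.saxutils import escape

# escape() rewrites the three characters that would break an XML text node.
print(escape('result < 10 && name == "x"'))
# -> result &lt; 10 &amp;&amp; name == "x"
```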
@@ -47,6 +48,20 @@ QDRANT_URL = os.getenv('QDRANT_URL', 'http://localhost:6333')
 VOYAGE_API_KEY = os.getenv('VOYAGE_KEY') or os.getenv('VOYAGE_KEY-2') or os.getenv('VOYAGE_KEY_2')
 ENABLE_MEMORY_DECAY = os.getenv('ENABLE_MEMORY_DECAY', 'false').lower() == 'true'
 DECAY_WEIGHT = float(os.getenv('DECAY_WEIGHT', '0.3'))
+
+# Setup file logging
+LOG_FILE = Path.home() / '.claude-self-reflect' / 'logs' / 'mcp-server.log'
+LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
+
+# Configure logging to both file and console
+logging.basicConfig(
+    level=logging.DEBUG,
+    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
+    handlers=[
+        logging.FileHandler(LOG_FILE, mode='a'),
+        logging.StreamHandler()
+    ]
+)
 DECAY_SCALE_DAYS = float(os.getenv('DECAY_SCALE_DAYS', '90'))
 USE_NATIVE_DECAY = os.getenv('USE_NATIVE_DECAY', 'false').lower() == 'true'

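Because `basicConfig` runs at import time, every `logging.getLogger(...)` in the process inherits the file-plus-console handlers, which is what routes the DEBUG trail in the search path below into one file:

```python
import logging

log = logging.getLogger("any.module.in.the.process")
log.debug("lands in ~/.claude-self-reflect/logs/mcp-server.log and on the console")
```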
@@ -101,7 +116,7 @@ print(f"[DEBUG] env_path: {env_path}", file=sys.stderr)


 class SearchResult(BaseModel):
-    """A single search result."""
+    """A single search result with pattern intelligence."""
     id: str
     score: float
     timestamp: str
@@ -112,6 +127,11 @@ class SearchResult(BaseModel):
     base_conversation_id: Optional[str] = None
     collection_name: str
     raw_payload: Optional[Dict[str, Any]] = None  # Full Qdrant payload when debug mode enabled
+    # Pattern intelligence fields
+    code_patterns: Optional[Dict[str, List[str]]] = None  # Extracted AST patterns
+    files_analyzed: Optional[List[str]] = None  # Files referenced in conversation
+    tools_used: Optional[List[str]] = None  # Tools/commands used
+    concepts: Optional[List[str]] = None  # Domain concepts discussed


 # Initialize FastMCP instance
@@ -138,6 +158,8 @@ _indexing_cache = {"result": None, "timestamp": 0}

 # Setup logger
 logger = logging.getLogger(__name__)
+logger.info(f"MCP Server starting - Log file: {LOG_FILE}")
+logger.info(f"Configuration: QDRANT_URL={QDRANT_URL}, DECAY={ENABLE_MEMORY_DECAY}, VOYAGE_API_STATUS={'Configured' if VOYAGE_API_KEY else 'Not Configured'}")

 def normalize_path(path_str: str) -> str:
     """Normalize path for consistent comparison across platforms.
@@ -378,6 +400,86 @@ def get_collection_suffix() -> str:
         return "_local"
     else:
         return "_voyage"
+
+def aggregate_pattern_intelligence(results: List[SearchResult]) -> Dict[str, Any]:
+    """Aggregate pattern intelligence across search results."""
+
+    # Initialize counters
+    all_patterns = {}
+    all_files = set()
+    all_tools = set()
+    all_concepts = set()
+    pattern_by_category = {}
+
+    for result in results:
+        # Aggregate code patterns
+        if result.code_patterns:
+            for category, patterns in result.code_patterns.items():
+                if category not in pattern_by_category:
+                    pattern_by_category[category] = {}
+                for pattern in patterns:
+                    if pattern not in pattern_by_category[category]:
+                        pattern_by_category[category][pattern] = 0
+                    pattern_by_category[category][pattern] += 1
+
+                    # Overall pattern count
+                    if pattern not in all_patterns:
+                        all_patterns[pattern] = 0
+                    all_patterns[pattern] += 1
+
+        # Aggregate files
+        if result.files_analyzed:
+            all_files.update(result.files_analyzed)
+
+        # Aggregate tools
+        if result.tools_used:
+            all_tools.update(result.tools_used)
+
+        # Aggregate concepts
+        if result.concepts:
+            all_concepts.update(result.concepts)
+
+    # Find most common patterns
+    sorted_patterns = sorted(all_patterns.items(), key=lambda x: x[1], reverse=True)
+    most_common_patterns = sorted_patterns[:10] if sorted_patterns else []
+
+    # Find pattern categories with most coverage
+    category_coverage = {
+        cat: sum(counts.values())
+        for cat, counts in pattern_by_category.items()
+    }
+
+    # Build intelligence summary
+    intelligence = {
+        "total_unique_patterns": len(all_patterns),
+        "most_common_patterns": most_common_patterns,
+        "pattern_categories": list(pattern_by_category.keys()),
+        "category_coverage": category_coverage,
+        "files_referenced": list(all_files)[:20],  # Limit to top 20
+        "tools_used": list(all_tools),
+        "concepts_discussed": list(all_concepts)[:15],  # Limit to top 15
+        "pattern_by_category": pattern_by_category,
+        "pattern_diversity_score": len(all_patterns) / max(len(results), 1)  # Patterns per result
+    }
+
+    # Add cross-pattern insights
+    if pattern_by_category:
+        # Check for common pattern combinations
+        async_error_combo = (
+            'async_patterns' in pattern_by_category and
+            'error_handling' in pattern_by_category
+        )
+        react_state_combo = (
+            'react_hooks' in pattern_by_category and
+            any('useState' in p for p in pattern_by_category.get('react_hooks', {}).keys())
+        )
+
+        intelligence["pattern_combinations"] = {
+            "async_with_error_handling": async_error_combo,
+            "react_with_state": react_state_combo
+        }
+
+    return intelligence

 # Register tools
 @mcp.tool()
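A quick illustration of what the aggregator produces, using stand-in result objects that carry only the fields the function reads (all field values made up):

```python
from types import SimpleNamespace

# Stand-ins for SearchResult; values are hypothetical.
r1 = SimpleNamespace(
    code_patterns={"async_patterns": ["async def", "await"], "error_handling": ["try/except"]},
    files_analyzed=["server.py"], tools_used=["Read", "Edit"], concepts=["retry logic"],
)
r2 = SimpleNamespace(
    code_patterns={"async_patterns": ["await"]},
    files_analyzed=["utils.py"], tools_used=["Read"], concepts=None,
)

info = aggregate_pattern_intelligence([r1, r2])
print(info["total_unique_patterns"])    # 3
print(info["most_common_patterns"][0])  # ('await', 2)
print(info["pattern_combinations"])     # {'async_with_error_handling': True, 'react_with_state': False}
```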
@@ -394,6 +496,8 @@ async def reflect_on_past(
 ) -> str:
     """Search for relevant past conversations using semantic search with optional time decay."""

+    logger.info(f"=== SEARCH START === Query: '{query}', Project: '{project}', Limit: {limit}")
+
     # Start timing
     start_time = time.time()
     timing_info = {}
@@ -537,6 +641,7 @@ async def reflect_on_past(
                 continue

             query_embedding = query_embeddings[embedding_type_for_collection]
+
             if should_use_decay and USE_NATIVE_DECAY and NATIVE_DECAY_AVAILABLE:
                 # Use native Qdrant decay with newer API
                 await ctx.debug(f"Using NATIVE Qdrant decay (new API) for {collection_name}")
@@ -642,20 +747,37 @@ async def reflect_on_past(
                     if target_project != 'all' and not project_collections and not is_reflection_collection:
                         # The stored project name is like "-Users-username-projects-ShopifyMCPMockShop"
                         # We want to match just "ShopifyMCPMockShop"
-
+                        # Also handle underscore/dash variations (procsolve-website vs procsolve_website)
+                        normalized_target = target_project.replace('-', '_')
+                        normalized_stored = point_project.replace('-', '_')
+                        if not (normalized_stored.endswith(f"_{normalized_target}") or
+                                normalized_stored == normalized_target or
+                                point_project.endswith(f"-{target_project}") or
+                                point_project == target_project):
                             continue  # Skip results from other projects

                     # For reflections with project context, optionally filter by project
                     if is_reflection_collection and target_project != 'all' and 'project' in point.payload:
                         # Only filter if the reflection has project metadata
                         reflection_project = point.payload.get('project', '')
-                        if reflection_project
-
-
-                        reflection_project.
-
-
+                        if reflection_project:
+                            # Normalize both for comparison (handle underscore/dash variations)
+                            normalized_target = target_project.replace('-', '_')
+                            normalized_reflection = reflection_project.replace('-', '_')
+                            if not (
+                                reflection_project == target_project or
+                                normalized_reflection == normalized_target or
+                                reflection_project.endswith(f"/{target_project}") or
+                                reflection_project.endswith(f"-{target_project}") or
+                                normalized_reflection.endswith(f"_{normalized_target}") or
+                                normalized_reflection.endswith(f"/{normalized_target}")
+                            ):
+                                continue  # Skip reflections from other projects

+                    # Log pattern data
+                    patterns = point.payload.get('code_patterns')
+                    logger.info(f"DEBUG: Creating SearchResult for point {point.id} from {collection_name}: has_patterns={bool(patterns)}, pattern_keys={list(patterns.keys()) if patterns else None}")
+
                     all_results.append(SearchResult(
                         id=str(point.id),
                         score=point.score,  # Score already includes decay
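The same dash/underscore normalization is repeated verbatim in the three search branches of this function. Distilled into a standalone helper for clarity (the helper name is hypothetical; the checks mirror the non-reflection branch above):

```python
def project_matches(stored: str, target: str) -> bool:
    """Match 'procsolve_website' against 'procsolve-website' etc.,
    including path-style suffixes like '-Users-...-projects-<name>'."""
    ns, nt = stored.replace('-', '_'), target.replace('-', '_')
    return (stored == target or ns == nt or
            stored.endswith(f"-{target}") or ns.endswith(f"_{nt}"))

assert project_matches("procsolve_website", "procsolve-website")
assert project_matches("-Users-u-projects-ShopifyMCPMockShop", "ShopifyMCPMockShop")
assert not project_matches("other-project", "ShopifyMCPMockShop")
```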
@@ -666,7 +788,12 @@ async def reflect_on_past(
                         conversation_id=point.payload.get('conversation_id'),
                         base_conversation_id=point.payload.get('base_conversation_id'),
                         collection_name=collection_name,
-                        raw_payload=point.payload
+                        raw_payload=point.payload,  # Always include payload for metadata extraction
+                        # Pattern intelligence metadata
+                        code_patterns=point.payload.get('code_patterns'),
+                        files_analyzed=point.payload.get('files_analyzed'),
+                        tools_used=list(point.payload.get('tools_used', [])) if isinstance(point.payload.get('tools_used'), set) else point.payload.get('tools_used'),
+                        concepts=point.payload.get('concepts')
                     ))

             elif should_use_decay:
@@ -736,19 +863,32 @@ async def reflect_on_past(
                     if target_project != 'all' and not project_collections and not is_reflection_collection:
                         # The stored project name is like "-Users-username-projects-ShopifyMCPMockShop"
                         # We want to match just "ShopifyMCPMockShop"
-
+                        # Also handle underscore/dash variations (procsolve-website vs procsolve_website)
+                        normalized_target = target_project.replace('-', '_')
+                        normalized_stored = point_project.replace('-', '_')
+                        if not (normalized_stored.endswith(f"_{normalized_target}") or
+                                normalized_stored == normalized_target or
+                                point_project.endswith(f"-{target_project}") or
+                                point_project == target_project):
                             continue  # Skip results from other projects

                     # For reflections with project context, optionally filter by project
                     if is_reflection_collection and target_project != 'all' and 'project' in point.payload:
                         # Only filter if the reflection has project metadata
                         reflection_project = point.payload.get('project', '')
-                        if reflection_project
-
-
-                        reflection_project.
-
-
+                        if reflection_project:
+                            # Normalize both for comparison (handle underscore/dash variations)
+                            normalized_target = target_project.replace('-', '_')
+                            normalized_reflection = reflection_project.replace('-', '_')
+                            if not (
+                                reflection_project == target_project or
+                                normalized_reflection == normalized_target or
+                                reflection_project.endswith(f"/{target_project}") or
+                                reflection_project.endswith(f"-{target_project}") or
+                                normalized_reflection.endswith(f"_{normalized_target}") or
+                                normalized_reflection.endswith(f"/{normalized_target}")
+                            ):
+                                continue  # Skip reflections from other projects

                     all_results.append(SearchResult(
                         id=str(point.id),
@@ -760,7 +900,12 @@ async def reflect_on_past(
                         conversation_id=point.payload.get('conversation_id'),
                         base_conversation_id=point.payload.get('base_conversation_id'),
                         collection_name=collection_name,
-                        raw_payload=point.payload
+                        raw_payload=point.payload,  # Always include payload for metadata extraction
+                        # Pattern intelligence metadata
+                        code_patterns=point.payload.get('code_patterns'),
+                        files_analyzed=point.payload.get('files_analyzed'),
+                        tools_used=list(point.payload.get('tools_used', [])) if isinstance(point.payload.get('tools_used'), set) else point.payload.get('tools_used'),
+                        concepts=point.payload.get('concepts')
                     ))
             else:
                 # Standard search without decay
@@ -787,19 +932,32 @@ async def reflect_on_past(
                     if target_project != 'all' and not project_collections and not is_reflection_collection:
                         # The stored project name is like "-Users-username-projects-ShopifyMCPMockShop"
                         # We want to match just "ShopifyMCPMockShop"
-
+                        # Also handle underscore/dash variations (procsolve-website vs procsolve_website)
+                        normalized_target = target_project.replace('-', '_')
+                        normalized_stored = point_project.replace('-', '_')
+                        if not (normalized_stored.endswith(f"_{normalized_target}") or
+                                normalized_stored == normalized_target or
+                                point_project.endswith(f"-{target_project}") or
+                                point_project == target_project):
                             continue  # Skip results from other projects

                     # For reflections with project context, optionally filter by project
                     if is_reflection_collection and target_project != 'all' and 'project' in point.payload:
                         # Only filter if the reflection has project metadata
                         reflection_project = point.payload.get('project', '')
-                        if reflection_project
-
-
-                        reflection_project.
-
-
+                        if reflection_project:
+                            # Normalize both for comparison (handle underscore/dash variations)
+                            normalized_target = target_project.replace('-', '_')
+                            normalized_reflection = reflection_project.replace('-', '_')
+                            if not (
+                                reflection_project == target_project or
+                                normalized_reflection == normalized_target or
+                                reflection_project.endswith(f"/{target_project}") or
+                                reflection_project.endswith(f"-{target_project}") or
+                                normalized_reflection.endswith(f"_{normalized_target}") or
+                                normalized_reflection.endswith(f"/{normalized_target}")
+                            ):
+                                continue  # Skip reflections from other projects

                     # BOOST V2 CHUNKS: Apply score boost for v2 chunks (better quality)
                     original_score = point.score
@@ -816,7 +974,7 @@ async def reflect_on_past(
                     if final_score < min_score:
                         continue

-
+                    search_result = SearchResult(
                         id=str(point.id),
                         score=final_score,
                         timestamp=clean_timestamp,
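The `# BOOST V2 CHUNKS` logic itself sits mostly outside this hunk. A hedged sketch of the shape of such a boost (the 1.1 factor is an assumption, not from this diff; `chunking_version` is a real payload key per the excluded-keys set later in this diff):

```python
V2_BOOST = 1.1  # hypothetical factor for illustration only

def boosted(score: float, payload: dict) -> float:
    """Give chunks produced by the v2 chunker a small edge, then let
    the min_score cutoff above filter whatever remains."""
    if payload.get("chunking_version") == "v2":
        return score * V2_BOOST
    return score
```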
@@ -826,8 +984,15 @@ async def reflect_on_past(
                         conversation_id=point.payload.get('conversation_id'),
                         base_conversation_id=point.payload.get('base_conversation_id'),
                         collection_name=collection_name,
-                        raw_payload=point.payload
-
+                        raw_payload=point.payload,  # Always include payload for metadata extraction
+                        # Pattern intelligence metadata
+                        code_patterns=point.payload.get('code_patterns'),
+                        files_analyzed=point.payload.get('files_analyzed'),
+                        tools_used=list(point.payload.get('tools_used', [])) if isinstance(point.payload.get('tools_used'), set) else point.payload.get('tools_used'),
+                        concepts=point.payload.get('concepts')
+                    )
+
+                    all_results.append(search_result)

             except Exception as e:
                 await ctx.debug(f"Error searching {collection_name}: {str(e)}")
@@ -875,9 +1040,16 @@ async def reflect_on_past(
         all_results = all_results[:limit]
         timing_info['sort_end'] = time.time()

+        logger.info(f"Total results: {len(all_results)}, Returning: {len(all_results[:limit])}")
+        for r in all_results[:3]:  # Log first 3
+            logger.debug(f"Result: id={r.id}, has_patterns={bool(r.code_patterns)}, pattern_keys={list(r.code_patterns.keys()) if r.code_patterns else None}")
+
         if not all_results:
             return f"No conversations found matching '{query}'. Try different keywords or check if conversations have been imported."

+        # Aggregate pattern intelligence across results
+        pattern_intelligence = aggregate_pattern_intelligence(all_results)
+
         # Update indexing status before returning results
         await update_indexing_status()

@@ -1031,8 +1203,181 @@ async def reflect_on_past(
                 result_text += "      </meta>\n"
                 result_text += "    </raw>\n"

+                # Add patterns if they exist - with detailed logging
+                if result.code_patterns and isinstance(result.code_patterns, dict):
+                    logger.info(f"DEBUG: Point {result.id} has code_patterns dict with keys: {list(result.code_patterns.keys())}")
+                    patterns_to_show = []
+                    for category, patterns in result.code_patterns.items():
+                        if patterns and isinstance(patterns, list) and len(patterns) > 0:
+                            # Take up to 5 patterns from each category
+                            patterns_to_show.append((category, patterns[:5]))
+                            logger.info(f"DEBUG: Added category '{category}' with {len(patterns)} patterns")
+
+                    if patterns_to_show:
+                        logger.info(f"DEBUG: Adding patterns XML for point {result.id}")
+                        result_text += "    <patterns>\n"
+                        for category, patterns in patterns_to_show:
+                            # Escape both category name and pattern content for XML safety
+                            safe_patterns = ', '.join(escape(str(p)) for p in patterns)
+                            result_text += f"      <cat name=\"{escape(category)}\">{safe_patterns}</cat>\n"
+                        result_text += "    </patterns>\n"
+                    else:
+                        logger.info(f"DEBUG: Point {result.id} has code_patterns but no valid patterns to show")
+                else:
+                    logger.info(f"DEBUG: Point {result.id} has no patterns. code_patterns={result.code_patterns}, type={type(result.code_patterns)}")
+
+                if result.files_analyzed and len(result.files_analyzed) > 0:
+                    result_text += f"    <files>{', '.join(result.files_analyzed[:5])}</files>\n"
+                if result.concepts and len(result.concepts) > 0:
+                    result_text += f"    <concepts>{', '.join(result.concepts[:5])}</concepts>\n"
+
+                # Include structured metadata for agent consumption
+                # This provides clean, parsed fields that agents can easily use
+                if hasattr(result, 'raw_payload') and result.raw_payload:
+                    import json
+                    payload = result.raw_payload
+
+                    # Files section - structured for easy agent parsing
+                    files_analyzed = payload.get('files_analyzed', [])
+                    files_edited = payload.get('files_edited', [])
+                    if files_analyzed or files_edited:
+                        result_text += "    <files>\n"
+                        if files_analyzed:
+                            result_text += f"      <analyzed count=\"{len(files_analyzed)}\">"
+                            result_text += ", ".join(files_analyzed[:5])  # First 5 files
+                            if len(files_analyzed) > 5:
+                                result_text += f" ... and {len(files_analyzed)-5} more"
+                            result_text += "</analyzed>\n"
+                        if files_edited:
+                            result_text += f"      <edited count=\"{len(files_edited)}\">"
+                            result_text += ", ".join(files_edited[:5])  # First 5 files
+                            if len(files_edited) > 5:
+                                result_text += f" ... and {len(files_edited)-5} more"
+                            result_text += "</edited>\n"
+                        result_text += "    </files>\n"
+
+                    # Concepts section - clean list for agents
+                    concepts = payload.get('concepts', [])
+                    if concepts:
+                        result_text += f"    <concepts>{', '.join(concepts)}</concepts>\n"
+
+                    # Tools section - summarized with counts
+                    tools_used = payload.get('tools_used', [])
+                    if tools_used:
+                        # Count tool usage
+                        tool_counts = {}
+                        for tool in tools_used:
+                            tool_counts[tool] = tool_counts.get(tool, 0) + 1
+                        # Sort by frequency
+                        sorted_tools = sorted(tool_counts.items(), key=lambda x: x[1], reverse=True)
+                        tool_summary = ", ".join(f"{tool}({count})" for tool, count in sorted_tools[:5])
+                        if len(sorted_tools) > 5:
+                            tool_summary += f" ... and {len(sorted_tools)-5} more"
+                        result_text += f"    <tools>{tool_summary}</tools>\n"
+
+                    # Code patterns section - structured by category
+                    code_patterns = payload.get('code_patterns', {})
+                    if code_patterns:
+                        result_text += "    <code_patterns>\n"
+                        for category, patterns in code_patterns.items():
+                            if patterns:
+                                pattern_list = patterns if isinstance(patterns, list) else [patterns]
+                                # Clean up pattern names
+                                clean_patterns = []
+                                for p in pattern_list[:5]:
+                                    # Remove common prefixes like $FUNC, $VAR
+                                    clean_p = str(p).replace('$FUNC', '').replace('$VAR', '').strip()
+                                    if clean_p:
+                                        clean_patterns.append(clean_p)
+                                if clean_patterns:
+                                    result_text += f"      <{category}>{', '.join(clean_patterns)}</{category}>\n"
+                        result_text += "    </code_patterns>\n"
+
+                    # Pattern inheritance info - shows propagation details
+                    pattern_inheritance = payload.get('pattern_inheritance', {})
+                    if pattern_inheritance:
+                        source_chunk = pattern_inheritance.get('source_chunk', '')
+                        confidence = pattern_inheritance.get('confidence', 0)
+                        distance = pattern_inheritance.get('distance', 0)
+                        if source_chunk:
+                            result_text += f"    <pattern_source chunk=\"{source_chunk}\" confidence=\"{confidence:.2f}\" distance=\"{distance}\"/>\n"
+
+                    # Message stats for context
+                    msg_count = payload.get('message_count')
+                    total_length = payload.get('total_length')
+                    if msg_count or total_length:
+                        stats_attrs = []
+                        if msg_count:
+                            stats_attrs.append(f'messages="{msg_count}"')
+                        if total_length:
+                            stats_attrs.append(f'length="{total_length}"')
+                        result_text += f"    <stats {' '.join(stats_attrs)}/>\n"
+
+                    # Raw metadata dump for backwards compatibility
+                    # Kept minimal - only truly unique fields
+                    remaining_metadata = {}
+                    excluded_keys = {'text', 'conversation_id', 'timestamp', 'role', 'project', 'chunk_index',
+                                     'files_analyzed', 'files_edited', 'concepts', 'tools_used',
+                                     'code_patterns', 'pattern_inheritance', 'message_count', 'total_length',
+                                     'chunking_version', 'chunk_method', 'chunk_overlap', 'migration_type'}
+                    for key, value in payload.items():
+                        if key not in excluded_keys and value is not None:
+                            if isinstance(value, set):
+                                value = list(value)
+                            remaining_metadata[key] = value
+
+                    if remaining_metadata:
+                        try:
+                            # Only include if there's actually extra data
+                            result_text += f"    <metadata_extra><![CDATA[{json.dumps(remaining_metadata, default=str)}]]></metadata_extra>\n"
+                        except:
+                            pass
+
                 result_text += "  </r>\n"
             result_text += "  </results>\n"
+
+            # Add aggregated pattern intelligence section
+            if pattern_intelligence and pattern_intelligence.get('total_unique_patterns', 0) > 0:
+                result_text += "  <pattern_intelligence>\n"
+
+                # Summary statistics
+                result_text += f"    <summary>\n"
+                result_text += f"      <unique_patterns>{pattern_intelligence['total_unique_patterns']}</unique_patterns>\n"
+                result_text += f"      <pattern_diversity>{pattern_intelligence['pattern_diversity_score']:.2f}</pattern_diversity>\n"
+                result_text += f"    </summary>\n"
+
+                # Most common patterns
+                if pattern_intelligence.get('most_common_patterns'):
+                    result_text += "    <common_patterns>\n"
+                    for pattern, count in pattern_intelligence['most_common_patterns'][:5]:
+                        result_text += f"      <pattern count=\"{count}\">{pattern}</pattern>\n"
+                    result_text += "    </common_patterns>\n"
+
+                # Pattern categories
+                if pattern_intelligence.get('category_coverage'):
+                    result_text += "    <categories>\n"
+                    for category, count in pattern_intelligence['category_coverage'].items():
+                        result_text += f"      <cat name=\"{category}\" count=\"{count}\"/>\n"
+                    result_text += "    </categories>\n"
+
+                # Pattern combinations insight
+                if pattern_intelligence.get('pattern_combinations'):
+                    combos = pattern_intelligence['pattern_combinations']
+                    if combos.get('async_with_error_handling'):
+                        result_text += "    <insight>Async patterns combined with error handling detected</insight>\n"
+                    if combos.get('react_with_state'):
+                        result_text += "    <insight>React hooks with state management patterns detected</insight>\n"
+
+                # Files referenced across results
+                if pattern_intelligence.get('files_referenced') and len(pattern_intelligence['files_referenced']) > 0:
+                    result_text += f"    <files_across_results>{', '.join(pattern_intelligence['files_referenced'][:10])}</files_across_results>\n"
+
+                # Concepts discussed
+                if pattern_intelligence.get('concepts_discussed') and len(pattern_intelligence['concepts_discussed']) > 0:
+                    result_text += f"    <concepts_discussed>{', '.join(pattern_intelligence['concepts_discussed'][:10])}</concepts_discussed>\n"
+
+                result_text += "  </pattern_intelligence>\n"
+
             result_text += "</search>"

         else:
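Stitched together, the per-result block plus the trailing aggregate section render roughly like this (all element values invented for illustration):

```xml
  <r>
    <patterns>
      <cat name="async_patterns">async def fetch, await client.search</cat>
    </patterns>
    <files>
      <analyzed count="2">server.py, utils.py</analyzed>
    </files>
    <concepts>vector search, memory decay</concepts>
    <tools>Read(4), Edit(2)</tools>
    <stats messages="12" length="8400"/>
  </r>
  </results>
  <pattern_intelligence>
    <summary>
      <unique_patterns>3</unique_patterns>
      <pattern_diversity>1.50</pattern_diversity>
    </summary>
    <common_patterns>
      <pattern count="2">await client.search</pattern>
    </common_patterns>
  </pattern_intelligence>
</search>
```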
@@ -1504,6 +1849,93 @@ async def search_by_concept(
 # Debug output
 print(f"[DEBUG] FastMCP server created with name: {mcp.name}")

+@mcp.tool()
+async def get_full_conversation(
+    ctx: Context,
+    conversation_id: str = Field(description="The conversation ID from search results (cid)"),
+    project: Optional[str] = Field(default=None, description="Optional project name to help locate the file")
+) -> str:
+    """Get the full JSONL conversation file path for a conversation ID.
+    This allows agents to read complete conversations instead of truncated excerpts."""
+
+    # Base path for Claude conversations
+    base_path = Path.home() / '.claude/projects'
+
+    # Build list of directories to search
+    search_dirs = []
+
+    if project:
+        # Try various project directory name formats
+        sanitized_project = project.replace('/', '-')
+        search_dirs.extend([
+            base_path / project,
+            base_path / sanitized_project,
+            base_path / f"-Users-ramakrishnanannaswamy-projects-{project}",
+            base_path / f"-Users-ramakrishnanannaswamy-projects-{sanitized_project}"
+        ])
+    else:
+        # Search all project directories
+        search_dirs = list(base_path.glob("*"))
+
+    # Search for the JSONL file
+    jsonl_path = None
+    for search_dir in search_dirs:
+        if not search_dir.is_dir():
+            continue
+
+        potential_path = search_dir / f"{conversation_id}.jsonl"
+        if potential_path.exists():
+            jsonl_path = potential_path
+            break
+
+    if not jsonl_path:
+        # Try searching all directories as fallback
+        for proj_dir in base_path.glob("*"):
+            if proj_dir.is_dir():
+                potential_path = proj_dir / f"{conversation_id}.jsonl"
+                if potential_path.exists():
+                    jsonl_path = potential_path
+                    break
+
+    if not jsonl_path:
+        return f"""<full_conversation>
+    <conversation_id>{conversation_id}</conversation_id>
+    <status>not_found</status>
+    <message>Conversation file not found. Searched {len(search_dirs)} directories.</message>
+    <hint>Try using the project parameter or check if the conversation ID is correct.</hint>
+</full_conversation>"""
+
+    # Get file stats
+    file_stats = jsonl_path.stat()
+
+    # Count messages
+    try:
+        with open(jsonl_path, 'r', encoding='utf-8') as f:
+            message_count = sum(1 for _ in f)
+    except:
+        message_count = 0
+
+    return f"""<full_conversation>
+    <conversation_id>{conversation_id}</conversation_id>
+    <status>found</status>
+    <file_path>{jsonl_path}</file_path>
+    <file_size>{file_stats.st_size}</file_size>
+    <message_count>{message_count}</message_count>
+    <project>{jsonl_path.parent.name}</project>
+    <instructions>
+    You can now use the Read tool to read the full conversation from:
+    {jsonl_path}
+
+    Each line in the JSONL file is a separate message with complete content.
+    This gives you access to:
+    - Complete code blocks (not truncated)
+    - Full problem descriptions and solutions
+    - Entire debugging sessions
+    - Complete architectural decisions and discussions
+    </instructions>
+</full_conversation>"""
+
+
 # Run the server
 if __name__ == "__main__":
     import sys