claude-self-reflect 7.1.14 → 8.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
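A comparison like this can also be reproduced locally with npm's built-in `diff` subcommand (available in npm 7+; requires registry access). The version specifiers below match the two releases compared here:

```shell
# Fetch both published tarballs and print a unified diff between them
npm diff --diff=claude-self-reflect@7.1.14 --diff=claude-self-reflect@8.0.1
```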
Files changed (146)
  1. package/README.md +158 -537
  2. package/installer/cli.js +78 -275
  3. package/installer/postinstall.js +329 -28
  4. package/installer/statusline-setup.js +53 -251
  5. package/package.json +7 -35
  6. package/.claude/agents/csr-validator.md +0 -280
  7. package/.claude/agents/docker-orchestrator.md +0 -284
  8. package/.claude/agents/documentation-writer.md +0 -262
  9. package/.claude/agents/import-debugger.md +0 -221
  10. package/.claude/agents/mcp-integration.md +0 -333
  11. package/.claude/agents/open-source-maintainer.md +0 -665
  12. package/.claude/agents/performance-tuner.md +0 -276
  13. package/.claude/agents/qdrant-specialist.md +0 -201
  14. package/.claude/agents/quality-fixer.md +0 -314
  15. package/.claude/agents/reflection-specialist.md +0 -912
  16. package/.claude/agents/search-optimizer.md +0 -309
  17. package/.env.example +0 -79
  18. package/Dockerfile.async-importer +0 -38
  19. package/Dockerfile.batch-monitor +0 -36
  20. package/Dockerfile.batch-watcher +0 -38
  21. package/Dockerfile.importer +0 -39
  22. package/Dockerfile.importer-isolated +0 -28
  23. package/Dockerfile.importer-isolated.alpine +0 -21
  24. package/Dockerfile.importer.alpine +0 -23
  25. package/Dockerfile.mcp-server +0 -28
  26. package/Dockerfile.mcp-server.alpine +0 -22
  27. package/Dockerfile.mcp-server.ubuntu +0 -34
  28. package/Dockerfile.safe-watcher +0 -61
  29. package/Dockerfile.streaming-importer +0 -61
  30. package/Dockerfile.streaming-importer.alpine +0 -24
  31. package/Dockerfile.watcher +0 -49
  32. package/Dockerfile.watcher.alpine +0 -24
  33. package/config/qdrant-config.yaml +0 -59
  34. package/docker-compose.yaml +0 -326
  35. package/docs/design/GRADER_PROMPT.md +0 -81
  36. package/docs/design/batch_ground_truth_generator.py +0 -496
  37. package/docs/design/batch_import_all_projects.py +0 -488
  38. package/docs/design/batch_import_v3.py +0 -278
  39. package/docs/design/conversation-analyzer/SKILL.md +0 -133
  40. package/docs/design/conversation-analyzer/SKILL_V2.md +0 -218
  41. package/docs/design/conversation-analyzer/extract_structured.py +0 -186
  42. package/docs/design/extract_events_v3.py +0 -533
  43. package/docs/design/import_existing_batch.py +0 -188
  44. package/docs/design/recover_all_batches.py +0 -297
  45. package/docs/design/recover_batch_results.py +0 -287
  46. package/installer/.claude/agents/README.md +0 -138
  47. package/installer/fastembed-fallback.js +0 -271
  48. package/installer/setup-wizard-docker.js +0 -894
  49. package/installer/setup-wizard.js +0 -14
  50. package/installer/update-manager.js +0 -538
  51. package/mcp-server/pyproject.toml +0 -30
  52. package/mcp-server/requirements.txt +0 -9
  53. package/mcp-server/run-mcp-clean.sh +0 -29
  54. package/mcp-server/run-mcp-docker.sh +0 -5
  55. package/mcp-server/run-mcp.sh +0 -155
  56. package/mcp-server/src/__init__.py +0 -1
  57. package/mcp-server/src/__main__.py +0 -37
  58. package/mcp-server/src/app_context.py +0 -64
  59. package/mcp-server/src/code_reload_tool.py +0 -380
  60. package/mcp-server/src/config.py +0 -57
  61. package/mcp-server/src/connection_pool.py +0 -286
  62. package/mcp-server/src/decay_manager.py +0 -106
  63. package/mcp-server/src/embedding_manager.py +0 -295
  64. package/mcp-server/src/embeddings_old.py +0 -141
  65. package/mcp-server/src/enhanced_tool_registry.py +0 -407
  66. package/mcp-server/src/health.py +0 -190
  67. package/mcp-server/src/mode_switch_tool.py +0 -181
  68. package/mcp-server/src/models.py +0 -64
  69. package/mcp-server/src/parallel_search.py +0 -325
  70. package/mcp-server/src/project_resolver.py +0 -568
  71. package/mcp-server/src/reflection_tools.py +0 -365
  72. package/mcp-server/src/rich_formatting.py +0 -303
  73. package/mcp-server/src/safe_getters.py +0 -217
  74. package/mcp-server/src/search_tools.py +0 -972
  75. package/mcp-server/src/security_patches.py +0 -555
  76. package/mcp-server/src/server.py +0 -806
  77. package/mcp-server/src/standalone_client.py +0 -380
  78. package/mcp-server/src/status.py +0 -232
  79. package/mcp-server/src/status_unified.py +0 -286
  80. package/mcp-server/src/temporal_design.py +0 -132
  81. package/mcp-server/src/temporal_tools.py +0 -604
  82. package/mcp-server/src/temporal_utils.py +0 -384
  83. package/mcp-server/src/utils.py +0 -167
  84. package/requirements.txt +0 -67
  85. package/scripts/auto-migrate.cjs +0 -161
  86. package/scripts/csr-status +0 -614
  87. package/scripts/ralph/backup_and_restore.sh +0 -309
  88. package/scripts/ralph/install_hooks.sh +0 -244
  89. package/scripts/ralph/test_with_rollback.sh +0 -195
  90. package/scripts/requirements.txt +0 -9
  91. package/shared/__init__.py +0 -5
  92. package/shared/ast_grep_utils.py +0 -89
  93. package/shared/normalization.py +0 -54
  94. package/src/__init__.py +0 -0
  95. package/src/cli/__init__.py +0 -0
  96. package/src/importer/__init__.py +0 -25
  97. package/src/importer/__main__.py +0 -14
  98. package/src/importer/core/__init__.py +0 -25
  99. package/src/importer/core/config.py +0 -120
  100. package/src/importer/core/exceptions.py +0 -52
  101. package/src/importer/core/models.py +0 -184
  102. package/src/importer/embeddings/__init__.py +0 -22
  103. package/src/importer/embeddings/base.py +0 -141
  104. package/src/importer/embeddings/fastembed_provider.py +0 -164
  105. package/src/importer/embeddings/validator.py +0 -136
  106. package/src/importer/embeddings/voyage_provider.py +0 -251
  107. package/src/importer/main.py +0 -393
  108. package/src/importer/processors/__init__.py +0 -15
  109. package/src/importer/processors/ast_extractor.py +0 -197
  110. package/src/importer/processors/chunker.py +0 -157
  111. package/src/importer/processors/concept_extractor.py +0 -109
  112. package/src/importer/processors/conversation_parser.py +0 -181
  113. package/src/importer/processors/tool_extractor.py +0 -165
  114. package/src/importer/state/__init__.py +0 -5
  115. package/src/importer/state/state_manager.py +0 -190
  116. package/src/importer/storage/__init__.py +0 -5
  117. package/src/importer/storage/qdrant_storage.py +0 -250
  118. package/src/importer/utils/__init__.py +0 -9
  119. package/src/importer/utils/logger.py +0 -87
  120. package/src/importer/utils/project_normalizer.py +0 -133
  121. package/src/runtime/__init__.py +0 -0
  122. package/src/runtime/batch_monitor.py +0 -300
  123. package/src/runtime/batch_watcher.py +0 -455
  124. package/src/runtime/config.py +0 -61
  125. package/src/runtime/delta-metadata-update-safe.py +0 -443
  126. package/src/runtime/delta-metadata-update.py +0 -547
  127. package/src/runtime/doctor.py +0 -342
  128. package/src/runtime/embedding_service.py +0 -243
  129. package/src/runtime/force-metadata-recovery.py +0 -305
  130. package/src/runtime/hooks/__init__.py +0 -21
  131. package/src/runtime/hooks/iteration_hook.py +0 -196
  132. package/src/runtime/hooks/ralph_state.py +0 -402
  133. package/src/runtime/hooks/session_end_hook.py +0 -254
  134. package/src/runtime/hooks/session_start_hook.py +0 -259
  135. package/src/runtime/import-conversations-unified.py +0 -359
  136. package/src/runtime/import-latest.py +0 -124
  137. package/src/runtime/import_strategies.py +0 -344
  138. package/src/runtime/message_processors.py +0 -248
  139. package/src/runtime/metadata_extractor.py +0 -262
  140. package/src/runtime/precompact-hook.sh +0 -112
  141. package/src/runtime/qdrant_connection.py +0 -73
  142. package/src/runtime/streaming-importer.py +0 -995
  143. package/src/runtime/streaming-watcher.py +0 -1533
  144. package/src/runtime/unified_state_manager.py +0 -671
  145. package/src/runtime/utils.py +0 -39
  146. package/src/runtime/watcher-loop.sh +0 -56
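The listing is dominated by deletions, consistent with the Docker/Python stack being collapsed into a single binary in 8.0. A quick sketch for tallying churn from entries in this format (the sample lines are copied from the listing above; the awk one-liner is illustrative, not part of the package):

```shell
# Sum the +added and -removed counts from a few "file +A -R" entries
printf '%s\n' \
  'package/README.md +158 -537' \
  'package/installer/cli.js +78 -275' \
  'package/installer/postinstall.js +329 -28' \
  'package/docker-compose.yaml +0 -326' \
  'package/mcp-server/src/server.py +0 -806' |
awk '{a += substr($2, 2); r += substr($3, 2)} END {printf "+%d -%d\n", a, r}'
# → +565 -1972
```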
package/README.md CHANGED
@@ -1,655 +1,276 @@
1
1
  # Claude Self-Reflect
2
- <div align="center">
3
- <img src="https://repobeats.axiom.co/api/embed/e45aa7276c6b2d1fbc46a9a3324e2231718787bb.svg" alt="Repobeats analytics image" />
4
- </div>
5
- <div align="center">
6
2
 
7
- [![npm version](https://badge.fury.io/js/claude-self-reflect.svg)](https://www.npmjs.com/package/claude-self-reflect)
8
- [![npm downloads](https://img.shields.io/npm/dm/claude-self-reflect.svg)](https://www.npmjs.com/package/claude-self-reflect)
9
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
10
- [![GitHub CI](https://github.com/ramakay/claude-self-reflect/actions/workflows/ci.yml/badge.svg)](https://github.com/ramakay/claude-self-reflect/actions/workflows/ci.yml)
3
+ <div align="center">
11
4
 
12
- [![Claude Code](https://img.shields.io/badge/Claude%20Code-Compatible-6B4FBB)](https://github.com/anthropics/claude-code)
13
- [![MCP Protocol](https://img.shields.io/badge/MCP-Enabled-FF6B6B)](https://modelcontextprotocol.io/)
14
- [![Docker](https://img.shields.io/badge/Docker-Ready-2496ED?logo=docker&logoColor=white)](https://www.docker.com/)
15
- [![Local First](https://img.shields.io/badge/Local%20First-Privacy-4A90E2)](https://github.com/ramakay/claude-self-reflect)
5
+ <img src="docs-site/public/favicon.svg" alt="Claude Self-Reflect" width="80" height="80" />
16
6
 
17
- [![GitHub stars](https://img.shields.io/github/stars/ramakay/claude-self-reflect.svg?style=social)](https://github.com/ramakay/claude-self-reflect/stargazers)
18
- [![GitHub issues](https://img.shields.io/github/issues/ramakay/claude-self-reflect.svg)](https://github.com/ramakay/claude-self-reflect/issues)
19
- [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/ramakay/claude-self-reflect/pulls)
20
-
21
- </div>
7
+ [![npm](https://badge.fury.io/js/claude-self-reflect.svg)](https://www.npmjs.com/package/claude-self-reflect) [![downloads](https://img.shields.io/npm/dm/claude-self-reflect.svg)](https://www.npmjs.com/package/claude-self-reflect) [![MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Release](https://github.com/ramakay/claude-self-reflect/actions/workflows/release.yml/badge.svg)](https://github.com/ramakay/claude-self-reflect/releases/latest) [![Claude Code](https://img.shields.io/badge/Claude%20Code-6B4FBB)](https://github.com/anthropics/claude-code) [![MCP](https://img.shields.io/badge/MCP-FF6B6B)](https://modelcontextprotocol.io/) [![Rust](https://img.shields.io/badge/Rust-000000?logo=rust)](https://github.com/ramakay/claude-self-reflect/tree/main/csr-engine) [![Local First](https://img.shields.io/badge/Local-4A90E2)](https://github.com/ramakay/claude-self-reflect) [![stars](https://img.shields.io/github/stars/ramakay/claude-self-reflect.svg?style=social)](https://github.com/ramakay/claude-self-reflect/stargazers)
22
8
 
23
9
  **Claude forgets everything. This fixes that.**
24
10
 
25
- Give Claude perfect memory of all your conversations. Search past discussions instantly. Never lose context again.
26
-
27
- **100% Local by Default** • **20x Faster** • **Zero Configuration** • **Production Ready**
11
+ Single 44MB binary. No databases. No containers. No API keys required.
28
12
 
29
- > **Latest: v7.1.9 Cross-Project Iteration Memory** - Ralph loops now share memory across ALL projects automatically. [Learn more →](#ralph-loop-memory)
13
+ [Install](#install) | [How It Works](#how-it-works) | [MCP Tools](#mcp-tools) | [FAQ](https://ramakay.github.io/claude-self-reflect/#/docs/troubleshooting)
30
14
 
31
- ## Why This Exists
32
-
33
- Claude starts fresh every conversation. You've solved complex bugs, designed architectures, made critical decisions - all forgotten. Until now.
15
+ </div>
34
16
 
35
17
  ## Table of Contents
36
18
 
37
- - [Quick Install](#quick-install)
38
- - [Performance](#performance)
39
- - [The Magic](#the-magic)
40
- - [Before & After](#before--after)
41
- - [Real Examples](#real-examples)
42
- - [Ralph Loop Memory](#ralph-loop-memory)
43
- - [Key Features](#key-features)
44
- - [Code Quality Insights](#code-quality-insights)
45
- - [Architecture](#architecture)
46
- - [Requirements](#requirements)
47
- - [Documentation](#documentation)
48
- - [Keeping Up to Date](#keeping-up-to-date)
49
- - [Troubleshooting](#troubleshooting)
50
- - [Contributors](#contributors)
51
-
52
- ## Quick Install
53
-
54
- ```bash
55
- # Install and run automatic setup (5 minutes, everything automatic)
56
- npm install -g claude-self-reflect
57
- claude-self-reflect setup
58
-
59
- # That's it! The setup will:
60
- # - Run everything in Docker (no Python issues!)
61
- # - Configure everything automatically
62
- # - Install the MCP in Claude Code
63
- # - Start monitoring for new conversations
64
- # - Keep all data local - no API keys needed
65
- ```
66
-
67
- > [!TIP]
68
- > **Auto-Migration**: Updates automatically handle breaking changes. Simply run `npm update -g claude-self-reflect`.
69
-
70
- <details open>
71
- <summary>Cloud Mode (Better Search Accuracy)</summary>
72
-
73
- ```bash
74
- # Step 1: Get your free Voyage AI key
75
- # Sign up at https://www.voyageai.com/ - it takes 30 seconds
76
-
77
- # Step 2: Install with Voyage key
78
- npm install -g claude-self-reflect
79
- claude-self-reflect setup --voyage-key=YOUR_ACTUAL_KEY_HERE
80
- ```
81
-
82
- > [!NOTE]
83
- > Cloud mode provides 1024-dimensional embeddings (vs 384 local) for more accurate semantic search but sends conversation data to Voyage AI for processing.
84
-
85
- </details>
86
-
87
- ## Performance
88
-
89
- | Metric | Before | After | Improvement |
90
- |--------|--------|-------|-------------|
91
- | **Status Check** | 119ms | 6ms | **20x faster** |
92
- | **Storage Usage** | 100MB | 50MB | **50% reduction** |
93
- | **Import Speed** | 10/sec | 100/sec | **10x faster** |
94
- | **Memory Usage** | 500MB | 50MB | **90% reduction** |
95
- | **Search Latency** | 15ms | 3ms | **5x faster** |
96
-
97
- ### Competitive Comparison
19
+ - [The Problem](#the-forgetting-problem) — Why Claude needs memory
20
+ - [The Architecture](#one-binary-44mb) — How CSR solves it
21
+ - [The Pipeline](#the-pipeline) — Progressive enrichment (9.3x improvement)
22
+ - [Install](#install) — One command setup
23
+ - [What You'll Ask](#what-youll-ask) — Natural language, no syntax
24
+ - [Performance](#performance) | [MCP Tools](#mcp-tools) | [Hooks](#hooks) | [CLI](#cli-reference)
25
+ - [AI Narratives](#ai-narratives-optional) | [Upgrading](#upgrading-from-v7x) | [Troubleshooting](#troubleshooting)
98
26
 
99
- | Feature | Claude Self-Reflect | MemGPT | LangChain Memory |
100
- |---------|---------------------|---------|------------------|
101
- | **Local-first** | Yes | No | Partial |
102
- | **No API keys** | Yes | No | No |
103
- | **Real-time indexing** | Yes 2-sec | Manual | No |
104
- | **Search speed** | <3ms | ~50ms | ~100ms |
105
- | **Setup time** | 5 min | 30+ min | 20+ min |
106
- | **Docker required** | Yes | Python | Python |
107
-
108
- ## The Magic
109
-
110
- ![Self Reflection vs The Grind](docs/images/red-reflection.webp)
111
-
112
- ## Before & After
27
+ ---
113
28
 
114
- ![Before and After Claude Self-Reflect](docs/diagrams/before-after-combined.webp)
29
+ ## The Forgetting Problem
115
30
 
116
- ## Real Examples
31
+ <a href="https://ramakay.github.io/claude-self-reflect/">
32
+ <picture>
33
+ <source media="(prefers-color-scheme: dark)" srcset="docs-site/public/images/card-01-hook-dark.png" />
34
+ <img align="right" src="docs-site/public/images/card-01-hook-light.png" alt="The Forgetting Problem" width="420" />
35
+ </picture>
36
+ </a>
117
37
 
118
- ```
119
- You: "How did we fix that 100% CPU usage bug?"
120
- Claude: "Found it - we fixed the circular reference causing 100% CPU usage
121
- in the server modularization. Also fixed store_reflection dimension
122
- mismatch by creating separate reflections_local and reflections_voyage."
123
-
124
- You: "What about that Docker memory issue?"
125
- Claude: "The container was limited to 2GB but only using 266MB. We found
126
- the issue only happened with MAX_QUEUE_SIZE=1000 outside Docker.
127
- With proper Docker limits, memory stays stable at 341MB."
128
-
129
- You: "Have we worked with JWT authentication?"
130
- Claude: "Found conversations about JWT patterns including User.authenticate
131
- methods, TokenHandler classes, and concepts like token rotation,
132
- PKCE, and social login integration."
133
- ```
38
+ Claude starts fresh every session. Solutions you found, architectures you designed, bugs you debugged — all gone.
134
39
 
135
- ## Ralph Loop Memory
40
+ Context retention drops below **20% after 10 sessions**. CSR fixes this with a single binary that gives Claude perfect memory.
136
41
 
137
- <div align="center">
138
- <img src="docs/images/ralph-loop-csr.png" alt="Ralph Loop with CSR Memory - From hamster wheel to upward spiral" width="800"/>
139
- </div>
42
+ No special syntax. No commands. Install once, and past context appears automatically when you need it.
140
43
 
141
- **The difference between spinning in circles and building on every iteration.**
44
+ <br clear="both" />
142
45
 
143
- Use the [ralph-wiggum plugin](https://github.com/anthropics/claude-code-plugins/tree/main/ralph-wiggum) for long tasks? CSR gives your Ralph loops **persistent memory across sessions and projects**.
46
+ > **[Explore the full documentation →](https://ramakay.github.io/claude-self-reflect/)**
144
47
 
145
- ### Without CSR: The Hamster Wheel
146
- - Each context compaction = everything forgotten
147
- - Same mistakes repeated across iterations
148
- - No learning from past sessions
149
- - Cross-project insights lost forever
48
+ ---
150
49
 
151
- ### With CSR: The Upward Spiral
152
- - **Automatic backup** before context compaction
153
- - **Anti-pattern injection** - "DON'T RETRY THESE" surfaces first
154
- - **Success pattern learning** - reuse what worked before
155
- - **Cross-project memory** - learn from ALL your projects
50
+ ## One Binary. 44MB.
156
51
 
157
- ### Quick Setup
158
- ```bash
159
- ./scripts/ralph/install_hooks.sh # Install hooks globally
160
- ./scripts/ralph/install_hooks.sh --check # Verify installation
161
- ```
52
+ <a href="https://ramakay.github.io/claude-self-reflect/#/docs/architecture">
53
+ <picture>
54
+ <source media="(prefers-color-scheme: dark)" srcset="docs-site/public/images/card-02-arch-dark.png" />
55
+ <img align="right" src="docs-site/public/images/card-02-arch-light.png" alt="Architecture One Binary, 44MB" width="420" />
56
+ </picture>
57
+ </a>
162
58
 
163
- ### How It Works
164
- 1. Start a Ralph loop: `/ralph-wiggum:ralph-loop "Build feature X"`
165
- 2. Work naturally - CSR hooks capture state automatically
166
- 3. **Stop hook** stores each iteration's learnings
167
- 4. **PreCompact hook** backs up state before compaction
168
- 5. Next session retrieves past insights, failed approaches, and wins
59
+ Everything runs locally in a single process. No Docker, no database server, no API keys required.
169
60
 
170
- > **v7.1.9+**: Cross-project iteration memory - hooks work for ALL projects, entries tagged with `project_{name}` for global searchability.
61
+ - **SQLite** storage for chunks, embeddings, enrichment state
62
+ - **HNSW** — sub-millisecond vector search (<1ms p95)
63
+ - **FastEmbed** — 384-dim local embeddings
64
+ - **AST** — code-aware search across 6 languages
171
65
 
172
- [Full documentation →](docs/development/ralph-memory-integration.md)
66
+ **6 hooks** fire across the session lifecycle. **12 MCP tools** for explicit search.
173
67
 
174
- ## Code Quality Insights
68
+ <br clear="both" />
175
69
 
176
- <details>
177
- <summary><b>AST-GREP Pattern Analysis (100+ Patterns)</b></summary>
178
-
179
- ### Real-time Quality Scoring in Statusline
180
- Your code quality displayed live as you work:
181
- - 🟢 **A+** (95-100): Exceptional code quality
182
- - 🟢 **A** (90-95): Excellent, production-ready
183
- - 🟢 **B** (80-90): Good, minor improvements possible
184
- - 🟡 **C** (60-80): Fair, needs refactoring
185
- - 🔴 **D** (40-60): Poor, significant issues
186
- - 🔴 **F** (0-40): Critical problems detected
187
-
188
- ### Pattern Categories Analyzed
189
- - **Security Patterns**: SQL injection, XSS vulnerabilities, hardcoded secrets
190
- - **Performance Patterns**: N+1 queries, inefficient loops, memory leaks
191
- - **Error Handling**: Bare exceptions, missing error boundaries
192
- - **Type Safety**: Missing type hints, unsafe casts
193
- - **Async Patterns**: Missing await, promise handling
194
- - **Testing Patterns**: Test coverage, assertion quality
195
-
196
- ### How It Works
197
- 1. **During Import**: AST elements extracted from all code blocks
198
- 2. **Pattern Matching**: 100+ patterns from unified registry
199
- 3. **Quality Scoring**: Weighted scoring normalized by lines of code
200
- 4. **Statusline Display**: Real-time feedback as you code
70
+ > **[Explore the full documentation →](https://ramakay.github.io/claude-self-reflect/#/docs/architecture)**
201
71
 
202
- </details>
203
-
204
- ## v7.0 Automated Narrative Generation
72
+ ---
205
73
 
206
- **9.3x Better Search Quality** • **50% Cost Savings** • **Fully Automated**
74
+ ## The Pipeline
207
75
 
208
- > [!IMPORTANT]
209
- > **Opt-In Feature**: AI narratives require an Anthropic API key and are **enabled during CLI setup** when you answer "yes" to "Enable AI-powered narratives?". This sends conversation data to Anthropic for processing. Without an API key, CSR works normally with local-only search.
76
+ <a href="https://ramakay.github.io/claude-self-reflect/#/docs/enrichment">
77
+ <picture>
78
+ <source media="(prefers-color-scheme: dark)" srcset="docs-site/public/images/card-03-pipeline-dark.png" />
79
+ <img align="right" src="docs-site/public/images/card-03-pipeline-light.png" alt="The Pipeline — 3 layers, 9.3x improvement" width="420" />
80
+ </picture>
81
+ </a>
210
82
 
211
- v7.0 introduces AI-powered conversation narratives that transform raw conversation excerpts into rich problem-solution summaries with comprehensive metadata extraction.
83
+ Three layers progressively improve search quality from raw chunks to AI-enriched narratives **9.3x improvement**.
212
84
 
213
- ### Before/After Comparison
85
+ Higher quality context. Better decisions. Fewer tokens.
214
86
 
215
- | Metric | v6.x (Raw Excerpts) | v7.0 (AI Narratives) | Improvement |
216
- |--------|---------------------|----------------------|-------------|
217
- | **Search Quality** | 0.074 | 0.691 | **9.3x better** |
218
- | **Token Compression** | 100% | 18% | **82% reduction** |
219
- | **Cost per Conversation** | $0.025 | $0.012 | **50% savings** |
220
- | **Metadata Richness** | Basic | Tools + Concepts + Files | **Full context** |
87
+ <br clear="both" />
221
88
 
222
- ### What You Get
89
+ > **[Explore the full documentation →](https://ramakay.github.io/claude-self-reflect/#/docs/enrichment)**
223
90
 
224
- **Enhanced Search Results:**
225
- - **Problem-Solution Patterns**: Conversations structured as challenges encountered and solutions implemented
226
- - **Rich Metadata**: Automatic extraction of tools used, technical concepts, and files modified
227
- - **Context Compression**: 82% token reduction while maintaining searchability
228
- - **Better Relevance**: Search scores improved from 0.074 to 0.691 (9.3x)
91
+ ---
229
92
 
230
- **Cost-Effective Processing:**
231
- - Anthropic Batch API: $0.012 per conversation (vs $0.025 standard)
232
- - Automatic batch queuing and processing
233
- - Progress monitoring via Docker containers
234
- - Evaluation generation for quality assurance
93
+ ## Install
235
94
 
236
- **Fully Automated Workflow:**
237
95
  ```bash
238
- # 1. Watch for new conversations
239
- docker compose up batch-watcher
240
-
241
- # 2. Auto-trigger batch processing when threshold reached
242
- # (Configurable: BATCH_THRESHOLD_FILES, default 10)
243
-
244
- # 3. Monitor batch progress
245
- docker compose logs batch-monitor -f
246
-
247
- # 4. Enhanced narratives automatically imported to Qdrant
248
- ```
249
-
250
- ### Example: Raw Excerpt vs AI Narrative
251
-
252
- **Before (v6.x)** - Raw excerpt showing basic conversation flow:
253
- ```
254
- User: How do I fix the Docker memory issue?
255
- Assistant: The container was limited to 2GB but only using 266MB...
96
+ curl -fsSL https://raw.githubusercontent.com/ramakay/claude-self-reflect/main/scripts/install.sh | sh
256
97
  ```
257
98
 
258
- **After (v7.0)** - Rich narrative with metadata:
259
- ```
260
- PROBLEM: Docker container memory consumption investigation revealed
261
- discrepancy between limits (2GB) and actual usage (266MB). Analysis
262
- required to determine if memory limit was appropriate.
99
+ One command. Downloads the binary, runs setup, registers MCP server, installs 6 hooks. Restart Claude Code.
263
100
 
264
- SOLUTION: Discovered issue occurred with MAX_QUEUE_SIZE=1000 outside
265
- Docker environment. Implemented proper Docker resource constraints
266
- stabilizing memory at 341MB.
101
+ | Platform | Support |
102
+ |----------|---------|
103
+ | macOS (Apple Silicon) | Prebuilt binary |
104
+ | Linux x86_64 / WSL | Prebuilt binary |
105
+ | Linux ARM64 | Prebuilt binary |
106
+ | macOS (Intel) | Build from source |
267
107
 
268
- TOOLS USED: Docker, grep, Edit
269
- CONCEPTS: container-memory, resource-limits, queue-sizing
270
- FILES: docker-compose.yaml, batch_watcher.py
271
- ```
272
-
273
- ### Getting Started with Narratives
274
-
275
- Narratives are automatically generated for new conversations. To process existing conversations:
108
+ <details>
109
+ <summary>Alternative: npm</summary>
276
110
 
277
111
  ```bash
278
- # Process all existing conversations in batch
279
- python docs/design/batch_import_all_projects.py
280
-
281
- # Monitor batch progress
282
- docker compose logs batch-monitor -f
283
-
284
- # Check completion status
285
- curl http://localhost:6333/collections/csr_claude-self-reflect_local_384d
112
+ npm install -g claude-self-reflect
286
113
  ```
287
114
 
288
- For complete documentation, see [Batch Automation Guide](docs/testing/NARRATIVE_TESTING_SUMMARY.md).
289
-
290
- ## Key Features
291
-
292
- <details>
293
- <summary><b>MCP Tools Available to Claude</b></summary>
294
-
295
- **Search & Memory:**
296
- - `reflect_on_past` - Search past conversations using semantic similarity with time decay (supports quick/summary modes)
297
- - `store_reflection` - Store important insights or learnings for future reference
298
- - `get_next_results` - Paginate through additional search results
299
- - `search_by_file` - Find conversations that analyzed specific files
300
- - `search_by_concept` - Search for conversations about development concepts
301
- - `get_full_conversation` - Retrieve complete JSONL conversation files
302
-
303
- **Temporal Queries:**
304
- - `get_recent_work` - Answer "What did we work on last?" with session grouping
305
- - `search_by_recency` - Time-constrained search like "docker issues last week"
306
- - `get_timeline` - Activity timeline with statistics and patterns
307
-
308
- **Runtime Configuration:**
309
- - `switch_embedding_mode` - Switch between local/cloud modes without restart
310
- - `get_embedding_mode` - Check current embedding configuration
311
- - `reload_code` - Hot reload Python code changes
312
- - `reload_status` - Check reload state
313
- - `clear_module_cache` - Clear Python cache
314
-
315
- **Status & Monitoring:**
316
- - `get_status` - Real-time import progress and system status
317
- - `get_health` - Comprehensive system health check
318
- - `collection_status` - Check Qdrant collection health and stats
319
-
320
- > [!TIP]
321
- > Use `reflect_on_past --mode quick` for instant existence checks - returns count + top match only!
322
-
323
- All tools are automatically available when the MCP server is connected to Claude Code.
324
-
325
- </details>
326
-
327
- <details>
328
- <summary><b>Statusline Integration</b></summary>
329
-
330
- See your indexing progress right in your terminal! Works with [Claude Code Statusline](https://github.com/sirmalloc/ccstatusline):
331
- - **Progress Bar** - Visual indicator `[████████ ] 91%`
332
- - **Indexing Lag** - Shows backlog `• 7h behind`
333
- - **Auto-updates** every 60 seconds
334
- - **Zero overhead** with intelligent caching
335
-
336
- [Learn more about statusline integration →](docs/statusline-integration.md)
337
-
338
115
  </details>
339
116
 
340
117
  <details>
341
- <summary><b>Project-Scoped Search</b></summary>
118
+ <summary>Build from source</summary>
342
119
 
343
- Searches are **project-aware by default**. Claude automatically searches within your current project:
344
-
345
- ```
346
- # In ~/projects/MyApp
347
- You: "What authentication method did we use?"
348
- Claude: [Searches ONLY MyApp conversations]
349
-
350
- # To search everywhere
351
- You: "Search all projects for WebSocket implementations"
352
- Claude: [Searches across ALL your projects]
120
+ ```bash
121
+ git clone https://github.com/ramakay/claude-self-reflect.git
122
+ cd claude-self-reflect/csr-engine
123
+ cargo build --release
124
+ cp target/release/csr-engine ~/.local/bin/
125
+ csr-engine setup
353
126
  ```
354
127
 
355
128
  </details>
356
129
 
357
130
  <details>
358
- <summary><b>Memory Decay</b></summary>
131
+ <summary><strong>What You'll Ask</strong> — after install, just ask Claude naturally</summary>
359
132
 
360
- Recent conversations matter more. Old ones fade. Like your brain, but reliable.
361
- - **90-day half-life**: Recent memories stay strong
362
- - **Graceful aging**: Old information fades naturally
363
- - **Configurable**: Adjust decay rate to your needs
133
+ - *"How did we solve re-renders on this component?"*
134
+ - *"What did we tell Joe about that commit?"*
135
+ - *"What were our frustrations with this approach?"*
136
+ - *"Where did we put the auth middleware config?"*
364
137
 
365
- > [!NOTE]
366
- > Memory decay ensures recent solutions are prioritized while still maintaining historical context.
138
+ No special syntax. No commands. CSR finds relevant past context and injects it automatically.
367
139
 
368
140
  </details>
369
141
 
370
142
  <details>
371
- <summary><b>Performance at Scale</b></summary>
372
-
373
- - **Search**: <3ms average response time
374
- - **Scale**: 600+ conversations across 24 projects
375
- - **Reliability**: 100% indexing success rate
376
- - **Memory**: 96% reduction from v2.5.15
377
- - **Real-time**: HOT/WARM/COLD intelligent prioritization
143
+ <summary><strong>Performance</strong> sub-millisecond search, 93ms startup</summary>
378
144
 
379
- > [!TIP]
380
- > For best performance, keep Docker allocated 4GB+ RAM and use SSD storage.
145
+ | Metric | Value |
146
+ |--------|-------|
147
+ | **Cached startup** | 93ms |
148
+ | **Search latency (p95)** | <1ms |
149
+ | **Binary size** | 44MB |
150
+ | **Import speed** | ~20 conversations/sec |
151
+ | **Embedding** | 0.73ms/text (batch) |
381
152
 
382
153
  </details>

- ## Architecture
-
 <details>
- <summary><b>View Architecture Diagram & Details</b></summary>
-
- ![Import Architecture](docs/diagrams/import-architecture.png)
-
- ### HOT/WARM/COLD Intelligent Prioritization
-
- - **HOT** (< 5 minutes): 2-second intervals for near real-time import
- - **WARM** (< 24 hours): Normal priority with starvation prevention
- - **COLD** (> 24 hours): Batch processed to prevent blocking
-
- Files are categorized by age and processed with priority queuing to ensure newest content gets imported quickly while preventing older files from being starved.
-
- ### Components
- - **Vector Database**: Qdrant for semantic search
- - **MCP Server**: Python-based using FastMCP
- - **Embeddings**: Local (FastEmbed) or Cloud (Voyage AI)
- - **Import Pipeline**: Docker-based with automatic monitoring
+ <summary><strong>MCP Tools</strong> — 12 tools available to Claude</summary>
+
+ | Tool | Description |
+ |------|-------------|
+ | `csr_reflect_on_past` | Semantic search across past conversations |
+ | `store_reflection` | Store insights for future retrieval |
+ | `csr_quick_check` | Fast existence check (count + top match) |
+ | `search_by_recency` | Time-constrained search ("last week") |
+ | `get_recent_work` | "What did we work on?" with session grouping |
+ | `get_timeline` | Activity timeline with statistics |
+ | `csr_search_by_file` | Find conversations that touched a file |
+ | `csr_search_by_concept` | Theme-based search ("security", "testing") |
+ | `csr_search_insights` | Aggregated patterns from search results |
+ | `csr_get_more` | Paginate through additional results |
+ | `get_full_conversation` | Retrieve complete JSONL conversation |
+ | `get_session_learnings` | Iteration-level memory for Ralph loops |

 </details>
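
Claude invokes these tools over the standard MCP wire protocol; you never call them by hand. As a sketch of what crosses the wire, a `tools/call` request to the search tool looks roughly like this (the `query` and `limit` argument names are illustrative assumptions, not the engine's documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "csr_reflect_on_past",
    "arguments": { "query": "auth middleware config", "limit": 5 }
  }
}
```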

- ## Requirements
-
 <details>
- <summary><b>System Requirements</b></summary>
-
- ### Minimum Requirements
- - **Docker Desktop** (macOS/Windows) or **Docker Engine** (Linux)
- - **Node.js** 16+ (for the setup wizard)
- - **Claude Code** CLI
- - **4GB RAM** available for Docker
- - **2GB disk space** for vector database
-
- ### Recommended
- - **8GB RAM** for optimal performance
- - **SSD storage** for faster indexing
- - **Docker Desktop 4.0+** for best compatibility
-
- ### Operating Systems
- - macOS 11+ (Intel & Apple Silicon)
- - Windows 10/11 with WSL2
- - Linux (Ubuntu 20.04+, Debian 11+)
-
- </details>
+ <summary><strong>Hooks</strong> — 6 session lifecycle hooks</summary>

- ## Documentation
+ | Hook | What it does |
+ |------|-------------|
+ | **SessionStart** | Surfaces relevant past context at conversation start |
+ | **UserPromptSubmit** | Predicts and injects context before Claude responds |
+ | **PostToolUse** | Tracks file edits with session-scoped dedup |
+ | **Stop** | Stores iteration learnings, detects stuck patterns |
+ | **PreCompact** | Backs up state before context compaction |
+ | **SessionEnd** | Stores session narrative for future retrieval |

- <details>
- <summary>Technical Stack</summary>
-
- - **Vector DB**: Qdrant (local, your data stays yours)
- - **Embeddings**:
-   - Local (Default): FastEmbed with all-MiniLM-L6-v2
-   - Cloud (Optional): Voyage AI
- - **MCP Server**: Python + FastMCP
- - **Search**: Semantic similarity with time decay
+ All hooks use catch-all error handling. They never block Claude Code.

 </details>
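
The installer registers these by editing Claude Code's settings. Below is a rough sketch of the kind of entry `csr-engine hook install --apply` adds to `~/.claude/settings.json` — the hook event names match the table above, but the exact `command` string here is an assumption; inspect the file after installing to see the real entries:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "csr-engine hook session-start" }
        ]
      }
    ]
  }
}
```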

 <details>
- <summary>Advanced Topics</summary>
+ <summary><strong>AI Narratives</strong> — optional 9.3x quality boost</summary>

- - [Performance tuning](docs/performance-guide.md)
- - [Security & privacy](docs/security.md)
- - [Windows setup](docs/windows-setup.md)
- - [Architecture details](docs/architecture-details.md)
- - [Contributing](CONTRIBUTING.md)
+ Transform raw conversations into rich, searchable narratives. Requires an Anthropic API key.

- </details>
-
- <details>
- <summary>Troubleshooting</summary>
+ ```bash
+ csr-engine daemon
+ ```

- - [Troubleshooting Guide](docs/troubleshooting.md)
- - [GitHub Issues](https://github.com/ramakay/claude-self-reflect/issues)
- - [Discussions](https://github.com/ramakay/claude-self-reflect/discussions)
+ | Metric | Without | With AI Narratives |
+ |--------|---------|-------------------|
+ | Search quality | 0.074 | 0.691 (9.3x) |
+ | Tokens retained | 100% | 18% (82% reduction) |
+ | Cost per conversation | - | ~$0.012 (Batch API) |

 </details>

 <details>
- <summary>Uninstall</summary>
+ <summary><strong>CLI Reference</strong></summary>

- For complete uninstall instructions, see [docs/UNINSTALL.md](docs/UNINSTALL.md).
-
- Quick uninstall:
- ```bash
- # Remove MCP server
- claude mcp remove claude-self-reflect
-
- # Stop Docker containers
- docker-compose down
-
- # Uninstall npm package
- npm uninstall -g claude-self-reflect
 ```
-
- </details>
-
- ## Keeping Up to Date
-
- ```bash
- npm update -g claude-self-reflect
+ csr-engine                        Start MCP server (default)
+ csr-engine setup                  One-shot setup: import + MCP + hooks
+ csr-engine status                 System status (JSON)
+ csr-engine status --compact       One-line statusline output
+ csr-engine daemon                 Background enrichment daemon
+ csr-engine hook install --apply   Install Claude Code hooks
+ csr-engine eval                   Quick eval (5 tests)
+ csr-engine eval --full            Full eval (20 tests)
+ csr-engine quality <file>         AST-based code quality analysis
 ```

- Updates are automatic and preserve your data. See [full changelog](docs/release-history.md) for details.
-
- <details>
- <summary><b>Release Evolution</b></summary>
-
- ### v7.0 - Automated Narratives (Oct 2025)
- - **9.3x better search quality** via AI-powered conversation summaries
- - **50% cost savings** using Anthropic Batch API ($0.012 per conversation)
- - **82% token compression** while maintaining searchability
- - Rich metadata extraction (tools, concepts, files)
- - Problem-solution narrative structure
- - Automated batch processing with Docker monitoring
-
- ### v4.0 - Performance Revolution (Sep 2025)
- - **20x faster** status checks (119ms → 6ms)
- - **50% storage reduction** via unified state management
- - **10x faster imports** (10/sec → 100/sec)
- - **90% memory reduction** (500MB → 50MB)
- - Runtime mode switching (no restart required)
- - Prefixed collection naming (breaking change)
- - Code quality tracking with AST-GREP (100+ patterns)
-
- ### v3.3 - Temporal Intelligence (Aug 2025)
- - Time-based search: "docker issues last week"
- - Session grouping: "What did we work on last?"
- - Activity timelines with statistics
- - Recency-aware queries
-
- ### v2.8 - Full Context Access (Jul 2025)
- - Complete conversation retrieval
- - JSONL file access for deeper analysis
- - Enhanced debugging capabilities
-
- [View complete changelog →](docs/release-history.md)
-
 </details>
 
- ## Troubleshooting
-
 <details>
- <summary><b>Common Issues and Solutions</b></summary>
-
- ### 1. "No collections created" after import
- **Symptom**: Import runs but Qdrant shows no collections
- **Cause**: Docker can't access Claude projects directory
- **Solution**:
- ```bash
- # Run diagnostics to identify the issue
- claude-self-reflect doctor
-
- # Fix: Re-run setup to set correct paths
- claude-self-reflect setup
-
- # Verify .env has full paths (no ~):
- cat .env | grep CLAUDE_LOGS_PATH
- # Should show: CLAUDE_LOGS_PATH=/Users/YOUR_NAME/.claude/projects
- ```
+ <summary><strong>Upgrading from v7.x</strong></summary>

- ### 2. MCP server shows "ERROR" but it's actually INFO
- **Symptom**: `[ERROR] MCP server "claude-self-reflect" Server stderr: INFO Starting MCP server`
- **Cause**: Claude Code displays all stderr output as errors
- **Solution**: This is not an actual error - the MCP is working correctly. The INFO message confirms successful startup.
+ v8.0 replaces the Python/Docker stack with a single Rust binary.

- ### 3. "No JSONL files found"
- **Symptom**: Setup can't find any conversation files
- **Cause**: Claude Code hasn't been used yet or stores files elsewhere
- **Solution**:
 ```bash
- # Check if files exist
- ls ~/.claude/projects/
-
- # If empty, use Claude Code to create some conversations first
- # The watcher will import them automatically
- ```
-
- ### 4. Docker volume mount issues
- **Symptom**: Import fails with permission errors
- **Cause**: Docker can't access home directory
- **Solution**:
- ```bash
- # Ensure Docker has file sharing permissions
- # macOS: Docker Desktop → Settings → Resources → File Sharing
- # Add: /Users/YOUR_USERNAME/.claude
-
- # Restart Docker and re-run setup
- docker compose down
- claude-self-reflect setup
+ docker compose down 2>/dev/null
+ curl -fsSL https://raw.githubusercontent.com/ramakay/claude-self-reflect/main/scripts/install.sh | sh
 ```

- ### 5. Qdrant not accessible
- **Symptom**: Can't connect to localhost:6333
- **Solution**:
- ```bash
- # Start services
- docker compose --profile mcp up -d
-
- # Check if running
- docker compose ps
-
- # View logs for errors
- docker compose logs qdrant
- ```
+ Your conversation data (`~/.claude/projects/`) is untouched. The new engine re-imports from the same JSONL files.

 </details>

 <details>
- <summary><b>Diagnostic Tools</b></summary>
+ <summary><strong>Troubleshooting</strong></summary>

- ### Run Comprehensive Diagnostics
- ```bash
- claude-self-reflect doctor
- ```
+ | Symptom | Fix |
+ |---------|-----|
+ | No search results | Run `csr-engine setup` |
+ | MCP tools not available | Run `csr-engine setup`, restart Claude Code |
+ | "spawn ENOENT" in MCP | Ensure `csr-engine` is in PATH |
+ | Slow first startup | Normal (~14s for index rebuild, subsequent: ~93ms) |
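
For the `spawn ENOENT` case, a check along these lines tells you whether the binary is resolvable (`~/.local/bin` is the install script's default drop location — adjust the path if you installed some other way):

```shell
# Report whether the binary Claude Code will spawn is reachable via PATH.
if command -v csr-engine >/dev/null 2>&1; then
  echo "ok: $(command -v csr-engine)"
else
  echo 'not found: add export PATH="$HOME/.local/bin:$PATH" to your shell profile'
fi
```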

- This checks:
- - Docker installation and configuration
- - Environment variables and paths
- - Claude projects and JSONL files
- - Import status and collections
- - Service health
+ Full guide: [Documentation](https://ramakay.github.io/claude-self-reflect/#/docs/troubleshooting)

- ### Check Logs
- ```bash
- # View all service logs
- docker compose logs -f
 </details>

- # View specific service
- docker compose logs qdrant
- docker compose logs watcher
- ```
+ <details>
+ <summary><strong>Uninstall</strong></summary>

- ### Generate Diagnostic Report
 ```bash
- # Create diagnostic file for issue reporting
- claude-self-reflect doctor > diagnostic.txt
+ claude mcp remove claude-self-reflect
+ rm -rf ~/.claude-self-reflect/
+ rm ~/.local/bin/csr-engine
+ npm uninstall -g claude-self-reflect   # if installed via npm
 ```

 </details>

 <details>
- <summary><b>Getting Help</b></summary>
+ <summary><strong>Contributors (v1–v7)</strong></summary>

- 1. **Check Documentation**
-    - [Troubleshooting Guide](docs/troubleshooting.md)
-    - [FAQ](docs/faq.md)
-    - [Windows Setup](docs/windows-setup.md)
-
- 2. **Community Support**
-    - [GitHub Discussions](https://github.com/ramakay/claude-self-reflect/discussions)
-    - [Discord Community](https://discord.gg/claude-self-reflect)
-
- 3. **Report Issues**
-    - [GitHub Issues](https://github.com/ramakay/claude-self-reflect/issues)
-    - Include diagnostic output when reporting
-
- </details>
-
- ## Contributors
-
- Special thanks to our contributors:
 - **[@TheGordon](https://github.com/TheGordon)** - Fixed timestamp parsing (#10)
 - **[@akamalov](https://github.com/akamalov)** - Ubuntu WSL insights
 - **[@kylesnowschwartz](https://github.com/kylesnowschwartz)** - Security review (#6)

+ </details>
+
 ---

- Built with care by [ramakay](https://github.com/ramakay) for the Claude community.
+ [Documentation](https://ramakay.github.io/claude-self-reflect/) | [npm](https://www.npmjs.com/package/claude-self-reflect) | [Issues](https://github.com/ramakay/claude-self-reflect/issues) | MIT License