clawmem 0.1.0
- package/AGENTS.md +660 -0
- package/CLAUDE.md +660 -0
- package/LICENSE +21 -0
- package/README.md +993 -0
- package/SKILL.md +717 -0
- package/bin/clawmem +75 -0
- package/package.json +72 -0
- package/src/amem.ts +797 -0
- package/src/beads.ts +263 -0
- package/src/clawmem.ts +1849 -0
- package/src/collections.ts +405 -0
- package/src/config.ts +178 -0
- package/src/consolidation.ts +123 -0
- package/src/directory-context.ts +248 -0
- package/src/errors.ts +41 -0
- package/src/formatter.ts +427 -0
- package/src/graph-traversal.ts +247 -0
- package/src/hooks/context-surfacing.ts +317 -0
- package/src/hooks/curator-nudge.ts +89 -0
- package/src/hooks/decision-extractor.ts +639 -0
- package/src/hooks/feedback-loop.ts +214 -0
- package/src/hooks/handoff-generator.ts +345 -0
- package/src/hooks/postcompact-inject.ts +226 -0
- package/src/hooks/precompact-extract.ts +314 -0
- package/src/hooks/pretool-inject.ts +79 -0
- package/src/hooks/session-bootstrap.ts +324 -0
- package/src/hooks/staleness-check.ts +130 -0
- package/src/hooks.ts +367 -0
- package/src/indexer.ts +327 -0
- package/src/intent.ts +294 -0
- package/src/limits.ts +26 -0
- package/src/llm.ts +1175 -0
- package/src/mcp.ts +2138 -0
- package/src/memory.ts +336 -0
- package/src/mmr.ts +93 -0
- package/src/observer.ts +269 -0
- package/src/openclaw/engine.ts +283 -0
- package/src/openclaw/index.ts +221 -0
- package/src/openclaw/plugin.json +83 -0
- package/src/openclaw/shell.ts +207 -0
- package/src/openclaw/tools.ts +304 -0
- package/src/profile.ts +346 -0
- package/src/promptguard.ts +218 -0
- package/src/retrieval-gate.ts +106 -0
- package/src/search-utils.ts +127 -0
- package/src/server.ts +783 -0
- package/src/splitter.ts +325 -0
- package/src/store.ts +4062 -0
- package/src/validation.ts +67 -0
- package/src/watcher.ts +58 -0
package/README.md
ADDED

# ClawMem — Context engine for Claude Code and AI agents

<p align="center">
  <img src="docs/clawmem_hero.jpg" alt="ClawMem" width="100%">
</p>

**On-device memory for Claude Code and AI agents.** Retrieval-augmented search, hooks, and an MCP server in a single local system. No API keys, no cloud dependencies.

ClawMem fuses recent research into a retrieval-augmented memory layer that agents actually use. The hybrid architecture combines [QMD](https://github.com/tobi/qmd)-derived multi-signal retrieval (BM25 + vector search + reciprocal rank fusion + query expansion + cross-encoder reranking), [SAME](https://github.com/sgx-labs/statelessagent)-inspired composite scoring (recency decay, confidence, content-type half-lives, co-activation reinforcement), [MAGMA](https://arxiv.org/abs/2501.13956)-style intent classification with multi-graph traversal (semantic, temporal, and causal beam search), and [A-MEM](https://arxiv.org/abs/2510.02178) self-evolving memory notes that enrich documents with keywords, tags, and causal links between entries. Pattern extraction from [Engram](https://github.com/Gentleman-Programming/engram) adds deduplication windows, frequency-based durability scoring, and temporal navigation.

Two integration paths: Claude Code hooks paired with an MCP server, or a native OpenClaw ContextEngine plugin. Both write to the same local SQLite vault. A decision captured during a Claude Code session shows up immediately when an OpenClaw agent picks up the same project.

TypeScript on Bun. MIT License.
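
The reciprocal rank fusion step in that retrieval pipeline is compact enough to sketch. This is an illustrative TypeScript sketch, not ClawMem's actual code; the function name is invented and `k = 60` is the conventional smoothing constant.

```typescript
// Reciprocal rank fusion (illustrative sketch, not ClawMem's actual code).
// Each document's fused score sums 1 / (k + rank) over every ranked list it
// appears in, so items ranked well by several signals rise to the top.
function reciprocalRankFusion(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((docId, index) => {
      const rank = index + 1; // ranks are 1-based
      scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + rank));
    });
  }
  return Array.from(scores.entries())
    .sort((a, b) => b[1] - a[1])
    .map(([docId]) => docId);
}

// "b" places highly in both lists, so it wins the fusion:
const fused = reciprocalRankFusion([
  ["a", "b", "c"], // BM25 order
  ["b", "c", "a"], // vector-search order
]);
// fused is ["b", "a", "c"]
```

Because the fused score only depends on ranks, BM25 and vector scores never need to be normalized onto a common scale before merging.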

## What It Does

ClawMem turns your markdown notes, project docs, and research dumps into persistent memory for AI coding agents. It automatically:

- **Surfaces relevant context** on every prompt (context-surfacing hook)
- **Bootstraps sessions** with your profile, latest handoff, recent decisions, and stale notes
- **Captures decisions** from session transcripts using a local GGUF observer model
- **Generates handoffs** at session end so the next session can pick up where you left off
- **Learns what matters** via a feedback loop that boosts referenced notes and decays unused ones
- **Guards against prompt injection** in surfaced content
- **Classifies query intent** (WHY / WHEN / ENTITY / WHAT) to weight search strategies
- **Traverses multi-graphs** (semantic, temporal, causal) via adaptive beam search
- **Evolves memory metadata** as new documents create or refine connections
- **Infers causal relationships** between facts extracted from session observations
- **Detects contradictions** between new and prior decisions, auto-decaying superseded ones
- **Scores document quality** using structure, keywords, and metadata richness signals
- **Boosts co-accessed documents** — notes frequently surfaced together get retrieval reinforcement
- **Decomposes complex queries** into typed retrieval clauses (BM25/vector/graph) for multi-topic questions
- **Cleans stale embeddings** automatically before embed runs, removing orphans from deleted/changed documents
- **Transaction-safe indexing** — a crash mid-index leaves zero partial state (atomic commit with rollback)
- **Deduplicates hook-generated observations** within a 30-minute window using normalized content hashing, preventing memory bloat from repeated hook output
- **Navigates temporal neighborhoods** around any document via the `timeline` tool — progressive disclosure from search to chronological context to full content
- **Boosts frequently-revised memories** — documents with higher revision counts get a durability signal in composite scoring (capped at 10%)
- **Supports pin/snooze lifecycle** for persistent boosts and temporary suppression
- **Manages document lifecycle** — policy-driven archival sweeps with restore capability
- **Auto-routes queries** via `memory_retrieve` — classifies intent and dispatches to the optimal search backend
- **Syncs project issues** from Beads issue trackers into searchable memory

Runs fully local with no API keys and no cloud services. Integrates via Claude Code hooks and MCP tools, or as an OpenClaw ContextEngine plugin. Both modes share the same vault for cross-runtime memory. Works with any MCP-compatible client.
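
The 30-minute observation dedup described above boils down to hashing a normalized form of the content. A minimal sketch, assuming an in-memory seen-map; the names and normalization rules are illustrative, not ClawMem's actual implementation:

```typescript
import { createHash } from "node:crypto";

// Dedup sketch for hook observations: normalize whitespace and case before
// hashing, so trivially re-formatted repeats collapse to one key. A repeat
// inside the 30-minute window is dropped instead of stored.
const WINDOW_MS = 30 * 60 * 1000;
const seen = new Map<string, number>(); // content hash -> last stored timestamp

function contentKey(text: string): string {
  const normalized = text.toLowerCase().replace(/\s+/g, " ").trim();
  return createHash("sha256").update(normalized).digest("hex");
}

function shouldStore(text: string, now = Date.now()): boolean {
  const key = contentKey(text);
  const last = seen.get(key);
  if (last !== undefined && now - last <= WINDOW_MS) return false; // duplicate
  seen.set(key, now);
  return true;
}

shouldStore("Ran tests: 12 passed");   // true, first sighting
shouldStore("ran  tests:\n12 passed"); // false, same content after normalization
```

Normalizing before hashing is what makes repeated hook output with incidental whitespace or casing differences count as the same observation.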

## Architecture

<p align="center">
  <img src="docs/clawmem-architecture.png" alt="ClawMem Architecture" width="100%">
</p>

## Install

### Platform Support

| Platform | Status | Notes |
|---|---|---|
| **Linux** | Full support | Primary target. systemd services for watcher + embed timer. |
| **macOS** | Full support | Homebrew SQLite handled automatically. GPU via Metal (llama.cpp). |
| **Windows (WSL2)** | Full support | Recommended for Windows users. Install Bun + ClawMem inside WSL2. |
| **Windows (native)** | Not recommended | Bun and sqlite-vec work, but the `bin/clawmem` wrapper is bash, hooks expect bash commands, and systemd services have no equivalent. Use WSL2 instead. |

### Prerequisites

- [Bun](https://bun.sh) v1.0+
- SQLite with FTS5 support (included with Bun)

### Install via npm (recommended)

```bash
bun add -g clawmem
```

Or with npm:

```bash
npm install -g clawmem
```

### Install from source

```bash
git clone https://github.com/yoloshii/clawmem.git ~/clawmem
cd ~/clawmem && bun install
ln -sf ~/clawmem/bin/clawmem ~/.bun/bin/clawmem
```

### Quick Start (Bootstrap)

One command to set up a vault:

```bash
# Initialize, index, embed, install hooks, register MCP
./bin/clawmem bootstrap ~/notes --name notes

# Or step by step:
./bin/clawmem init
./bin/clawmem collection add ~/notes --name notes
./bin/clawmem update --embed
./bin/clawmem setup hooks
./bin/clawmem setup mcp
```

### Integration

#### Claude Code

ClawMem integrates via hooks (`settings.json`) and an MCP stdio server. Hooks handle 90% of retrieval automatically - the agent never needs to call tools for routine context.

```bash
clawmem setup hooks # Install lifecycle hooks (SessionStart, UserPromptSubmit, Stop, PreCompact)
clawmem setup mcp # Register MCP server in ~/.claude.json (20+ agent tools)
```

**Automatic (90%):** `context-surfacing` injects relevant memory on every prompt. `postcompact-inject` re-injects state after compaction. `decision-extractor`, `handoff-generator`, and `feedback-loop` capture session state on stop.

**Agent-initiated (10%):** MCP tools (`query`, `intent_search`, `find_causal_links`, `timeline`, etc.) for targeted retrieval when hooks don't surface what's needed.

#### OpenClaw

ClawMem registers as a native ContextEngine plugin - OpenClaw's pluggable interface for context management. Same 90/10 automatic retrieval, delivered through OpenClaw's lifecycle system instead of Claude Code hooks.

```bash
clawmem setup openclaw # Shows installation steps
```

**What the plugin provides:**

- **`before_prompt_build` hook** - prompt-aware retrieval (context-surfacing + session-bootstrap)
- **`ContextEngine.afterTurn()`** - decision extraction, handoff generation, feedback loop
- **`ContextEngine.compact()`** - pre-compaction state preservation, delegates real compaction to the legacy engine
- **5 agent tools** - `clawmem_search`, `clawmem_get`, `clawmem_session_log`, `clawmem_timeline`, `clawmem_similar`
- **Session lifecycle hooks** - `session_start`, `session_end`, `before_reset` safety net

Disable OpenClaw's native memory and `memory-lancedb` auto-recall/capture to avoid duplicate injection:

```bash
openclaw config set agents.defaults.memorySearch.extraPaths "[]"
```

**Alternative:** You can also use the Claude Code-style hooks + MCP approach with OpenClaw (`clawmem setup hooks && clawmem setup mcp`). This works but bypasses OpenClaw's ContextEngine lifecycle - you lose token budget awareness, native compaction orchestration, and the `afterTurn()` message pipeline. The ContextEngine plugin is recommended for new OpenClaw setups.

#### Dual-Mode Operation

Both integrations share the same SQLite vault by default. Claude Code and OpenClaw can run simultaneously - decisions captured in one runtime are immediately available in the other, giving agents persistent shared memory across sessions and platforms. WAL mode + busy_timeout handles concurrent access.

#### Multi-Vault (Optional)

By default, ClawMem uses a single vault at `~/.cache/clawmem/index.sqlite`. For users who want separate memory domains (e.g., work vs personal, or isolated vaults per project), ClawMem supports named vaults.

**Configure in `~/.config/clawmem/config.yaml`:**

```yaml
vaults:
  work: ~/.cache/clawmem/work.sqlite
  personal: ~/.cache/clawmem/personal.sqlite
```

**Or via environment variable:**

```bash
export CLAWMEM_VAULTS='{"work":"~/.cache/clawmem/work.sqlite","personal":"~/.cache/clawmem/personal.sqlite"}'
```

**Using vaults with MCP tools:**

All retrieval tools (`memory_retrieve`, `query`, `search`, `vsearch`, `intent_search`) accept an optional `vault` parameter. Omit it to use the default vault.

```
# Search the default vault (no vault param needed)
query("authentication flow")

# Search a named vault
query("project timeline", vault="work")

# List configured vaults
list_vaults()

# Sync content into a vault
vault_sync(vault="work", content_root="~/work/docs")
```

**Single-vault users:** No action needed. Everything works without configuration. The `vault` parameter is always optional and ignored when no vaults are configured.
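
Resolving a `CLAWMEM_VAULTS`-style map is mostly JSON parsing plus tilde expansion. A minimal sketch; the function name and expansion rule are illustrative, not ClawMem's actual resolver:

```typescript
// Sketch: parse a CLAWMEM_VAULTS-style JSON map of vault name -> path and
// expand a leading "~/" to the user's home directory. Absolute paths pass
// through untouched. Illustrative only.
function parseVaults(json: string, home: string): Map<string, string> {
  const raw = JSON.parse(json) as Record<string, string>;
  const vaults = new Map<string, string>();
  for (const [name, path] of Object.entries(raw)) {
    vaults.set(name, path.startsWith("~/") ? home + path.slice(1) : path);
  }
  return vaults;
}

const vaults = parseVaults(
  '{"work":"~/.cache/clawmem/work.sqlite","personal":"/tmp/personal.sqlite"}',
  "/home/alice",
);
// vaults.get("work") is "/home/alice/.cache/clawmem/work.sqlite"
```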

### GPU Services

ClawMem uses three `llama-server` (llama.cpp) instances for neural inference. All three have in-process fallbacks via `node-llama-cpp` (auto-downloads on first use), so ClawMem works without a dedicated GPU. `node-llama-cpp` auto-detects the best available backend — Metal on Apple Silicon, Vulkan where available, CPU as last resort. With GPU acceleration (Metal/Vulkan), in-process inference is fast for these small models (0.3B–1.7B); on CPU-only systems it is significantly slower. For production use, run the servers via [systemd services](docs/guides/systemd-services.md) to prevent silent fallback.

**GPU with VRAM to spare (12GB+, recommended):** ZeroEntropy's distillation-paired stack delivers the best retrieval quality — total ~10GB VRAM.

| Service | Port | Model | VRAM | Purpose |
|---|---|---|---|---|
| Embedding | 8088 | [zembed-1-Q4_K_M](https://huggingface.co/Abhiray/zembed-1-Q4_K_M-GGUF) | ~4.4GB | SOTA embedding (2560d, 32K context). Distilled from zerank-2 via zELO. |
| LLM | 8089 | [qmd-query-expansion-1.7B-q4_k_m](https://huggingface.co/tobil/qmd-query-expansion-1.7B-gguf) | ~2.2GB | Intent classification, query expansion, A-MEM |
| Reranker | 8090 | [zerank-2-Q4_K_M](https://huggingface.co/keisuke-miyako/zerank-2-gguf-q4_k_m) | ~3.3GB | SOTA reranker. Outperforms Cohere rerank-3.5. Optimal pairing with zembed-1. |

**Important:** zembed-1 and zerank-2 use non-causal attention — `-ub` must equal `-b` on llama-server (e.g. `-b 2048 -ub 2048`). See [Reranker Server](#reranker-server) for details.

**License:** zembed-1 and zerank-2 are released under **CC-BY-NC-4.0** — non-commercial only. The QMD native models below have no such restriction.

**No dedicated GPU / GPU without VRAM to spare:** The QMD native combo — total ~4GB VRAM, also runs via `node-llama-cpp` (Metal on Apple Silicon, Vulkan where available, CPU as last resort). Fast with GPU acceleration; significantly slower on CPU-only.

| Service | Port | Model | VRAM | Purpose |
|---|---|---|---|---|
| Embedding | 8088 | [EmbeddingGemma-300M-Q8_0](https://huggingface.co/ggml-org/embeddinggemma-300M-GGUF) | ~400MB | Vector search, indexing, context-surfacing (768d, 2K context) |
| LLM | 8089 | [qmd-query-expansion-1.7B-q4_k_m](https://huggingface.co/tobil/qmd-query-expansion-1.7B-gguf) | ~2.2GB | Intent classification, query expansion, A-MEM |
| Reranker | 8090 | [qwen3-reranker-0.6B-Q8_0](https://huggingface.co/ggml-org/Qwen3-Reranker-0.6B-Q8_0-GGUF) | ~1.3GB | Cross-encoder reranking (query, intent_search) |

The `bin/clawmem` wrapper defaults to `localhost:8088/8089/8090`. If a server is unreachable, ClawMem silently falls back to in-process inference via `node-llama-cpp` (auto-downloads the QMD native models on first use, uses Metal/Vulkan/CPU depending on hardware). With GPU acceleration this is fast; on CPU-only it is significantly slower. ClawMem always works either way, but **if you're running dedicated GPU servers, use [systemd services](docs/guides/systemd-services.md) to ensure they stay up** — otherwise a crashed server silently degrades without warning.

To prevent silent fallback and fail fast instead, set `CLAWMEM_NO_LOCAL_MODELS=true`.

#### Remote GPU (optional)

If your GPU lives on a separate machine, point the env vars at it:

```bash
export CLAWMEM_EMBED_URL=http://gpu-host:8088
export CLAWMEM_LLM_URL=http://gpu-host:8089
export CLAWMEM_RERANK_URL=http://gpu-host:8090
```

For remote setups, set `CLAWMEM_NO_LOCAL_MODELS=true` to prevent `node-llama-cpp` from auto-downloading multi-GB model files if a server is unreachable.

#### No Dedicated GPU (in-process inference)

All three QMD native models run locally without a dedicated GPU. `node-llama-cpp` auto-downloads them on first use (~300MB embedding + ~1.1GB LLM + ~600MB reranker) and auto-detects the best backend — **Metal on Apple Silicon** (fast, uses integrated GPU), **Vulkan where available** (fast, uses discrete or integrated GPU), or **CPU as last resort** (significantly slower). With Metal or Vulkan, in-process inference handles these small models well; CPU-only is functional but noticeably slower.

Alternatively, use a [cloud embedding provider](#option-c-cloud-embedding-api) if you prefer not to run models locally.

### Embedding

ClawMem calls the OpenAI-compatible `/v1/embeddings` endpoint for all embedding operations. This works with local llama-server instances and cloud providers alike.

#### Option A: GPU with VRAM to spare (recommended)

Use [zembed-1-Q4_K_M](https://huggingface.co/Abhiray/zembed-1-Q4_K_M-GGUF) — SOTA retrieval quality, distilled from zerank-2 via [ZeroEntropy's zELO methodology](https://docs.zeroentropy.dev). **CC-BY-NC-4.0** — non-commercial only.

- Size: 2.4GB, Dimensions: 2560, VRAM: ~4.4GB, Context: 32K tokens

```bash
wget https://huggingface.co/Abhiray/zembed-1-Q4_K_M-GGUF/resolve/main/zembed-1-Q4_K_M.gguf

# -ub must match -b for non-causal attention
llama-server -m zembed-1-Q4_K_M.gguf \
  --embeddings --port 8088 --host 0.0.0.0 \
  -ngl 99 -c 8192 -b 2048 -ub 2048
```

#### Option B: No GPU / GPU without VRAM to spare

Use [EmbeddingGemma-300M-Q8_0](https://huggingface.co/ggml-org/embeddinggemma-300M-GGUF) — the QMD native embedding model. Only ~300MB, runs on CPU or any GPU.

- Size: 314MB, Dimensions: 768, VRAM: ~400MB (or CPU), Context: 2048 tokens

```bash
wget https://huggingface.co/ggml-org/embeddinggemma-300M-GGUF/resolve/main/embeddinggemma-300M-Q8_0.gguf

# On GPU (add -ngl 99):
llama-server -m embeddinggemma-300M-Q8_0.gguf \
  --embeddings --port 8088 --host 0.0.0.0 \
  -ngl 99 -c 2048 --batch-size 2048

# On CPU (omit -ngl):
llama-server -m embeddinggemma-300M-Q8_0.gguf \
  --embeddings --port 8088 --host 0.0.0.0 \
  -c 2048 --batch-size 2048
```

For multilingual corpora, the SOTA zembed-1 (Option A) supports multilingual retrieval out of the box. For a lightweight alternative: [granite-embedding-278m-multilingual-Q6_K](https://huggingface.co/bartowski/granite-embedding-278m-multilingual-GGUF) (314MB; set `CLAWMEM_EMBED_MAX_CHARS=1100` due to its 512-token context).

#### Option C: Cloud Embedding API

Alternatively, use a cloud embedding provider instead of running a local server. Any provider with an OpenAI-compatible `/v1/embeddings` endpoint works.

**Configuration:** Copy `.env.example` to `.env` and set your provider credentials:

```bash
cp .env.example .env
# Edit .env:
CLAWMEM_EMBED_URL=https://api.jina.ai
CLAWMEM_EMBED_API_KEY=jina_your-key-here
CLAWMEM_EMBED_MODEL=jina-embeddings-v5-text-small
```

Or export them in your shell. **Precedence:** shell environment > `.env` file > `bin/clawmem` wrapper defaults.
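
That precedence order amounts to a chained lookup. A minimal sketch under the stated ordering; the function name and sources are illustrative, not ClawMem's actual config code:

```typescript
// Sketch of the stated precedence: shell environment beats .env values,
// which beat the wrapper's built-in defaults. Illustrative only.
type Env = Record<string, string | undefined>;

function resolveSetting(
  key: string,
  shellEnv: Env,
  dotEnv: Env,
  defaults: Env,
): string | undefined {
  return shellEnv[key] ?? dotEnv[key] ?? defaults[key];
}

const url = resolveSetting(
  "CLAWMEM_EMBED_URL",
  {},                                             // nothing exported in the shell
  { CLAWMEM_EMBED_URL: "https://api.jina.ai" },   // .env value
  { CLAWMEM_EMBED_URL: "http://localhost:8088" }, // wrapper default
);
// url is "https://api.jina.ai" (the .env value wins over the default)
```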

| Provider | `CLAWMEM_EMBED_URL` | `CLAWMEM_EMBED_MODEL` | Dimensions | Notes |
|---|---|---|---|---|
| Jina AI | `https://api.jina.ai` | `jina-embeddings-v5-text-small` | 1024 | 32K context, task-specific LoRA adapters |
| OpenAI | `https://api.openai.com` | `text-embedding-3-small` | 1536 | 8K context, Matryoshka dimensions via `CLAWMEM_EMBED_DIMENSIONS` |
| Voyage AI | `https://api.voyageai.com` | `voyage-4-large` | 1024 | 32K context |
| Cohere | `https://api.cohere.com` | `embed-v4.0` | 1024 | 128K context |

Cloud mode auto-detects your provider from the URL and sends the right parameters (Jina `task`, Voyage/Cohere `input_type`, OpenAI `dimensions`). Batch embedding (50 fragments/request), server-side truncation, adaptive TPM-aware pacing, and retry with jitter are all handled automatically. Set `CLAWMEM_EMBED_TPM_LIMIT` to match your provider tier (default: 100000). See [docs/guides/cloud-embedding.md](docs/guides/cloud-embedding.md) for full details.

**Note:** Cloud providers handle their own context window limits — ClawMem skips client-side truncation when an API key is set. Local llama-server truncates at `CLAWMEM_EMBED_MAX_CHARS` (default: 6000 chars).
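
The URL-based provider auto-detection described above can be sketched as hostname matching. The matching rules here are illustrative assumptions based on the table's endpoints, not ClawMem's actual detection logic:

```typescript
type Provider = "jina" | "openai" | "voyage" | "cohere" | "local";

// Sketch: pick the provider from the embedding endpoint URL so the right
// request parameters (Jina `task`, Voyage/Cohere `input_type`, OpenAI
// `dimensions`) can be attached. Illustrative matching rules.
function detectProvider(embedUrl: string): Provider {
  const host = new URL(embedUrl).hostname;
  if (host.endsWith("jina.ai")) return "jina";
  if (host.endsWith("openai.com")) return "openai";
  if (host.endsWith("voyageai.com")) return "voyage";
  if (host.endsWith("cohere.com")) return "cohere";
  return "local"; // e.g. a llama-server on localhost:8088
}

detectProvider("https://api.jina.ai");   // "jina"
detectProvider("http://localhost:8088"); // "local"
```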

#### Verify and embed

```bash
# Verify endpoint is reachable
curl $CLAWMEM_EMBED_URL/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CLAWMEM_EMBED_API_KEY" \
  -d "{\"input\":\"test\",\"model\":\"$CLAWMEM_EMBED_MODEL\"}"

# Embed your vault
./bin/clawmem embed
```

### LLM Server

Intent classification, query expansion, and A-MEM extraction use [qmd-query-expansion-1.7B](https://huggingface.co/tobil/qmd-query-expansion-1.7B-gguf) — a Qwen3-1.7B finetuned by QMD specifically for generating search expansion terms (hyde, lexical, and vector variants). ~1.1GB at q4_k_m quantization, served via `llama-server` on port 8089.

**Without a server:** If `CLAWMEM_LLM_URL` is unset, `node-llama-cpp` auto-downloads the model on first use.

**Performance (RTX 3090):**
- Intent classification: **27ms**
- Query expansion: **333 tok/s**
- VRAM: ~2.2-2.8GB depending on quantization

**Qwen3 /no_think flag:** Qwen3 uses thinking tokens by default. ClawMem appends `/no_think` to all prompts automatically to get structured output in the `content` field.

**Intent classification:** Uses a dual-path approach:
1. **Heuristic regex classifier** (instant) — handles strong signals (why/when/who keywords) with 0.8+ confidence
2. **LLM refinement** (27ms on GPU) — only for ambiguous queries below 0.8 confidence
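
The dual-path flow above amounts to a confidence gate between a cheap heuristic and the model. A sketch under stated assumptions: the regexes, confidence values, and function names are invented for illustration, and the LLM call is stubbed rather than hitting the server on port 8089.

```typescript
type Intent = "WHY" | "WHEN" | "ENTITY" | "WHAT";

// Path 1: instant regex heuristics with a confidence estimate. Only strong
// keyword signals score at or above the 0.8 gate; the patterns here are
// illustrative, not ClawMem's actual rules.
function heuristicIntent(query: string): { intent: Intent; confidence: number } {
  const q = query.toLowerCase();
  if (/\bwhy\b|\breason\b|\bdecided?\b/.test(q)) return { intent: "WHY", confidence: 0.9 };
  if (/\bwhen\b|\blast (week|session)\b/.test(q)) return { intent: "WHEN", confidence: 0.9 };
  if (/\bwho\b/.test(q)) return { intent: "ENTITY", confidence: 0.85 };
  return { intent: "WHAT", confidence: 0.5 }; // weak default signal
}

// Path 2 runs only below the 0.8 confidence gate; `llm` stands in for the
// real model call.
async function classifyIntent(
  query: string,
  llm: (q: string) => Promise<Intent>,
): Promise<Intent> {
  const guess = heuristicIntent(query);
  return guess.confidence >= 0.8 ? guess.intent : llm(query);
}
```

A query like "why did we pick SQLite" resolves on the regex path with no LLM call; a vague query like "tell me about auth" falls below the gate and is refined by the model.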

**Server setup:**

```bash
# Download the finetuned model
wget https://huggingface.co/tobil/qmd-query-expansion-1.7B-gguf/resolve/main/qmd-query-expansion-1.7B-q4_k_m.gguf

# Start llama-server for LLM inference
llama-server -m qmd-query-expansion-1.7B-q4_k_m.gguf \
  --port 8089 --host 0.0.0.0 \
  -ngl 99 -c 4096 --batch-size 512
```

### Reranker Server

Cross-encoder reranking for the `query` and `intent_search` pipelines on port 8090. ClawMem calls the `/v1/rerank` endpoint (or falls back to scoring via `/v1/completions` for compatible servers).

The reranker scores each candidate against the original query (cross-encoder architecture). The `query` pipeline sends 4000 chars of context per doc (deep reranking); `intent_search` sends 200 chars per doc (fast reranking).

**GPU with VRAM to spare (recommended):** [zerank-2-Q4_K_M](https://huggingface.co/keisuke-miyako/zerank-2-gguf-q4_k_m) (2.4GB, ~3.3GB VRAM). Outperforms Cohere rerank-3.5 and Gemini 2.5 Flash. Optimal pairing with zembed-1 (same distillation architecture via zELO). **CC-BY-NC-4.0** — non-commercial only.

```bash
wget https://huggingface.co/keisuke-miyako/zerank-2-gguf-q4_k_m/resolve/main/zerank-2-Q4_k_m.gguf

# -ub must match -b for non-causal attention
llama-server -m zerank-2-Q4_k_m.gguf \
  --reranking --port 8090 --host 0.0.0.0 \
  -ngl 99 -c 2048 -b 2048 -ub 2048
```

**CPU / GPU without VRAM to spare:** [qwen3-reranker-0.6B-Q8_0](https://huggingface.co/ggml-org/Qwen3-Reranker-0.6B-Q8_0-GGUF) (~600MB, ~1.3GB VRAM). The QMD native reranker — auto-downloaded by `node-llama-cpp` if no server is running.

```bash
wget https://huggingface.co/ggml-org/Qwen3-Reranker-0.6B-Q8_0-GGUF/resolve/main/Qwen3-Reranker-0.6B-Q8_0.gguf

llama-server -m Qwen3-Reranker-0.6B-Q8_0.gguf \
  --reranking --port 8090 --host 0.0.0.0 \
  -ngl 99 -c 2048 --batch-size 512
```

**Note:** zerank-2 and zembed-1 use non-causal attention — `-ub` (ubatch) must equal `-b` (batch). Omitting `-ub` or setting it lower causes assertion crashes. qwen3-reranker-0.6B does not have this requirement. See [llama.cpp#12836](https://github.com/ggml-org/llama.cpp/issues/12836).

### MCP Server

ClawMem exposes 26 MCP tools via the [Model Context Protocol](https://modelcontextprotocol.io) and an optional HTTP REST API. Any MCP-compatible client or HTTP client can use it.

**Claude Code (automatic):**

```bash
./bin/clawmem setup mcp # Registers in ~/.claude.json
```

**Manual (any MCP client):**

Add to your MCP config (e.g. `~/.claude.json`, `claude_desktop_config.json`, or your client's equivalent):

```json
{
  "mcpServers": {
    "clawmem": {
      "command": "/absolute/path/to/clawmem/bin/clawmem",
      "args": ["mcp"]
    }
  }
}
```

The server runs via stdio — no network port needed. The `bin/clawmem` wrapper sets the GPU endpoint env vars automatically.

**Verify:** After registering, your client should see tools including `memory_retrieve`, `search`, `vsearch`, `query`, `query_plan`, `intent_search`, `timeline`, etc.

### HTTP REST API (optional)

For web dashboards, non-MCP agents, cross-machine access, or programmatic use:

```bash
./bin/clawmem serve # localhost:7438, no auth
./bin/clawmem serve --port 8080 # custom port
CLAWMEM_API_TOKEN=secret ./bin/clawmem serve # with bearer token auth
```

**Endpoints:**

| Method | Path | Description |
|---|---|---|
| GET | `/health` | Liveness probe + version + doc count |
| GET | `/stats` | Full index statistics |
| POST | `/search` | Unified search (`mode`: auto/keyword/semantic/hybrid) |
| POST | `/retrieve` | Smart retrieve with auto-routing (`mode`: auto/keyword/semantic/causal/timeline/hybrid) |
| GET | `/documents/:docid` | Single document by 6-char hash prefix |
| GET | `/documents?pattern=...` | Multi-get by glob pattern |
| GET | `/timeline/:docid` | Temporal neighborhood (before/after) |
| GET | `/sessions` | Recent session history |
| GET | `/collections` | List all collections |
| GET | `/lifecycle/status` | Active/archived/pinned/snoozed counts |
| POST | `/documents/:docid/pin` | Pin/unpin |
| POST | `/documents/:docid/snooze` | Snooze until date |
| POST | `/documents/:docid/forget` | Deactivate |
| POST | `/lifecycle/sweep` | Archive stale docs (dry_run default) |
| GET | `/graph/causal/:docid` | Causal chain traversal |
| GET | `/graph/similar/:docid` | k-NN neighbors |
| GET | `/export` | Full vault export as JSON |
| POST | `/reindex` | Trigger re-scan |
| POST | `/graphs/build` | Rebuild temporal + semantic graphs |

**Auth:** Set the `CLAWMEM_API_TOKEN` env var to require `Authorization: Bearer <token>` on all requests. If unset, access is open (localhost-only by default). See `.env.example`.
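
The auth rule above reduces to a single header comparison. A minimal sketch; the function name is illustrative and this is not the server's actual middleware:

```typescript
// Sketch of the stated auth rule: when a token is configured, every request
// must carry a matching "Authorization: Bearer <token>" header; when no
// token is configured, access is open. Illustrative only.
function isAuthorized(
  configuredToken: string | undefined,
  authHeader: string | undefined,
): boolean {
  if (!configuredToken) return true; // no token configured: open access
  return authHeader === `Bearer ${configuredToken}`;
}

isAuthorized(undefined, undefined);      // true, auth disabled
isAuthorized("secret", "Bearer secret"); // true
isAuthorized("secret", "Bearer wrong");  // false
```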

**Search example:**

```bash
curl -X POST http://localhost:7438/search \
  -H 'Content-Type: application/json' \
  -d '{"query": "authentication decisions", "mode": "hybrid", "compact": true}'
```

### Verify Installation

```bash
./bin/clawmem doctor # Full health check
./bin/clawmem status # Quick index status
bun test # Run test suite
```
|
|
447
|
+
|
|
448
|
+
## Agent Instructions

ClawMem ships three instruction files and an optional maintenance agent:

| File | Loaded | Purpose |
|------|--------|---------|
| `CLAUDE.md` | Automatically (Claude Code, when working in this repo) | Complete operational reference — hooks, tools, query optimization, scoring, pipeline details, troubleshooting |
| `AGENTS.md` | Framework-dependent | Identical to CLAUDE.md — cross-framework compatibility (Cursor, Windsurf, Codex, etc.) |
| `SKILL.md` | On-demand via Claude Code skill system | Same reference as CLAUDE.md, available across all projects |
| `agents/clawmem-curator.md` | On-demand via `clawmem setup curator` | Maintenance agent — lifecycle triage, retrieval health checks, dedup sweeps, graph rebuilds |

**Working in the ClawMem repo:** No action needed — `CLAUDE.md` loads automatically.

**Using ClawMem from other projects:** Your agent needs instructions on how to use ClawMem's hooks and MCP tools. Two options:

### Option A: Copy instructions into your project

Copy the contents of `CLAUDE.md` (or the relevant sections) into your project's own `CLAUDE.md` or `AGENTS.md`. Simple, but requires manual updates when ClawMem changes.

### Option B: Install as a skill (recommended)

Symlink ClawMem into Claude Code's skill directory for on-demand reference across all projects:

```bash
mkdir -p ~/.claude/skills
ln -sf ~/clawmem ~/.claude/skills/clawmem
```

Then add this minimal trigger block to your global `~/.claude/CLAUDE.md`:

```markdown
## ClawMem

Architecture: hooks (automatic, ~90%) + MCP tools (explicit, ~10%).

Vault: `~/.cache/clawmem/index.sqlite` | Config: `~/.config/clawmem/config.yaml`

### Escalation Gate (3 rules — ONLY escalate to MCP tools when one fires)

1. **Low-specificity injection** — `<vault-context>` is empty or lacks the specific fact needed
2. **Cross-session question** — "why did we decide X", "what changed since last time"
3. **Pre-irreversible check** — before destructive or hard-to-reverse changes

### Tool Routing (once escalated)

**Preferred:** `memory_retrieve(query)` — auto-classifies and routes to the optimal backend.

**Direct routing** (when calling specific tools):

"why did we decide X"        → intent_search(query)  NOT query()
"what happened last session" → session_log()         NOT query()
"what else relates to X"     → find_similar(file)    NOT query()
Complex multi-topic          → query_plan(query)     NOT query()
General recall               → query(query, compact=true)
Keyword spot check           → search(query, compact=true)
Conceptual/fuzzy             → vsearch(query, compact=true)
Full content                 → multi_get("path1,path2")
Lifecycle health             → lifecycle_status()
Stale sweep                  → lifecycle_sweep(dry_run=true)
Restore archived             → lifecycle_restore(query)

ALWAYS `compact=true` first → review → `multi_get` for full content.

### Proactive Use (no escalation gate needed)

- User says "remember this" / critical decision made → `memory_pin(query)` immediately
- User corrects a misconception → `memory_pin(query)` the correction
- `<vault-context>` surfaces irrelevant/noisy content → `memory_snooze(query, until)` for 30 days
- Need to correct a memory → `memory_forget(query)`
- After bulk ingestion → `build_graphs`

### Anti-Patterns

- Do NOT use `query()` for everything — match query type to tool, or use `memory_retrieve`
- Do NOT call query/intent_search every turn — the 3 rules above are the only gates
- Do NOT re-search what's already in `<vault-context>`
- Do NOT pin everything — pin is for persistent high-priority items, not routine decisions
- Do NOT forget memories to "clean up" — let confidence decay handle it
- Do NOT wait for curator to pin decisions — pin immediately when critical

Invoke the `Skill` tool with `skill="clawmem"` when:
- Retrieval quality is poor or results miss expected content (query optimization, troubleshooting)
- Adding new content directories or indexing something (collection setup, embedding workflow)
- After bulk document creation or ingestion (graph building, embedding)
- Need lifecycle triage beyond basic status/sweep (run curator: "curate memory")
- Any operation beyond the basic tool routing above
```

This gives your agent the 3-rule gate, tool routing, and proactive behaviors always loaded, with situation-triggered skill invocation for the ~10% manual operations.

---

## CLI Reference

```
clawmem init                                         Create DB + config
clawmem bootstrap <vault> [--name N] [--skip-embed]  One-command setup
clawmem collection add <path> --name <name>          Add a collection
clawmem collection list                              List collections
clawmem collection remove <name>                     Remove a collection

clawmem update [--pull] [--embed]                    Incremental re-scan
clawmem embed [-f]                                   Generate fragment embeddings
clawmem reindex [--force]                            Full re-index
clawmem watch                                        File watcher daemon

clawmem search <query> [-n N] [--json]               BM25 keyword search
clawmem vsearch <query> [-n N] [--json]              Vector semantic search
clawmem query <query> [-n N] [--json]                Full hybrid pipeline

clawmem profile                                      Show user profile
clawmem profile rebuild                              Force profile rebuild
clawmem update-context                               Regenerate per-folder CLAUDE.md

clawmem budget [--session ID]                        Token utilization
clawmem log [--last N]                               Session history
clawmem hook <name>                                  Manual hook trigger
clawmem surface --context --stdin                    IO6: pre-prompt context injection
clawmem surface --bootstrap --stdin                  IO6: per-session bootstrap injection

clawmem reflect [N]                                  Cross-session reflection (last N days, default 14)
clawmem consolidate [--dry-run] [N]                  Find and archive duplicate low-confidence docs

clawmem install-service [--enable] [--remove]        Systemd watcher service
clawmem setup hooks [--remove]                       Install/remove Claude Code hooks
clawmem setup mcp [--remove]                         Register/remove MCP server
clawmem setup curator [--remove]                     Install/remove curator maintenance agent
clawmem mcp                                          Start stdio MCP server
clawmem serve [--port 7438] [--host 127.0.0.1]       Start HTTP REST API server
clawmem path                                         Print database path
clawmem doctor                                       Full health check
clawmem status                                       Quick index status
```

## MCP Tools (25)

Registered by `clawmem setup mcp`. Available to any MCP-compatible client.

| Tool | Description |
|---|---|
| `__IMPORTANT` | Workflow guide: prefer `memory_retrieve` → match query type to tool → `multi_get` for full content |

### Core Search & Retrieval

| Tool | Description |
|---|---|
| `memory_retrieve` | **Preferred entry point.** Auto-classifies query and routes to optimal backend (query, intent_search, session_log, find_similar, or query_plan). Use instead of manually choosing a search tool. |
| `search` | BM25 keyword search — for exact terms, config names, error codes, filenames. Composite scoring + co-activation boost + compact mode. Collection filter supports comma-separated values. Prefer `memory_retrieve` for auto-routing. |
| `vsearch` | Vector semantic search — for conceptual/fuzzy matching when exact keywords are unknown. Composite scoring + co-activation boost + compact mode. Collection filter supports comma-separated values. Prefer `memory_retrieve` for auto-routing. |
| `query` | Full hybrid pipeline (BM25 + vector + rerank) — general-purpose when query type is unclear. WRONG for "why" questions (use `intent_search`) or cross-session queries (use `session_log`). Prefer `memory_retrieve` for auto-routing. Intent hint, strong-signal bypass, chunk dedup, candidateLimit, MMR diversity, compact mode. |
| `get` | Retrieve single document by path or docid |
| `multi_get` | Retrieve multiple docs by glob or comma-separated list |
| `find_similar` | USE THIS for "what else relates to X", "show me similar docs". Finds k-NN vector neighbors — discovers connections beyond keyword overlap that search/query cannot find. |

### Intent-Aware Search

| Tool | Description |
|---|---|
| `intent_search` | USE THIS for "why did we decide X", "what caused Y", "who worked on Z". Classifies intent (WHY/WHEN/ENTITY/WHAT), traverses causal + semantic graph edges. Returns decision chains that `query()` cannot find. |
| `query_plan` | USE THIS for complex multi-topic queries ("tell me about X and also Y", "compare A with B"). Decomposes into parallel typed clauses (bm25/vector/graph), executes each, merges via RRF. `query()` searches as one blob — this tool splits topics and routes each optimally. |

**`intent_search` pipeline:** Query → Intent Classification → BM25 + Vector → Intent-Weighted RRF → Graph Expansion (WHY/ENTITY intents) → Cross-Encoder Reranking → Composite Scoring

**`query_plan` pipeline:** Query → LLM decomposition into 2-4 typed clauses → Parallel execution (BM25/vector/graph per clause) → RRF merge across clauses → Composite scoring. Falls back to single-query for simple inputs.

### Multi-Graph & Causal

| Tool | Description |
|---|---|
| `build_graphs` | Build temporal and/or semantic graphs from document corpus |
| `find_causal_links` | Trace decision chains: "what led to X", "how we got from A to B". Follow up `intent_search` with this tool on a top result to walk the full causal chain. Traverses causes / caused_by / both up to N hops with depth-annotated reasoning. |
| `memory_evolution_status` | Show how a document's A-MEM metadata evolved over time |
| `timeline` | Show the temporal neighborhood around a document — what was created/modified before and after it. Progressive disclosure: search → timeline (context) → get (full content). Supports same-collection scoping and session correlation. |

### Beads Integration

| Tool | Description |
|---|---|
| `beads_sync` | Sync Beads issues from Dolt backend (`bd` CLI) into memory: creates docs, bridges all dep types to `memory_relations`, runs A-MEM enrichment |

### Memory Management & Lifecycle

| Tool | Description |
|---|---|
| `memory_forget` | Search → deactivate closest match (with audit trail) |
| `memory_pin` | Pin a memory for +0.3 composite boost. USE PROACTIVELY when: user states a persistent constraint, makes an architecture decision, or corrects a misconception. Don't wait for curator — pin critical decisions immediately. |
| `memory_snooze` | Temporarily hide a memory from context surfacing until a date. USE PROACTIVELY when `<vault-context>` repeatedly surfaces irrelevant content — snooze for 30 days instead of ignoring it. |
| `status` | Index health with content type distribution |
| `reindex` | Trigger vault re-scan |
| `index_stats` | Detailed stats: types, staleness, access counts, sessions |
| `session_log` | USE THIS for "last time", "yesterday", "what happened", "what did we do". Returns session history with handoffs and file changes. DO NOT use `query()` for cross-session questions — this tool has session-specific data that search cannot find. |
| `profile` | Current static + dynamic user profile |
| `lifecycle_status` | Document lifecycle statistics: active, archived, forgotten, pinned, snoozed counts and policy summary |
| `lifecycle_sweep` | Run lifecycle policies: archive stale docs past retention threshold, optionally purge old archives. Defaults to dry_run (preview only) |
| `lifecycle_restore` | Restore documents that were auto-archived by lifecycle policies. Filter by query, collection, or restore all |

### Compact Mode

`search`, `vsearch`, and `query` accept `compact: true` to return `{ id, path, title, score, snippet, content_type, fragment }` instead of full content. Saves ~5x tokens for initial filtering.

## Hooks (Claude Code Integration)

Hooks installed by `clawmem setup hooks`:

| Hook | Event | What It Does |
|---|---|---|
| `context-surfacing` | UserPromptSubmit | Hybrid search → FTS supplement → file-aware search (E13) → snooze filter → spreading activation (E11) → memory type diversification (E10) → tiered injection (HOT/WARM/COLD) → `<vault-context>` + `<vault-routing>` hint. Profile-driven budget/results/timeout. |
| `postcompact-inject` | SessionStart | Re-injects authoritative context after compaction: precompact state + recent decisions + antipatterns + vault context (1200 token budget) |
| `curator-nudge` | SessionStart | Surfaces curator report actions, nudges when report is stale (>7 days) |
| `precompact-extract` | PreCompact | Extracts decisions, file paths, open questions before auto-compaction → writes `precompact-state.md` to auto-memory |
| `decision-extractor` | Stop | GGUF observer extracts structured decisions, infers causal links, detects contradictions with prior decisions |
| `handoff-generator` | Stop | GGUF observer generates rich handoff, regex fallback |
| `feedback-loop` | Stop | Silently boosts referenced notes, decays unused ones, records co-activation + usage relations between co-referenced docs, tracks utility signals (surfaced vs referenced ratio for lifecycle automation) |

Additional hooks available but not installed by default:

| Hook | Event | Why Not Default |
|---|---|---|
| `session-bootstrap` | SessionStart | Injects ~2000 tokens before user types anything. `context-surfacing` on first prompt is more precise. |
| `staleness-check` | SessionStart | Redundant without `session-bootstrap` (stale notes are part of its output). |
| `pretool-inject` | PreToolUse | Disabled in HOOK_EVENT_MAP (cannot inject additionalContext via PreToolUse). |

Hooks handle ~90% of retrieval automatically. For agent escalation logic (when to use MCP tools vs rely on hooks), see `CLAUDE.md`.

## Search Pipeline

```
User Query + optional intent hint
  → BM25 Probe → Strong Signal Check (skip expansion if top hit ≥ 0.85 with gap ≥ 0.15; disabled when intent provided)
  → Query Expansion (intent steers LLM prompt when provided)
  → BM25 + Vector Search (parallel, original query 2× weight)
  → Reciprocal Rank Fusion → slice to candidateLimit (default 30)
  → Intent-Aware Chunk Selection (intent terms at 0.5× weight alongside query terms at 1.0×)
  → Cross-Encoder Reranking (4000 char context; intent prepended; chunk dedup; batch cap=4)
  → Position-Aware Blending (α=0.75 top3, 0.60 mid, 0.40 tail)
  → SAME Composite Scoring ((search × 0.5 + recency × 0.25 + confidence × 0.25) × qualityMultiplier × lengthNorm × coActivationBoost + pinBoost)
  → MMR Diversity Filter (Jaccard bigram similarity > 0.6 → demoted)
  → Ranked Results
```

For agent-facing query optimization (tool selection, query string quality, intent parameter, candidateLimit), see `CLAUDE.md`.

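As a side note, Reciprocal Rank Fusion sums `1/(k + rank)` across the input rankings for each document; a quick numeric sketch (k = 60 is the conventional RRF constant and an assumption here, not a documented ClawMem value):

```shell
# RRF score for a doc ranked 1st by BM25 and 3rd by vector search, with k = 60.
awk 'BEGIN {
  k = 60
  score = 1 / (k + 1) + 1 / (k + 3)   # 1/61 + 1/63
  printf "%.4f\n", score
}'
```

Documents appearing high in both lists accumulate the largest fused scores, which is why RRF needs no score calibration between BM25 and vector backends.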
### Multi-Graph Traversal

For WHY and ENTITY queries, the search pipeline expands results through the memory graph:

1. Start from top-10 baseline results as anchor nodes
2. For each frontier node: get neighbors via any relation type
3. Score transitions: `λ1·structure + λ2·semantic_affinity`
4. Apply decay: `new_score = parent_score * γ + transition_score`
5. Keep top-k (beam search), repeat until max depth or budget

**Graph types:**
- **Semantic** — vector similarity edges (threshold > 0.7)
- **Temporal** — chronological document ordering
- **Causal** — LLM-inferred cause→effect from Observer facts + Beads `blocks`/`waits-for` deps
- **Supporting** — LLM-analyzed document relationships + Beads `discovered-from` deps
- **Contradicts** — LLM-analyzed document relationships

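Plugging illustrative numbers into the transition and decay formulas above (λ1, λ2, γ, and all scores below are made-up values, not ClawMem defaults):

```shell
awk 'BEGIN {
  l1 = 0.4; l2 = 0.6; gamma = 0.8      # hypothetical weights and decay factor
  transition = l1 * 0.5 + l2 * 0.7     # structure = 0.5, semantic_affinity = 0.7
  hop1 = 1.0 * gamma + transition      # anchor node score 1.0
  hop2 = hop1 * gamma + transition     # identical edge quality one hop deeper
  printf "transition=%.2f hop1=%.2f hop2=%.3f\n", transition, hop1, hop2
}'
```

Because each hop multiplies the inherited parent score by γ, the anchor's contribution shrinks geometrically with depth, so distant nodes only survive the beam if their own edges score well.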
### Content Type Scoring

| Type | Half-life | Baseline | Notes |
|---|---|---|---|
| `decision` | ∞ | 0.85 | Never decays |
| `hub` | ∞ | 0.80 | Never decays |
| `research` | 90 days | 0.70 | |
| `project` | 120 days | 0.65 | |
| `handoff` | 30 days | 0.60 | Fast decay — most recent matters |
| `progress` | 45 days | 0.50 | |
| `note` | 60 days | 0.50 | Default |
| `antipattern` | ∞ | 0.75 | Never decays — accumulated negative patterns persist |

Content types are inferred from frontmatter or file path patterns. Half-lives extend up to 3× for frequently-accessed memories (access reinforcement, decays over 90 days). Non-durable types (handoff, progress, note, project) lose 5% confidence per week without access (attention decay). Decision/hub/research/antipattern are exempt.

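Assuming standard exponential half-life decay (recency = 0.5^(age / half-life); the exact ClawMem curve may differ), a handoff and a note age quite differently:

```shell
awk 'BEGIN {
  age = 60   # days since the document was last touched
  printf "handoff (30d half-life): %.2f\n", 0.5 ^ (age / 30)
  printf "note    (60d half-life): %.2f\n", 0.5 ^ (age / 60)
}'
```

At 60 days the handoff's recency has halved twice (0.25) while the note has halved once (0.50), matching the table's intent that only the most recent handoff matters.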
**Quality scoring:** Each document gets a `quality_score` (0.0–1.0) computed during indexing based on length, structure (headings, lists), decision/correction keywords, and frontmatter richness. Applied as `qualityMultiplier = 0.7 + 0.6 × qualityScore` (range: 0.7× penalty to 1.3× boost).

**Length normalization:** `1/(1 + 0.5 × log2(max(bodyLength/500, 1)))` — penalizes verbose entries that dominate via keyword density. Floor at 30% of original score.

**Frequency boost:** Documents with higher revision counts or duplicate counts get a durability signal: `freqSignal = (revisions - 1) × 2 + (duplicates - 1)`, `freqBoost = min(0.10, log1p(freqSignal) × 0.03)`. Revision count (content evolution) is weighted 2× vs duplicate count (ingest repetition). Capped at 10%.

**Pin boost:** Pinned documents get +0.3 additive boost (capped at 1.0). Use `memory_pin` to pin critical memories.

**Snooze:** Snoozed documents are filtered out of context surfacing until their snooze date. Use `memory_snooze` for temporary suppression.

**Contradiction detection:** When `decision-extractor` identifies a new decision that contradicts a prior one, the old decision's confidence is automatically lowered (−0.25 for contradictions, −0.15 for updates). Superseded decisions naturally fade from context surfacing without manual intervention.

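The modifier formulas above can be sanity-checked numerically; a small sketch (the document values plugged in are invented):

```shell
awk 'BEGIN {
  # Quality multiplier for quality_score = 0.8:
  printf "qualityMultiplier: %.2f\n", 0.7 + 0.6 * 0.8

  # Length normalization for a 4000-char body: 1/(1 + 0.5*log2(8)) = 0.40
  ratio = 4000 / 500; if (ratio < 1) ratio = 1
  printf "lengthNorm: %.2f\n", 1 / (1 + 0.5 * log(ratio) / log(2))

  # Frequency boost for 3 revisions and 2 duplicates:
  freqSignal = (3 - 1) * 2 + (2 - 1)       # = 5
  freqBoost = log(1 + freqSignal) * 0.03   # log1p(5) * 0.03
  if (freqBoost > 0.10) freqBoost = 0.10
  printf "freqBoost: %.4f\n", freqBoost
}'
```

So a well-structured but verbose document can still lose ground: the 1.18× quality boost here is more than cancelled by the 0.40 length normalization.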
## Features

### A-MEM (Adaptive Memory Evolution)

Documents are automatically enriched with structured metadata when indexed:
- **Keywords** (3-7 specific terms)
- **Tags** (3-5 broad categories)
- **Context** (1-2 sentence description)

When new documents create links, neighboring documents' metadata evolves — keywords merge, context updates, and the evolution history is tracked with version numbers and reasoning.

### Causal Inference

The decision-extractor hook analyzes Observer facts for causal relationships. When multiple facts exist in an observation, an LLM identifies cause→effect pairs (confidence ≥ 0.6). Causal chains can be queried via `find_causal_links` with multi-hop traversal using recursive CTEs.

### Beads Integration

Projects using [Beads](https://github.com/steveyegge/beads) (v0.58.0+, Dolt backend) issue tracking are fully integrated into the MAGMA memory graph:

- **Auto-sync**: Watcher detects `.beads/` directory changes → `syncBeadsIssues()` queries `bd` CLI for live Dolt data → creates markdown docs in `beads` collection
- **Dependency bridging**: All Beads dependency types map to `memory_relations` edges — `blocks`/`conditional-blocks`/`waits-for`/`caused-by`→causal, `discovered-from`/`supersedes`/`duplicates`→supporting, `relates-to`/`related`/`parent-child`→semantic. Tagged `{origin: "beads"}` for traceability.
- **A-MEM enrichment**: New beads docs get full `postIndexEnrich()` — memory note construction, semantic/entity link generation, memory evolution
- **Graph traversal**: `intent_search` and `find_causal_links` traverse beads dependency edges alongside observation-inferred causal chains
- **Requirement**: `bd` binary on PATH or at `~/go/bin/bd`

Use the `beads_sync` MCP tool for manual sync; the watcher handles routine operations automatically.

### Fragment-Level Embedding

Documents are split into semantic fragments (sections, lists, code blocks, frontmatter, facts) and each fragment gets its own vector embedding. Full-doc embedding is preserved for broad-match queries.

### Local Observer Agent

Uses the LLM server (shared with query expansion and intent classification) to extract structured observations from session transcripts: type, title, facts, narrative, concepts, files read/modified. Falls back to regex patterns if the model is unavailable.

### User Profile

Two-tier auto-curated profile extracted from your decisions and hub documents:
- **Static**: persistent facts (Levenshtein-deduplicated)
- **Dynamic**: recent session context

Injected at session start for instant personalization.

### Prompt Injection Filtering

Five detection layers protect injected content: legacy string patterns, role injection regex, instruction override patterns, delimiter injection, and unicode obfuscation detection. Filtered results are skipped entirely (no placeholder tokens wasted).

### Consolidation Worker

Optional background process that enriches documents missing A-MEM metadata. Runs on a configurable interval, processing 3 documents per tick. Non-blocking (Timer.unref).

### Per-Folder CLAUDE.md Generation

Automatically generates context sections in per-folder CLAUDE.md files from recent decisions and session activity related to that directory.

### Feedback Loop

Notes referenced by the agent during a session get boosted (`access_count++`). Unreferenced notes decay via recency. Over time, useful notes rise and noise fades.

## Feature Flags

| Variable | Default | Effect |
|---|---|---|
| `CLAWMEM_ENABLE_AMEM` | enabled | A-MEM note construction + link generation during indexing |
| `CLAWMEM_ENABLE_CONSOLIDATION` | disabled | Background worker for backlog A-MEM enrichment |
| `CLAWMEM_CONSOLIDATION_INTERVAL` | 300000 | Worker interval in ms (min 15000) |
| `CLAWMEM_EMBED_URL` | `http://localhost:8088` | Embedding server URL. Uses llama-server (GPU or CPU) or a cloud API. Falls back to in-process `node-llama-cpp` if unset. |
| `CLAWMEM_EMBED_API_KEY` | (none) | API key for cloud embedding. Enables cloud mode: batch embedding, provider-specific params, TPM-aware pacing. |
| `CLAWMEM_EMBED_MODEL` | `embedding` | Model name for embedding requests. Override for cloud providers (e.g. `jina-embeddings-v5-text-small`). |
| `CLAWMEM_EMBED_TPM_LIMIT` | `100000` | Tokens-per-minute limit for cloud embedding pacing. Match to your provider tier. |
| `CLAWMEM_EMBED_DIMENSIONS` | (none) | Output dimensions for OpenAI `text-embedding-3-*` Matryoshka models (e.g. `512`, `1024`). |
| `CLAWMEM_LLM_URL` | `http://localhost:8089` | LLM server URL for intent/query/A-MEM. Without it, falls back to `node-llama-cpp` (if allowed). |
| `CLAWMEM_RERANK_URL` | `http://localhost:8090` | Reranker server URL. Without it, falls back to `node-llama-cpp` (if allowed). |
| `CLAWMEM_NO_LOCAL_MODELS` | `false` | Block `node-llama-cpp` from auto-downloading GGUF models. Set `true` for remote-only setups where you want fail-fast on unreachable endpoints. |

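For example, a remote-only cloud-embedding setup might combine the flags like this (the URL and key are placeholders; the model name is the one cited in the table):

```shell
export CLAWMEM_EMBED_URL="https://embeddings.example.com/v1"
export CLAWMEM_EMBED_API_KEY="replace-with-your-key"
export CLAWMEM_EMBED_MODEL="jina-embeddings-v5-text-small"
export CLAWMEM_EMBED_TPM_LIMIT=100000
export CLAWMEM_NO_LOCAL_MODELS=true   # fail fast instead of downloading GGUF models
echo "embedding via ${CLAWMEM_EMBED_MODEL}"
```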
## Configuration

### Collection Config

`~/.config/clawmem/config.yaml`:

```yaml
collections:
  notes:
    path: /home/user/notes
    pattern: "**/*.md"
    autoEmbed: true
  docs:
    path: /home/user/docs
    pattern: "**/*.md"
    update: "git pull"
    directoryContext: false # opt-in per-folder CLAUDE.md generation
```

### Database

`~/.cache/clawmem/index.sqlite` — single SQLite file with FTS5 + sqlite-vec extensions.

### Frontmatter

Parsed via `gray-matter`. Supported fields:

```yaml
---
title: "Document Title"
tags: [tag1, tag2]
domain: "infrastructure"
workstream: "project-name"
content_type: "decision" # decision|hub|research|project|handoff|progress|note
review_by: "2026-03-01"
---
```

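For instance, a decision note using these fields could be created like this (the path, title, and body are hypothetical):

```shell
mkdir -p /tmp/notes/decisions
cat > /tmp/notes/decisions/2026-02-06-auth-provider.md <<'EOF'
---
title: "Adopt OIDC for service auth"
tags: [auth, infrastructure]
domain: "infrastructure"
content_type: "decision"
review_by: "2026-03-01"
---

We chose OIDC over SAML because every downstream service already speaks it.
EOF
grep '^content_type' /tmp/notes/decisions/2026-02-06-auth-provider.md
```

Because `content_type: "decision"` never decays in composite scoring, notes written this way stay retrievable indefinitely once the directory is indexed as a collection.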
## Suggested Memory Filesystem

This structure separates human-curated content from auto-generated memories, and within auto-generated content, separates **user memories** (persist across agents, owned by the human) from **agent memories** (operational, generated from sessions). Static knowledge lives in `resources/` with no recency decay, distinct from ephemeral session logs. The layout works with any MCP-compatible client (Claude Code, OpenClaw, custom agents).

### Workspace Collection

The primary workspace where the agent operates. Path varies by client (e.g., `~/workspace/`, `~/.openclaw/workspace/`, or any directory you choose).

```
<workspace>/                      ← Collection: "workspace"
├── MEMORY.md                     # Human-curated long-term memory
├── memory/                       # Session logs (daily entries)
│   ├── 2026-02-05.md
│   └── 2026-02-06.md
├── resources/                    # Static knowledge — use content_type: hub (∞ half-life)
│   ├── runbooks/
│   │   └── deploy-checklist.md
│   └── onboarding.md
├── _clawmem/                     # Auto-generated — DO NOT EDIT
│   ├── user/                     # User memories (persist across agents/sessions)
│   │   ├── profile.md            # Static facts + dynamic context
│   │   ├── preferences/          # Extracted preferences (update_existing merge policy)
│   │   └── entities/             # Named entities (people, services, repos)
│   ├── agent/                    # Agent memories (operational, session-derived)
│   │   ├── observations/         # Decisions + observations from transcripts
│   │   ├── handoffs/             # Session summaries with next steps
│   │   └── antipatterns/         # Accumulated negative patterns (∞ half-life)
│   └── precompact-state.md       # Pre-compaction snapshot (transient)
└── ...
```

### Project Collections

Each project gets its own collection. Same structure, with optional Beads integration.

```
~/Projects/<project>/             ← Collection: "<project>"
├── .beads/                       # Beads issue tracker (Dolt backend, auto-synced)
│   └── dolt/                     # Dolt SQL database (source of truth)
├── MEMORY.md                     # Human-curated project memory
├── memory/                       # Project session logs
│   └── 2026-02-06.md
├── resources/                    # Static project knowledge (∞ half-life)
│   ├── architecture.md
│   └── api-reference.md
├── research/                     # Research dumps (fragment-embedded, 90-day decay)
│   └── 2026-02-06-topic-slug.md
├── _clawmem/                     # Auto-generated per-project
│   ├── user/
│   │   └── preferences/
│   ├── agent/
│   │   ├── observations/
│   │   ├── handoffs/
│   │   ├── antipatterns/
│   │   └── beads/                # Beads issues as searchable markdown
│   └── precompact-state.md
├── CLAUDE.md
├── src/
└── README.md
```

### Design Principles

| Principle | Rationale | ClawMem Mechanism |
|---|---|---|
| **User/agent separation** | User memories (preferences, entities, profile) are owned by the human and persist indefinitely. Agent memories (observations, handoffs) are operational artifacts with lifecycle management. | `_clawmem/user/` vs `_clawmem/agent/` — different merge policies and decay rules per content type |
| **Resources are first-class** | Static knowledge (runbooks, architecture docs, API refs) should never lose relevance due to recency decay. | `resources/` indexed with `content_type: hub` → ∞ half-life in composite scoring |
| **Progressive disclosure** | Hook injection (2000 token budget) benefits from tiered loading: compact snippets first, full content on demand. | `compact=true` (L1) → `multi_get` (L2) at query time. Pre-computed abstracts not yet implemented — candidate for future L0 tier. |
| **Beads as memory edges** | Issue tracker data bridges into the knowledge graph via typed relations, not just as flat documents. | `syncBeadsIssues()` maps deps → `memory_relations`: blocks→causal, discovered-from→supporting, relates-to→semantic |
| **Merge policies per facet** | Different memory types need different deduplication strategies to prevent bloat. | `saveMemory()` dedup window (30min, normalized hash) + `getMergePolicy()`: decision→dedup_check (cosine>0.92), antipattern→merge_recent (7d), preference→update_existing, handoff→always_new |

### Layer Mapping
|
|
919
|
+
|
|
920
|
+
| Layer | Path | Owner | Decay | ClawMem Role |
|
|
921
|
+
|---|---|---|---|---|
|
|
922
|
+
| Long-Term Memory | `MEMORY.md` | Human | ∞ | Indexed, profile supplements, human-curated anchor |
|
|
923
|
+
| Session Logs | `memory/*.md` | Human | 60 days | Indexed, daily entries, handoff auto-generated |
|
|
924
|
+
| Static Resources | `resources/**/*.md` | Human | ∞ (hub) | Fragment-embedded, no recency penalty |
|
|
925
|
+
| Research | `research/*.md` | Human | 90 days | Fragment-embedded for granular retrieval |
|
|
926
|
+
| User Profile | `_clawmem/user/profile.md` | Auto | ∞ | Static facts + dynamic context |
|
|
927
|
+
| User Preferences | `_clawmem/user/preferences/*.md` | Auto | ∞ | Extracted preferences (update_existing merge) |
|
|
928
|
+
| User Entities | `_clawmem/user/entities/*.md` | Auto | ∞ | Named entities across sessions |
|
|
929
|
+
| Observations | `_clawmem/agent/observations/*.md` | Auto | ∞ (decision) | Decisions + observations from transcripts |
|
|
930
|
+
| Handoffs | `_clawmem/agent/handoffs/*.md` | Auto | 30 days | Session summaries with next steps |
|
|
931
|
+
| Antipatterns | `_clawmem/agent/antipatterns/*.md` | Auto | ∞ | Accumulated negative patterns |
|
|
932
|
+
| Beads | `_clawmem/agent/beads/*.md` | Auto | ∞ | Beads issues synced from Dolt, relations in memory graph |

Manual layers benefit from periodic re-indexing — a cron job running `clawmem update --embed` keeps the index fresh for content edited outside of watched directories.
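As a sketch, such a cron job could look like the following crontab entry. The install and log paths are hypothetical; point them at wherever `bin/clawmem` actually lives:

```bash
# crontab -e: run the embedding sweep daily at 03:15 (paths are examples)
15 3 * * * $HOME/clawmem/bin/clawmem update --embed >> $HOME/.cache/clawmem-embed.log 2>&1
```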

### Setup

```bash
# Bootstrap workspace collection (use your agent's workspace path)
./bin/clawmem bootstrap ~/workspace --name workspace

# Bootstrap each project
./bin/clawmem bootstrap ~/Projects/my-project --name my-project

# Enable auto-embed for real-time indexing
# Edit ~/.config/clawmem/config.yaml → autoEmbed: true

# Install watcher as systemd service
./bin/clawmem install-service --enable
```

#### OpenClaw-Specific

```bash
# OpenClaw uses ~/.openclaw/workspace/ as its workspace root
./bin/clawmem bootstrap ~/.openclaw/workspace --name workspace
```

## Dependencies

| Package | Purpose |
|---|---|
| `@modelcontextprotocol/sdk` | MCP server |
| `gray-matter` | YAML frontmatter parsing |
| `node-llama-cpp` | GGUF model inference (reranking, query expansion, A-MEM) |
| `sqlite-vec` | Vector similarity extension |
| `yaml` | Config parsing |
| `zod` | MCP schema validation |

## Deployment

Three-tier retrieval architecture: infrastructure (watcher + embed timer) → hooks (~90%) → agent MCP (~10%). It works out of the box without a dedicated GPU: all models auto-download via `node-llama-cpp`, and Metal is used on Apple Silicon. For best performance, run three `llama-server` instances — see [GPU Services](#gpu-services) for model tiers (SOTA vs QMD native) and [Cloud Embedding](#option-c-cloud-embedding-api) for cloud embedding alternatives.

Key services: `clawmem-watcher` (auto-index on file change + beads sync), a `clawmem-embed` timer (daily embedding sweep), and 9 Claude Code hooks (context injection, session bootstrap, decision extraction, handoffs, feedback, compaction support). An optional `clawmem-curator` agent handles on-demand lifecycle triage, retrieval health checks, and maintenance (`clawmem setup curator`).
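Hooks of this kind are registered through Claude Code's hook settings. A minimal illustrative fragment for `.claude/settings.json` is shown below; `SessionStart` and `PreCompact` are standard Claude Code hook events, but the `clawmem hook …` subcommands are hypothetical names chosen to mirror the entry points under `src/hooks/`:

```json
{
  "hooks": {
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": "clawmem hook session-bootstrap" }] }
    ],
    "PreCompact": [
      { "hooks": [{ "type": "command", "command": "clawmem hook precompact-extract" }] }
    ]
  }
}
```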

## Acknowledgments

Built on the shoulders of:

- [A-MEM](https://arxiv.org/abs/2510.02178) — self-evolving memory architecture
- [Beads](https://github.com/steveyegge/beads) — Dolt-backed issue tracker for AI agents
- [claude-mem](https://github.com/thedotmack/claude-mem) — Claude Code memory integration reference
- [Engram](https://github.com/Gentleman-Programming/engram) — observation dedup window, topic-key upsert pattern, temporal timeline navigation, duplicate metadata scoring signals
- [MAGMA](https://arxiv.org/abs/2501.13956) — multi-graph memory agent
- [memory-lancedb-pro](https://github.com/CortexReach/memory-lancedb-pro) — retrieval gate, length normalization, MMR diversity, access reinforcement algorithms
- [OpenViking](https://github.com/volcengine/OpenViking) — query decomposition patterns, collection-scoped retrieval, transaction-safe indexing
- [QMD](https://github.com/tobi/qmd) — search backend (BM25 + vectors + RRF + reranking)
- [SAME](https://github.com/sgx-labs/statelessagent) — agent memory concepts (recency decay, confidence scoring, session tracking)
- [supermemory](https://github.com/supermemoryai/clawdbot-supermemory) — hook patterns and context surfacing ideas

## License

MIT