clawmem 0.8.1 → 0.8.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/AGENTS.md CHANGED
@@ -94,9 +94,9 @@ curl http://host:8090/v1/models
  | `CLAWMEM_NO_LOCAL_MODELS` | `false` | Blocks `node-llama-cpp` from auto-downloading GGUF models. Set `true` for remote-only setups. |
  | `CLAWMEM_VAULTS` | (none) | JSON map of vault name → SQLite path for multi-vault mode. E.g. `{"work":"~/.cache/clawmem/work.sqlite"}` |
  | `CLAWMEM_ENABLE_AMEM` | enabled | A-MEM note construction + link generation during indexing. |
- | `CLAWMEM_ENABLE_CONSOLIDATION` | disabled | Background worker backfills unenriched docs. Needs long-lived MCP process. |
+ | `CLAWMEM_ENABLE_CONSOLIDATION` | disabled | Background worker backfills unenriched docs and runs Phase 2/3 consolidation + deductive synthesis. **v0.8.2:** every tick is wrapped in a DB-backed `worker_leases` row (`light-consolidation` key), so multiple host processes against the same vault cannot race on Phase 2 merge writes. Hosted by either `clawmem watch` (canonical, long-lived) or `clawmem mcp` (per-session fallback). |
  | `CLAWMEM_CONSOLIDATION_INTERVAL` | 300000 | Worker interval in ms (min 15000). |
- | `CLAWMEM_HEAVY_LANE` | disabled | **v0.8.0.** Enable the quiet-window heavy maintenance worker — a second, longer-interval consolidation lane with DB-backed `worker_leases` exclusivity, stale-first batching, and `maintenance_runs` journaling. Runs alongside the light lane; off by default. |
+ | `CLAWMEM_HEAVY_LANE` | disabled | **v0.8.0.** Enable the quiet-window heavy maintenance worker — a second, longer-interval consolidation lane with DB-backed `worker_leases` exclusivity, stale-first batching, and `maintenance_runs` journaling. Runs alongside the light lane; off by default. **v0.8.2:** canonical host is `clawmem watch` (e.g. systemd `clawmem-watcher.service`); `clawmem mcp` retains the same gate as a fallback host but emits a stderr warning advising operators to move heavy-lane hosting to the watcher because per-session stdio MCPs may never be alive during the configured quiet window. |
  | `CLAWMEM_HEAVY_LANE_INTERVAL` | 1800000 | **v0.8.0.** Heavy-lane tick interval in ms (min 30000, default 30 min). |
  | `CLAWMEM_HEAVY_LANE_WINDOW_START` | (none) | **v0.8.0.** Start hour (0-23) of the quiet window. Unset → no window. |
  | `CLAWMEM_HEAVY_LANE_WINDOW_END` | (none) | **v0.8.0.** End hour (0-23, exclusive) of the quiet window. Supports midnight wrap (22→6). |
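The quiet-window rows above (`_WINDOW_START` / `_WINDOW_END`, exclusive end hour, midnight wrap) can be sketched as follows. This is an illustrative reading of the documented semantics, not clawmem's actual code; the function name is hypothetical.

```typescript
// Hypothetical sketch of the documented quiet-window semantics:
// hours are 0-23, the end hour is exclusive, an unset bound means
// "always in window", and start > end wraps across midnight (22→6).
function isInQuietWindow(
  hour: number,
  start: number | null,
  end: number | null,
): boolean {
  if (start === null || end === null) return true; // no window configured
  if (start < end) return hour >= start && hour < end; // same-day window
  return hour >= start || hour < end; // midnight wrap: 22→6 covers 22, 23, 0..5
}
```

With start=22 and end=6, hours 23 and 2 are inside the window, hour 6 is excluded (exclusive end), and hour 10 is outside.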
package/CLAUDE.md CHANGED
@@ -94,9 +94,9 @@ curl http://host:8090/v1/models
  | `CLAWMEM_NO_LOCAL_MODELS` | `false` | Blocks `node-llama-cpp` from auto-downloading GGUF models. Set `true` for remote-only setups. |
  | `CLAWMEM_VAULTS` | (none) | JSON map of vault name → SQLite path for multi-vault mode. E.g. `{"work":"~/.cache/clawmem/work.sqlite"}` |
  | `CLAWMEM_ENABLE_AMEM` | enabled | A-MEM note construction + link generation during indexing. |
- | `CLAWMEM_ENABLE_CONSOLIDATION` | disabled | Background worker backfills unenriched docs. Needs long-lived MCP process. |
+ | `CLAWMEM_ENABLE_CONSOLIDATION` | disabled | Background worker backfills unenriched docs and runs Phase 2/3 consolidation + deductive synthesis. **v0.8.2:** every tick is wrapped in a DB-backed `worker_leases` row (`light-consolidation` key), so multiple host processes against the same vault cannot race on Phase 2 merge writes. Hosted by either `clawmem watch` (canonical, long-lived) or `clawmem mcp` (per-session fallback). |
  | `CLAWMEM_CONSOLIDATION_INTERVAL` | 300000 | Worker interval in ms (min 15000). |
- | `CLAWMEM_HEAVY_LANE` | disabled | **v0.8.0.** Enable the quiet-window heavy maintenance worker — a second, longer-interval consolidation lane with DB-backed `worker_leases` exclusivity, stale-first batching, and `maintenance_runs` journaling. Runs alongside the light lane; off by default. |
+ | `CLAWMEM_HEAVY_LANE` | disabled | **v0.8.0.** Enable the quiet-window heavy maintenance worker — a second, longer-interval consolidation lane with DB-backed `worker_leases` exclusivity, stale-first batching, and `maintenance_runs` journaling. Runs alongside the light lane; off by default. **v0.8.2:** canonical host is `clawmem watch` (e.g. systemd `clawmem-watcher.service`); `clawmem mcp` retains the same gate as a fallback host but emits a stderr warning advising operators to move heavy-lane hosting to the watcher because per-session stdio MCPs may never be alive during the configured quiet window. |
  | `CLAWMEM_HEAVY_LANE_INTERVAL` | 1800000 | **v0.8.0.** Heavy-lane tick interval in ms (min 30000, default 30 min). |
  | `CLAWMEM_HEAVY_LANE_WINDOW_START` | (none) | **v0.8.0.** Start hour (0-23) of the quiet window. Unset → no window. |
  | `CLAWMEM_HEAVY_LANE_WINDOW_END` | (none) | **v0.8.0.** End hour (0-23, exclusive) of the quiet window. Supports midnight wrap (22→6). |
package/README.md CHANGED
@@ -47,66 +47,7 @@ ClawMem turns your markdown notes, project docs, and research dumps into persist
 
  Runs fully local with no API keys and no cloud services. Integrates via Claude Code hooks and MCP tools, as an OpenClaw ContextEngine plugin, or as a Hermes Agent MemoryProvider plugin. All modes share the same vault for cross-runtime memory. Works with any MCP-compatible client.
 
- ### v0.2.0 Enhancements
-
- - **Entity resolution + co-occurrence graph** — LLM entity extraction with quality filters, type-agnostic canonical resolution within [compatibility buckets](docs/internals/entity-resolution.md) (extensible type vocabulary), IDF-based entity edge scoring, co-occurrence tracking, entity graph traversal for ENTITY intent queries
- - **MPFP graph retrieval** — Multi-Path Fact Propagation with meta-path patterns per intent, hop-synchronized edge cache, Forward Push with α=0.15 teleport probability. Replaces single-beam traversal for causal/entity/temporal queries.
- - **Temporal query extraction** — regex-based date range extraction from natural language queries ("last week", "March 2026"), wired as WHERE filters into BM25 and vector search
- - **4-way parallel retrieval** — temporal proximity and entity graph channels added as parallel RRF legs in `query` tool (Tier 3 only), alongside existing BM25 + vector channels
- - **3-tier consolidation** — facts to observations (auto-generated, with proof_count and trend enum) to mental models. Background worker synthesizes clusters of related observations into consolidated patterns.
- - **Observation invalidation** — soft invalidation (invalidated_at/invalidated_by/superseded_by columns). Observations with confidence ≤ 0.2 after contradiction are filtered from search results.
- - **Memory nudge** — periodic ephemeral `<vault-nudge>` injection prompting lifecycle tool use after N turns of inactivity. Configurable via `CLAWMEM_NUDGE_INTERVAL`.
-
- ### v0.7.1 Safety Release
-
- Five independent safety gates around the consolidation pipeline and context surfacing, aimed at preventing contamination, cross-entity merges, and unchecked contradictions from landing in the vault. Every extraction ships with full unit + integration test coverage (+158 tests on top of the v0.7.0 baseline). See [consolidation safety](docs/concepts/architecture.md#consolidation-safety-v071) for the architectural walkthrough.
-
- - **Taxonomy cleanup** — standardized on the A-MEM `contradicts` (plural) convention across the entire codebase, eliminating silent query misses on the legacy singular form
- - **Name-aware merge safety** — the Phase 2 consolidation worker gate extracts entity anchors (via `entity_mentions`, with lexical proper-noun fallback) and runs dual-threshold normalized 3-gram cosine similarity before merging similar observations. Cross-entity merges are hard-rejected when anchor sets differ materially, preventing context bleed where "Alice decided X" merges into "Bob decided X". Thresholds are env-overridable (`CLAWMEM_MERGE_SCORE_NORMAL`=0.93, `_STRICT`=0.98). Dry-run mode via `CLAWMEM_MERGE_GUARD_DRY_RUN` for calibration.
- - **Contradiction-aware merge gate** — after the name-aware gate passes, a deterministic heuristic (negation asymmetry, number/date mismatch) plus an LLM check detect contradictory merges. Blocked merges route to `link` policy (insert new row + `contradicts` edge, default) or `supersede` policy (mark old row `status='inactive'`). Configurable via `CLAWMEM_CONTRADICTION_POLICY` and `CLAWMEM_CONTRADICTION_MIN_CONFIDENCE`. Phase 3 deductive synthesis applies the same gate to deductive dedupe matches.
- - **Anti-contamination deductive synthesis** — every Phase 3 draft runs through a three-layer validator: deterministic pre-checks (empty conclusion, invalid source_indices, pool-only entity contamination via `entity_mentions`) + LLM validator (fail-open with `validatorFallbackAccepts` counter) + dedupe. Per-reason rejection stats exposed via `DeductiveSynthesisStats` so Phase 3 yield can be diagnosed without enabling extra logging.
- - **Context instruction + relationship snippets** — `context-surfacing` now always prepends an `<instruction>` block framing the surfaced facts as background knowledge the model already holds, and appends an optional `<relationships>` block listing memory-graph edges where BOTH endpoints are in the surfaced doc set. The relationships block is the first thing dropped when the payload would overflow `CLAWMEM_PROFILE`'s token budget, preserving facts-first behaviour while giving the model graph-level reasoning hooks directly in-prompt.
-
- ### v0.7.2 Post-Import Conversation Synthesis
-
- Opt-in LLM pass that runs **after** `clawmem mine` finishes indexing an imported collection. Operates on the freshly imported `content_type='conversation'` documents and extracts structured knowledge facts (decisions / preferences / milestones / problems) plus cross-fact relations, writing each fact as a first-class searchable document alongside the raw conversation exchanges. See [post-import synthesis](docs/concepts/architecture.md#post-import-conversation-synthesis-v072) for the architectural walkthrough.
-
- - **New CLI flag** — `clawmem mine <dir> --synthesize [--synthesis-max-docs N]`. Off by default. When omitted, existing mine behaviour is byte-identical to v0.7.1.
- - **Two-pass pipeline** — Pass 1 extracts facts per conversation via the existing LLM, saves each via dedup-aware `saveMemory`, and populates a local alias map. Pass 2 resolves cross-fact links against the local map first, falling back to collection-scoped SQL lookup. Forward references (link to a fact extracted later in the same run) are resolved correctly.
- - **Idempotent reruns** — synthesized fact paths are a pure function of `(sourceDocId, slug(title), short sha256(normalizedTitle))`, so reruns over the same conversation batch hit the `saveMemory` update branch instead of creating parallel rows. Same-slug collisions are disambiguated by the stable hash suffix, not encounter order.
- - **Fail-closed link resolution** — when two different facts claim the same normalized title or alias, the resolver treats the link as ambiguous and counts it unresolved. Pre-existing docs with duplicate titles in the collection do not silently bind either.
- - **Weight-monotonic relation upsert** — `memory_relations` insert uses `ON CONFLICT DO UPDATE SET weight = MAX(weight, excluded.weight)`, which is idempotent on equal-weight reruns but still accepts stronger later evidence without double-counting.
- - **Non-fatal failure model** — any LLM failure, JSON parse error, saveMemory collision, or relation insert error is counted and logged, never re-thrown. Synthesis failure after `indexCollection` commits does not roll back the mine import.
- - **Split operator counters** — `llmFailures` counts actual LLM path failures (null, thrown, non-array JSON), while `docsWithNoFacts` counts docs where the LLM responded validly but returned zero structured facts. Previously these were conflated as `nullCalls`.
-
- Adds +63 tests (46 unit + 5 integration + 12 regression) on top of the v0.7.1 baseline.
-
- ### v0.8.0 Quiet-Window Heavy Maintenance Lane
-
- A second, longer-interval consolidation worker that keeps Phase 2 + Phase 3 running on large vaults without starving interactive sessions. Off by default — set `CLAWMEM_HEAVY_LANE=true` to enable. The existing 5-minute light-lane worker is unchanged. See [heavy maintenance lane](docs/concepts/architecture.md#heavy-maintenance-lane-v080) for the architectural walkthrough.
-
- - **Quiet-window gating** — the heavy lane only runs inside the hours set by `CLAWMEM_HEAVY_LANE_WINDOW_START` / `CLAWMEM_HEAVY_LANE_WINDOW_END` (0-23). Supports midnight wraparound (e.g., 22→6). Null on either bound means "always in window".
- - **Query-rate gating via `context_usage`** — counts hook injections in the last 10 minutes and skips the tick when the rate exceeds `CLAWMEM_HEAVY_LANE_MAX_USAGES` (default 30). No new `query_activity` table; reuses v0.7.0 telemetry.
- - **DB-backed worker leases** — exclusivity enforced via a new `worker_leases` table with atomic `INSERT ... ON CONFLICT DO UPDATE ... WHERE expires_at <= ?` acquisition, random 16-byte fencing tokens, and TTL reclaim. Safe under multi-process contention; any SQLite error translates to a `lease_unavailable` skip rather than a thrown exception.
- - **Stale-first selection** — Phase 2 and Phase 3 reorder their candidate sets by `COALESCE(recall_stats.last_recalled_at, documents.last_accessed_at, documents.modified_at) ASC` so long-unseen docs bubble up first. Empty `recall_stats` falls through to access-time without erroring.
- - **Optional surprisal selector** — `CLAWMEM_HEAVY_LANE_SURPRISAL=true` plumbs k-NN anomaly-ranked doc ids (via the existing `computeSurprisalScores`) into Phase 2 as an explicit `candidateIds` filter. Degrades to stale-first on vaults without embeddings and logs `selector: 'surprisal-fallback-stale'` in the journal.
- - **`maintenance_runs` journal** — every scheduled attempt writes a row: `status` (`started`/`completed`/`failed`/`skipped`), `reason` for skips, selected/processed/created/null_call counts, and a `metrics_json` payload with selector type and full `DeductiveSynthesisStats` breakdown. Operators can reconstruct any lane decision without reading worker logs.
- - **Force-enforce merge gate** — the heavy lane passes `guarded: true` to `consolidateObservations`, which overrides `CLAWMEM_MERGE_GUARD_DRY_RUN` inside `findSimilarConsolidation` so experimenting operators cannot weaken heavy-lane enforcement via env flag.
-
- Adds +56 tests (13 worker-lease + 35 maintenance unit + 8 maintenance integration) on top of the v0.7.2 baseline.
-
- ### v0.8.1 Multi-Turn Prior-Query Lookback
-
- `context-surfacing` now builds its retrieval query from the current prompt plus up to two recent same-session prior prompts, so a short follow-up turn ("do the same for X", "explain the rationale") can still inherit the vocabulary of earlier turns. The raw prompt is persisted in a new nullable `context_usage.query_text` column so future hook ticks can reconstitute the multi-turn query from the DB. See [multi-turn lookback](docs/concepts/architecture.md#multi-turn-prior-query-lookback-v081) for the full walkthrough.
-
- - **Additive schema migration** — new nullable `query_text TEXT` column on `context_usage`, guarded by `PRAGMA table_info`. Pre-v0.8.1 stores get the column added on first open; ad-hoc stores that skip the migration path degrade transparently via a feature-detect WeakMap so `insertUsageFn` never writes a column that doesn't exist.
- - **Discovery path only** — the multi-turn query feeds vector search, BM25, and query expansion. Cross-encoder reranking continues to use the RAW current prompt so relevance scoring is not diluted by older turns, and composite scoring / snippet extraction / dedupe / routing-hint detection all remain on the raw prompt as well.
- - **Privacy-conscious persistence split** — gated skip paths (slash commands, `MIN_PROMPT_LENGTH`, `shouldSkipRetrieval`, heartbeat dedupe) do NOT persist their raw text because those turns are not meaningful user questions and carry a higher sensitivity profile. Post-retrieval empty paths (empty result set, threshold blocked, budget blocked) DO persist so a follow-up turn can still inherit the intent even when the current turn surfaced nothing.
- - **Current-first truncation** — the combined query is clamped to 2000 chars with the current prompt preserved verbatim at the head. Older priors are dropped first when the budget runs out. If the current prompt alone already exceeds the cap, priors are omitted entirely and the current prompt is truncated.
- - **SQL-level self-match guard** — duplicate submits of the same prompt are filtered out of the lookback SELECT via `AND query_text != ?` so a retry burst cannot eat into the 2-prior budget and leave the lookback window underfilled.
- - **10-minute max age, session-scoped** — priors older than 10 minutes or from a different `session_id` are invisible to the lookback. All fallback paths (missing column, DB error, no matching rows) return the current prompt unchanged — the hook never throws on lookback failures.
-
- Adds +27 tests (22 unit + 5 integration) on top of the v0.8.0 baseline.
+ Full version history is in [RELEASE_NOTES.md](RELEASE_NOTES.md). Upgrade instructions for existing vaults are in [docs/guides/upgrading.md](docs/guides/upgrading.md).
 
  ## Architecture
 
@@ -173,7 +114,7 @@ After installing, here's the full journey from zero to working memory:
  | **1. Bootstrap** | Create a vault, index your first collection, embed, install hooks and MCP | `clawmem bootstrap ~/notes --name notes` | One command does it all. Or run each step manually (see below). |
  | **2. Choose models** | Pick embedding + reranker models based on your hardware | 12GB+ VRAM → SOTA stack (zembed-1 + zerank-2). Less → QMD native combo. No GPU → cloud embedding or CPU fallback. | [GPU Services](#gpu-services) |
  | **3. Download models** | Get the GGUF files for your chosen stack | `wget` from HuggingFace, or let `node-llama-cpp` auto-download the QMD native models on first use | [Embedding](#embedding), [LLM Server](#llm-server), [Reranker Server](#reranker-server) |
- | **4. Start services** | Run GPU servers (if using dedicated GPU) and background services | `llama-server` for each model. systemd units for watcher + embed timer. | [systemd services](docs/guides/systemd-services.md) |
+ | **4. Start services** | Run GPU servers (if using dedicated GPU) and background services. Optionally enable the v0.8.2 background maintenance workers in the watcher unit so consolidation + deductive synthesis run automatically. | `llama-server` for each model. systemd units for watcher + embed timer. Drop-in for the watcher to enable workers + tune intervals + set the quiet window. | [systemd services](docs/guides/systemd-services.md), [background workers](docs/guides/systemd-services.md#background-maintenance-workers-v082) |
  | **5. Decide what to index** | Add collections for your projects, notes, research, and domain docs | `clawmem collection add ~/project --name project` | The more relevant markdown you index, the better retrieval works. See [building a rich context field](docs/introduction.md#building-a-rich-context-field). |
  | **6. Connect your agent** | Hook into Claude Code, OpenClaw, Hermes, or any MCP client | `clawmem setup hooks && clawmem setup mcp` for Claude Code. `clawmem setup openclaw` for OpenClaw. Copy `src/hermes/` to Hermes plugins for Hermes. | [Integration](#integration) |
  | **7. Verify** | Confirm everything is working | `clawmem doctor` (full health check) or `clawmem status` (quick index stats) | [Verify Installation](#verify-installation) |
@@ -223,6 +164,8 @@ clawmem embed # Re-embed if upgrading embedding models (not needed f
 
  Routine patch updates (e.g. 0.2.0 → 0.2.1) do not require reindexing.
 
+ For version-specific upgrade notes (opt-in features, optional cleanup steps, verification commands), see [docs/guides/upgrading.md](docs/guides/upgrading.md).
+
  ### Integration
 
  #### Claude Code
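The "drop-in for the watcher" mentioned in step 4 above could look like the following fragment. This is illustrative only: the drop-in path and unit name are assumptions, and only the environment variable names come from the configuration tables in this package; choose values to suit your hardware.

```ini
# Illustrative drop-in: ~/.config/systemd/user/clawmem-watcher.service.d/workers.conf
# Enables both maintenance lanes in the long-lived watcher (the canonical host)
# and confines heavy work to a 22:00-06:00 quiet window.
[Service]
Environment=CLAWMEM_ENABLE_CONSOLIDATION=true
Environment=CLAWMEM_CONSOLIDATION_INTERVAL=300000
Environment=CLAWMEM_HEAVY_LANE=true
Environment=CLAWMEM_HEAVY_LANE_INTERVAL=1800000
Environment=CLAWMEM_HEAVY_LANE_WINDOW_START=22
Environment=CLAWMEM_HEAVY_LANE_WINDOW_END=6
```

Apply with `systemctl --user daemon-reload` followed by restarting the watcher unit.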
package/SKILL.md CHANGED
@@ -85,14 +85,14 @@ curl http://host:8090/v1/models
  | `CLAWMEM_RERANK_URL` | `http://localhost:8090` | Reranker server. Falls back to `node-llama-cpp` if unset + `NO_LOCAL_MODELS=false`. |
  | `CLAWMEM_NO_LOCAL_MODELS` | `false` | Blocks `node-llama-cpp` auto-downloads. Set `true` for remote-only. |
  | `CLAWMEM_ENABLE_AMEM` | enabled | A-MEM note construction + link generation during indexing. |
- | `CLAWMEM_ENABLE_CONSOLIDATION` | disabled | Background worker backfills unenriched docs. Needs long-lived MCP process. |
+ | `CLAWMEM_ENABLE_CONSOLIDATION` | disabled | Light-lane consolidation worker (Phase 1 backfill + Phase 2 merge + Phase 3 deductive synthesis + Phase 4 recall stats). **v0.8.2:** every tick wraps in a `worker_leases` row (`light-consolidation` key) so multiple host processes against the same vault cannot race on Phase 2 merges. Hosted by `clawmem watch` (canonical) or `clawmem mcp` (per-session fallback). |
  | `CLAWMEM_CONSOLIDATION_INTERVAL` | 300000 | Worker interval in ms (min 15000). |
  | `CLAWMEM_MERGE_SCORE_NORMAL` | `0.93` | **v0.7.1.** Phase 2 merge-safety score threshold when candidate and existing anchors align. |
  | `CLAWMEM_MERGE_SCORE_STRICT` | `0.98` | **v0.7.1.** Strictest merge-safety threshold (fallback when anchors are ambiguous). |
  | `CLAWMEM_MERGE_GUARD_DRY_RUN` | `false` | **v0.7.1.** When `true`, Phase 2 merge-safety rejections are logged but not enforced — use for calibration. |
  | `CLAWMEM_CONTRADICTION_POLICY` | `link` | **v0.7.1.** How the merge-time contradiction gate handles a blocked merge. `link` (default) keeps both rows + inserts `contradicts` edge. `supersede` marks the old row `status='inactive'`. |
  | `CLAWMEM_CONTRADICTION_MIN_CONFIDENCE` | `0.5` | **v0.7.1.** Minimum combined heuristic+LLM confidence required before the contradiction gate blocks a merge. |
- | `CLAWMEM_HEAVY_LANE` | disabled | **v0.8.0.** Enable the quiet-window heavy maintenance worker — a second, longer-interval consolidation lane with DB-backed `worker_leases` exclusivity, stale-first batching, and `maintenance_runs` journaling. Runs alongside the light lane. |
+ | `CLAWMEM_HEAVY_LANE` | disabled | **v0.8.0.** Enable the quiet-window heavy maintenance worker — a second, longer-interval consolidation lane with DB-backed `worker_leases` exclusivity, stale-first batching, and `maintenance_runs` journaling. Runs alongside the light lane. **v0.8.2:** canonical host is `clawmem watch` (e.g. systemd `clawmem-watcher.service`); `clawmem mcp` retains the same gate as a fallback host but emits a stderr warning advising operators to move heavy-lane hosting to the watcher because per-session stdio MCPs may never be alive during the configured quiet window. |
  | `CLAWMEM_HEAVY_LANE_INTERVAL` | 1800000 | **v0.8.0.** Heavy-lane tick interval in ms (min 30000, default 30 min). |
  | `CLAWMEM_HEAVY_LANE_WINDOW_START` / `_END` | (none) | **v0.8.0.** Start/end hours (0-23) of the quiet window. Supports midnight wrap (22→6). Null on either bound = always in window. |
  | `CLAWMEM_HEAVY_LANE_MAX_USAGES` | 30 | **v0.8.0.** Max `context_usage` rows in the last 10 min before the heavy lane skips with `reason='query_rate_high'`. |
@@ -767,7 +767,7 @@ clawmem consolidate [--dry-run] # Find and archive duplicate low-confidence docu
  - SAME (composite scoring), MAGMA (intent + graph), A-MEM (self-evolving notes) layer on top of QMD substrate.
  - Three `llama-server` instances on local or remote GPU. Wrapper defaults to `localhost:8088/8089/8090`.
  - `CLAWMEM_NO_LOCAL_MODELS=false` (default) allows in-process fallback. Set `true` for remote-only to fail fast.
- - Consolidation worker (`CLAWMEM_ENABLE_CONSOLIDATION=true`) backfills unenriched docs. Only runs if MCP process stays alive long enough (every 5min).
+ - Consolidation worker (`CLAWMEM_ENABLE_CONSOLIDATION=true`) backfills unenriched docs and runs Phase 2 merge / Phase 3 deductive synthesis. **v0.8.2:** hosted by either `clawmem watch` (long-lived, canonical) or `clawmem mcp` (per-session fallback); every tick acquires a `light-consolidation` `worker_leases` row before doing work, so dual-hosting against the same vault is safe.
  - Beads integration: `syncBeadsIssues()` queries `bd` CLI (Dolt backend, v0.58.0+), creates markdown docs, maps dependency edges into `memory_relations`. Watcher auto-triggers on `.beads/` changes; `beads_sync` MCP for manual sync.
  - HTTP REST API: `clawmem serve [--port 7438]` — optional REST server on localhost. Search, retrieval, lifecycle, and graph traversal. `POST /retrieve` mirrors `memory_retrieve` with auto-routing (keyword/semantic/causal/timeline/hybrid). `POST /search` provides direct mode selection. Bearer token auth via `CLAWMEM_API_TOKEN` env var (disabled if unset).
  - OpenClaw ContextEngine plugin: `clawmem setup openclaw` — registers as native OpenClaw context engine. Dual-mode: shares vault with Claude Code hooks. Uses `before_prompt_build` for retrieval, `afterTurn()` for extraction, `compact()` for pre-compaction + runtime delegation (v0.3.0+, required for OpenClaw v2026.3.28+).
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "clawmem",
- "version": "0.8.1",
+ "version": "0.8.3",
  "description": "On-device context engine and memory for AI agents. Claude Code and OpenClaw. Hooks + MCP server + hybrid RAG search.",
  "type": "module",
  "bin": {
package/src/clawmem.ts CHANGED
@@ -45,6 +45,14 @@ import { enrichResults, reciprocalRankFusion, toRanked, type RankedResult } from
  import { splitDocument } from "./splitter.ts";
  import { getProfile, updateProfile, isProfileStale } from "./profile.ts";
  import { regenerateAllDirectoryContexts } from "./directory-context.ts";
+ import {
+   startConsolidationWorker,
+   stopConsolidationWorker,
+ } from "./consolidation.ts";
+ import {
+   parseHeavyLaneConfigFromEnv,
+   startHeavyMaintenanceWorker,
+ } from "./maintenance.ts";
  import { readHookInput, writeHookOutput, makeEmptyOutput, type HookOutput } from "./hooks.ts";
  import { contextSurfacing } from "./hooks/context-surfacing.ts";
  import { sessionBootstrap } from "./hooks/session-bootstrap.ts";
@@ -1363,13 +1371,74 @@ async function cmdWatch() {
  const dirs = collections.map(col => col.path);
  const s = getStore();
 
+ // v0.8.2 Codex Turn 1 fix: register signal handlers BEFORE any async
+ // startup work or worker startup. Resources are declared as null and
+ // assigned once their respective creators run; the shutdown closure
+ // captures the variable references so updates after registration are
+ // visible. Without this ordering, a SIGTERM arriving during the brief
+ // window between the worker startup banner and the handler registration
+ // would terminate the watcher via the default signal action (exit 143)
+ // instead of running the async drain → release → close sequence.
+ let stopHeavyLane: (() => Promise<void>) | null = null;
+ let watcherHandle: { close: () => void } | null = null;
+ let checkpointTimerHandle: Timer | null = null;
+
+ // Graceful shutdown — stop workers, close watchers, then exit. SIGTERM
+ // handling is critical for systemd `systemctl --user stop` to shut down
+ // cleanly instead of being killed by the unit timeout. Both worker stops
+ // are awaited so any mid-tick worker drains and releases its lease via
+ // its own withWorkerLease finally block before we close the store.
+ const shutdown = async (signal: string) => {
+ console.log(`\n${c.dim}[watch] Received ${signal}, shutting down...${c.reset}`);
+ if (stopHeavyLane) {
+ await stopHeavyLane();
+ stopHeavyLane = null;
+ }
+ await stopConsolidationWorker();
+ if (checkpointTimerHandle) {
+ clearInterval(checkpointTimerHandle);
+ checkpointTimerHandle = null;
+ }
+ if (watcherHandle) {
+ watcherHandle.close();
+ watcherHandle = null;
+ }
+ closeStore();
+ process.exit(0);
+ };
+ process.on("SIGINT", () => { void shutdown("SIGINT"); });
+ process.on("SIGTERM", () => { void shutdown("SIGTERM"); });
+
  console.log(`${c.bold}Watching ${dirs.length} collection(s) for changes...${c.reset}`);
  for (const col of collections) {
  console.log(` ${c.dim}${col.name}: ${col.path}${c.reset}`);
  }
  console.log(`${c.dim}Press Ctrl+C to stop.${c.reset}`);
 
- const watcher = startWatcher(dirs, {
+ // v0.8.2: Light + heavy maintenance lane workers (opt-in via env vars).
+ // Hosting them in `cmdWatch` makes the long-lived watcher service the
+ // canonical host for both lanes — `clawmem-watcher.service` runs 24/7
+ // under systemd, so the heavy lane's quiet-window logic actually sees a
+ // live worker at the configured hours regardless of whether any Claude
+ // Code session is open. `cmdMcp` (stdio MCP) keeps the same env-var
+ // gates as a fallback host, but warns when CLAWMEM_HEAVY_LANE=true
+ // since per-session MCPs are short-lived. Both hosts share the same
+ // DB-backed `worker_leases` exclusivity (heavy lane v0.8.0, light lane
+ // v0.8.2), so running both at once is safe.
+ if (Bun.env.CLAWMEM_ENABLE_CONSOLIDATION === "true") {
+ const llm = getDefaultLlamaCpp();
+ const intervalMs = parseInt(Bun.env.CLAWMEM_CONSOLIDATION_INTERVAL || "300000", 10);
+ console.log(`${c.dim}[watch] Starting consolidation worker (light lane, interval=${intervalMs}ms)${c.reset}`);
+ startConsolidationWorker(s, llm, intervalMs);
+ }
+ if (Bun.env.CLAWMEM_HEAVY_LANE === "true") {
+ const llm = getDefaultLlamaCpp();
+ const cfg = parseHeavyLaneConfigFromEnv();
+ console.log(`${c.dim}[watch] Starting heavy maintenance lane worker${c.reset}`);
+ stopHeavyLane = startHeavyMaintenanceWorker(s, llm, cfg);
+ }
+
+ watcherHandle = startWatcher(dirs, {
  debounceMs: 2000,
  onChanged: async (fullPath, event) => {
  // Find which collection this belongs to
@@ -1424,45 +1493,12 @@ async function cmdWatch() {
  },
  });
 
- // Skill vault watcher: watch _clawmem-skills/ content root if configured
- let skillWatcher: { close: () => void } | null = null;
- try {
- const { getVaultPath, getSkillContentRoot } = await import("./config.ts");
- const { resolveStore } = await import("./store.ts");
- const skillVaultPath = getVaultPath("skill");
- const skillRoot = getSkillContentRoot();
-
- if (skillVaultPath && existsSync(skillRoot)) {
- const skillStore = resolveStore("skill");
- console.log(`${c.bold}Watching skill vault content root...${c.reset}`);
- console.log(` ${c.dim}skill: ${skillRoot} → ${skillVaultPath}${c.reset}`);
-
- skillWatcher = startWatcher([skillRoot], {
- debounceMs: 2000,
- onChanged: async (fullPath, event) => {
- const relativePath = fullPath.slice(skillRoot.length + 1);
- console.log(`${c.dim}[${event}]${c.reset} skill/${relativePath}`);
-
- const stats = await indexCollection(skillStore, "skill-observations", skillRoot, "**/*.md");
- if (stats.added > 0 || stats.updated > 0 || stats.removed > 0) {
- console.log(` skill: +${stats.added} ~${stats.updated} -${stats.removed}`);
- }
- },
- onError: (err) => {
- console.error(`${c.red}Skill watch error: ${err.message}${c.reset}`);
- },
- });
- }
- } catch {
- // Skill vault not configured — skip
- }
-
  // Periodic WAL checkpoint: the watcher holds a long-lived DB connection which
  // prevents SQLite auto-checkpoint from shrinking the WAL file. Without this,
  // the WAL grows unbounded (observed 77MB+), slowing every concurrent DB access
  // (hooks, MCP) and eventually causing UserPromptSubmit hook timeouts.
  const WAL_CHECKPOINT_INTERVAL = 5 * 60 * 1000; // 5 minutes
- const checkpointTimer = setInterval(() => {
+ checkpointTimerHandle = setInterval(() => {
  try {
  s.db.exec("PRAGMA wal_checkpoint(PASSIVE)");
  } catch {
@@ -1470,16 +1506,7 @@ async function cmdWatch() {
1470
1506
  }
1471
1507
  }, WAL_CHECKPOINT_INTERVAL);
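
The WAL-checkpoint comment above describes a pattern worth isolating: a long-lived connection defeats SQLite's auto-checkpoint, so the host runs a periodic PASSIVE checkpoint itself. A minimal sketch of that loop, with the `exec` callback injected in place of the real `s.db.exec` so it is testable without SQLite — the function name here is illustrative, not the package's API:

```typescript
// Periodic PASSIVE WAL checkpoint loop (sketch). PASSIVE never blocks
// readers or writers; a checkpoint that cannot complete simply retries
// on the next interval.
function startWalCheckpointLoop(
  exec: (sql: string) => void,
  intervalMs: number,
): () => void {
  const timer = setInterval(() => {
    try {
      exec("PRAGMA wal_checkpoint(PASSIVE)");
    } catch {
      // Best-effort: a failed checkpoint is retried next interval.
    }
  }, intervalMs);
  // Don't let the timer alone keep the process alive (mirrors unref() above).
  (timer as { unref?: () => void }).unref?.();
  return () => clearInterval(timer);
}
```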
 
- // Keep running until Ctrl+C
- process.on("SIGINT", () => {
- clearInterval(checkpointTimer);
- watcher.close();
- skillWatcher?.close();
- closeStore();
- process.exit(0);
- });
-
- // Block forever
+ // Block forever; shutdown is driven by signal handlers registered above.
  await new Promise(() => {});
  }
 
@@ -17,6 +17,7 @@ import type { LlamaCpp } from "./llm.ts";
  import { extractJsonFromLLM } from "./amem.ts";
  import { hashContent } from "./indexer.ts";
  import { passesMergeSafety } from "./text-similarity.ts";
+ import { withWorkerLease } from "./worker-lease.ts";
  import {
  checkContradiction,
  isActionableContradiction,
@@ -166,22 +167,68 @@ let consolidationTimer: Timer | null = null;
  let isRunning = false;
  let tickCount = 0;
 
+ /**
+ * DB-backed worker lease name for the light consolidation lane (v0.8.2).
+ * Distinct from the heavy-maintenance lane's lease so both lanes can hold
+ * independent exclusivity against the same SQLite vault without colliding.
+ */
+ export const DEFAULT_LIGHT_LANE_WORKER_NAME = "light-consolidation";
+
+ /**
+ * Default worker-lease TTL for the light lane (10 min). A tick normally
+ * finishes in seconds, but Phase 2 consolidation + Phase 3 deductive
+ * synthesis can stack many LLM calls under worst-case conditions. A 10-min
+ * ceiling covers that case without leaving a stranded lease forever if the
+ * process is SIGKILL'd mid-tick — the next worker reclaims it atomically
+ * via the single-statement upsert in `acquireWorkerLease` once the TTL
+ * has elapsed.
+ */
+ export const DEFAULT_LIGHT_LANE_LEASE_TTL_MS = 10 * 60 * 1000;
+
+ /**
+ * Options for a single consolidation tick (v0.8.2). All fields optional;
+ * omitting the bag reproduces pre-v0.8.2 behavior except for the newly
+ * added DB-backed lease wrap, which is always on.
+ *
+ * - `workerName` overrides the lease name (default "light-consolidation").
+ * Tests should pass a unique name to avoid cross-test
+ * contention with other suites running in the same bun
+ * process.
+ * - `leaseTtlMs` overrides the lease TTL. Tests use short TTLs (e.g.
+ * 100 ms with a past `now`) to exercise expiry reclaim
+ * without real delay.
+ */
+ export interface ConsolidationTickOptions {
+ workerName?: string;
+ leaseTtlMs?: number;
+ }
+
  // =============================================================================
  // Worker Functions
  // =============================================================================
 
  /**
- * Starts the consolidation worker that enriches documents missing A-MEM metadata
- * and periodically consolidates observations.
+ * Starts the consolidation worker that enriches documents missing A-MEM
+ * metadata and periodically consolidates observations.
  *
- * @param store - Store instance with A-MEM methods
- * @param llm - LLM instance for memory note construction
- * @param intervalMs - Tick interval in milliseconds (default: 300000 = 5 min)
+ * v0.8.2: every tick is wrapped in a DB-backed worker lease (see
+ * `runConsolidationTick`), so multiple host processes running this worker
+ * against the same vault cannot run Phase 2 merge / Phase 3 deductive
+ * synthesis concurrently. The tick still uses an in-process `isRunning`
+ * reentrancy guard that fires before the lease round-trip, so the common
+ * case (single process, overlapping timer fires) is handled without
+ * touching SQLite.
+ *
+ * @param store - Store instance with A-MEM methods
+ * @param llm - LLM instance for memory note construction
+ * @param intervalMs - Tick interval in milliseconds (default 300000 = 5 min)
+ * @param opts - Optional lease overrides (worker name, TTL)
  */
  export function startConsolidationWorker(
  store: Store,
  llm: LlamaCpp,
- intervalMs: number = 300000
+ intervalMs: number = 300000,
+ opts: ConsolidationTickOptions = {},
  ): void {
  // Clamp interval to minimum 15 seconds
  const interval = Math.max(15000, intervalMs);
@@ -190,7 +237,7 @@ export function startConsolidationWorker(
 
  // Set up periodic tick
  consolidationTimer = setInterval(async () => {
- await tick(store, llm);
+ await runConsolidationTick(store, llm, opts);
  }, interval);
 
  // Use unref() to avoid blocking process exit
@@ -200,55 +247,133 @@
  }
 
  /**
- * Stops the consolidation worker.
+ * Stops the consolidation worker. Async since v0.8.2 — clears the interval
+ * AND awaits any in-flight tick before resolving, so callers (signal
+ * handlers, test fixtures) can safely close the store afterward without
+ * yanking the DB out from under a mid-tick worker. The wait is bounded by
+ * `STOP_DRAIN_TIMEOUT_MS` (15s) so a pathologically stuck tick cannot
+ * wedge shutdown indefinitely; if the timeout fires, the function logs
+ * and returns anyway (the next process will reclaim the stale lease via
+ * the v0.8.0 `worker_leases` TTL upsert).
  */
- export function stopConsolidationWorker(): void {
+ export async function stopConsolidationWorker(): Promise<void> {
  if (consolidationTimer) {
  clearInterval(consolidationTimer);
  consolidationTimer = null;
+ console.log("[consolidation] Worker stop signaled — draining in-flight tick");
+ }
+ const deadline = Date.now() + STOP_DRAIN_TIMEOUT_MS;
+ while (isRunning && Date.now() < deadline) {
+ await new Promise<void>((resolve) => setTimeout(resolve, 50));
+ }
+ if (isRunning) {
+ console.log(
+ `[consolidation] Worker stop drain timed out after ${STOP_DRAIN_TIMEOUT_MS}ms — tick still running`,
+ );
+ } else {
  console.log("[consolidation] Worker stopped");
  }
  }
 
  /**
- * Single worker tick: A-MEM backfill + periodic observation consolidation.
+ * v0.8.2: bounded wait for in-flight light-lane tick during shutdown.
+ * 15 seconds is more than enough for Phase 1 + Phase 4 to drain (the
+ * cheap phases) and lets Phase 2/3 mid-flight LLM calls finish naturally
+ * in most environments. Stuck-tick scenarios (e.g. unreachable LLM with
+ * no socket timeout) fall back to the v0.8.0 worker_leases TTL reclaim.
+ */
+ const STOP_DRAIN_TIMEOUT_MS = 15_000;
+
+ /**
+ * Run one consolidation tick: Phase 1 (A-MEM backfill) → Phase 2 (observation
+ * consolidation, every 6th tick) → Phase 3 (deductive synthesis, every 3rd
+ * tick) → Phase 4 (recall stats recomputation, every tick).
+ *
+ * v0.8.2 — wrapped in a DB-backed worker lease so at most one host process
+ * ticks at a time against the same vault, symmetric with the v0.8.0 heavy
+ * maintenance lane's `worker_leases` exclusivity pattern. Phase 2 is the
+ * race-sensitive phase Codex flagged in the v0.8.2 pre-rollout review:
+ * without the lease, two concurrent workers could both INSERT a new
+ * consolidated observation for the same cluster, or both merge into the
+ * same existing row and lose source_ids from the read-modify-write update
+ * in `mergeIntoExistingConsolidation`.
+ *
+ * An in-process reentrancy guard (`isRunning`) fires before the lease
+ * round-trip, so overlapping setInterval timer fires from the same process
+ * do not incur a SQLite round-trip per skip.
+ *
+ * Returns `{ acquired }` so integration tests (and the setInterval wrapper)
+ * can distinguish ticks that did real work from ticks skipped by the lease
+ * or reentrancy gate.
+ *
+ * Exported in v0.8.2 so tests can drive individual ticks directly without
+ * spinning up the setInterval loop.
  */
- async function tick(store: Store, llm: LlamaCpp): Promise<void> {
- // Reentrancy guard
+ export async function runConsolidationTick(
+ store: Store,
+ llm: LlamaCpp,
+ opts: ConsolidationTickOptions = {},
+ ): Promise<{ acquired: boolean }> {
+ // In-process reentrancy guard: catches overlapping setInterval fires in
+ // the same process before we hit SQLite. Cheap; the lease is the
+ // cross-process authority.
  if (isRunning) {
- console.log("[consolidation] Skipping tick (already running)");
- return;
+ console.log("[consolidation] Skipping tick (already running in-process)");
+ return { acquired: false };
  }
 
- isRunning = true;
- tickCount++;
+ const workerName = opts.workerName ?? DEFAULT_LIGHT_LANE_WORKER_NAME;
+ const leaseTtlMs = opts.leaseTtlMs ?? DEFAULT_LIGHT_LANE_LEASE_TTL_MS;
 
+ isRunning = true;
  try {
- // Phase 1: A-MEM backfill (every tick)
- await backfillAmem(store, llm);
+ const lease = await withWorkerLease(
+ store,
+ workerName,
+ leaseTtlMs,
+ async () => {
+ tickCount++;
+ try {
+ // Phase 1: A-MEM backfill (every tick)
+ await backfillAmem(store, llm);
+
+ // Phase 2: Observation consolidation (every 6th tick — ~30 min
+ // at default interval). Race-sensitive — see doc comment above.
+ if (tickCount % 6 === 0) {
+ await consolidateObservations(store, llm);
+ }
 
- // Phase 2: Observation consolidation (every 6th tick, ~30 min at default interval)
- if (tickCount % 6 === 0) {
- await consolidateObservations(store, llm);
- }
+ // Phase 3: Deductive synthesis (every 3rd tick, ~15 min).
+ // Writes are mostly idempotent on the hash-stable path but the
+ // anti-contamination validator still burns LLM calls, so
+ // running two workers in parallel is pure cost.
+ if (tickCount % 3 === 0) {
+ await generateDeductiveObservations(store, llm);
+ }
 
- // Phase 3: Deductive synthesis (every 3rd tick, ~15 min at default interval)
- if (tickCount % 3 === 0) {
- await generateDeductiveObservations(store, llm);
- }
+ // Phase 4: Recall stats recomputation (every tick; lightweight
+ // SQL aggregation). Non-critical: recall stats are
+ // informational, not retrieval-blocking.
+ try {
+ const updated = store.recomputeRecallStats();
+ if (updated > 0) {
+ console.log(`[consolidation] Phase 4: recomputed recall_stats for ${updated} docs`);
+ }
+ } catch (err) {
+ console.error("[consolidation] Phase 4 recall stats failed:", err);
+ }
+ } catch (err) {
+ console.error("[consolidation] Tick failed:", err);
+ }
+ },
+ );
 
- // Phase 4: Recall stats recomputation (every tick — lightweight SQL aggregation)
- try {
- const updated = store.recomputeRecallStats();
- if (updated > 0) {
- console.log(`[consolidation] Phase 4: recomputed recall_stats for ${updated} docs`);
- }
- } catch (err) {
- // Non-critical — recall stats are informational, not retrieval-blocking
- console.error("[consolidation] Phase 4 recall stats failed:", err);
+ if (!lease.acquired) {
+ console.log(
+ `[consolidation] Skipping tick (lease '${workerName}' held by another worker)`,
+ );
  }
- } catch (err) {
- console.error("[consolidation] Tick failed:", err);
+ return { acquired: lease.acquired };
  } finally {
  isRunning = false;
  }
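
The TTL-reclaim behavior the doc comments above describe can be sketched with an in-memory stand-in for the `worker_leases` table. Everything below is illustrative — the real implementation is a single-statement SQLite upsert in `acquireWorkerLease`, not a `Map`:

```typescript
// Sketch of TTL-based worker-lease acquisition (names hypothetical).
type Lease = { holder: string; expiresAt: number };

const leases = new Map<string, Lease>();

// Acquire succeeds when no lease exists, the caller already holds it, or
// the existing lease's TTL has elapsed (stale-lease reclaim after a
// SIGKILL'd holder), mirroring the upsert semantics described above.
function acquireLease(
  name: string,
  holder: string,
  ttlMs: number,
  now: number, // injected clock so expiry is testable without real delay
): boolean {
  const cur = leases.get(name);
  if (cur && cur.expiresAt > now && cur.holder !== holder) return false;
  leases.set(name, { holder, expiresAt: now + ttlMs });
  return true;
}
```

Tests in the package reportedly use exactly this shape of injected clock plus short TTL to exercise expiry reclaim.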
package/src/entity.ts CHANGED
@@ -161,6 +161,53 @@ function makeEntityId(name: string, type: string, vault: string = 'default'): st
  return `${vault}:${type}:${normalized}`;
  }
 
+ // =============================================================================
+ // Entity Cap (content-type-aware, §1.5 v0.8.3)
+ // =============================================================================
+
+ /**
+ * Per-content-type entity cap applied to LLM extraction output.
+ *
+ * Long-form content (research dumps, conversation synthesis, hub/index docs)
+ * legitimately mentions more distinct entities than short decision records or
+ * handoff notes. A flat cap of 10 silently dropped real entities on long-form
+ * documents. This map lets each content type keep its full entity set up to a
+ * type-appropriate ceiling, while short types stay tight to suppress LLM noise.
+ *
+ * Unknown or untyped documents fall through to the default cap of 10 (matches
+ * pre-v0.8.3 behavior — backward compatible for any caller that doesn't pass
+ * a contentType).
+ */
+ const ENTITY_CAP_BY_TYPE: Record<string, number> = {
+ research: 15, // long-form research dumps
+ hub: 12, // architecture docs, indexes
+ conversation: 12, // synthesized conversation exports
+ decision: 8, // short decision records
+ deductive: 8, // inferred observations
+ note: 8, // session notes
+ handoff: 8, // session handoffs
+ progress: 8, // progress logs
+ project: 10, // generic project content
+ };
+
+ /**
+ * Return the entity cap for a given content type. Falls back to 10 for
+ * undefined or unknown types (pre-v0.8.3 behavior).
+ *
+ * Input is trimmed + lowercased before lookup so values from hand-authored
+ * frontmatter or older imported docs (e.g. "Research", " conversation ") map
+ * cleanly to the canonical lowercase keys in `ENTITY_CAP_BY_TYPE`. The DB
+ * `documents.content_type` column is not normalized at the write boundary,
+ * so normalization has to happen here to avoid silent fall-through to the
+ * default cap of 10.
+ */
+ export function entityCapForContentType(contentType?: string): number {
+ if (!contentType) return 10;
+ const key = contentType.trim().toLowerCase();
+ if (!key) return 10;
+ return ENTITY_CAP_BY_TYPE[key] ?? 10;
+ }
+
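
Since `entityCapForContentType` is self-contained, its normalization and fallback behavior can be shown directly. The map and function below are copied from the diff above; only the trailing usage comments are added:

```typescript
// Per-content-type entity cap (copied from the diff; values unchanged).
const ENTITY_CAP_BY_TYPE: Record<string, number> = {
  research: 15,
  hub: 12,
  conversation: 12,
  decision: 8,
  deductive: 8,
  note: 8,
  handoff: 8,
  progress: 8,
  project: 10,
};

// Trim + lowercase before lookup; anything unknown or empty falls back to 10.
export function entityCapForContentType(contentType?: string): number {
  if (!contentType) return 10;
  const key = contentType.trim().toLowerCase();
  if (!key) return 10;
  return ENTITY_CAP_BY_TYPE[key] ?? 10;
}

entityCapForContentType(" Research "); // → 15 (trimmed + lowercased)
entityCapForContentType("decision");   // → 8
entityCapForContentType("unknown");    // → 10 (fallback)
entityCapForContentType(undefined);    // → 10 (pre-v0.8.3 behavior)
```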
  // =============================================================================
  // Entity Extraction (LLM-based)
  // =============================================================================
@@ -168,14 +215,25 @@ function makeEntityId(name: string, type: string, vault: string = 'default'): st
  /**
  * Extract named entities from document content using LLM.
  * Returns a list of (name, type) pairs.
+ *
+ * @param contentType Optional document content_type. When provided, caps the
+ * returned entity list using `entityCapForContentType`. When omitted, uses
+ * the default cap of 10 (backward compatible).
  */
  export async function extractEntities(
  llm: LLM,
  title: string,
- content: string
+ content: string,
+ contentType?: string
  ): Promise<ExtractedEntity[]> {
  const truncated = content.slice(0, 2000);
 
+ // v0.8.3 (§1.5): compute the cap up front so we can thread it into BOTH
+ // the prompt ("0-N entities") and the post-LLM slice. Without the dynamic
+ // prompt, a compliant model stops at the hardcoded 10 even when we'd
+ // accept 15 — the slice becomes a no-op and §1.5 is only half-effective.
+ const cap = entityCapForContentType(contentType);
+
  const prompt = `Extract named entities from this document. Include people, projects, services, tools, organizations, and specific technical components.
 
  Title: ${title}
@@ -189,7 +247,7 @@ Return ONLY valid JSON array:
  Rules:
  - Only include specific, named entities (not generic concepts like "database" or "testing")
  - Normalize names: "VM 202" not "vm202", "ClawMem" not "clawmem"
- - 0-10 entities. Return empty array [] if no specific entities found
+ - 0-${cap} entities. Return empty array [] if no specific entities found
  - Include the most specific type for each entity
  - Do NOT extract the document's title as an entity
  - Do NOT extract heading labels, section names, or sentence fragments
@@ -217,7 +275,7 @@ Return ONLY the JSON array. /no_think`;
  ['person', 'project', 'service', 'tool', 'concept', 'org', 'location'].includes(e.type)
  )
  .filter(e => !isLowQualityEntity(e.name, e.type, title))
- .slice(0, 10);
+ .slice(0, cap);
  } catch (err) {
  console.log(`[entity] LLM extraction failed:`, err);
  return [];
@@ -454,12 +512,14 @@ export async function enrichDocumentEntities(
  ): Promise<number> {
  try {
  // Get document content (snapshot for extraction)
+ // v0.8.3 (§1.5): fetch content_type so extractEntities can apply a
+ // content-type-aware cap instead of the flat slice(0, 10).
  const doc = db.prepare(`
- SELECT d.title, c.doc as body
+ SELECT d.title, d.content_type, c.doc as body
  FROM documents d
  JOIN content c ON c.hash = d.hash
  WHERE d.id = ? AND d.active = 1
- `).get(docId) as { title: string; body: string } | null;
+ `).get(docId) as { title: string; content_type: string | null; body: string } | null;
 
  if (!doc) {
  console.log(`[entity] Document ${docId} not found or inactive`);
@@ -478,8 +538,8 @@
  return 0; // Same input, already enriched — skip
  }
 
- // Step 1: Extract entities via LLM
- const entities = await extractEntities(llm, doc.title, doc.body);
+ // Step 1: Extract entities via LLM (cap is content-type-aware as of v0.8.3 §1.5)
+ const entities = await extractEntities(llm, doc.title, doc.body, doc.content_type ?? undefined);
 
  // Recheck input hash before writing — abort if content changed during LLM call
  const recheckHash = db.prepare(`
@@ -77,6 +77,37 @@ const DEFAULT_CONFIG: Required<Omit<HeavyMaintenanceConfig, "workerName" | "cloc
 
  const DEFAULT_WORKER_NAME = "heavy-maintenance";
 
+ /**
+ * Parse a `HeavyMaintenanceConfig` from `Bun.env` (v0.8.2). Shared by every
+ * host that can start the heavy lane (`cmdMcp` in mcp.ts, `cmdWatch` in
+ * clawmem.ts) so the env var convention stays in one place. Each field is
+ * left undefined when its env var is unset, so `DEFAULT_CONFIG` continues
+ * to drive any field the operator did not explicitly override.
+ */
+ export function parseHeavyLaneConfigFromEnv(): HeavyMaintenanceConfig {
+ return {
+ intervalMs: Bun.env.CLAWMEM_HEAVY_LANE_INTERVAL
+ ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_INTERVAL, 10)
+ : undefined,
+ windowStartHour: Bun.env.CLAWMEM_HEAVY_LANE_WINDOW_START
+ ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_WINDOW_START, 10)
+ : null,
+ windowEndHour: Bun.env.CLAWMEM_HEAVY_LANE_WINDOW_END
+ ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_WINDOW_END, 10)
+ : null,
+ maxContextUsagesPer10m: Bun.env.CLAWMEM_HEAVY_LANE_MAX_USAGES
+ ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_MAX_USAGES, 10)
+ : undefined,
+ staleObservationLimit: Bun.env.CLAWMEM_HEAVY_LANE_OBS_LIMIT
+ ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_OBS_LIMIT, 10)
+ : undefined,
+ staleDeductiveLimit: Bun.env.CLAWMEM_HEAVY_LANE_DED_LIMIT
+ ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_DED_LIMIT, 10)
+ : undefined,
+ useSurprisalSelector: Bun.env.CLAWMEM_HEAVY_LANE_SURPRISAL === "true",
+ };
+ }
+
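
The unset-stays-undefined convention above reduces to one helper. This is a hedged sketch — `intFromEnv` is not part of the package, and the env record is injected instead of `Bun.env` so the sketch runs anywhere; it assumes the downstream merge keeps defaults for undefined fields, as the doc comment above states:

```typescript
// Unset or empty env var → undefined, so the default for that field stays
// in force at merge time; a set var → parsed integer override.
function intFromEnv(
  env: Record<string, string | undefined>,
  key: string,
): number | undefined {
  const raw = env[key];
  return raw ? parseInt(raw, 10) : undefined;
}
```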
  // =============================================================================
  // Journal helpers
  // =============================================================================
@@ -503,7 +534,7 @@ export function startHeavyMaintenanceWorker(
  store: Store,
  llm: LlamaCpp,
  cfg: HeavyMaintenanceConfig = {},
- ): () => void {
+ ): () => Promise<void> {
  const merged = { ...DEFAULT_CONFIG, ...cfg };
  // Clamp interval to minimum 30 seconds so buggy configs can't pin the CPU.
  const interval = Math.max(30_000, merged.intervalMs);
@@ -530,11 +561,35 @@
  }, interval);
  heavyTimer.unref();
 
- return () => {
+ // v0.8.2: async stop handle. Clears the timer AND awaits any in-flight
+ // tick before resolving, so callers can safely close the store afterward
+ // without yanking the DB from under a mid-tick worker. Bounded wait —
+ // a pathologically stuck tick cannot wedge shutdown indefinitely; the
+ // worker_leases TTL upsert reclaims any stranded lease on the next
+ // process startup.
+ return async () => {
  if (heavyTimer) {
  clearInterval(heavyTimer);
  heavyTimer = null;
+ console.log("[heavy-lane] Worker stop signaled — draining in-flight tick");
+ }
+ const deadline = Date.now() + HEAVY_STOP_DRAIN_TIMEOUT_MS;
+ while (heavyRunning && Date.now() < deadline) {
+ await new Promise<void>((resolve) => setTimeout(resolve, 50));
+ }
+ if (heavyRunning) {
+ console.log(
+ `[heavy-lane] Worker stop drain timed out after ${HEAVY_STOP_DRAIN_TIMEOUT_MS}ms — tick still running`,
+ );
+ } else {
  console.log("[heavy-lane] Worker stopped");
  }
  };
  }
+
+ /**
+ * v0.8.2 — bounded wait for in-flight heavy-lane tick during shutdown.
+ * 30 seconds covers a Phase 2 + Phase 3 stack with reasonable LLM latencies
+ * before falling back to the worker_leases TTL reclaim path.
+ */
+ const HEAVY_STOP_DRAIN_TIMEOUT_MS = 30_000;
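
Both stop handles (light and heavy lane) share the same bounded-drain shape: clear the timer, then poll an in-flight flag until it clears or a deadline passes. A minimal sketch — the `isRunning` getter stands in for the module-level `isRunning`/`heavyRunning` booleans, and the function name is illustrative:

```typescript
// Bounded drain: resolves true when the in-flight flag cleared before the
// deadline, false when the wait timed out (caller proceeds anyway and
// relies on the worker_leases TTL reclaim for any stranded lease).
async function drainInFlightTick(
  isRunning: () => boolean,
  timeoutMs: number,
  pollMs = 50,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (isRunning() && Date.now() < deadline) {
    await new Promise<void>((resolve) => setTimeout(resolve, pollMs));
  }
  return !isRunning();
}
```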
package/src/mcp.ts CHANGED
@@ -39,7 +39,10 @@ import { classifyIntent, decomposeQuery, extractTemporalConstraint, type IntentT
  import { adaptiveTraversal, mergeTraversalResults, mpfpTraversal } from "./graph-traversal.ts";
  import { getDefaultLlamaCpp } from "./llm.ts";
  import { startConsolidationWorker, stopConsolidationWorker } from "./consolidation.ts";
- import { startHeavyMaintenanceWorker, type HeavyMaintenanceConfig } from "./maintenance.ts";
+ import {
+ parseHeavyLaneConfigFromEnv,
+ startHeavyMaintenanceWorker,
+ } from "./maintenance.ts";
  import { listVaults, loadVaultConfig } from "./config.ts";
  import { getEntityGraphNeighbors, searchEntities } from "./entity.ts";
 
@@ -2595,8 +2598,37 @@ This is the recommended entry point for ALL memory queries.`,
  await server.connect(transport);
 
  // ---------------------------------------------------------------------------
- // Consolidation Worker
- // ---------------------------------------------------------------------------
+ // Shutdown wiring + Workers
+ // ---------------------------------------------------------------------------
+
+ // v0.8.2 Codex Turn 2 fix: register signal handlers BEFORE any worker
+ // startup, mirroring the same null-handle capture pattern that cmdWatch
+ // uses. The handler is the only thing that suppresses Node's default
+ // signal action (terminate), so a SIGTERM arriving in the brief window
+ // between worker startup and `process.on(...)` registration would
+ // exit-143 the process and skip the async drain entirely, leaking any
+ // lease the worker had just acquired. Capturing `stopHeavyLane` as a
+ // mutable closure variable lets the registration happen before the
+ // worker is actually created — the handler reads whatever value is
+ // bound at the moment a signal arrives.
+ let stopHeavyLane: (() => Promise<void>) | null = null;
+
+ // Signal handlers for graceful shutdown. Async stop sequence: both
+ // worker stops await any in-flight tick before resolving so the store
+ // is not closed underneath a mid-tick worker. Bounded waits inside the
+ // stop functions guarantee the handler cannot wedge indefinitely.
+ const shutdownMcp = async (signal: string) => {
+ console.error(`\n[mcp] Received ${signal}, shutting down...`);
+ if (stopHeavyLane) {
+ await stopHeavyLane();
+ stopHeavyLane = null;
+ }
+ await stopConsolidationWorker();
+ closeAllStores();
+ process.exit(0);
+ };
+ process.on("SIGINT", () => { void shutdownMcp("SIGINT"); });
+ process.on("SIGTERM", () => { void shutdownMcp("SIGTERM"); });
 
  // Start consolidation worker if enabled
  if (Bun.env.CLAWMEM_ENABLE_CONSOLIDATION === "true") {
@@ -2609,49 +2641,25 @@ This is the recommended entry point for ALL memory queries.`,
  // longer interval than the light lane, only inside a configurable quiet
  // window, and gated by context_usage query-rate so interactive sessions
  // are never starved. Off by default.
- let stopHeavyLane: (() => void) | null = null;
+ //
+ // v0.8.2: warn when this lane is enabled on a stdio MCP host. Per-session
+ // MCPs spawned by Claude Code die with the session, which means the
+ // configured quiet window may never see a live worker if no Claude Code
+ // session is open at that time. The watcher service (`clawmem watch`) is
+ // the canonical long-lived host for the heavy lane as of v0.8.2 — see
+ // docs/concepts/architecture.md and docs/guides/upgrading.md for the
+ // dual-host rationale.
  if (Bun.env.CLAWMEM_HEAVY_LANE === "true") {
+ console.error(
+ "[mcp] WARNING: CLAWMEM_HEAVY_LANE=true on a stdio MCP host. " +
+ "Per-session MCPs are short-lived; the configured quiet window may " +
+ "never see a live worker. As of v0.8.2 the canonical heavy-lane host " +
+ "is `clawmem watch` (e.g. systemd user unit clawmem-watcher.service). " +
+ "Set the same env var on the watcher service for reliable operation.",
+ );
  const llm = getDefaultLlamaCpp();
- const cfg: HeavyMaintenanceConfig = {
- intervalMs: Bun.env.CLAWMEM_HEAVY_LANE_INTERVAL
- ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_INTERVAL, 10)
- : undefined,
- windowStartHour: Bun.env.CLAWMEM_HEAVY_LANE_WINDOW_START
- ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_WINDOW_START, 10)
- : null,
- windowEndHour: Bun.env.CLAWMEM_HEAVY_LANE_WINDOW_END
- ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_WINDOW_END, 10)
- : null,
- maxContextUsagesPer10m: Bun.env.CLAWMEM_HEAVY_LANE_MAX_USAGES
- ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_MAX_USAGES, 10)
- : undefined,
- staleObservationLimit: Bun.env.CLAWMEM_HEAVY_LANE_OBS_LIMIT
- ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_OBS_LIMIT, 10)
- : undefined,
- staleDeductiveLimit: Bun.env.CLAWMEM_HEAVY_LANE_DED_LIMIT
- ? parseInt(Bun.env.CLAWMEM_HEAVY_LANE_DED_LIMIT, 10)
- : undefined,
- useSurprisalSelector: Bun.env.CLAWMEM_HEAVY_LANE_SURPRISAL === "true",
- };
- stopHeavyLane = startHeavyMaintenanceWorker(store, llm, cfg);
+ stopHeavyLane = startHeavyMaintenanceWorker(store, llm, parseHeavyLaneConfigFromEnv());
  }
-
- // Signal handlers for graceful shutdown
- process.on("SIGINT", () => {
- console.error("\n[mcp] Received SIGINT, shutting down...");
- stopConsolidationWorker();
- if (stopHeavyLane) stopHeavyLane();
- closeAllStores();
- process.exit(0);
- });
-
- process.on("SIGTERM", () => {
- console.error("\n[mcp] Received SIGTERM, shutting down...");
- stopConsolidationWorker();
- if (stopHeavyLane) stopHeavyLane();
- closeAllStores();
- process.exit(0);
- });
  }
 
package/src/store.ts CHANGED
@@ -1543,6 +1543,10 @@ export function createStore(dbPath?: string, opts?: { readonly?: boolean; busyTi
 
  // Usage relation tracking — records relations between documents
  insertRelation: (fromDoc: number, toDoc: number, relType: string, weight: number = 1.0) => {
+ // v0.8.3 (§1.3): reject self-loops at the API boundary. A document
+ // relating to itself has no informational value for graph traversal
+ // and would pollute intent_search/find_similar neighborhoods.
+ if (fromDoc === toDoc) return;
  db.prepare(`
  INSERT INTO memory_relations (source_id, target_id, relation_type, weight, created_at)
  VALUES (?, ?, ?, ?, ?)
@@ -4224,6 +4228,10 @@
  const targetRow = db.prepare(`SELECT doc_id FROM beads_issues WHERE beads_id = ?`).get(dep.target_id) as { doc_id: number } | undefined;
 
  if (sourceRow && targetRow) {
+ // v0.8.3 (§1.3): mirror of insertRelation self-loop guard. Beads can
+ // theoretically express a self-dependency (e.g. a `relates-to` edge
+ // from an issue to itself); skip those before they land in the graph.
+ if (sourceRow.doc_id === targetRow.doc_id) continue;
  db.prepare(`
  INSERT OR IGNORE INTO memory_relations (source_id, target_id, relation_type, weight, metadata, created_at)
  VALUES (?, ?, ?, 1.0, ?, ?)
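
The two guards above enforce the same invariant at two write paths. Pulled out as a predicate (a hypothetical helper, not in the package), the rule is small enough to test directly:

```typescript
// §1.3 invariant: a relation is insertable only when it connects two
// distinct documents. Self-loops add no traversal information and would
// pollute intent_search / find_similar neighborhoods.
function isInsertableRelation(sourceId: number, targetId: number): boolean {
  return sourceId !== targetId;
}

isInsertableRelation(7, 7); // → false (self-loop, rejected)
isInsertableRelation(7, 8); // → true
```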