@xdarkicex/openclaw-memory-libravdb 1.3.13 → 1.3.17

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -10,6 +10,14 @@ $$
  \text{continuity} \neq \text{semantic summary alone}
  $$

+ This document also defines a proposed lossless extension to the current model.
+ That extension is inspired by the immutable-store and expandable-summary
+ architecture in the LCM paper, "Lossless Context Management"
+ ([Ehrlich and Blackman, 2026](https://papers.voltropy.com/LCM)). Where this
+ document adopts that idea directly, it cites the paper explicitly. The
+ mathematical notation below is adapted to this repository's existing
+ invariant/tail/retrieval decomposition rather than copied from the paper.
+
  Instead, continuity is modeled as the composition of:

  $$
@@ -157,6 +165,10 @@ means the runtime may extend $T_{\mathrm{recent}}$ slightly backward to keep a
  recent cause/effect pair, request/response pair, or equivalent tightly coupled
  artifact bundle intact.

+ **Policy note.** Bundle coupling is a heuristic policy layer, not a formal
+ theorem term. It is listed in Section 13.4 as a heuristic and is not part of
+ the core $C_{\mathrm{total}}(q)$ assembly theorem.
+
  ## 4. Budget Partition

  Let the total prompt budget be $\tau$. Then the continuity-aware allocation is:
@@ -267,6 +279,7 @@ The boundary must also be bundle-safe. If a cluster candidate would split a
  tightly coupled local unit across the tail boundary, the runtime should move the
  boundary backward so that the unit stays entirely in $T_{\mathrm{recent}}$ or
  entirely in $\mathcal{V}_{\mathrm{rest}}$.
+ *(This is a heuristic policy; see Section 13.4.)*
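A minimal sketch of the backward move, assuming a hypothetical `bundleId` tag on turns and an index-based boundary; the plugin's real tail selector is not shown in this document, and only the backward direction is illustrated:

```typescript
// Sketch: if the proposed tail boundary would split a coupled bundle,
// move the boundary backward so the whole bundle stays in T_recent.
// The SessionTurn shape and bundleId tagging are illustrative assumptions.
interface SessionTurn {
  index: number;
  bundleId?: string; // shared id marks a tightly coupled local unit
}

function bundleSafeBoundary(turns: SessionTurn[], proposed: number): number {
  // `proposed` is the index of the first turn inside T_recent.
  let boundary = proposed;
  const bundleAt = (i: number) => turns[i]?.bundleId;
  // While the turn just before the boundary shares a bundle with the first
  // tail turn, extend the tail backward to keep the bundle intact.
  while (
    boundary > 0 &&
    bundleAt(boundary) !== undefined &&
    bundleAt(boundary - 1) === bundleAt(boundary)
  ) {
    boundary -= 1;
  }
  return boundary;
}
```

For a request/response pair tagged with the same `bundleId`, a proposed boundary that lands between the two turns is walked back until the pair is entirely inside the tail.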

  ## 6. Compaction Progress Guarantee

@@ -309,6 +322,12 @@ $$

  whenever a cluster is actually replaced.

+ **Edge case — singleton clusters.** If a cluster contains only a single turn
+ ($|C_j| = 1$), the clustering algorithm produces a `trivial`-tagged summary that
+ does not represent meaningful compaction progress. The $\Delta_{\mathrm{compact}} > 0$
+ guarantee applies only to clusters with $|C_j| \ge 2$ that are meaningfully replaced;
+ trivial singletons are boundary cases excluded from the progress invariant.
+
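The exclusion above can be sketched as a simple eligibility filter; the `Cluster` shape is a hypothetical illustration, not the clustering algorithm's real type:

```typescript
// Sketch: only clusters with at least two turns count toward the
// Delta_compact > 0 progress guarantee. Singleton clusters would yield
// trivial-tagged summaries and are excluded from the invariant.
interface Cluster {
  turns: string[]; // turn ids covered by this chronological cluster
}

function eligibleForProgress(clusters: Cluster[]): Cluster[] {
  return clusters.filter((c) => c.turns.length >= 2);
}
```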
  ## 7. Summary Lineage And Recoverability

  Continuity improves when summary nodes are not opaque replacements but
@@ -347,6 +366,96 @@ $$
  This does not replace retrieval scoring. It guarantees that compressed history
  remains inspectable and attributable.

+ ## 7.5 Lossless Recoverability Extension
+
+ The current implementation stores lineage metadata for summaries, but it does
+ not yet preserve a fully immutable raw session store after compaction. A
+ stronger continuity contract is to treat compaction summaries as derived views
+ over immutable raw history rather than destructive replacements. This is the
+ main architectural idea adopted from the LCM paper's immutable store, summary
+ DAG, and bounded expansion model
+ ([Ehrlich and Blackman, 2026](https://papers.voltropy.com/LCM)).
+
+ Let the raw session history be:
+
+ $$
+ \mathcal{R}_{\mathrm{session}}=\langle r_1,r_2,\dots,r_n\rangle
+ $$
+
+ where each $r_i$ is a raw persisted turn and raw-history persistence is
+ append-only:
+
+ $$
+ \mathrm{Compact}(\mathcal{R}_{\mathrm{session}})=\mathcal{R}_{\mathrm{session}}
+ $$
+
+ Compaction instead constructs a summary-node set:
+
+ $$
+ \mathbf{S}=\{s_1,s_2,\dots\}
+ $$
+
+ and a parent relation:
+
+ $$
+ E_{\triangleleft}\subseteq (\mathbf{S}\times\mathbf{S})\cup(\mathbf{S}\times\mathcal{R}_{\mathrm{session}})
+ $$
+
+ where an edge $(s,x)\in E_{\triangleleft}$ means summary node $s$ directly
+ covers child node $x$, with $x$ either a raw turn or a lower-order summary.
+
+ The resulting continuity graph is:
+
+ $$
+ \mathcal{G}_{\mathrm{cont}}=(\mathbf{S}\cup\mathcal{R}_{\mathrm{session}}, E_{\triangleleft})
+ $$
+
+ with the intended acyclicity invariant:
+
+ $$
+ \mathcal{G}_{\mathrm{cont}} \text{ is a DAG}
+ $$
+
+ Define recursive expansion to leaf raw turns:
+
+ $$
+ \mathrm{Expand}^{*}(x)=
+ \begin{cases}
+ \{x\} & \text{if } x\in\mathcal{R}_{\mathrm{session}} \\
+ \bigcup_{y:(x,y)\in E_{\triangleleft}} \mathrm{Expand}^{*}(y) & \text{if } x\in\mathbf{S}
+ \end{cases}
+ $$
+
+ Then lossless recoverability means:
+
+ $$
+ \forall s\in\mathbf{S},\ \mathrm{Expand}^{*}(s)\neq\emptyset
+ $$
+
+ and:
+
+ $$
+ \forall r\in\mathcal{R}_{\mathrm{session}},\ \exists x\in \mathbf{S}\cup T_{\mathrm{recent}} \text{ such that } r\in \mathrm{Expand}^{*}(x)
+ $$
+
+ Operationally, this means compaction may change which nodes are injected or
+ searched first, but it must not erase the ability to navigate back to the raw
+ turns covered by a summary.
+
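The recursive definition of $\mathrm{Expand}^{*}$ can be sketched directly; the node-id and edge-map shapes below are illustrative assumptions, with a visited-set guard standing in for the DAG invariant:

```typescript
// Sketch of Expand*: expand a node to the raw leaf turns it covers.
// Raw turns expand to themselves; summaries expand to the union of their
// children's expansions. Node ids and the edge map are assumptions.
type NodeId = string;

interface ContinuityGraph {
  rawTurns: Set<NodeId>;            // R_session leaves (immutable)
  children: Map<NodeId, NodeId[]>;  // E_triangleleft: summary -> covered nodes
}

function expandStar(
  g: ContinuityGraph,
  x: NodeId,
  seen = new Set<NodeId>(),
): Set<NodeId> {
  if (g.rawTurns.has(x)) return new Set([x]); // base case: raw turn
  if (seen.has(x)) return new Set();          // guard against cycles
  seen.add(x);
  const leaves = new Set<NodeId>();
  for (const child of g.children.get(x) ?? []) {
    for (const leaf of expandStar(g, child, seen)) leaves.add(leaf);
  }
  return leaves;
}
```

A second-order summary covering a first-order summary plus one raw turn expands to exactly the raw turns beneath both, which is the navigability property the extension requires.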
+ The current repository should treat this as a proposed extension, not as a
+ claim about present behavior. Today the compactor inserts summaries with
+ structured lineage metadata, then deletes the covered source turns from the
+ session collection after successful replacement. A future lossless
+ implementation should separate:
+
+ - immutable raw turn storage
+ - active/searchable summary views
+ - bounded expansion and search over compacted history
+
+ The corresponding data-model change is to add a raw immutable session layer and
+ store summary coverage edges explicitly instead of using lineage metadata alone
+ as the recoverability surface.
+
  ## 8. Continuity-Aware Summarization Input

  Compaction input should be continuity-safe before it reaches the summarizer.
@@ -469,6 +578,19 @@ $$
  No continuity-critical local bundle may be split across the recent-tail and
  compaction boundary.

+ 9. Lossless recoverability when the extension is enabled:
+
+ $$
+ \forall s\in\mathbf{S},\ \mathrm{Expand}^{*}(s)\subseteq\mathcal{R}_{\mathrm{session}}
+ \qquad\text{and}\qquad
+ \mathrm{Expand}^{*}(s)\neq\emptyset
+ $$
+
+ 10. Raw-history immutability when the extension is enabled:
+
+ Compaction may add summary nodes and coverage edges, but it must not delete
+ raw turns from $\mathcal{R}_{\mathrm{session}}$.
+
  ## 12. Practical Interpretation

  In practical terms, continuity for this system is:
@@ -486,3 +608,101 @@ This avoids the failure mode where continuity depends entirely on a semantic
  summary being perfect. It also means compaction is not merely a storage
  optimization. It is a constrained transformation that must preserve exact
  recent state, recoverable lineage, and monotone progress.
+
+ ## 13. Layer Separation And Review Guidance
+
+ The strongest follow-on review result for this document is that the continuity
+ theory is healthiest when it keeps three layers separate:
+
+ 1. storage axioms
+ 2. core retrieval and assembly math
+ 3. recoverability policy
+
+ The authoritative continuity contract in this document should therefore be read
+ as follows.
+
+ ### 13.1 Storage Axioms
+
+ When the lossless extension is enabled, raw-history immutability is a storage
+ axiom:
+
+ $$
+ \mathrm{Compact}(\mathcal{R}_{\mathrm{session}})=\mathcal{R}_{\mathrm{session}}
+ $$
+
+ That statement is unconditional. It does not depend on query relevance,
+ summary confidence, or token budget. It is stronger than lineage metadata or
+ query-time expansion. It simply means compaction does not delete raw source
+ turns from the immutable raw layer.
+
+ ### 13.2 Recoverability Theorem
+
+ The summary-coverage DAG and $\mathrm{Expand}^{*}$ belong to recoverability,
+ not to the primary retrieval theorem. Their job is to guarantee that compacted
+ history remains navigable back to raw source turns:
+
+ $$
+ \forall s\in\mathbf{S},\ \mathrm{Expand}^{*}(s)\subseteq\mathcal{R}_{\mathrm{session}}
+ \qquad\text{and}\qquad
+ \mathrm{Expand}^{*}(s)\neq\emptyset
+ $$
+
+ This is a structural property of the continuity graph. It is not by itself a
+ claim that every query should traverse that graph during normal assembly.
+
+ ### 13.3 Retrieval Boundary
+
+ The core continuity theorem remains:
+
+ $$
+ C_{\mathrm{total}}(q)=\mathcal{I}_1\cup \mathcal{I}_2^{*}\cup T_{\mathrm{recent}}\cup \mathrm{Proj}(\mathcal{V}_{\mathrm{rest}}, q)
+ $$
+
+ This document treats that expression as the primary assembly law. A runtime may
+ experiment with query-time summary expansion, but such expansion should be
+ treated as a bounded policy layer wrapped around the core theorem unless it is
+ formally re-derived inside the governing retrieval math.
+
+ In particular, policy knobs such as:
+
+ - summary-expansion confidence thresholds
+ - expansion token budgets
+ - depth limits
+ - expansion penalties or attenuations
+
+ are not themselves continuity axioms. They are deployment and retrieval-policy
+ choices layered on top of the structural guarantees above.
+
+ ### 13.4 Heuristic vs. Theorem Boundary
+
+ The following ideas remain useful, but should be read as heuristics unless
+ their mathematics is defined explicitly elsewhere:
+
+ - **bundle-safe boundary extension** (Section 3): the runtime may extend
+   $T_{\mathrm{recent}}$ backward to avoid splitting a coupled local bundle;
+   this is a heuristic policy, not a formal tail selector term
+ - specific escalation ladders for compaction fallback
+ - **confidence-triggered automatic expansion**: query-time summary expansion is
+   explicit recovery/audit only; it was removed from the hot retrieval path and
+   is not the default behavior — see Section 13.3 and memory 283
+ - any fixed expansion penalty not derived from the governing score equations
+
+ This distinction matters because continuity should stay theorem-safe even when
+ those policies are tuned, replaced, or disabled.
+
+ ### 13.5 Future Theory Direction
+
+ Several mathematically interesting review suggestions are worth preserving for
+ future refinement, but they are not part of the current authoritative theorem:
+
+ - information-theoretic or rate-distortion views of compaction quality
+ - hot-spot preservation tiers based on access concentration
+ - causal-centrality-aware compaction vetoes
+ - entropy-driven tail selection instead of fixed turn-count rules
+ - explicit recovery-state machines triggered by retrieval failure (the vNext
+   retrieval-failure signals S1/S2/S3 are defined separately in the vNext spec
+   slice; they are not part of the current $C_{\mathrm{total}}$ theorem)
+
+ These are promising research directions for later versions. The current
+ document keeps the simpler invariant-first continuity model as the normative
+ contract until one of those stronger formulations is deliberately adopted.
@@ -36,6 +36,7 @@ bash scripts/build-daemon.sh
  ```

  This creates `.daemon-bin/libravdbd` and copies locally available bundled assets into `.daemon-bin/`.
+ That includes the embedding models, ONNX Runtime, and the bundled T5 summarizer assets when they are present under `.models/`.

  ## Gating Invariants

@@ -0,0 +1,258 @@
+ # Elevated Guidance Model
+
+ This document defines the Tier 1.5 elevated-guidance path that sits between
+ authored invariants and ordinary recalled memory. Its purpose is to preserve
+ high-value "shadow rules" that are too weakly structured for AST promotion but
+ too directive to be allowed to decay into lossy summaries or low-trust recalled
+ memory.
+
+ The design goal is:
+
+ $$
+ \text{preserve high-value guidance without promoting it to Tier 0 invariants}
+ $$
+
+ The elevated-guidance path is therefore:
+
+ - stronger than ordinary semantic recall
+ - weaker than authored hard or soft invariants
+ - assembled separately from `<recalled_memories>`
+ - bounded by its own token reservation so it cannot starve continuity or Tier 0
+
+ ## 1. Protected Summarization
+
+ During compaction, let a chronological cluster be:
+
+ $$
+ C_j = \{ t_1, t_2, \dots, t_m \}
+ $$
+
+ Define a deterministic deontic indicator:
+
+ $$
+ \delta(t_i) \in \{0,1\}
+ $$
+
+ where $\delta(t_i)=1$ means the turn contains guidance-like imperative or
+ prohibitive surface forms detectable by the local deontic frame.
+
+ Let $a_{t_i}\in[0,1]$ be the authored stability weight for a turn. Stable
+ authored sources may set $a_{t_i}=1$, while ordinary session text defaults
+ lower. The ideal shard-protection predicate is:
+
+ $$
+ P_{\mathrm{shard}}(t_i)=
+ \begin{cases}
+ 1 & \text{if } \delta(t_i)=1 \land a_{t_i}\ge\tau_{\mathrm{stable}} \\
+ 0 & \text{otherwise}
+ \end{cases}
+ $$
+
+ For the current first implementation, the runtime uses a conservative
+ deterministic approximation that protects deontic-like turns directly and gates
+ them by a stored stability weight rather than depending on a local model to
+ decide whether preservation should happen.
+
+ The cluster is partitioned into protected shards and compressible turns:
+
+ $$
+ C_j^{\mathrm{protected}}=\{t_i\in C_j \mid P_{\mathrm{shard}}(t_i)=1\}
+ $$
+
+ $$
+ C_j^{\mathrm{compress}}=C_j \setminus C_j^{\mathrm{protected}}
+ $$
+
+ Compaction then becomes:
+
+ $$
+ \mathrm{Compaction}(C_j)=
+ \left\{s_{\mathrm{abstractive}}(C_j^{\mathrm{compress}})\right\}
+ \cup C_j^{\mathrm{protected}}
+ $$
+
+ where the protected shard members survive verbatim as elevated-guidance records
+ instead of being melted into the cluster summary.
+
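A minimal sketch of the deterministic split, assuming a hypothetical `Turn` shape and threshold value; it mirrors $P_{\mathrm{shard}}$ only, not the full compaction pipeline:

```typescript
// Sketch of the shard-protection partition. The Turn shape and the
// tau_stable value are illustrative assumptions.
interface Turn {
  text: string;
  stabilityWeight: number; // a_t in [0,1]
  deontic: boolean;        // delta(t) = 1 when guidance-like
}

const TAU_STABLE = 0.8;    // hypothetical threshold

function partitionCluster(cluster: Turn[]): { protected: Turn[]; compress: Turn[] } {
  const prot: Turn[] = [];
  const compress: Turn[] = [];
  for (const t of cluster) {
    // P_shard(t) = 1 iff delta(t) = 1 and a_t >= tau_stable
    if (t.deontic && t.stabilityWeight >= TAU_STABLE) prot.push(t);
    else compress.push(t);
  }
  return { protected: prot, compress };
}
```

The `protected` half survives verbatim; only the `compress` half is handed to the abstractive summarizer.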
+ In the current implementation, protected records are persisted outside the live
+ session collection into durable elevated-guidance namespaces such as:
+
+ - `elevated:user:<userId>` when user provenance is available
+ - `elevated:session:<sessionId>` as a fallback
+
+ ## 2. Tier 1.5 Admission Gate
+
+ At retrieval time, let $s$ range over the protected-shard records produced by
+ compaction. Elevated guidance is admitted only when both conditions hold:
+
+ 1. the record was structurally protected during compaction
+ 2. the current query is semantically relevant to it
+
+ Formally:
+
+ $$
+ G_{\mathrm{elevated}}(q,s)=
+ \begin{cases}
+ 1 & \text{if } \mathrm{sim}(q,s)>\theta_1 \land s\in\bigcup_j C_j^{\mathrm{protected}} \\
+ 0 & \text{otherwise}
+ \end{cases}
+ $$
+
+ The elevated buffer for query $q$ is:
+
+ $$
+ E(q)=\{s \mid G_{\mathrm{elevated}}(q,s)=1\}
+ $$
+
+ This set is assembled separately from `<recalled_memories>` so it can outrank
+ ordinary semantic recall without claiming the full normative force of authored
+ context.
+
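A minimal sketch of the admission gate; `sim` stands in for the real similarity scorer and the $\theta_1$ value is an assumption:

```typescript
// Sketch of G_elevated: a record is admitted only when it was structurally
// protected at compaction time AND is relevant to the current query.
interface ShardRecord {
  id: string;
  protectedByCompaction: boolean; // membership in union of C_j^protected
}

function elevatedBuffer(
  records: ShardRecord[],
  sim: (id: string) => number, // sim(q, s) for the current query
  theta1 = 0.35,               // hypothetical relevance threshold
): ShardRecord[] {
  // E(q) = { s | G_elevated(q, s) = 1 }
  return records.filter((s) => s.protectedByCompaction && sim(s.id) > theta1);
}
```

Both conditions are conjunctive: a highly relevant but unprotected record stays in ordinary recall, and a protected but irrelevant record is not injected.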
+ ## 3. Assembly Order and Budget
+
+ Let $\tau$ be the total memory prompt budget. The continuity-aware assembly with
+ Tier 1.5 becomes:
+
+ $$
+ C_{\mathrm{total}}(q)=
+ \mathcal{I}_1
+ \cup T_{\mathrm{recent}}
+ \cup \mathcal{I}_2^{*}
+ \cup E^{*}(q)
+ \cup \mathrm{Proj}(\mathcal{V}_{\mathrm{rest}}, q)
+ $$
+
+ where:
+
+ - $\mathcal{I}_1$ is hard authored context
+ - $T_{\mathrm{recent}}$ is the exact preserved raw recent tail
+ - $\mathcal{I}_2^{*}$ is the admitted soft-invariant prefix
+ - $E^{*}(q)$ is the budget-truncated elevated-guidance set
+ - $\mathrm{Proj}(\mathcal{V}_{\mathrm{rest}}, q)$ is ordinary residual semantic recall
+
+ Let $\rho_E\in(0,1)$ reserve a fraction of the prompt for elevated guidance.
+ The effective elevated-guidance token mass is:
+
+ $$
+ \tau_E^{\mathrm{eff}}=
+ \min\!\left(
+ \sum_{s\in E(q)}\mathrm{toks}(s),\,
+ \rho_E\tau
+ \right)
+ $$
+
+ The residual variant budget becomes:
+
+ $$
+ \tau_{\mathcal{V}}=
+ \tau
+ -\tau_{\mathcal{I}_1}
+ -\mathrm{toks}(T_{\mathrm{recent}})
+ -\tau_{\mathcal{I}_2}^{*}
+ -\tau_E^{\mathrm{eff}}
+ $$
+
+ If $\tau_{\mathcal{V}}\le 0$, ordinary semantic recall is intentionally starved
+ before elevated guidance is displaced.
+
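The budget arithmetic above can be sketched numerically; all field names and numbers are hypothetical, and only the formulas mirror the text:

```typescript
// Sketch of the Tier 1.5 budget partition: cap elevated guidance at
// rho_E * tau, then subtract all reserved masses from the total budget.
interface BudgetInputs {
  tau: number;          // total memory prompt budget
  tauI1: number;        // hard authored context mass
  toksRecent: number;   // exact recent tail mass
  tauI2: number;        // admitted soft-invariant prefix mass
  elevatedToks: number; // sum of toks(s) over E(q)
  rhoE: number;         // elevated-guidance reservation fraction
}

function residualVariantBudget(b: BudgetInputs): number {
  // tau_E^eff = min( sum toks(s), rho_E * tau )
  const tauEEff = Math.min(b.elevatedToks, b.rhoE * b.tau);
  // tau_V = tau - tau_I1 - toks(T_recent) - tau_I2* - tau_E^eff
  return b.tau - b.tauI1 - b.toksRecent - b.tauI2 - tauEEff;
}
```

For example, with a 1000-token budget, 200/300/100 tokens already reserved, 250 tokens of elevated guidance, and a 0.15 reservation, the cap binds at 150 tokens and 250 tokens remain for ordinary recall.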
+ ## 4. Trust Boundary
+
+ Tier 1.5 is not a replacement for authored invariants. It is an elevated
+ advisory enclave:
+
+ - authored context still wins on conflict
+ - elevated guidance outranks ordinary semantic recall
+ - ordinary recalled memory remains untrusted historical context
+
+ The intended prompt precedence is:
+
+ 1. authored context
+ 2. recent raw tail
+ 3. elevated guidance
+ 4. recalled memories
+
+ This preserves the Section 11 safety rule that recalled memory must not be
+ followed as instructions while still giving preserved shadow rules more weight
+ than generic historical recall.
+
+ ## 5. Failure Policy
+
+ Protected summarization is deterministic-first and model-optional.
+
+ If a local abstractive model is unavailable, slow, or times out, the system
+ must not fail open to deleting potential shadow rules. The safety rule is:
+
+ $$
+ \text{model failure} \Rightarrow \text{keep deterministic protected shards}
+ $$
+
+ In practical terms:
+
+ - destructive compaction may proceed only after protected shards are persisted
+ - model timeouts may reduce summary quality, but they must not erase the shard set
+ - when in doubt, preserve guidance verbatim rather than compressing it away
+
+ ## 6. Current Runtime Approximation
+
+ The fully general model allows provenance weighting $a_{t_i}$ to distinguish
+ stable authored sources from ordinary session text. The current implementation
+ approximates this with explicit ingest-time metadata:
+
+ - session turns receive a `provenance_class`
+ - session turns receive a `stability_weight`
+ - compaction protects only turns with deontic surface signals and
+   `stability_weight \ge \tau_{\mathrm{stable}}`
+
+ This is enough to make Tier 1.5 durable and provenance-weighted without yet
+ requiring a local model in the admission path.
+
+ ## 7. Additive Local-Model Booster
+
+ The final admission stage may use a local model only as an additive booster.
+ The current implementation reuses the canonical local embedder exposed by the
+ extractive summarizer.
+
+ Let $b_{\mathrm{sem}}(t)\in[0,1]$ be the maximum cosine similarity between turn
+ $t$ and a small fixed set of guidance prototypes:
+
+ $$
+ b_{\mathrm{sem}}(t)=\max_{p\in\mathcal{P}_{\mathrm{guide}}}\cos(\varphi(t),\varphi(p))
+ $$
+
+ This signal is only considered for turns that already satisfy:
+
+ - sufficient stability weight
+ - a lightweight guidance surface hint
+ - failure to pass the strict deterministic deontic gate
+
+ The current rescue condition is therefore:
+
+ $$
+ P_{\mathrm{boost}}(t)=
+ \mathbf{1}\!\left[
+ a_t\ge\tau_{\mathrm{stable}}
+ \land H_{\mathrm{surface}}(t)=1
+ \land \delta(t)=0
+ \land b_{\mathrm{sem}}(t)\ge\tau_{\mathrm{boost}}
+ \right]
+ $$
+
+ and final protection becomes:
+
+ $$
+ P_{\mathrm{final}}(t)=
+ \mathbf{1}\!\left[
+ P_{\mathrm{shard}}(t)=1
+ \;\lor\;
+ P_{\mathrm{boost}}(t)=1
+ \right]
+ $$
+
+ This preserves the key safety invariant:
+
+ $$
+ \text{model assistance may raise borderline candidates, but it is never the sole deletion-safety gate}
+ $$
+
+ If embedding fails or times out, the booster contributes zero and the
+ deterministic path remains authoritative.
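A minimal sketch of $P_{\mathrm{boost}}$ and $P_{\mathrm{final}}$ with assumed thresholds; $b_{\mathrm{sem}}$ is taken as a precomputed input rather than an embedder call:

```typescript
// Sketch of the additive booster combination. Thresholds and the Candidate
// shape are illustrative; the deterministic gate stays authoritative.
interface Candidate {
  stabilityWeight: number; // a_t
  surfaceHint: boolean;    // H_surface(t)
  deontic: boolean;        // delta(t)
  bSem: number;            // max cosine similarity to guidance prototypes
}

const TAU_STABLE = 0.8;
const TAU_BOOST = 0.6;

function finalProtection(t: Candidate): boolean {
  const pShard = t.deontic && t.stabilityWeight >= TAU_STABLE;
  const pBoost =
    t.stabilityWeight >= TAU_STABLE &&
    t.surfaceHint &&
    !t.deontic && // the booster only rescues turns the strict gate missed
    t.bSem >= TAU_BOOST;
  // P_final = P_shard OR P_boost: the model may raise borderline candidates,
  // but it is never the sole deletion-safety gate.
  return pShard || pBoost;
}
```

If embedding fails, `bSem` is simply 0, so `pBoost` is false and the deterministic `pShard` path decides alone.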
@@ -5,13 +5,14 @@ reading the code piecemeal.

  ## Memory Kind Plus Explicit Context Engine Registration

- The plugin declares `kind: "memory"` in
+ The plugin declares `kind: ["memory", "context-engine"]` in
  [`openclaw.plugin.json`](../openclaw.plugin.json), but still registers both a
  context engine and a memory prompt section in [`src/index.ts`](../src/index.ts).

  Why:

- - the exclusive slot takeover happens through the `memory` kind
+ - the intended runtime contract is that `libravdb-memory` owns both the
+   `memory` and `contextEngine` slots together
  - the runtime behavior still needs explicit lifecycle hooks for:
    - `bootstrap`
    - `ingest`
@@ -23,6 +24,63 @@ Why:
  This is why the code registers both `registerContextEngine("libravdb-memory", …)`
  and `registerMemoryPromptSection(...)` instead of relying on only one hook.

+ On newer OpenClaw hosts, [`src/index.ts`](../src/index.ts) also registers
+ `registerMemoryRuntime(...)` as an additive bridge for the built-in
+ `memory_search` tool. That bridge reuses the same sidecar-backed retrieval path
+ instead of introducing a second memory backend.
+
+ ## Why `registerMemoryRuntime` Is Additive
+
+ Implemented in [`src/memory-runtime.ts`](../src/memory-runtime.ts).
+
+ The newer OpenClaw memory runtime seam is useful, but it does not replace the
+ spec-driven architecture in this repository.
+
+ What the runtime bridge does:
+
+ - exposes a search manager for the built-in `memory_search` tool
+ - routes search into the same libraVDB collections already used by the plugin
+ - reports sidecar status through the existing JSON-RPC `status` method
+
+ What it intentionally does not do yet:
+
+ - it does not replace context-engine ingest
+ - it does not replace context-engine compaction
+ - it does not register a host flush plan that could duplicate transcript ingest
+
+ That split is deliberate. The plugin already owns ingest and compaction through
+ the context engine and sidecar, so `registerMemoryRuntime` is safe as a search
+ bridge while `registerMemoryFlushPlan` remains deferred until it can be mapped
+ cleanly onto the existing lifecycle.
+
+ ## Why `before_reset` and `session_end` Stay Advisory
+
+ Implemented in [`src/lifecycle-hooks.ts`](../src/lifecycle-hooks.ts) and
+ [`src/plugin-runtime.ts`](../src/plugin-runtime.ts).
+
+ Newer OpenClaw hosts expose `before_reset` and `session_end` plugin hooks.
+ This plugin uses them, but only as hints into the sidecar.
+
+ Current behavior:
+
+ - `before_reset` forwards session identifiers, reset reason, and observed
+   message count
+ - `session_end` forwards end reason, archive linkage, and follow-on session
+   metadata
+ - the sidecar appends the hint to an internal lifecycle journal and performs a
+   best-effort flush/ack
+
+ Important boundary:
+
+ - these hooks are not the source of truth for memory correctness
+ - failure to deliver them must not break the session
+ - ingest, retrieval, and compaction still belong to the context engine and
+   sidecar runtime we control
+ - lifecycle journal entries live in an internal collection and are only visible
+   through explicit status/debug surfaces such as `openclaw memory journal`
+ - lifecycle retention is bounded and enforced on append by pruning the oldest
+   journal entries first
+
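A minimal sketch of append-time retention, assuming a hypothetical in-memory journal; the real journal lives in an internal sidecar collection:

```typescript
// Sketch of bounded lifecycle-journal retention: the bound is enforced on
// every append by pruning the oldest entries first. Capacity and entry
// shape are illustrative assumptions.
class LifecycleJournal<E> {
  private entries: E[] = [];
  constructor(private readonly maxEntries: number) {}

  append(entry: E): void {
    this.entries.push(entry);
    // Enforce the retention bound at append time, oldest-first.
    while (this.entries.length > this.maxEntries) this.entries.shift();
  }

  snapshot(): readonly E[] {
    return this.entries;
  }
}
```

Enforcing the bound on append keeps the journal size invariant without a separate pruning pass, which matches the advisory, best-effort role of these hooks.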
  ## Why Ingest Is Fire-and-Forget

  Implemented in [`src/context-engine.ts`](../src/context-engine.ts).
package/docs/install.md CHANGED
@@ -33,8 +33,8 @@ openclaw plugins install @xdarkicex/openclaw-memory-libravdb
  ```

  If you use the OpenClaw.ai plugin UI instead of the CLI, install the same
- package and then assign the plugin id `libravdb-memory` to the `memory` slot or
- the `contextEngine` slot.
+ package and then assign the plugin id `libravdb-memory` to both the `memory`
+ and `contextEngine` slots.

  Activate the plugin in `~/.openclaw/openclaw.json`:

@@ -42,7 +42,8 @@ Activate the plugin in `~/.openclaw/openclaw.json`:
  {
    "plugins": {
      "slots": {
-       "memory": "libravdb-memory"
+       "memory": "libravdb-memory",
+       "contextEngine": "libravdb-memory"
      }
    }
  }
@@ -54,7 +55,8 @@ If you run the daemon on a non-default endpoint, add a plugin config:
  {
    "plugins": {
      "slots": {
-       "memory": "libravdb-memory"
+       "memory": "libravdb-memory",
+       "contextEngine": "libravdb-memory"
      },
      "configs": {
        "libravdb-memory": {
@@ -145,7 +147,7 @@ you wrap it in `brew services`, `systemd`, or `launchd`.
  ### Plugin Lifecycle

  - Install the package with `openclaw plugins install`.
- - Activate it by assigning `libravdb-memory` to `memory` or `contextEngine`.
+ - Activate it by assigning `libravdb-memory` to both `memory` and `contextEngine`.
  - Update it with your normal OpenClaw plugin update flow.
  - Disable it by removing the slot assignment from `~/.openclaw/openclaw.json`.