@xdarkicex/openclaw-memory-libravdb 1.3.9 → 1.3.12
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +18 -0
- package/docs/README.md +9 -1
- package/docs/ast-v2.md +125 -0
- package/docs/ast.md +70 -0
- package/docs/compaction-evaluation.md +182 -0
- package/docs/continuity.md +488 -0
- package/docs/contributing.md +1 -1
- package/docs/gating.md +53 -255
- package/docs/installation.md +45 -9
- package/docs/mathematics-v2.md +1228 -0
- package/openclaw.plugin.json +2 -2
- package/package.json +1 -1
- package/src/context-engine.ts +306 -35
- package/src/continuity.ts +93 -0
- package/src/index.ts +1 -1
- package/src/openclaw-plugin-sdk.d.ts +2 -2
- package/src/recall-utils.ts +100 -8
- package/src/scoring.ts +263 -9
- package/src/tokens.ts +1 -1
- package/src/types.ts +33 -2
package/docs/gating.md
CHANGED
@@ -3,327 +3,125 @@
This document describes the ingestion gate used to decide whether a user turn should be promoted into durable `user:` memory. It is the most novel scoring component in the repository.

Implemented in:

- `sidecar/compact/gate.go`
- `sidecar/compact/tokens.go`
- `sidecar/compact/summarize.go` for the downstream abstractive-routing threshold
## 1. Why the Original Scalar Failed

The original scalar assumed conversational memory semantics:

- low novelty meant "already known"
- repetition meant "probably redundant"
- low natural-language structure meant "probably noise"

That logic breaks for technical sessions. Repeated workflow context is often exactly what should be remembered: file paths, APIs, failure signatures, configuration changes, and architectural decisions. In technical work, repetition can indicate persistent work context rather than low value.
## 2. The Convex Mixture

The corrected gate is:

\[ G(t) = (1 - T(t)) \cdot G_{\mathrm{conv}}(t) + T(t) \cdot G_{\mathrm{tech}}(t) \]

where:

\[ G_{\mathrm{conv}}(t) = w_1^c H(t) + w_2^c R(t) + w_3^c D_{nl}(t) \]

\[ G_{\mathrm{tech}}(t) = w_1^t P(t) + w_2^t A(t) + w_3^t D_{\mathrm{tech}}(t) \]

and the domain indicator is bounded:

\[ T(t) \in [0,1] \]

### Weight Invariants

To guarantee that the sub-branch scores remain bounded to $[0,1]$, the configuration must satisfy:

\[ \sum_{i=1}^3 w_i^c = 1 \quad \text{and} \quad \sum_{i=1}^3 w_i^t = 1 \]

Current default weights from `DefaultGatingConfig()`:

- conversational branch: $w_1^c = 0.35$, $w_2^c = 0.40$, $w_3^c = 0.25$
- technical branch: $w_1^t = 0.40$, $w_2^t = 0.35$, $w_3^t = 0.25$
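The blend above can be sketched in a few lines. This is an illustrative Go snippet, not the shipped `gate.go`; the names `gateScore` and `branchScore` and the example signal values are hypothetical.

```go
package main

import "fmt"

// gateScore blends the conversational and technical branch scores using the
// domain indicator t in [0,1]. All inputs are assumed pre-clamped to [0,1].
func gateScore(t, gConv, gTech float64) float64 {
	return (1-t)*gConv + t*gTech
}

// branchScore computes one weighted branch score. Callers must pass weights
// that sum to 1 so the result stays in [0,1].
func branchScore(weights, signals [3]float64) float64 {
	s := 0.0
	for i := 0; i < 3; i++ {
		s += weights[i] * signals[i]
	}
	return s
}

func main() {
	convW := [3]float64{0.35, 0.40, 0.25} // w^c applied to H, R, D_nl
	techW := [3]float64{0.40, 0.35, 0.25} // w^t applied to P, A, D_tech

	// Hypothetical per-turn signal values for illustration only.
	gConv := branchScore(convW, [3]float64{0.9, 0.2, 0.5})
	gTech := branchScore(techW, [3]float64{0.8, 0.6, 0.7})
	fmt.Printf("G(t=0.75) = %.3f\n", gateScore(0.75, gConv, gTech))
}
```

Because the outer blend is convex, the result can never leave the interval spanned by the two branch scores.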
### Boundedness and Continuity

Because $T(t) \in [0,1]$, $G_{\mathrm{conv}}(t) \in [0,1]$, and $G_{\mathrm{tech}}(t) \in [0,1]$, $G(t)$ is a true convex combination bounded to $[0,1]$.

The gate is continuous in $T$:

\[ \frac{\partial G}{\partial T} = G_{\mathrm{tech}} - G_{\mathrm{conv}} \]

There is no discontinuous jump at a domain boundary. A mixed technical/conversational turn interpolates smoothly between the two sub-formulas.
## 3. Domain Detection $T(t)$

Technical density is a weighted sum of technical patterns with saturation:

\[ T(t) = \min\left(\frac{\sum_i s_i \cdot \mathbf{1}[\mathrm{pattern}_i(t)]}{\theta_{\mathrm{norm}}}, 1\right) \]

The shipped patterns include code fences, file paths, function definitions, shell commands, URLs, stack traces, and hashes.

Default normalization is $\theta_{\mathrm{norm}} = 1.5$. This means two strong technical signals are enough to saturate the branch weight. Saturation at `1.0` is correct because the gate only needs the branch mixture weight, not an unbounded "technical magnitude."
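A minimal sketch of the saturating sum, assuming an illustrative pattern table (the names, scores, and the `technicalDensity` helper are hypothetical, not the shipped pattern set):

```go
package main

import "fmt"

// technicalDensity sketches T(t): sum the scores of fired patterns,
// normalize by thetaNorm, and saturate at 1.0.
func technicalDensity(fired map[string]bool, scores map[string]float64, thetaNorm float64) float64 {
	sum := 0.0
	for name, on := range fired {
		if on {
			sum += scores[name]
		}
	}
	t := sum / thetaNorm
	if t > 1 {
		t = 1
	}
	return t
}

func main() {
	scores := map[string]float64{"codeFence": 1.0, "filePath": 0.8, "stackTrace": 1.0}
	fired := map[string]bool{"codeFence": true, "filePath": true}
	// Two strong signals already exceed theta_norm = 1.5 and saturate.
	fmt.Printf("T = %.2f\n", technicalDensity(fired, scores, 1.5))
}
```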
## 4. Conversational Branch

### Novelty $H(t)$

In the live implementation (`sidecar/compact/gate.go`), retrieval scores reaching the gate use the public higher-is-better cosine-style relevance contract from the retrieval layer, spanning $[-1, 1]$ for cosine collections. To ensure the novelty term remains in $[0,1]$ for the convex mixture, the mathematical model applies a zero-clamp:

\[ H(t) = \begin{cases}
1.0 & \text{if } |K| = 0 \\
1 - \frac{1}{|K|} \sum_{k \in K} \max(0, \cos(\vec{v}_t, \vec{v}_k)) & \text{otherwise}
\end{cases} \]

where $K$ is the retrieved nearest-neighbor set from durable `user:` memory.

Properties:

- An empty memory (cold start) safely returns $H = 1.0$ instead of a division by zero.
- Highly similar existing memories ($\cos \to 1$) drive $H \to 0$.
- Negative or orthogonal neighbors are clamped to prevent $H(t) > 1$.
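The case split above reduces to a short function. A sketch under the stated contract, taking precomputed neighbor cosines (the `novelty` name is illustrative):

```go
package main

import "fmt"

// novelty sketches H(t): 1 minus the mean zero-clamped cosine similarity
// to the retrieved neighbor set K. An empty K is the cold-start case.
func novelty(neighborCosines []float64) float64 {
	if len(neighborCosines) == 0 {
		return 1.0 // empty durable memory: fully novel
	}
	sum := 0.0
	for _, c := range neighborCosines {
		if c > 0 { // zero-clamp: negative/orthogonal neighbors add nothing
			sum += c
		}
	}
	return 1 - sum/float64(len(neighborCosines))
}

func main() {
	fmt.Println(novelty(nil))                  // cold start
	fmt.Println(novelty([]float64{0.9, 0.95})) // near-duplicates -> low H
	fmt.Println(novelty([]float64{-0.4, 0.0})) // clamped, H stays <= 1
}
```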
### Repetition Gate $R(t)$

The repetition term is a product, not a sum:

\[ R(t) = F(t) \cdot (1 - S(t)) \]

with:

\[ F(t) = \min\left(\frac{\mathrm{hitsAbove}(\mathrm{turns:userId}, 0.80, k=10)}{5}, 1\right) \]

\[ S(t) = \min\left(\frac{\mathrm{hitsAbove}(\mathrm{user:userId}, 0.85, k=5)}{3}, 1\right) \]

Why a product? High input frequency should help only if durable memory is not already saturated, and high saturation must veto the repetition term regardless of frequency. The veto property is structural: $S(t) = 1 \Rightarrow R(t) = 0$.
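The veto structure can be sketched directly. Illustrative Go, assuming the hit counts are already retrieved; `hitFraction` and `repetition` are hypothetical names, with the normalization caps from the formulas above:

```go
package main

import "fmt"

// hitFraction sketches min(hitsAbove/norm, 1): the count of retrieved hits
// above a similarity threshold, normalized and saturated at 1.
func hitFraction(hits int, norm float64) float64 {
	f := float64(hits) / norm
	if f > 1 {
		f = 1
	}
	return f
}

// repetition sketches R(t) = F(t) * (1 - S(t)): frequency gated by a
// saturation veto. S = 1 forces R = 0 regardless of F.
func repetition(turnHits, memHits int) float64 {
	f := hitFraction(turnHits, 5) // F(t): turns:userId, threshold 0.80, k=10
	s := hitFraction(memHits, 3)  // S(t): user:userId, threshold 0.85, k=5
	return f * (1 - s)
}

func main() {
	fmt.Println(repetition(4, 0)) // frequent, memory unsaturated: high R
	fmt.Println(repetition(4, 3)) // saturated memory vetoes: R = 0
}
```

A sum of $F$ and $(1-S)$ would merely discount saturation; the product makes it an absolute veto.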
### Natural-Language Structural Load $D_{nl}(t)$

Heuristically detects conversational structure: preferences, human-name references, dates, and fact assertions. It is intentionally narrow, so technical identifiers do not inflate the conversational signal.
## 5. Technical Branch

### Specificity $P(t)$

Specificity measures concrete artifact density normalized by turn length:

\[ P(t) = \min\left( \frac{\sum_j p_j \cdot \mathrm{count}_j(t)}{\max(L(t)/100.0, 1.0)}, 1 \right) \]

The numerator counts concrete artifacts such as file paths, error codes, and API endpoints.

The normalization denominator is the token estimator used by the gating subsystem (`sidecar/compact/tokens.go`):

\[ L(t) = \max\left(\left\lfloor \frac{\mathrm{RuneCount}(t)}{4} \right\rfloor, 1\right) \]

Length normalization matters. Without it, any long technical turn would score high simply because it contains more surface area, not because it is more memory-worthy.
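The estimator and the normalized score can be sketched together. This is an illustrative reconstruction of the two formulas, not the shipped `tokens.go`; `weightedCount` stands in for the pattern-weighted sum $\sum_j p_j \cdot \mathrm{count}_j(t)$:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// estimateTokens sketches L(t): roughly 4 runes per token, floored at 1
// so the specificity normalization never divides by zero.
func estimateTokens(t string) int {
	n := utf8.RuneCountInString(t) / 4
	if n < 1 {
		n = 1
	}
	return n
}

// specificity sketches P(t): weighted artifact counts per ~100 tokens,
// saturated at 1.
func specificity(weightedCount float64, turn string) float64 {
	denom := float64(estimateTokens(turn)) / 100.0
	if denom < 1 {
		denom = 1
	}
	p := weightedCount / denom
	if p > 1 {
		p = 1
	}
	return p
}

func main() {
	turn := "fixed the nil-pointer panic in sidecar/compact/gate.go by guarding the empty neighbor set"
	fmt.Println(estimateTokens(turn))
	fmt.Println(specificity(0.6, turn))
}
```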
### Actionability $A(t)$

Captures decision and outcome content: architectural decisions, fixes, merge milestones, and configuration changes. These are the turns that are expensive to reconstruct later and therefore worth persisting.

### Technical Structural Load $D_{\mathrm{tech}}(t)$

Detects structural technical content such as function definitions, dependencies, and tests. It is the technical analogue to $D_{nl}(t)$, not a replacement for it.
## 6. Calibration

For threshold tuning, isotonic regression is the correct calibration method once usefulness labels exist:

\[ P(\mathrm{useful} \mid G) = \mathrm{IsotonicRegression}(G, y) \]

It preserves the monotonic design of the gate without assuming a sigmoid link function.

Current thresholds implemented in code:

- durable promotion: `DefaultGatingConfig().Threshold = 0.35`
- abstractive routing: `AbstractiveRoutingThreshold = 0.60`
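A minimal pool-adjacent-violators sketch of that calibration step, assuming binary usefulness labels already ordered by ascending gate score (the `isotonic` helper is illustrative, not part of the package):

```go
package main

import "fmt"

// isotonic fits a monotone non-decreasing curve to labels y (ordered by
// ascending gate score G) via pool-adjacent-violators. The fitted values
// estimate P(useful | G) without assuming a sigmoid link.
func isotonic(y []float64) []float64 {
	type block struct {
		sum float64
		n   int
	}
	var blocks []block
	for _, v := range y {
		blocks = append(blocks, block{v, 1})
		// Merge while the last block's mean violates monotonicity.
		for len(blocks) > 1 {
			a, b := blocks[len(blocks)-2], blocks[len(blocks)-1]
			if a.sum/float64(a.n) <= b.sum/float64(b.n) {
				break
			}
			blocks = blocks[:len(blocks)-2]
			blocks = append(blocks, block{a.sum + b.sum, a.n + b.n})
		}
	}
	var fit []float64
	for _, b := range blocks {
		mean := b.sum / float64(b.n)
		for i := 0; i < b.n; i++ {
			fit = append(fit, mean)
		}
	}
	return fit
}

func main() {
	// Hypothetical usefulness labels for turns sorted by gating_score.
	fmt.Println(isotonic([]float64{0, 1, 0, 1, 1}))
}
```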
## 7. Invariants

The gate preserves six mathematical invariants, each mapped to `gate_test.go`:

1. **Empty memory implies full novelty:** $\mathrm{memHits} = \emptyset \Rightarrow H = 1.0$. This prevents a cold start from suppressing every first durable insertion.
2. **Saturation vetoes repetition:** $\mathrm{MemSaturation} = 1 \Rightarrow R = 0$. This is what makes the repetition term a true gate instead of an accumulation bonus.
3. **The convex blend stays in bounds:** $G \in [0,1]$
4. **Monotonic interpolation:** $G \in [\min(G_{\mathrm{conv}}, G_{\mathrm{tech}}), \max(G_{\mathrm{conv}}, G_{\mathrm{tech}})]$
5. **Purely conversational turns collapse:** $T = 0 \Rightarrow G = G_{\mathrm{conv}}$
6. **Purely technical turns collapse:** $T = 1 \Rightarrow G = G_{\mathrm{tech}}$

Conversational structure must not overfire on pure code. Together these invariants make the scalar interpretable, stable, and safe to tune.
package/docs/installation.md
CHANGED
@@ -117,6 +117,19 @@ extractive compaction. The only optional runtime network path is:

## Standard Install

### Fastest Path on macOS

```bash
brew tap xDarkicex/openclaw-libravdb-memory
brew install libravdbd
brew services start libravdbd
openclaw plugins install @xdarkicex/openclaw-memory-libravdb
```

This is the preferred install flow for macOS users. It gives you a managed `libravdbd` service and a scanner-clean OpenClaw plugin package.

### Plugin Package

```bash
openclaw plugins install @xdarkicex/openclaw-memory-libravdb
```
@@ -155,7 +168,15 @@ openclaw memory status

### Homebrew / macOS

Homebrew users should normally install from the published tap:

```bash
brew tap xDarkicex/openclaw-libravdb-memory
brew install libravdbd
brew services start libravdbd
```

The release workflow generates a publish-ready `libravdbd.rb` formula asset from [`packaging/homebrew/libravdbd.rb.tmpl`](../packaging/homebrew/libravdbd.rb.tmpl). It is designed for GitHub release assets named:

- `libravdbd-darwin-arm64`
- `libravdbd-darwin-amd64`
@@ -169,7 +190,7 @@ If your GitHub Actions configuration includes:

then tagged releases also push the generated formula into `Formula/libravdbd.rb` in that tap repository automatically.

Example plugin config:
@@ -196,7 +217,7 @@ Installed plugin: libravdb-memory

## Activation

The plugin declares `kind: ["memory", "context-engine"]` and registers for both the `memory` and `context-engine` slots. Either slot assignment activates the plugin.

Add this to `~/.openclaw/openclaw.json`:
@@ -204,8 +225,19 @@ Add this to `~/.openclaw/openclaw.json`:

```json
{
  "plugins": {
    "slots": {
      "memory": "libravdb-memory"
    }
  }
}
```

If your OpenClaw build uses the `contextEngine` slot instead, you can assign it there:

```json
{
  "plugins": {
    "slots": {
      "contextEngine": "libravdb-memory"
    }
  }
}
```
@@ -213,12 +245,10 @@ Add this to `~/.openclaw/openclaw.json`:

Notes:

- Either `memory` or `contextEngine` slot assignment activates the plugin. You do not need both.
- The plugin id is `libravdb-memory`. The npm package name used at install time is `@xdarkicex/openclaw-memory-libravdb`.

Without a slot entry, OpenClaw's default memory can continue to run in parallel.

## Verification
@@ -301,6 +331,12 @@ openclaw memory status

If the daemon is down, start it and verify the configured endpoint:

```bash
brew services start libravdbd
```

Or, without Homebrew:

```bash
libravdbd serve
```