@xdarkicex/openclaw-memory-libravdb 1.4.3 → 1.4.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +76 -16
- package/docs/README.md +3 -12
- package/docs/architecture.md +68 -153
- package/docs/contributing.md +1 -2
- package/openclaw.plugin.json +64 -1
- package/package.json +2 -2
- package/src/cli.ts +34 -0
- package/src/comparison-experiments.ts +128 -0
- package/src/context-engine.ts +276 -62
- package/src/dream-promotion.ts +492 -0
- package/src/dream-routing.ts +40 -0
- package/src/index.ts +16 -1
- package/src/markdown-hash.ts +104 -0
- package/src/markdown-ingest.ts +627 -0
- package/src/memory-runtime.ts +32 -9
- package/src/scoring.ts +6 -3
- package/src/temporal.ts +657 -80
- package/src/types.ts +48 -0
- package/docs/ast-v2.md +0 -167
- package/docs/ast.md +0 -70
- package/docs/compaction-evaluation.md +0 -182
- package/docs/continuity.md +0 -708
- package/docs/elevated-guidance.md +0 -258
- package/docs/gating.md +0 -134
- package/docs/implementation.md +0 -447
- package/docs/mathematics-v2.md +0 -1879
- package/docs/mathematics.md +0 -695
package/docs/elevated-guidance.md
DELETED

@@ -1,258 +0,0 @@

# Elevated Guidance Model

This document defines the Tier 1.5 elevated-guidance path that sits between
authored invariants and ordinary recalled memory. Its purpose is to preserve
high-value "shadow rules" that are too weakly structured for AST promotion but
too directive to be allowed to decay into lossy summaries or low-trust recalled
memory.

The design goal is:

$$
\text{preserve high-value guidance without promoting it to Tier 0 invariants}
$$

The elevated-guidance path is therefore:

- stronger than ordinary semantic recall
- weaker than authored hard or soft invariants
- assembled separately from `<recalled_memories>`
- bounded by its own token reservation so it cannot starve continuity or Tier 0
## 1. Protected Summarization

During compaction, let a chronological cluster be:

$$
C_j = \{ t_1, t_2, \dots, t_m \}
$$

Define a deterministic deontic indicator:

$$
\delta(t_i) \in \{0,1\}
$$

where $\delta(t_i)=1$ means the turn contains guidance-like imperative or
prohibitive surface forms detectable by the local deontic frame.

Let $a_{t_i}\in[0,1]$ be the authored stability weight for a turn. Stable
authored sources may set $a_{t_i}=1$, while ordinary session text defaults
lower. The ideal shard-protection predicate is:

$$
P_{\mathrm{shard}}(t_i)=
\begin{cases}
1 & \text{if } \delta(t_i)=1 \land a_{t_i}\ge\tau_{\mathrm{stable}} \\
0 & \text{otherwise}
\end{cases}
$$

In the first implementation, the runtime uses a conservative deterministic
approximation that protects deontic-like turns directly and gates them by a
stored stability weight rather than depending on a local model to decide
whether preservation should happen.

The cluster is partitioned into protected shards and compressible turns:

$$
C_j^{\mathrm{protected}}=\{t_i\in C_j \mid P_{\mathrm{shard}}(t_i)=1\}
$$

$$
C_j^{\mathrm{compress}}=C_j \setminus C_j^{\mathrm{protected}}
$$

Compaction then becomes:

$$
\mathrm{Compaction}(C_j)=
\left\{s_{\mathrm{abstractive}}(C_j^{\mathrm{compress}})\right\}
\cup C_j^{\mathrm{protected}}
$$

where the protected shard members survive verbatim as elevated-guidance records
instead of being melted into the cluster summary.

In the current implementation, protected records are persisted outside the live
session collection into durable elevated-guidance namespaces such as:

- `elevated:user:<userId>` when user provenance is available
- `elevated:session:<sessionId>` as a fallback
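The predicate and partition above can be sketched directly. This is an illustrative sketch, not the package's actual API: `Turn`, `isDeonticLike`, and the `TAU_STABLE` value are assumptions standing in for the runtime's stored metadata and configured threshold.

```typescript
// Sketch of P_shard and the cluster partition from §1 (names are illustrative).

interface Turn {
  text: string;
  stabilityWeight: number; // a_{t_i} in [0,1]
}

const TAU_STABLE = 0.8; // assumed threshold; the real value is configuration-dependent

// delta(t): crude deontic surface check (imperative/prohibitive forms)
function isDeonticLike(t: Turn): boolean {
  return /\b(always|never|must|do not|don't)\b/i.test(t.text);
}

// P_shard(t) = 1 iff delta(t) = 1 and a_t >= tau_stable
function isProtectedShard(t: Turn): boolean {
  return isDeonticLike(t) && t.stabilityWeight >= TAU_STABLE;
}

// Partition C_j into C_j^protected and C_j^compress
function partitionCluster(cluster: Turn[]): { protected: Turn[]; compress: Turn[] } {
  const prot = cluster.filter(isProtectedShard);
  const compress = cluster.filter((t) => !isProtectedShard(t));
  return { protected: prot, compress };
}
```

Note that both conditions must hold: a deontic-looking turn with a low stability weight is still compressible.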
## 2. Tier 1.5 Admission Gate

At retrieval time, let $s$ range over the protected-shard records produced by
compaction. Elevated guidance is admitted only when both conditions hold:

1. the record was structurally protected during compaction
2. the current query is semantically relevant to it

Formally:

$$
G_{\mathrm{elevated}}(q,s)=
\begin{cases}
1 & \text{if } \mathrm{sim}(q,s)>\theta_1 \land s\in\bigcup_j C_j^{\mathrm{protected}} \\
0 & \text{otherwise}
\end{cases}
$$

The elevated buffer for query $q$ is:

$$
E(q)=\{s \mid G_{\mathrm{elevated}}(q,s)=1\}
$$

This set is assembled separately from `<recalled_memories>` so it can outrank
ordinary semantic recall without claiming the full normative force of authored
context.
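A minimal sketch of the admission gate, under stated assumptions: `ShardRecord`, `cosineSim`, and the `THETA_1` value are illustrative stand-ins for the package's retrieval contract and configured threshold.

```typescript
// Sketch of G_elevated and E(q) from §2: admit only records that were
// structurally protected AND are similar enough to the current query.

interface ShardRecord {
  id: string;
  protectedAtCompaction: boolean;
  embedding: number[];
}

const THETA_1 = 0.55; // assumed similarity threshold theta_1

function cosineSim(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// E(q) = { s | sim(q, s) > theta_1 and s in union of C_j^protected }
function elevatedBuffer(query: number[], records: ShardRecord[]): ShardRecord[] {
  return records.filter(
    (s) => s.protectedAtCompaction && cosineSim(query, s.embedding) > THETA_1,
  );
}
```

Both conditions are conjunctive: high similarity alone never elevates an unprotected record.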
## 3. Assembly Order and Budget

Let $\tau$ be the total memory prompt budget. The continuity-aware assembly with
Tier 1.5 becomes:

$$
C_{\mathrm{total}}(q)=
\mathcal{I}_1
\cup T_{\mathrm{recent}}
\cup \mathcal{I}_2^{*}
\cup E^{*}(q)
\cup \mathrm{Proj}(\mathcal{V}_{\mathrm{rest}}, q)
$$

where:

- $\mathcal{I}_1$ is hard authored context
- $T_{\mathrm{recent}}$ is the exact preserved raw recent tail
- $\mathcal{I}_2^{*}$ is the admitted soft-invariant prefix
- $E^{*}(q)$ is the budget-truncated elevated-guidance set
- $\mathrm{Proj}(\mathcal{V}_{\mathrm{rest}}, q)$ is ordinary residual semantic recall

Let $\rho_E\in(0,1)$ reserve a fraction of the prompt for elevated guidance.
The effective elevated-guidance token mass is:

$$
\tau_E^{\mathrm{eff}}=
\min\!\left(
\sum_{s\in E(q)}\mathrm{toks}(s),\,
\rho_E\tau
\right)
$$

The residual variant budget becomes:

$$
\tau_{\mathcal{V}}=
\tau
-\tau_{\mathcal{I}_1}
-\mathrm{toks}(T_{\mathrm{recent}})
-\tau_{\mathcal{I}_2}^{*}
-\tau_E^{\mathrm{eff}}
$$

If $\tau_{\mathcal{V}}\le 0$, ordinary semantic recall is intentionally starved
before elevated guidance is displaced.
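The budget arithmetic can be worked through with stand-in numbers. The token counts, $\tau$, and $\rho_E$ below are illustrative, not the package's shipped defaults:

```typescript
// Worked sketch of the §3 budget arithmetic.

function elevatedBudget(
  tau: number,          // total memory prompt budget
  rhoE: number,         // elevated-guidance reservation fraction
  elevatedToks: number, // sum of toks(s) over E(q)
): number {
  // tau_E^eff = min(sum toks, rhoE * tau)
  return Math.min(elevatedToks, rhoE * tau);
}

function residualVariantBudget(
  tau: number,
  hardInvariantToks: number, // tau_{I_1}
  recentTailToks: number,    // toks(T_recent)
  softInvariantToks: number, // tau_{I_2}^*
  tauEEff: number,           // tau_E^eff
): number {
  return tau - hardInvariantToks - recentTailToks - softInvariantToks - tauEEff;
}

// Example: an 8000-token budget with a 10% elevated reservation.
const tauEEff = elevatedBudget(8000, 0.1, 1200); // 1200 tokens requested, capped at 800
const tauV = residualVariantBudget(8000, 2000, 3000, 1000, tauEEff);
// If tauV <= 0, ordinary semantic recall is starved before elevated guidance.
```

The cap means elevated guidance can never claim more than its reservation even when many shards match, and the subtraction order mirrors the precedence list in §4.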
## 4. Trust Boundary

Tier 1.5 is not a replacement for authored invariants. It is an elevated
advisory enclave:

- authored context still wins on conflict
- elevated guidance outranks ordinary semantic recall
- ordinary recalled memory remains untrusted historical context

The intended prompt precedence is:

1. authored context
2. recent raw tail
3. elevated guidance
4. recalled memories

This preserves the Section 11 safety rule that recalled memory must not be
followed as instructions while still giving preserved shadow rules more weight
than generic historical recall.
## 5. Failure Policy

Protected summarization is deterministic-first and model-optional.

If a local abstractive model is unavailable, slow, or times out, the system
must not fail open to deleting potential shadow rules. The safety rule is:

$$
\text{model failure} \Rightarrow \text{keep deterministic protected shards}
$$

In practical terms:

- destructive compaction may proceed only after protected shards are persisted
- model timeouts may reduce summary quality, but they must not erase the shard set
- when in doubt, preserve guidance verbatim rather than compressing it away
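The fail-safe ordering can be sketched as follows. All names here (`safeCompact`, `persist`, `summarize`) are illustrative, and the extractive fallback is an assumed stand-in for whatever deterministic summary the runtime actually produces:

```typescript
// Sketch of the §5 ordering: persist protected shards BEFORE any destructive
// step; a summarizer failure degrades the summary but never the shard set.

interface CompactionResult {
  summary: string;           // may be low quality on model failure
  protectedShards: string[]; // always preserved verbatim
}

async function safeCompact(
  protectedShards: string[],
  compressible: string[],
  persist: (shards: string[]) => Promise<void>,
  summarize: (turns: string[]) => Promise<string>,
): Promise<CompactionResult> {
  // 1. Durably persist shards first, so destructive compaction cannot lose them.
  await persist(protectedShards);

  // 2. Summarize the compressible remainder; model failure must not fail open.
  let summary: string;
  try {
    summary = await summarize(compressible);
  } catch {
    // Deterministic fallback: a trivial extractive stand-in, never deletion.
    summary = compressible.join(" ").slice(0, 200);
  }
  return { summary, protectedShards };
}
```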
## 6. Current Runtime Approximation

The fully general model allows provenance weighting $a_{t_i}$ to distinguish
stable authored sources from ordinary session text. The current implementation
approximates this with explicit ingest-time metadata:

- session turns receive a `provenance_class`
- session turns receive a `stability_weight`
- compaction protects only turns with deontic surface signals and
  `stability_weight` $\ge \tau_{\mathrm{stable}}$

This is enough to make Tier 1.5 durable and provenance-weighted without yet
requiring a local model in the admission path.
## 7. Additive Local-Model Booster

The final admission stage may use a local model only as an additive booster.
The current implementation reuses the canonical local embedder exposed by the
extractive summarizer.

Let $b_{\mathrm{sem}}(t)\in[0,1]$ be the maximum cosine similarity between turn
$t$ and a small fixed set of guidance prototypes:

$$
b_{\mathrm{sem}}(t)=\max_{p\in\mathcal{P}_{\mathrm{guide}}}\cos(\varphi(t),\varphi(p))
$$

This signal is only considered for turns that already satisfy:

- sufficient stability weight
- a lightweight guidance surface hint
- failure to pass the strict deterministic deontic gate

The current rescue condition is therefore:

$$
P_{\mathrm{boost}}(t)=
\mathbf{1}\!\left[
a_t\ge\tau_{\mathrm{stable}}
\land H_{\mathrm{surface}}(t)=1
\land \delta(t)=0
\land b_{\mathrm{sem}}(t)\ge\tau_{\mathrm{boost}}
\right]
$$

and final protection becomes:

$$
P_{\mathrm{final}}(t)=
\mathbf{1}\!\left[
P_{\mathrm{shard}}(t)=1
\;\lor\;
P_{\mathrm{boost}}(t)=1
\right]
$$

This preserves the key safety invariant:

$$
\text{model assistance may raise borderline candidates, but it is never the sole deletion-safety gate}
$$

If embedding fails or times out, the booster contributes zero and the
deterministic path remains authoritative.
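The booster composition can be sketched as pure predicates. The thresholds and the `BoostInputs` shape are assumptions for illustration; in particular, `semBoost` stands in for $b_{\mathrm{sem}}(t)$ and is zero whenever embedding fails or times out:

```typescript
// Sketch of P_shard, P_boost, and P_final from §7: the model signal can only
// ADD protection for borderline turns; it never gates the deterministic path.

interface BoostInputs {
  stabilityWeight: number; // a_t
  surfaceHint: boolean;    // H_surface(t) = 1
  deontic: boolean;        // delta(t) = 1
  semBoost: number;        // b_sem(t); 0 on embedding failure or timeout
}

const TAU_STABLE = 0.8; // assumed
const TAU_BOOST = 0.7;  // assumed

function pShard(x: BoostInputs): boolean {
  return x.deontic && x.stabilityWeight >= TAU_STABLE;
}

// P_boost fires only for stable, surface-hinted turns that FAILED the strict gate.
function pBoost(x: BoostInputs): boolean {
  return (
    x.stabilityWeight >= TAU_STABLE &&
    x.surfaceHint &&
    !x.deontic &&
    x.semBoost >= TAU_BOOST
  );
}

// P_final = P_shard OR P_boost: model assistance can rescue, never delete.
function pFinal(x: BoostInputs): boolean {
  return pShard(x) || pBoost(x);
}
```

Because the composition is a disjunction, a deterministic shard stays protected even when `semBoost` collapses to zero.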
package/docs/gating.md
DELETED

@@ -1,134 +0,0 @@

# Domain-Adaptive Gating Scalar

This document describes the ingestion gate used to decide whether a user turn
should be promoted into durable `user:` memory. It is the most novel scoring
component in the repository.

Implemented in:

- `sidecar/compact/gate.go`
- `sidecar/compact/tokens.go`
- `sidecar/compact/summarize.go` for the downstream abstractive-routing threshold
## 1. Why the Original Scalar Failed

The original scalar assumed conversational memory semantics:

- low novelty meant "already known"
- repetition meant "probably redundant"
- low natural-language structure meant "probably noise"

That logic breaks for technical sessions. Repeated workflow context is often
exactly what should be remembered: file paths, APIs, failure signatures,
configuration changes, and architectural decisions. In technical work,
repetition can indicate persistent work context rather than low value.
## 2. The Convex Mixture

The corrected gate is:

\[ G(t) = (1 - T(t)) \cdot G_{\mathrm{conv}}(t) + T(t) \cdot G_{\mathrm{tech}}(t) \]

where:

\[ G_{\mathrm{conv}}(t) = w_1^c H(t) + w_2^c R(t) + w_3^c D_{nl}(t) \]
\[ G_{\mathrm{tech}}(t) = w_1^t P(t) + w_2^t A(t) + w_3^t D_{\mathrm{tech}}(t) \]

and the domain indicator is bounded:

\[ T(t) \in [0,1] \]

### Weight Invariants

To guarantee that the sub-branch scores remain strictly bounded to $[0,1]$, the configuration must satisfy:

\[ \sum_{i=1}^3 w_i^c = 1 \quad \text{and} \quad \sum_{i=1}^3 w_i^t = 1 \]

Current default weights from `DefaultGatingConfig()`:

- conversational branch: $w_1^c = 0.35$, $w_2^c = 0.40$, $w_3^c = 0.25$
- technical branch: $w_1^t = 0.40$, $w_2^t = 0.35$, $w_3^t = 0.25$

### Boundedness and Continuity

Because $T(t) \in [0,1]$, $G_{\mathrm{conv}}(t) \in [0,1]$, and $G_{\mathrm{tech}}(t) \in [0,1]$, $G(t)$ is a true convex combination bounded to $[0,1]$.

The gate is continuous in $T$:

\[ \frac{\partial G}{\partial T} = G_{\mathrm{tech}} - G_{\mathrm{conv}} \]

There is no discontinuous jump at a domain boundary. A mixed technical/conversational turn interpolates smoothly.
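The convex blend is small enough to sketch in full. The branch scores are taken here as already-computed inputs in $[0,1]$; only the weights come from the documented `DefaultGatingConfig()` values, and the function name `gate` is illustrative:

```typescript
// Sketch of the §2 convex mixture with the documented default weights.

const W_CONV = [0.35, 0.4, 0.25]; // w^c for H, R, D_nl
const W_TECH = [0.4, 0.35, 0.25]; // w^t for P, A, D_tech

function gate(
  T: number,                      // domain indicator in [0,1]
  conv: [number, number, number], // [H(t), R(t), D_nl(t)]
  tech: [number, number, number], // [P(t), A(t), D_tech(t)]
): number {
  const gConv = W_CONV[0] * conv[0] + W_CONV[1] * conv[1] + W_CONV[2] * conv[2];
  const gTech = W_TECH[0] * tech[0] + W_TECH[1] * tech[1] + W_TECH[2] * tech[2];
  // G = (1 - T) * G_conv + T * G_tech
  return (1 - T) * gConv + T * gTech;
}
```

Because each weight vector sums to 1, the result is a true convex combination: `T = 0` collapses to the conversational branch, `T = 1` to the technical branch, and mixed turns interpolate between them.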
## 3. Domain Detection $T(t)$

Technical density is a weighted sum of technical patterns with saturation:

\[ T(t) = \min\left(\frac{\sum_i s_i \cdot \mathbf{1}[\mathrm{pattern}_i(t)]}{\theta_{\mathrm{norm}}}, 1\right) \]

The shipped patterns include code fences, file paths, function definitions, shell commands, URLs, stack traces, and hashes.

Default normalization is $\theta_{\mathrm{norm}} = 1.5$. This means two strong technical signals are enough to saturate the branch weight. Saturation at `1.0` is correct because the gate only needs the branch mixture weight, not an unbounded "technical magnitude."
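A sketch of the saturating indicator, assuming an illustrative pattern table: the regexes and per-pattern weights $s_i$ below are stand-ins for the shipped set, while $\theta_{\mathrm{norm}} = 1.5$ matches the documented default:

```typescript
// Sketch of T(t) from §3: weighted pattern indicators with saturation.

const THETA_NORM = 1.5; // documented default

// Each pattern fires at most once per turn and contributes its weight s_i.
const PATTERNS: Array<{ re: RegExp; weight: number }> = [
  { re: /```/, weight: 1.0 },                               // code fence
  { re: /\/[\w.-]+\/[\w.-]+/, weight: 0.8 },                // file path
  { re: /\bfunc\s+\w+\(|\bfunction\s+\w+\(/, weight: 0.8 }, // function definition
  { re: /https?:\/\//, weight: 0.5 },                       // URL
];

function technicalDensity(turn: string): number {
  let sum = 0;
  for (const p of PATTERNS) {
    if (p.re.test(turn)) sum += p.weight;
  }
  return Math.min(sum / THETA_NORM, 1);
}
```

With these weights, any two strong signals already exceed 1.5 and saturate the indicator at 1, exactly the behavior the default $\theta_{\mathrm{norm}}$ is chosen for.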
## 4. Conversational Branch

### Novelty $H(t)$

In the live implementation (`sidecar/compact/gate.go`), retrieval scores reaching the gate use the public higher-is-better cosine-style relevance contract from the retrieval layer, spanning $[-1, 1]$ for cosine collections. To ensure the novelty term remains in $[0,1]$ for the convex mixture, the mathematical model applies a zero-clamp:

\[ H(t) = \begin{cases}
1.0 & \text{if } |K| = 0 \\
1 - \frac{1}{|K|} \sum_{k \in K} \max(0, \cos(\vec{v}_t, \vec{v}_k)) & \text{otherwise}
\end{cases} \]

where $K$ is the retrieved nearest-neighbor set from durable `user:` memory.

Properties:

- An empty memory (cold start) safely returns $H = 1.0$ instead of dividing by zero.
- Highly similar existing memories ($\cos \to 1$) drive $H \to 0$.
- Negative or orthogonal neighbors are clamped to prevent $H(t) > 1$.

### Repetition Gate $R(t)$

The repetition term is a product, not a sum:

\[ R(t) = F(t) \cdot (1 - S(t)) \]

with:

\[ F(t) = \min\left(\frac{\mathrm{hitsAbove}(\mathrm{turns:u}, 0.80, k=10)}{5}, 1\right) \]
\[ S(t) = \min\left(\frac{\mathrm{hitsAbove}(\mathrm{user:u}, 0.85, k=5)}{3}, 1\right) \]

where $u$ is the resolved durable namespace used by the host boundary. The
resolver chooses $u$ in this order: explicit `userId`, then the
session-key-derived namespace, then `agentId`, and finally the resolver
fallback/default when no host identity is available. When the host does not
provide a `userId`, the gate still measures repetition and saturation against a
stable durable scope.

Why a product? High input frequency should help only if durable memory is not already saturated. High saturation must veto the repetition term regardless of frequency. The veto property is structural: $S(t) = 1 \Rightarrow R(t) = 0$.

### Natural-Language Structural Load $D_{nl}(t)$

Heuristically detects preferences, human-name references, dates, and fact assertions.
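The two formula-driven terms of this branch can be sketched directly. The neighbor similarities and `hitsAbove` counts are assumed to arrive from the retrieval layer; the caps (5 and 3) follow the formulas above:

```typescript
// Sketch of the zero-clamped novelty H(t) and the product-form repetition
// gate R(t) = F(t) * (1 - S(t)) from §4.

function novelty(neighborSims: number[]): number {
  if (neighborSims.length === 0) return 1.0; // cold start: fully novel
  const mean =
    neighborSims.reduce((acc, c) => acc + Math.max(0, c), 0) / neighborSims.length;
  return 1 - mean;
}

function repetition(turnsHitsAbove: number, userHitsAbove: number): number {
  const F = Math.min(turnsHitsAbove / 5, 1); // input frequency, capped
  const S = Math.min(userHitsAbove / 3, 1);  // durable saturation, capped
  return F * (1 - S); // S = 1 structurally vetoes repetition
}
```

The product form makes the veto structural rather than tuned: no frequency value can overcome full saturation.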
## 5. Technical Branch

### Specificity $P(t)$

Specificity measures concrete artifact density normalized by turn length:

\[ P(t) = \min\left( \frac{\sum_j p_j \cdot \mathrm{count}_j(t)}{\max(L(t)/100.0, 1.0)}, 1 \right) \]

The numerator counts things like file paths, error codes, and API endpoints.
The normalization denominator is the token estimator used by the gating subsystem (`sidecar/compact/tokens.go`):

\[ L(t) = \max\left(\left\lfloor \frac{\mathrm{RuneCount}(t)}{4} \right\rfloor, 1\right) \]

Length normalization matters. Without it, any long technical turn would score high simply because it contains more surface area, not because it is more memory-worthy.

### Actionability $A(t)$

Captures architectural decisions, fixes, merge milestones, and configuration changes.

### Technical Structural Load $D_{\mathrm{tech}}(t)$

Detects function definitions, dependencies, and tests. It is the technical analogue to $D_{nl}$.
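The specificity term and its token estimator can be sketched together. The artifact regexes and weights $p_j$ are illustrative stand-ins for the shipped table; the estimator mirrors the documented $L(t) = \max(\lfloor \mathrm{RuneCount}(t)/4 \rfloor, 1)$, approximating rune count with code points:

```typescript
// Sketch of P(t) from §5: weighted artifact counts normalized by turn length.

const ARTIFACTS: Array<{ re: RegExp; weight: number }> = [
  { re: /\/[\w.-]+\/[\w.-]+/g, weight: 0.5 }, // file paths (illustrative)
  { re: /\bE\d{3,}\b/g, weight: 0.7 },        // error codes like E1234 (illustrative)
  { re: /https?:\/\/\S+/g, weight: 0.4 },     // API endpoints / URLs (illustrative)
];

function tokenEstimate(text: string): number {
  // L(t): rune count approximated via code-point count, quartered, floored at 1.
  return Math.max(Math.floor([...text].length / 4), 1);
}

function specificity(turn: string): number {
  let weighted = 0;
  for (const a of ARTIFACTS) {
    const count = (turn.match(a.re) ?? []).length;
    weighted += a.weight * count;
  }
  const norm = Math.max(tokenEstimate(turn) / 100.0, 1.0);
  return Math.min(weighted / norm, 1);
}
```

Note that the denominator only grows past 1 for turns longer than roughly 100 estimated tokens, so short dense turns are not penalized.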
## 6. Calibration

For threshold tuning, isotonic regression is the correct calibration method once usefulness labels exist:

\[ P(\mathrm{useful} \mid G) = \mathrm{IsotonicRegression}(G, y) \]

Current thresholds implemented in code:

- durable promotion: `DefaultGatingConfig().Threshold = 0.35`
- abstractive routing: `AbstractiveRoutingThreshold = 0.60`
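For readers unfamiliar with the calibration step, a generic pool-adjacent-violators (PAVA) sketch shows the shape of the fit. This is not the package's calibration code; it assumes binary usefulness labels already sorted by gate score $G$:

```typescript
// Generic PAVA isotonic regression: fits a non-decreasing step function to
// labels y (sorted by ascending G), yielding a monotone P(useful | G) estimate.

function isotonicFit(ySortedByG: number[]): number[] {
  // Each block holds the running mean of a pooled range of samples.
  const means: number[] = [];
  const counts: number[] = [];
  for (const y of ySortedByG) {
    means.push(y);
    counts.push(1);
    // Pool while adjacent block means violate monotonicity.
    while (means.length > 1 && means[means.length - 2] > means[means.length - 1]) {
      const m2 = means.pop()!, c2 = counts.pop()!;
      const m1 = means.pop()!, c1 = counts.pop()!;
      means.push((m1 * c1 + m2 * c2) / (c1 + c2));
      counts.push(c1 + c2);
    }
  }
  // Expand pooled blocks back to per-sample fitted values.
  const fitted: number[] = [];
  for (let i = 0; i < means.length; i++) {
    for (let k = 0; k < counts[i]; k++) fitted.push(means[i]);
  }
  return fitted;
}
```

A calibrated threshold is then read off the fitted step function at the desired precision level rather than hand-tuned.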
## 7. Invariants

The gate preserves six mathematical invariants mapped to `gate_test.go`:

1. **Empty memory implies full novelty:** $\mathrm{memHits} = \emptyset \Rightarrow H = 1.0$
2. **Saturation vetoes repetition:** $\mathrm{MemSaturation} = 1 \Rightarrow R = 0$
3. **The convex blend stays in bounds:** $G \in [0,1]$
4. **Monotonic interpolation:** $G \in [\min(G_{\mathrm{conv}}, G_{\mathrm{tech}}), \max(G_{\mathrm{conv}}, G_{\mathrm{tech}})]$
5. **Purely conversational turns collapse:** $T = 0 \Rightarrow G = G_{\mathrm{conv}}$
6. **Purely technical turns collapse:** $T = 1 \Rightarrow G = G_{\mathrm{tech}}$

Conversational structure must not overfire on pure code. Together these invariants make the scalar interpretable, stable, and safe to tune.
|