@xdarkicex/openclaw-memory-libravdb 1.3.12 → 1.3.17

package/README.md CHANGED
@@ -1,84 +1,283 @@
- # LibraVDB Memory
+ # LibraVDB Memory for OpenClaw
 
- ## Install
+ [![Go](https://img.shields.io/badge/Go-1.25%2B-00ADD8?logo=go&logoColor=white)](./sidecar/go.mod)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-5.x-3178C6?logo=typescript&logoColor=white)](./package.json)
+ [![OpenClaw](https://img.shields.io/badge/OpenClaw-memory%20plugin-111827)](./openclaw.plugin.json)
 
- Recommended on macOS:
+ `@xdarkicex/openclaw-memory-libravdb` is a local-first OpenClaw memory system
+ for people who want more than "top-k vectors plus a prompt footer."
 
- ```bash
- brew tap xDarkicex/openclaw-libravdb-memory
- brew install libravdbd
- brew services start libravdbd
- openclaw plugins install @xdarkicex/openclaw-memory-libravdb
- ```
+ It replaces the default lightweight memory path with a full context lifecycle:
 
- Then activate the plugin in `~/.openclaw/openclaw.json`.
+ - active session memory
+ - durable per-user memory
+ - shared global memory
+ - continuity-aware compaction
+ - authored context partitioning
+ - hybrid scoring across scope, recency, and similarity
 
- Manual plugin install:
+ This repository pairs a TypeScript OpenClaw plugin with a Go daemon backed by
+ `libraVDB`. The plugin owns both the `memory` and `contextEngine` slots, while
+ the daemon handles embeddings, retrieval, storage, and compaction.
+ On newer OpenClaw builds, it also bridges the built-in `memory_search` runtime
+ to the same libraVDB sidecar instead of leaving that tool inert.
 
- ```bash
- openclaw plugins install @xdarkicex/openclaw-memory-libravdb
- ```
+ ## Why This Exists
 
- The published plugin is connect-only. It does not spawn a local binary during install or at runtime. For durable memory, run a local `libravdbd` daemon separately and point the plugin at its endpoint.
+ The stock "single memory bucket" pattern is good for simple persistence, but it
+ starts to break down when you care about:
 
- Minimum host version:
+ - keeping the newest working context raw and intact
+ - separating ephemeral session state from durable memory
+ - avoiding long-session prompt collapse
+ - preserving authored instructions differently from recalled user content
+ - treating memory retrieval as a ranked assembly problem instead of plain
+   nearest-neighbor lookup
 
- - OpenClaw `>= 2026.3.22`
+ LibraVDB Memory exists for that harder class of memory problem.
 
- Security note:
+ ## What Makes It Different
 
- - the published plugin package contains no `postinstall`, no `openclaw.setup`, and no direct `child_process` usage
- - the plugin only connects to a local `libravdbd` endpoint such as `unix:/Users/<you>/.clawdb/run/libravdb.sock` or `tcp:127.0.0.1:37421`
- - after install, the plugin makes no required network calls for embedding or extractive compaction
- - the only optional runtime network path is an explicitly configured remote summarizer endpoint such as `ollama-local`
+ These are the core differentiators the project is built around:
 
- ## Daemon
+ - Dual slot ownership: the plugin owns both memory prompt injection and the
+   full context lifecycle.
+ - Built-in `memory_search` bridge: newer OpenClaw memory runtime calls are
+   routed into the same sidecar-backed retrieval path.
+ - Lifecycle hint adoption: `before_reset` and `session_end` are used as
+   advisory signals into the sidecar without giving OpenClaw control of ingest
+   or compaction.
+ - Sidecar-owned lifecycle journal: reset/end hints are recorded internally for
+   debugging and auditing without entering normal memory retrieval.
+   The journal is bounded by a sidecar retention cap so it does not grow
+   forever.
+ - Local-first runtime: the core path does not depend on external embedding
+   services.
+ - Three-tier memory: session, durable user, and global memory stay distinct.
+ - Hybrid scoring: retrieval is ranked by semantic similarity, recency, scope,
+   and summary quality instead of cosine alone.
+ - Automatic compaction: long sessions compact behind a protected recent tail.
+ - Crash-resilient IPC: the host talks to a sidecar over a stable local socket
+   or loopback TCP endpoint with degraded-mode fallback.
 
- Install and start `libravdbd` separately, then point the plugin at the running daemon if you do not want the default endpoint.
+ ## Quick Start
 
- Default endpoints:
+ The supported install flow is:
 
- - macOS/Linux: `unix:$HOME/.clawdb/run/libravdb.sock`
- - Windows: `tcp:127.0.0.1:37421`
-
- Phase 2 packaging assets now live under [`packaging/`](./packaging):
-
- - `packaging/systemd/libravdbd.service` for Linux user services
- - `packaging/launchd/com.xdarkicex.libravdbd.plist` for macOS LaunchAgents
- - `packaging/homebrew/libravdbd.rb.tmpl` as the source template for a generated Homebrew formula
-
- Recommended service startup commands:
-
- - macOS: `brew services start libravdbd`
- - Linux: `systemctl --user enable --now libravdbd.service`
+ ```bash
+ brew tap xDarkicex/openclaw-libravdb-memory
+ brew install libravdbd
+ brew services start libravdbd
+ openclaw plugins install @xdarkicex/openclaw-memory-libravdb
+ ```
 
- ## Activate
+ The Homebrew formula installs the daemon plus the bundled ONNX Runtime, embedding assets, and T5 summarizer assets it needs to boot cleanly on supported platforms.
 
- Add this to `~/.openclaw/openclaw.json`:
+ Then assign the plugin to both required OpenClaw slots in
+ `~/.openclaw/openclaw.json`:
 
  ```json
  {
    "plugins": {
      "slots": {
-       "memory": "libravdb-memory"
+       "memory": "libravdb-memory",
+       "contextEngine": "libravdb-memory"
      },
      "configs": {
        "libravdb-memory": {
-         "sidecarPath": "unix:/Users/<you>/.clawdb/run/libravdb.sock"
+         "sidecarPath": "auto"
        }
      }
    }
  }
  ```
 
- Without the `plugins.slots.memory` entry, OpenClaw's default memory continues to run in parallel and this plugin does not take over the exclusive memory slot.
-
- ## Verify
-
- Run:
+ Verify the setup:
 
  ```bash
  openclaw memory status
  ```
 
- Expected output includes a readable status table showing the daemon is reachable, stored turn/memory counts, the active ingestion gate threshold, and whether the abstractive summarizer is provisioned.
+ Expected healthy state:
+
+ - the daemon is reachable
+ - the plugin is active as the memory provider
+ - the runtime can report stored counts and model readiness
+
+ ## Install Model
+
+ This plugin is intentionally **connect-only** at install time.
+
+ It does not compile Go code during plugin installation, and it does not manage
+ daemon lifecycle automatically from the npm package. That is deliberate: some
+ OpenClaw environments are strict about postinstall behavior, daemon spawning,
+ and anything that looks like binary bootstrap or process management.
+
+ Current model:
+
+ - npm/OpenClaw package: plugin code and docs
+ - `libravdbd`: installed and managed separately
+ - default daemon endpoint on macOS/Linux:
+   `unix:$HOME/.clawdb/run/libravdb.sock`
+ - default daemon endpoint on Windows:
+   `tcp:127.0.0.1:37421`
+
+ If your daemon runs elsewhere, set an explicit `sidecarPath`, for example:
+
+ - `unix:/custom/path/libravdb.sock`
+ - `tcp:127.0.0.1:9999`
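
An explicit endpoint goes in the same `configs` block shown in the Quick Start section. A minimal sketch, assuming the loopback TCP address from the list above:

```json
{
  "plugins": {
    "configs": {
      "libravdb-memory": {
        "sidecarPath": "tcp:127.0.0.1:9999"
      }
    }
  }
}
```

The same shape works for a custom socket path by substituting a `unix:` value.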
+
+ ## Architecture At A Glance
+
+ ```text
+ OpenClaw host
+   -> memoryPromptSection (durable user/global recall)
+   -> memory runtime bridge (built-in memory_search)
+   -> context engine (bootstrap / ingest / assemble / compact)
+     -> plugin runtime
+       -> JSON-RPC
+         -> libravdbd
+           -> libraVDB + local embedding/summarization stack
+ ```
+
+ The main runtime split is:
+
+ - TypeScript host layer:
+   - OpenClaw plugin registration
+   - prompt assembly
+   - hybrid ranking
+   - continuity-aware token budgeting
+   - degraded-mode behavior
+ - Go daemon layer:
+   - vector storage
+   - embeddings
+   - search RPCs
+   - compaction and summarization
+   - stable local IPC endpoint
+
+ For the implemented architecture map, read
+ [docs/architecture.md](./docs/architecture.md).
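
The plugin-to-daemon hop in the diagram above is plain JSON-RPC over a local socket. As a minimal framing sketch only: the method name `memory.status` and the newline-delimited framing here are illustrative assumptions, not the actual wire contract, which lives in the plugin and daemon sources.

```typescript
// Sketch of newline-delimited JSON-RPC 2.0 request framing toward a local
// sidecar. Method names and framing are assumptions for illustration.
interface RpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: unknown;
}

function encodeRpcRequest(id: number, method: string, params?: unknown): string {
  const req: RpcRequest = { jsonrpc: "2.0", id, method, params };
  // One JSON object per line; the receiver splits on "\n" before parsing.
  return JSON.stringify(req) + "\n";
}

// Example frame aimed at a hypothetical status method:
const frame = encodeRpcRequest(1, "memory.status");
```

In practice the frame would be written to the configured `unix:` socket or loopback TCP endpoint.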
+
+ ## Retrieval Model
+
+ The assembly path is not "just search some vectors and paste the top hits."
+
+ It combines:
+
+ - session search for current-work relevance
+ - durable user recall for long-lived personal context
+ - global recall for shared facts
+ - authored invariant and variant context
+ - continuity-preserving recent-tail injection
+ - token-budgeted fitting
+
+ The ranking model currently blends:
+
+ - semantic similarity
+ - scope weighting
+ - recency decay
+ - summary quality attenuation
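
The four blended factors above can be sketched as a single score. All weights, the scope table, and the decay constant below are made-up illustration values, not the project's calibrated parameters; the real scoring functions are specified in the mathematics docs.

```typescript
// Illustrative hybrid ranking blend: similarity x scope x recency x quality.
// Every constant here is a sketch value, not the actual calibration.
type Scope = "session" | "user" | "global";

interface Candidate {
  similarity: number;     // cosine similarity in [0, 1]
  scope: Scope;
  ageHours: number;       // time since the memory was written
  summaryQuality: number; // 1.0 for raw text, lower for lossy summaries
}

const SCOPE_WEIGHT: Record<Scope, number> = {
  session: 1.0, // current work ranks highest
  user: 0.8,
  global: 0.6,
};

function hybridScore(c: Candidate): number {
  const recency = Math.exp(-c.ageHours / 72); // exponential decay sketch
  return c.similarity * SCOPE_WEIGHT[c.scope] * recency * c.summaryQuality;
}
```

The multiplicative form means a stale or heavily summarized memory can lose to a slightly less similar but fresh, raw one, which is the behavior the list above describes.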
+
+ The formal math lives in:
+
+ - [docs/mathematics-v2.md](./docs/mathematics-v2.md)
+ - [docs/continuity.md](./docs/continuity.md)
+ - [docs/ast-v2.md](./docs/ast-v2.md)
+ - [docs/elevated-guidance.md](./docs/elevated-guidance.md)
+
+ ## Compaction Model
+
+ This system does not treat long chats as append-only forever.
+
+ Older session turns compact behind a protected recent tail, so the plugin can:
+
+ - keep the newest working context raw
+ - preserve adjacency-sensitive continuity near the boundary
+ - promote older material into summaries
+ - avoid letting long sessions drown their own prompt budget
+
+ Compaction is designed as part of the memory system itself, not as a separate
+ maintenance convenience.
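
The protected-tail split described above can be sketched as a simple partition. The fixed "last N turns" policy here is a deliberate simplification; the real boundary is continuity- and token-budget-aware.

```typescript
// Sketch of the protected-recent-tail split: the newest turns stay raw,
// older turns become eligible for summarization. Turn shape is assumed.
interface Turn {
  id: number;
  text: string;
}

function splitForCompaction(
  turns: Turn[],        // oldest first
  protectedTail: number // how many newest turns to keep verbatim
): { compactable: Turn[]; rawTail: Turn[] } {
  const cut = Math.max(0, turns.length - protectedTail);
  return {
    compactable: turns.slice(0, cut), // promoted into summaries later
    rawTail: turns.slice(cut),        // never summarized
  };
}
```

Sessions shorter than the protected tail never compact at all, which matches the "keep the newest working context raw" goal.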
+
+ ## For Power Users
+
+ If you are evaluating this as an operator or advanced OpenClaw user, the key
+ practical points are:
+
+ - This plugin should own both `memory` and `contextEngine`. Partial slot
+   assignment is a misconfiguration.
+ - On hosts that expose `registerMemoryRuntime`, the built-in `memory_search`
+   tool now searches the same libraVDB-backed memory stores.
+ - The daemon is a separate operational unit. Treat plugin lifecycle and daemon
+   lifecycle as different concerns.
+ - The system is local-first by design. The critical retrieval path does not
+   require a remote embedding service.
+ - The sidecar transport is stable and explicit, which makes it service-manager
+   friendly on macOS, Linux, and Windows.
+
+ Good entry points:
+
+ - [docs/install.md](./docs/install.md)
+ - [docs/installation.md](./docs/installation.md)
+ - [docs/uninstall.md](./docs/uninstall.md)
+ - [docs/implementation.md](./docs/implementation.md)
+
+ ## For Researchers And Builders
+
+ If you are studying retrieval, memory systems, or agent architecture, the
+ interesting parts of this repo are:
+
+ - continuity-aware assembly:
+   `C_total(q) = I union T_recent union Proj(V_rest, q)`
+ - hybrid ranking instead of pure cosine retrieval
+ - separation of authored invariants from searchable authored lore
+ - durable-memory admission via domain-adaptive gating
+ - local daemon architecture rather than in-process TS vector plumbing
+ - compaction that preserves recent working context instead of flattening the
+   whole transcript
+
+ Start here:
+
+ - [docs/problem.md](./docs/problem.md)
+ - [docs/architecture.md](./docs/architecture.md)
+ - [docs/mathematics-v2.md](./docs/mathematics-v2.md)
+ - [docs/gating.md](./docs/gating.md)
+ - [docs/continuity.md](./docs/continuity.md)
+
+ ## Runtime Facts
+
+ - npm package: `@xdarkicex/openclaw-memory-libravdb`
+ - OpenClaw plugin id: `libravdb-memory`
+ - minimum host version: `openclaw >= 2026.3.22`
+ - default daemon data path: `$HOME/.clawdb/data.libravdb`
+ - default daemon endpoint on macOS/Linux:
+   `unix:$HOME/.clawdb/run/libravdb.sock`
+ - default daemon endpoint on Windows:
+   `tcp:127.0.0.1:37421`
+
+ ## Repository Guide
+
+ - [docs/install.md](./docs/install.md): quick install and lifecycle guide
+ - [docs/installation.md](./docs/installation.md): full installation and
+   packaging reference
+ - [docs/uninstall.md](./docs/uninstall.md): clean shutdown and removal
+ - [docs/architecture.md](./docs/architecture.md): current implemented system
+   architecture
+ - [docs/implementation.md](./docs/implementation.md): important implementation
+   contracts
+ - [docs/mathematics-v2.md](./docs/mathematics-v2.md): formal scoring and
+   optimization reference
+
+ ## Current Constraint
+
+ Because OpenClaw environments can be strict about postinstall downloads,
+ daemon spawning, and scanner-visible binary bootstrap behavior, the cleanest
+ supported user path today is:
+
+ - install plugin
+ - install daemon
+ - assign both slots
+ - let the plugin connect to a stable local endpoint
+
+ That tradeoff is intentional. It keeps the plugin installation surface simple
+ and auditable while preserving the full local memory engine at runtime.
package/docs/README.md CHANGED
@@ -5,12 +5,15 @@ legacy non-versioned predecessor also exists. Older non-versioned docs are kept
  to preserve project history and design evolution.
 
  - [installation.md](./installation.md) - Complete install, activation, verification, and troubleshooting reference.
+ - [install.md](./install.md) - Practical lifecycle guide for Homebrew, the OpenClaw plugin, and manual daemon management.
+ - [uninstall.md](./uninstall.md) - Clean shutdown and removal guide for the plugin, daemon, and optional local data.
  - [architecture.md](./architecture.md) - End-to-end component model, turn lifecycle, compaction flow, and degraded behavior.
  - [problem.md](./problem.md) - Technical argument for replacing the stock OpenClaw memory lifecycle in this use case.
  - [mathematics-v2.md](./mathematics-v2.md) - Formal reference for hybrid scoring, decay, token budgeting, Matryoshka retrieval, compaction, and planned two-pass retrieval.
  - [compaction-evaluation.md](./compaction-evaluation.md) - Real-model benchmark notes for T5 summary confidence, Nomic-space preservation, and the hard preservation gate.
  - [continuity.md](./continuity.md) - Continuity model for invariant context, preserved recent raw session tail, and retrieved older memory.
  - [ast-v2.md](./ast-v2.md) - Reviewed authoritative AST partitioning reference for authored Markdown hard invariants, soft invariants, and variant lore.
+ - [elevated-guidance.md](./elevated-guidance.md) - Tier 1.5 protected-shard and elevated-guidance model for preserving shadow rules through compaction.
  - [ast.md](./ast.md) - Historical predecessor to `ast-v2.md`, kept to show design evolution and earlier bugs.
  - [gating.md](./gating.md) - Full derivation and calibration guide for the domain-adaptive gating scalar.
  - [implementation.md](./implementation.md) - Non-obvious implementation decisions and their rationale.
package/docs/ast-v2.md CHANGED
@@ -33,6 +33,27 @@ We formalize this as a binary promotion scalar \(\sigma: N_d \to \{0,1\}\). This
  \end{cases}
  \]
 
+ To reason about tuning noise in the bigram set \(W_{\mathrm{deontic}}\), we
+ also define the paragraph classifier error rates:
+ \[
+ P_{\mathrm{fp}} = P(\sigma(n) = 1 \mid n \text{ is narrative lore})
+ \]
+ \[
+ P_{\mathrm{fn}} = P(\sigma(n) = 0 \mid n \text{ is behavioral rule})
+ \]
+
+ For authored documents whose lore paragraphs would otherwise remain in
+ \(\mathcal{V}_d\), the expected Tier-2 waste introduced by false positives is:
+ \[
+ \mathbb{E}[\mathrm{wasted\ toks\ in\ }\mathcal{I}_2]
+ =
+ P_{\mathrm{fp}} \cdot |\mathcal{V}_{d,\mathrm{paragraphs}}| \cdot \mathbb{E}[\mathrm{toks}(n)]
+ \]
+
+ This gives the parser a concrete quantity to minimize when adjusting
+ \(W_{\mathrm{deontic}}\), while \(P_{\mathrm{fn}}\) measures the risk of leaving
+ true behavioral rules behind in \(\mathcal{V}_d\).
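
As a numeric illustration of the expected-waste formula (the figures below are invented for the example, not measured values):

```latex
% Illustrative only: a 2% false-positive rate over 50 lore paragraphs
% averaging 80 tokens each.
\mathbb{E}[\mathrm{wasted\ toks\ in\ }\mathcal{I}_2]
  = 0.02 \cdot 50 \cdot 80
  = 80 \text{ tokens of Tier-2 budget spent on misclassified lore}
```

Even a small \(P_{\mathrm{fp}}\) therefore translates into a concrete token cost once documents carry many lore paragraphs.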
+
 
  *Implemented via `NewDeonticFrame` and `EvaluateText` in the zero-allocation byte lexer.*
 
  ## 3. The Three-Tier Structural Indicator Function \(\iota\)
@@ -59,12 +80,27 @@ We define the structural indicator function \(\iota: N_d \to \{0,1,2\}\) mapping
  ## 4. Corpus Decomposition and Set Integration
 
  For any document \(d \in \mathbf{D}_{\text{agents}} \cup \mathbf{D}_{\text{souls}}\), the node set \(N_d\) is partitioned cleanly into three sets:
- - **Hard Directives:** \(\mathcal{I}_{1d} = \{ n \in N_d \mid \iota(n) = 1 \}\)
+ - **Hard Directives:** \(\mathcal{I}_{1d} = \langle n \in N_d \mid \iota(n) = 1 \rangle\), ordered by \(\mathrm{position}(n)\) ascending, where \(\mathrm{position}(n)\) is the byte offset of node \(n\) in \(d_{\mathrm{raw}}\)
  - **Soft Directives:** \(\mathcal{I}_{2d} = \{ n \in N_d \mid \iota(n) = 2 \}\)
  - **Contextual Lore:** \(\mathcal{V}_d = \{ n \in N_d \mid \iota(n) = 0 \}\)
 
  *Partition Completeness:* Because \(\iota(n)\) maps every node to exactly one integer in \(\{0, 1, 2\}\), the resulting sets are mutually exclusive and collectively exhaustive:
- \[ \mathcal{I}_{1d} \cup \mathcal{I}_{2d} \cup \mathcal{V}_d = N_d \quad \text{and} \quad \mathcal{I}_{1d} \cap \mathcal{I}_{2d} \cap \mathcal{V}_d = \emptyset \]
+ \[
+ \mathcal{I}_{1d} \cup \mathcal{I}_{2d} \cup \mathcal{V}_d = N_d
+ \]
+ \[
+ \mathcal{I}_{1d} \cap \mathcal{I}_{2d} = \emptyset
+ \]
+ \[
+ \mathcal{I}_{1d} \cap \mathcal{V}_d = \emptyset
+ \]
+ \[
+ \mathcal{I}_{2d} \cap \mathcal{V}_d = \emptyset
+ \]
+
+ These pairwise disjointness statements follow directly from \(\iota\) being a
+ single-valued total function into \(\{0,1,2\}\): no node can be assigned to
+ more than one tier simultaneously.
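
The three-way partition induced by \(\iota\), with the position ordering required for \(\mathcal{I}_{1d}\), can be sketched directly. The node shape and field names here are illustration assumptions; the real partitioner operates on parsed Markdown AST nodes.

```typescript
// Sketch of the iota-induced partition: every node lands in exactly one
// tier, and only the hard tier preserves source (byte-offset) order.
interface AstNode {
  position: number; // byte offset in the raw document
  iota: 0 | 1 | 2;  // structural indicator value
  text: string;
}

function partition(nodes: AstNode[]): {
  hard: AstNode[]; // I_1d, position-ordered
  soft: AstNode[]; // I_2d
  lore: AstNode[]; // V_d
} {
  const hard = nodes
    .filter(n => n.iota === 1)
    .sort((a, b) => a.position - b.position);
  const soft = nodes.filter(n => n.iota === 2);
  const lore = nodes.filter(n => n.iota === 0);
  // Mutually exclusive and collectively exhaustive by construction.
  return { hard, soft, lore };
}
```

Because each node carries exactly one `iota` value, the three output arrays together contain every input node exactly once, mirroring the disjointness statements above.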
 
  These sets integrate into the global corpus. Let \(\mathbf{D}_{\text{standard}}\) be the set of standard memory documents (non-core files). We formally define the standard variant node set as \(\mathcal{V}_{\text{standard}} = \bigcup_{d \in \mathbf{D}_{\text{standard}}} E(d)\). The global corpus is then:
  \[ \mathcal{I}_1 = \bigcup_{d} \mathcal{I}_{1d} \qquad \mathcal{I}_2 = \bigcup_{d} \mathcal{I}_{2d} \qquad \mathcal{V} = \mathcal{V}_{\text{standard}} \cup \left( \bigcup_{d} \mathcal{V}_d \right) \]
@@ -86,7 +122,7 @@ For Hard Invariants (\(\alpha_1\)):
  \[ \sum_{n \in \mathcal{I}_{1d}} \mathrm{toks}(n) \le \alpha_1 \tau \implies \text{fast-fail and reject agent load if exceeded} \]
 
  For Soft Invariants (\(\alpha_2\)):
- \[ \sum_{n \in \mathcal{I}_{2d}} \mathrm{toks}(n) \le \alpha_2 \tau \implies \text{truncate by position if exceeded} \]
+ \[ \sum_{n \in \mathcal{I}_{2d}} \mathrm{toks}(n) \le \alpha_2 \tau \implies \text{truncate by source position if exceeded} \]
 
  *Cumulative Verification Proof:* Let the total reserved invariant budget fraction be \(\alpha\), where \(\alpha_1 + \alpha_2 \le \alpha\). If both independent enforcement bounds are satisfied, then:
  \[ \sum_{n \in \mathcal{I}_{1d}} \mathrm{toks}(n) + \sum_{n \in \mathcal{I}_{2d}} \mathrm{toks}(n) \le \alpha_1 \tau + \alpha_2 \tau = (\alpha_1 + \alpha_2)\tau \le \alpha \tau \]
@@ -102,14 +138,20 @@ therefore treats the tiers with the following precedence:
  3. **Tier 2 / Soft invariants** are injected by longest-prefix truncation under the effective budget
  \[
  \tau_{\mathcal{I}_2}^{\mathrm{eff}}=
+ \max\!\left(0,\,
  \min\!\left(\alpha_2\tau,\,
- \tau-\tau_{\mathcal{I}_1}-\mathrm{toks}(T_{\mathrm{base}})\right)
+ \tau-\tau_{\mathcal{I}_1}-\mathrm{toks}(T_{\mathrm{base}})\right)\right)
  \]
  4. **Variant lore** competes only for the final residual budget after Tier 1,
  the admitted Tier 2 prefix, and the exact recent tail are accounted for.
 
  This makes \(\mathcal{I}_1\) and the minimum continuity suffix hard
  constraints, while keeping \(\mathcal{I}_2\) order-preserving but elastic.
+ Equivalently, the runtime safety invariant is:
+ \[
+ \tau_{\mathcal{I}_1} + \mathrm{toks}(T_{\mathrm{base}}) \le \tau
+ \quad \text{must hold at runtime or Tier 2 is fully evicted}
+ \]
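
The clamped effective-budget formula above transcribes directly into code. A minimal sketch, assuming token counts are plain numbers:

```typescript
// tau_eff = max(0, min(alpha2 * tau, tau - tauI1 - toksBase)),
// i.e. Tier 2 gets the smaller of its reserved fraction and whatever
// remains after hard invariants and the protected recent tail, never < 0.
function effectiveTier2Budget(
  alpha2: number,   // reserved Tier-2 budget fraction
  tau: number,      // total token budget
  tauI1: number,    // tokens consumed by Tier-1 hard invariants
  toksBase: number  // tokens consumed by the protected recent tail
): number {
  return Math.max(0, Math.min(alpha2 * tau, tau - tauI1 - toksBase));
}
```

When `tauI1 + toksBase` exceeds `tau`, the result clamps to zero, which is exactly the "Tier 2 is fully evicted" case stated in the runtime safety invariant.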
 
  ## 7. The Document-Addressed Cache (\(\Psi\)) and Runtime Implications
 
@@ -121,5 +163,5 @@ Because the token estimator function \(\lceil \frac{|t|}{\chi(t)} \rceil\) depen
 
  At runtime:
  1. **Tier 1 (\(\mathcal{I}_{1d}\))** is injected via an \(O(1)\) memory copy.
- 2. **Tier 2 (\(\mathcal{I}_{2d}\))** is evaluated via an \(O(|\mathcal{I}_{2d}|)\) prefix sum to enforce position truncation under \(\tau_{\mathcal{I}_2}^{\mathrm{eff}}\).
+ 2. **Tier 2 (\(\mathcal{I}_{2d}\))** is evaluated via an \(O(|\mathcal{I}_{2d}|)\) prefix sum to enforce source-order truncation under \(\tau_{\mathcal{I}_2}^{\mathrm{eff}}\).
  3. **Tier 0 (\(\mathcal{V}_d\))** bypasses re-parsing and feeds into the semantic Pass 1 vector retrieval only after the continuity layer removes the exact recent tail into \(T_{\mathrm{recent}}\), leaving \(\mathcal{V}_{\mathrm{rest}}\).