@rljson/server 0.0.4 → 0.0.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -7,3 +7,1355 @@ found in the LICENSE file in the root of this package.
-->

# Architecture

## Overview

The `@rljson/server` package implements a distributed, local-first data architecture that enables multiple clients to share data through a central server while maintaining local storage priority. The system uses a multi-layer approach where local data always takes precedence over server data.

### System map (ASCII)

```text
[ Client A ]                      [ Client B ]
┌────────────────┐                ┌────────────────┐
│ IoMulti        │                │ IoMulti        │
│ BsMulti        │                │ BsMulti        │
│ (local first)  │                │ (local first)  │
└───────┬────────┘                └────────┬───────┘
        │   IoPeer/BsPeer (pull from server)│
        │                                   │
┌───────▼────────┐  multicast refs  ┌───────▼───────┐
│ Server         │◄────────────────►│ Server        │
│ IoMulti        │                  │ BsMulti       │
│ (local+peers)  │                  │ (local+peers) │
└───────┬────────┘                  └───────┬───────┘
        │ IoPeerBridge/BsPeerBridge (pull to clients)

┌───────▼────────┐
│ Client C (etc) │
└────────────────┘
```

- **Refs broadcast**: Clients emit hashes; server multicasts to others.
- **Data pulls**: Readers query by ref; multis cascade local ➜ server ➜ peers.
- **No push of payloads**: Only hashes traverse sockets by default.

### Request flow (pull by reference)

```text
Client B: db.get(route, {_hash: ref})
  ↓ priority walk
  1) Local Io (miss)
  2) IoPeer → Server IoMulti
     a) Server Local Io (miss?)
     b) IoPeer[Client A] (hit)
  ↑ data flows back A → Server → B
```

### Layer cheat sheet

- **Priority 1**: Local Io/Bs (read+write)
- **Priority 2+**: Peers (read-only), in insertion order
- **Servers**: IoServer/BsServer expose multis to clients
- **Bridges**: IoPeerBridge/BsPeerBridge let server pull from clients
- **Peers**: IoPeer/BsPeer let clients pull from server

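The priority walk above can be sketched in a few lines. This is an illustrative stand-in, not the real `IoMulti` API: `Layer` and `Multi` are hypothetical names, and string values stand in for table/blob payloads.

```typescript
// Sketch of the multi-layer priority system (not the real @rljson API).
type Layer = {
  name: string;
  canWrite: boolean;
  data: Map<string, string>;
};

class Multi {
  // layers[0] is priority 1 (local, writable); the rest are read-only peers.
  constructor(private layers: Layer[]) {}

  // Read: walk layers in priority order, short-circuit on the first hit.
  get(hash: string): string | undefined {
    for (const layer of this.layers) {
      if (layer.data.has(hash)) return layer.data.get(hash);
    }
    return undefined;
  }

  // Write: local-only; peer layers are never written to.
  put(hash: string, value: string): void {
    const local = this.layers[0];
    if (!local.canWrite) throw new Error(`${local.name} is not writable`);
    local.data.set(hash, value);
  }
}

const local: Layer = { name: 'local', canWrite: true, data: new Map() };
const serverPeer: Layer = {
  name: 'serverPeer',
  canWrite: false,
  data: new Map([['abc123', 'from-server']]),
};
const multi = new Multi([local, serverPeer]);

const before = multi.get('abc123'); // falls through to the peer: 'from-server'
multi.put('abc123', 'mine');        // writes land only in the local layer
const after = multi.get('abc123');  // local hit now shadows the peer: 'mine'
```

The short-circuit in `get()` is what makes local data "always take precedence": once the local layer answers, the peers are never consulted.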
### Socket namespace separation (why)

- **Isolation of channels**: Io (tables) and Bs (blobs) have different payload shapes and backpressure behavior. Separate namespaces prevent cross-talk and let us tune each channel independently.
- **Avoid coupling and event collisions**: Socket.IO scopes event names to a namespace; isolating `io` and `bs` avoids accidental handler overlap and makes tracing simpler.
- **Directional clarity**: We split up/down per layer (`ioUp/ioDown`, `bsUp/bsDown`) so bridges can enforce read-only vs. read/write roles and keep the API symmetrical for server and client wiring.
- **Transport flexibility**: In environments that support multiple transports or QoS settings, namespaces can be mapped to different priorities or even different sockets without changing higher-level code.

In default setups you can reuse a single socket for all four channels; the code normalizes that into a bundle. When you need stricter isolation (e.g., large blob streams vs. small Io refs), use distinct namespaces/sockets to avoid head-of-line blocking and to keep logging/metrics per channel.

### Design Pillars

- **Local-first reads, local-only writes**: All mutations stay on the caller; reads walk the priority ladder (local first, then peers through the server).
- **Pull by reference**: References (hashes) travel over the wire; data is fetched on-demand through `IoMulti`/`BsMulti`.
- **Server as proxy/aggregator**: The server multicasts refs and aggregates peers but does not duplicate client data unless explicitly imported there.
- **Unified surface area**: Public APIs expose merged multis (`Client.io/bs`, `Server.io/bs`) so callers never assemble peer lists manually.

## Core Components

### 1. Client

The `Client` class provides a unified interface for data access by combining local storage with server storage.

**Key Responsibilities:**

- Manage local Io (data tables) and Bs (blob storage)
- Create bidirectional communication with server
- Merge local and server data layers into single interfaces (IoMulti, BsMulti)

**Data Flow Architecture:**

```text
┌─────────────────────────────────────────┐
│             Client Instance             │
├─────────────────────────────────────────┤
│                                         │
│  ┌───────────────────────────────────┐  │
│  │        IoMulti (Priority)         │  │
│  ├───────────────────────────────────┤  │
│  │ 1. Local Io (read/write/dump)     │  │ ← Priority 1: Local First
│  │ 2. IoPeer (read only)             │  │ ← Priority 2: Server Read
│  └───────────────────────────────────┘  │
│         ▲                  ▲            │
│         │                  │            │
│   IoPeerBridge           IoPeer         │
│   (upstream)           (downstream)     │
│         │                  │            │
└─────────┼──────────────────┼────────────┘
          │                  │
          ▼                  ▼
     ┌───────────────────────────┐
     │     Socket to Server      │
     └───────────────────────────┘
```

**Upstream (Client → Server):**

- `IoPeerBridge`: Exposes client's local Io to server for reading
- `BsPeerBridge`: Exposes client's local Bs to server for reading
- Server can pull data from connected clients

**Downstream (Server → Client):**

- `IoPeer`: Allows client to read from server's Io
- `BsPeer`: Allows client to read from server's Bs
- Client can pull data from server

### 2. Server

The `Server` class acts as a central coordination point that:

- Manages connections to multiple clients
- Aggregates data from all clients into unified interfaces
- Broadcasts notifications between clients
- Provides read access to its own local storage

**Data Flow Architecture:**

```text
┌────────────────────────────────────────────────────┐
│                  Server Instance                   │
├────────────────────────────────────────────────────┤
│                                                    │
│  ┌──────────────────────────────────────────────┐  │
│  │              IoMulti (Priority)              │  │
│  ├──────────────────────────────────────────────┤  │
│  │ 1. Local Io (read/write/dump)                │  │ ← Priority 1
│  │ 2. IoPeer[Client A] (read only)              │  │ ← Priority 2
│  │ 3. IoPeer[Client B] (read only)              │  │ ← Priority 2
│  │ 4. IoPeer[Client C] (read only)              │  │ ← Priority 2
│  └──────────────────────────────────────────────┘  │
│                        │                           │
│                        ▼                           │
│  ┌──────────────────────────────────────────────┐  │
│  │                   IoServer                   │  │
│  │         (Exposes IoMulti to clients)         │  │
│  └──────────────────────────────────────────────┘  │
│                                                    │
│  Connected Clients:                                │
│  ┌──────────────────────────────────────────────┐  │
│  │ Client A → IoPeer A, Socket A                │  │
│  │ Client B → IoPeer B, Socket B                │  │
│  │ Client C → IoPeer C, Socket C                │  │
│  └──────────────────────────────────────────────┘  │
└────────────────────────────────────────────────────┘
```

**Lifecycle and controls:**

- `addSocket()` attaches a stable `__clientId`, builds `IoPeer`/`BsPeer` (guarded by `peerInitTimeoutMs`), queues them, rebuilds multis once, refreshes servers in a batch, and registers an auto-disconnect handler.
- `removeSocket(clientId)` removes a client's listeners and peers, rebuilds multis, and re-establishes multicast for remaining clients.
- `tearDown()` stops the eviction timer, removes all listeners/disconnect handlers, clears clients, closes IoMulti, and resets all internal state.
- Multicast uses `__origin` markers plus a **two-generation ref set** (`_multicastedRefsCurrent` / `_multicastedRefsPrevious`) to avoid echo loops and duplicate ref forwarding. Refs are evicted on a configurable interval (`refEvictionIntervalMs`, default 60 s) to prevent unbounded memory growth.
- Pending sockets are refreshed together so multiple joins trigger a single multi rebuild.
- All lifecycle events, errors, and traffic are logged via the injected `ServerLogger` (defaults to `NoopLogger`).
- Traffic logging captures inbound refs from clients and outbound multicasts with `from`/`to` client IDs.
- Disconnected sockets are auto-detected: a `'disconnect'` listener triggers `removeSocket()`, cleaning up dead peers and rebuilding multis.

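The two-generation ref set can be sketched as follows. The class name and method shapes are illustrative assumptions, not the actual server internals; the point is the rotation: a ref stays deduplicated for at most two eviction intervals.

```typescript
// Sketch of a two-generation dedup set (assumed shape, not real internals).
// On each eviction tick the current generation becomes "previous" and a
// fresh current set is started, so no ref lives longer than two intervals.
class TwoGenRefSet {
  private current = new Set<string>();
  private previous = new Set<string>();

  // A ref counts as "seen" if it is in either generation.
  has(ref: string): boolean {
    return this.current.has(ref) || this.previous.has(ref);
  }

  add(ref: string): void {
    this.current.add(ref);
  }

  // Called on every refEvictionIntervalMs tick.
  evict(): void {
    this.previous = this.current;
    this.current = new Set<string>();
  }
}

const seen = new TwoGenRefSet();
seen.add('abc123');
seen.evict(); // 'abc123' moves to the previous generation, still deduplicated
seen.evict(); // second tick: 'abc123' is fully evicted
```

Compared with a single `Set` plus per-entry timestamps, the rotation keeps eviction O(1) regardless of how many refs were multicast during the interval.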
### 3. Multi-Layer Priority System

Both Client and Server use `IoMulti` and `BsMulti` to merge multiple data sources:

**Priority Rules:**

- **Priority 1 (Local)**: Read/Write/Dump enabled, always checked first
- **Priority 2 (Peer)**: Read-only, fallback when data is not found locally

**Example Flow:**

```text
Client A reads table "cars":
  1. Check local IoMem (priority 1)      → Not found
  2. Check IoPeer to server (priority 2) → Found!
  3. Return data from server

Client A writes to table "cars":
  1. Write to local IoMem (priority 1) only
  2. IoPeer is read-only, no write to server
  3. Local data now takes precedence
```

### 4. BaseNode (shared helper)

`Client` and `Server` both extend `BaseNode`, which enforces an open local Io and provides Db helpers:

- `createTables()` seeds table definitions on the local Io (optionally with insert history).
- `import()` loads rljson payloads into the local Db, keeping writes local-first.
- A guard throws if the supplied local Io is not initialized/open, catching miswired setups early.

## Synchronization Patterns

### Overview: Pull-Based Reference Architecture

The system implements a **pull-based architecture** where data is retrieved on-demand using references (hashes). No data is automatically pushed between clients or to the server. Instead:

1. **Client stores data locally** (write to priority 1 layer)
2. **Client exposes data via IoPeerBridge/BsPeerBridge** (read-only upstream)
3. **Other clients retrieve data by reference** (pull from priority 2 layer)
4. **Server acts as proxy**, pulling from connected clients on-demand

### Key principle: references flow, data is pulled

```text
Reference Flow: Client A → Server → Client B
Data Flow:      Client A ← Server ← Client B (pulled on-demand)
```

### IoMulti and BsMulti Architecture

Both Client and Server use multi-layer storage to aggregate data from multiple sources:

**IoMulti (Data Tables):**

- Priority 1: Local Io (read/write/dump)
- Priority 2+: IoPeer instances (read-only) to other participants

**BsMulti (Blob Storage):**

- Priority 1: Local Bs (read/write)
- Priority 2+: BsPeer instances (read-only) to other blob stores

**Multi-Layer Query Flow:**

```text
Query: db.get(route, { _hash: "abc123" })

1. Check Local Io (priority 1)
   ├─ Found?     → Return data ✓
   └─ Not found? → Continue to priority 2

2. Check IoPeer to Server (priority 2)
   ├─ Server checks its Local Io (priority 1)
   │  └─ Not found? → Continue to server's priority 2
   │                  │
   │                  ▼
   │  Server queries IoPeer[Client A] (server priority 2)
   │  └─ Found in Client A! → Return via chain ✓
   │
   └─ Data flows back: Client A → Server → Client B
```

## Data Synchronization Patterns

### Pattern 1: Io Data Sync (Regular Tables)

Io data represents regular relational tables (Cake, Cell, etc.) stored in the Io layer.

#### Scenario: Client A inserts data, Client B retrieves it

```text
┌──────────┐                  ┌──────────┐                  ┌──────────┐
│ Client A │                  │  Server  │                  │ Client B │
└────┬─────┘                  └────┬─────┘                  └────┬─────┘
     │                             │                             │
     │ 1. db.insert(route, data)   │                             │
     │    (writes to local Io)     │                             │
     │    Returns: [{ _hash }]     │                             │
     │                             │                             │
     │ 2. Broadcast ref to server  │                             │
     │    socket.emit(route, ref)  │                             │
     ├────────────────────────────►│                             │
     │                             │                             │
     │                             │ 3. Multicast ref to Client B│
     │                             │    (with __origin marker)   │
     │                             ├────────────────────────────►│
     │                             │                             │
     │                             │ 4. Client B: db.get(route, {_hash: ref})
     │                             │◄────────────────────────────┤
     │                             │                             │
     │ 5. Server pulls from A      │                             │
     │    via IoPeerBridge         │                             │
     │◄────────────────────────────┤                             │
     ├────────────────────────────►│                             │
     │    Returns data             │                             │
     │                             │                             │
     │                             │ 6. Server returns to B      │
     │                             ├────────────────────────────►│
     │                             │    Data pulled through chain│
```

**Implementation Details:**

```typescript
// Client A: Insert data (writes locally)
const insertResults = await dbA.insert(route, [cakeData]);
const dataRef = insertResults[0]._hash;

// Client A: Broadcast reference (optional, for notifications)
clientA.socket.emit(route.flat, dataRef);

// Client B: Retrieve by reference (pulls data)
const result = await dbB.get(route, { _hash: dataRef });
// Query flows: Client B → IoPeer → Server → IoPeer[A] → Client A
// Data returns: Client A → Server → Client B
```

**Key Characteristics:**

- ✅ Data never leaves Client A's local storage
- ✅ Server does NOT store the data (acts as proxy)
- ✅ Client B pulls data on-demand via reference
- ✅ Works for: Cake tables, Cell tables, custom content types

### Pattern 2: Bs Data Sync (Blob Storage)

Bs data represents binary blobs (files, images, videos) stored in the Bs layer.

#### Scenario: Client A stores blob, Client B retrieves it

```text
┌──────────┐                  ┌──────────┐                  ┌──────────┐
│ Client A │                  │  Server  │                  │ Client B │
└────┬─────┘                  └────┬─────┘                  └────┬─────┘
     │                             │                             │
     │ 1. bsA.put(blob)            │                             │
     │    (writes to local Bs)     │                             │
     │    Returns: blobHash        │                             │
     │                             │                             │
     │ 2. Store ref in Io table    │                             │
     │    db.insert(route, {       │                             │
     │      blobRef: blobHash      │                             │
     │    })                       │                             │
     │                             │                             │
     │                             │ 3. Client B gets ref        │
     │                             │◄────────────────────────────┤
     │                             │    db.get(route, where)     │
     │                             │                             │
     │                             │ 4. Client B pulls blob      │
     │                             │◄────────────────────────────┤
     │                             │    bsB.get(blobHash)        │
     │                             │                             │
     │ 5. Server pulls from A      │                             │
     │    via BsPeerBridge         │                             │
     │◄────────────────────────────┤                             │
     ├────────────────────────────►│                             │
     │    Returns blob data        │                             │
     │                             │                             │
     │                             │ 6. Server returns to B      │
     │                             ├────────────────────────────►│
     │                             │    Blob data pulled through │
```

**Implementation Details:**

```typescript
// Client A: Store blob locally
const blobData = new Uint8Array([1, 2, 3, 4]);
const blobHash = await clientA.bs!.put(blobData);

// Client A: Store blob reference in Io table
await dbA.insert(route, [{
  fileName: "example.bin",
  blobRef: blobHash,
  size: blobData.length
}]);

// Client B: Retrieve blob reference from Io
const fileRecord = await dbB.get(route, { fileName: "example.bin" });
const blobRef = fileRecord.rljson.files._data[0].blobRef;

// Client B: Pull blob by hash
const blob = await clientB.bs!.get(blobRef);
// Query flows: Client B → BsPeer → Server → BsPeer[A] → Client A
// Blob returns: Client A → Server → Client B
```

**Key Characteristics:**

- ✅ Blobs stored separately from Io tables
- ✅ Io tables store blob references (hashes)
- ✅ BsMulti provides same priority-based access as IoMulti
- ✅ Hot-swapping: Downloaded blobs can be cached locally
- ✅ Deduplication: Same blob hash = same content

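The deduplication property follows directly from content addressing: the key is derived from the bytes, so identical content always maps to one stored entry. A minimal sketch (the `BlobStore` class and its SHA-256 keying are illustrative assumptions, not the real Bs implementation):

```typescript
import { createHash } from 'node:crypto';

// Sketch of a content-addressed blob store (illustrative, not the real Bs).
class BlobStore {
  private blobs = new Map<string, Uint8Array>();

  // The hash IS the key: re-putting identical bytes overwrites the same slot.
  put(data: Uint8Array): string {
    const hash = createHash('sha256').update(data).digest('hex');
    this.blobs.set(hash, data);
    return hash;
  }

  get(hash: string): Uint8Array | undefined {
    return this.blobs.get(hash);
  }

  get count(): number {
    return this.blobs.size;
  }
}

const bs = new BlobStore();
const a = bs.put(new Uint8Array([1, 2, 3]));
const b = bs.put(new Uint8Array([1, 2, 3])); // same bytes → same hash, no new entry
// a === b, and bs.count === 1: two puts, one stored blob
```

Content addressing also makes references safe to multicast: a hash either resolves to exactly the bytes it was computed from, or to nothing.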
### Pattern 3: Tree Data Sync (Tree Structures)

Tree data represents hierarchical structures converted from JavaScript objects using `treeFromObject()`.

#### Scenario: Client A creates tree, Client B retrieves entire tree

```text
┌──────────┐                  ┌──────────┐                  ┌──────────┐
│ Client A │                  │  Server  │                  │ Client B │
└────┬─────┘                  └────┬─────┘                  └────┬─────┘
     │                             │                             │
     │ 1. Create tree from object  │                             │
     │    const trees =            │                             │
     │      treeFromObject({       │                             │
     │        x: 10,               │                             │
     │        y: { z: 20 }         │                             │
     │      })                     │                             │
     │                             │                             │
     │ 2. Import tree data         │                             │
     │    clientA.import({         │                             │
     │      treeName: {            │                             │
     │        _type: 'trees',      │                             │
     │        _data: trees         │                             │
     │      }                      │                             │
     │    })                       │                             │
     │    (writes to local Io)     │                             │
     │                             │                             │
     │ 3. Get root ref             │                             │
     │    rootHash =               │                             │
     │      trees[trees.length-1]  │                             │
     │        ._hash               │                             │
     │                             │                             │
     │ 4. Broadcast root ref       │                             │
     │    socket.emit(route,       │                             │
     │      rootHash)              │                             │
     ├────────────────────────────►│                             │
     │                             │                             │
     │                             │ 5. Multicast to Client B    │
     │                             ├────────────────────────────►│
     │                             │                             │
     │                             │ 6. Client B: get by root    │
     │                             │◄────────────────────────────┤
     │                             │    db.get(route, {          │
     │                             │      _hash: rootHash        │
     │                             │    })                       │
     │                             │                             │
     │ 7. Server pulls tree nodes  │                             │
     │    via IoPeerBridge         │                             │
     │    (pulls ALL related nodes)│                             │
     │◄────────────────────────────┤                             │
     ├────────────────────────────►│                             │
     │    Returns tree nodes[]     │                             │
     │                             │                             │
     │                             │ 8. Server returns to B      │
     │                             ├────────────────────────────►│
     │                             │    Full tree structure      │
```

**Implementation Details:**

```typescript
// Client A: Convert object to tree structure
const treeObject = { x: 10, y: { z: 20 } };
const trees = treeFromObject(treeObject);
// trees = [
//   { id: 'x', meta: { value: 10 }, ... },
//   { id: 'y', isParent: true, children: ['z'], ... },
//   { id: 'z', meta: { value: 20 }, ... },
//   { id: 'root', isParent: true, children: ['x', 'y'], ... } ← Root
// ]

// Client A: Get root reference (last tree in array)
const rootTreeHash = trees[trees.length - 1]._hash;

// Client A: Create trees table and import
const treeCfg = createTreesTableCfg('myTree');
await clientA.createTables({ withInsertHistory: [treeCfg] });
await clientA.import({
  myTree: { _type: 'trees', _data: trees }
});

// Client B: Setup same table definition
await clientB.createTables({ withInsertHistory: [treeCfg] });

// Client B: Pull entire tree by root hash
const result = await dbB.get(Route.fromFlat('myTree'), {
  _hash: rootTreeHash
});
// Returns ALL tree nodes (x, y, z, root) in result.rljson.myTree._data
// Query flows: Client B → IoPeer → Server → IoPeer[A] → Client A
// Tree flows:  Client A → Server → Client B (all related nodes)
```

**Key Characteristics:**

- ✅ `treeFromObject()` converts JS objects to Tree[] arrays
- ✅ Root node is LAST element in trees array
- ✅ Query by root hash returns ALL related nodes (entire subtree)
- ✅ Trees table uses `createTreesTableCfg()` configuration
- ✅ Pull pattern: Server does NOT store tree (proxies to Client A)
- ✅ Efficient: Single query retrieves complete tree structure

**Tree Structure Details:**

```typescript
interface Tree {
  id: string;           // Unique identifier
  _hash: string;        // Content hash (reference)
  isParent?: boolean;   // Has children?
  children?: string[];  // Child node IDs
  meta?: {
    value?: any;        // Leaf value (for non-parent nodes)
    [key: string]: any; // Additional metadata
  };
}
```
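
To illustrate the root-last ordering, here is a simplified stand-in for `treeFromObject()`. It is not the real implementation: the `h:` prefix "hash" is a toy placeholder, and the real function computes content hashes and richer metadata.

```typescript
// Simplified stand-in for treeFromObject() (illustrative, not the real one).
// Flattens a plain object into Tree nodes, emitting children before their
// parents, so the root node is the LAST element of the returned array.
interface TreeNode {
  id: string;
  _hash: string; // toy placeholder, not a real content hash
  isParent?: boolean;
  children?: string[];
  meta?: { value?: unknown };
}

function flattenToTrees(obj: Record<string, unknown>): TreeNode[] {
  const out: TreeNode[] = [];

  const visit = (id: string, value: unknown): string => {
    if (value !== null && typeof value === 'object') {
      // Recurse first: children land in `out` before this parent does.
      const children = Object.entries(value as Record<string, unknown>).map(
        ([k, v]) => visit(k, v),
      );
      out.push({ id, _hash: `h:${id}`, isParent: true, children });
    } else {
      out.push({ id, _hash: `h:${id}`, meta: { value } });
    }
    return id;
  };

  visit('root', obj);
  return out; // e.g. { x: 10, y: { z: 20 } } → [x, z, y, root]
}

const trees = flattenToTrees({ x: 10, y: { z: 20 } });
const root = trees[trees.length - 1];
```

Emitting children first is what makes `trees[trees.length - 1]` a reliable way to grab the root reference, as the examples above do.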

## Data Distribution Patterns

### Pattern 1: Client-to-Client via Server (Pull Pattern)

When Client A creates/modifies data that other clients need to access:

```text
┌──────────┐                  ┌──────────┐                  ┌──────────┐
│ Client A │                  │  Server  │                  │ Client B │
└────┬─────┘                  └────┬─────┘                  └────┬─────┘
     │                             │                             │
     │ 1. insert(route, data)      │                             │
     │    (writes to local Io)     │                             │
     │                             │                             │
     │                             │ 2. Client B: get(route, ref)│
     │                             │◄────────────────────────────┤
     │                             │    (via IoPeer)             │
     │                             │                             │
     │ 3. Server's IoMulti cascade │                             │
     │    (automatic via priority) │                             │
     │    reads from Client A      │                             │
     │    via IoPeerBridge         │                             │
     │◄────────────────────────────┤                             │
     ├────────────────────────────►│                             │
     │    Returns data             │                             │
     │                             │                             │
     │                             │ 4. Data flows back to B     │
     │                             │    (Client A → Server → B)  │
     │                             ├────────────────────────────►│
```

### Pattern 2: Notification Broadcasting

For real-time updates, the server multicasts references between clients:

```text
┌──────────┐                  ┌──────────┐                  ┌──────────┐
│ Client A │                  │  Server  │                  │ Client B │
└────┬─────┘                  └────┬─────┘                  └────┬─────┘
     │                             │                             │
     │ 1. socket.emit(route, ref)  │                             │
     ├────────────────────────────►│                             │
     │                             │                             │
     │                             │ 2. Multicast to others      │
     │                             │    (adds __origin marker)   │
     │                             ├────────────────────────────►│
     │                             │                             │
     │                             │ 3. Client B receives ref    │
     │                             │    and can fetch the data   │
```

**Multicast Logic:**

- Server listens on the route for all connected clients
- When Client A emits on a route, the server forwards to all OTHER clients
- `__origin` marker prevents infinite loops
- Deduplication via the two-generation multicasted-refs sets
- **References are broadcast, data is pulled on-demand**

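The forwarding rule (skip the originator, tag the origin, deduplicate refs) can be sketched as below. The message shape and the outbound-queue model are illustrative assumptions, not the actual socket wiring.

```typescript
// Sketch of the multicast rule (illustrative, not the real server code):
// forward a ref to every client except the sender, tag it with __origin,
// and drop echoes and duplicates.
type RefMessage = { ref: string; __origin?: string };

function multicast(
  msg: RefMessage,
  fromClientId: string,
  clients: Map<string, RefMessage[]>, // clientId → outbound queue
  seenRefs: Set<string>,
): void {
  // A message carrying an __origin marker was already forwarded once:
  // re-forwarding it would create an echo loop.
  if (msg.__origin !== undefined) return;
  // Drop refs we have already multicast.
  if (seenRefs.has(msg.ref)) return;
  seenRefs.add(msg.ref);

  clients.forEach((queue, clientId) => {
    if (clientId === fromClientId) return; // never echo back to the sender
    queue.push({ ref: msg.ref, __origin: fromClientId });
  });
}

const clients = new Map<string, RefMessage[]>([
  ['A', []],
  ['B', []],
  ['C', []],
]);
const seenRefs = new Set<string>();
multicast({ ref: 'abc123' }, 'A', clients, seenRefs);
// B and C each receive { ref: 'abc123', __origin: 'A' }; A receives nothing.
multicast({ ref: 'abc123' }, 'B', clients, seenRefs); // duplicate → no sends
```

Only the hash crosses the wire here; receivers decide independently whether and when to pull the payload.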
### Pattern 3: Server as Data Proxy (Not Storage)

Important: The server does NOT store client data by default.

**Incorrect Pattern (Push):**

```typescript
// ❌ WRONG: Server should NOT import client data
await server.import(clientData); // Server becomes storage layer
```

**Correct Pattern (Pull):**

```typescript
// ✅ CORRECT: Client stores, server proxies on-demand
await clientA.import(data); // Client A stores locally
// Server reads from Client A via IoPeerBridge only when Client B requests it
const result = await dbB.get(route, { _hash: ref });
// Server pulls from Client A dynamically
```

**When Server SHOULD Store Data:**

- ✅ Shared configuration data all clients need
- ✅ Reference data (lookup tables, constants)
- ✅ Bootstrapping data for new clients
- ❌ NOT for client-specific operational data

### Pattern 4: Reference Passing Between Clients

The most efficient pattern for distributed access:

```text
1. Client A creates data → returns references (hashes)
2. Client A broadcasts references (not data) to the server
3. Server multicasts references to Client B
4. Client B receives references
5. Client B queries by reference when needed
6. Server pulls actual data from Client A on-demand
7. Data flows: Client A → Server → Client B (only when requested)
```

**Benefits:**

- ✅ Minimal network traffic (only refs broadcast)
- ✅ Data pulled only when needed
- ✅ No stale data (always pull latest from source)
- ✅ Source of truth remains at Client A

## Complete Integration Examples

### Example 1: Io Data (Cake Table) - Complete Flow

```typescript
// Setup: All parties create table definitions
const cakeCfg = {
  name: 'carCake',
  cfg: { _type: 'cake', columns: ['brand', 'model'] }
};

await server.createTables({ withInsertHistory: [cakeCfg] });
await clientA.createTables({ withInsertHistory: [cakeCfg] });
await clientB.createTables({ withInsertHistory: [cakeCfg] });

// When a route was passed to the Client constructor, Db is available directly:
const dbA = clientA.db!;
const dbB = clientB.db!;

// Client A: Insert data (stores locally)
const route = Route.fromFlat('carCake');
const insertResult = await dbA.insert(route, [{
  brand: 'Tesla',
  model: 'Model S'
}]);
const carRef = insertResult[0]._hash;

// Client A: Broadcast reference
clientA.socket.emit(route.flat, carRef);

// Client B: Listen for the reference
clientB.socket.on(route.flat, async (ref) => {
  // Pull data by reference
  const result = await dbB.get(route, { _hash: ref });
  console.log(result.rljson.carCake._data[0]);
  // { brand: 'Tesla', model: 'Model S', _hash: '...' }
});
```

### Example 2: Bs Data (Blob) - Complete Flow

```typescript
// Setup: All parties initialize blob storage (BsMulti)
// Already done via client.init() and server.init()

// Client A: Store blob locally
const imageData = new Uint8Array([255, 216, 255 /* ... */]); // JPEG bytes
const blobHash = await clientA.bs!.put(imageData);

// Client A: Store blob reference in Io table
const fileRoute = Route.fromFlat('images');
const insertResult = await dbA.insert(fileRoute, [{
  fileName: 'photo.jpg',
  blobRef: blobHash,
  size: imageData.length,
  mimeType: 'image/jpeg'
}]);
const fileRecordRef = insertResult[0]._hash;

// Client A: Broadcast file record reference
clientA.socket.emit(fileRoute.flat, fileRecordRef);

// Client B: Receive reference and pull blob
clientB.socket.on(fileRoute.flat, async (ref) => {
  // 1. Get file metadata from Io
  const fileRecord = await dbB.get(fileRoute, { _hash: ref });
  const blobRef = fileRecord.rljson.images._data[0].blobRef;

  // 2. Pull actual blob from Bs
  const downloaded = await clientB.bs!.get(blobRef);
  console.log(`Downloaded ${downloaded.length} bytes`);

  // 3. Optional: Cache locally (hot-swap)
  await clientB.bs!.put(downloaded); // Now in Client B's local Bs
});
```

### Example 3: Tree Data - Complete Flow

```typescript
// Setup: Create trees table configuration
const treeCfg = createTreesTableCfg('projectTree');
await server.createTables({ withInsertHistory: [treeCfg] });
await clientA.createTables({ withInsertHistory: [treeCfg] });
await clientB.createTables({ withInsertHistory: [treeCfg] });

// When a route was passed to the Client constructor, Db is available directly:
const dbA = clientA.db!;
const dbB = clientB.db!;

// Client A: Create tree from object
const projectData = {
  name: 'MyApp',
  version: '1.0.0',
  dependencies: {
    react: '18.0.0',
    typescript: '5.0.0'
  },
  scripts: {
    build: 'tsc',
    test: 'vitest'
  }
};

const trees = treeFromObject(projectData);
const rootHash = trees[trees.length - 1]._hash;

// Client A: Import tree (stores locally)
await clientA.import({
  projectTree: { _type: 'trees', _data: trees }
});

// Client A: Broadcast root reference
const treeRoute = Route.fromFlat('projectTree');
clientA.socket.emit(treeRoute.flat, rootHash);

// Client B: Receive reference and pull entire tree
clientB.socket.on(treeRoute.flat, async (rootRef) => {
  // Pull entire tree by root hash
  const result = await dbB.get(treeRoute, { _hash: rootRef });
  const treeNodes = result.rljson.projectTree._data;

  console.log(`Received ${treeNodes.length} tree nodes`);
  // Includes: name, version, dependencies, react, typescript,
  //           scripts, build, test, root

  // Navigate tree structure
  const root = treeNodes.find(n => n._hash === rootRef);
  console.log(`Root children: ${root.children}`);
});
```

## Performance Considerations

### IoMulti/BsMulti Query Optimization

**Priority-based short-circuiting:**

```typescript
// Query: get(route, where)
// 1. Check priority 1 (local)            → Found? Return immediately ✓
// 2. Check priority 2 (IoPeer)           → Found? Return immediately ✓
// 3. Check priority 3 (additional peers) → And so on...
```

**Best Practices:**

- ✅ Cache frequently accessed data locally (hot-swapping)
- ✅ Use specific queries (`{ _hash: ref }`) instead of broad scans
- ✅ Minimize priority 2+ queries by pre-loading critical data
- ❌ Avoid scanning large tables without where clauses

### Blob Storage Optimization

**Deduplication:**

- Same content = same hash
- Multiple references to same blob = single storage

**Streaming (Future):**

- Large blobs can be streamed via `getStream()`
- Partial retrieval via `get(hash, range)`

### Tree Query Optimization

**Single Query for Entire Tree:**

- Query by root hash returns ALL related nodes
- No need for recursive queries
- Efficient for hierarchical data

**Tree Caching:**

```typescript
// After the first pull, the tree is available locally
await dbB.get(route, { _hash: rootRef }); // Pulls from Client A
await dbB.get(route, { _hash: rootRef }); // Reads from local cache
```

## Consistency Model (Db layer)

The `Db` class operates on top of `IoMulti`, providing distributed data access:

```text
┌────────────────────────────────────────┐
│                Client A                │
│  ┌──────────────────────────────────┐  │
│  │ Db (dbA)                         │  │
│  │   ↓                              │  │
│  │ IoMulti                          │  │
│  │  ├─ Local Io (priority 1)        │  │
│  │  └─ IoPeer → Server (priority 2) │  │
│  └──────────────────────────────────┘  │
└────────────────────────────────────────┘

┌────────────────────────────────────────┐
│                 Server                 │
│  ┌──────────────────────────────────┐  │
│  │ IoMulti                          │  │
│  │  ├─ Local Io (priority 1)        │  │
│  │  ├─ IoPeer[A] (priority 2)       │  │
│  │  └─ IoPeer[B] (priority 2)       │  │
│  └──────────────────────────────────┘  │
└────────────────────────────────────────┘

┌────────────────────────────────────────┐
│                Client B                │
│  ┌──────────────────────────────────┐  │
│  │ Db (dbB)                         │  │
│  │   ↓                              │  │
│  │ IoMulti                          │  │
│  │  ├─ Local Io (priority 1)        │  │
│  │  └─ IoPeer → Server (priority 2) │  │
│  └──────────────────────────────────┘  │
└────────────────────────────────────────┘
```

**Operations:**

**db.insert(route, data):**

- Writes to local Io only (via IoMulti's priority 1 layer)
- Returns `InsertHistoryRow[]` with refs
- Data remains local until the server reads it via IoPeerBridge

**db.get(route, where):**

- Searches local Io first (priority 1)
- Falls back to the server (priority 2) if not found locally
- The server's IoMulti includes data from all connected clients
- Returns a `Container` with rljson, tree, and cell data

## Consistency Model

### Local-First Guarantees

1. **Writes are local**: All write operations go to local storage only
2. **Reads are prioritized**: Local data is always checked first
3. **Server as fallback**: Server data is accessed when not available locally
4. **Hot-swapping**: When data is read from the server, it can be cached locally

872
+ ### Data Visibility and Access Patterns
873
+
874
+ **What Client A can see:**
875
+
876
+ - ✅ Its own local Io data (priority 1)
877
+ - ✅ Its own local Bs blobs (priority 1)
878
+ - ✅ Server's local data (priority 2) if server has any
879
+ - ✅ Other clients' data via server (priority 2) - **pulled automatically on-demand**
880
+ - When Client A queries by reference, server checks its cache (priority 1)
881
+ - If not in server cache, server automatically pulls from Client B (priority 2)
882
+ - This happens transparently through IoMulti's priority system
883
+
884
+ **What Client A cannot see:**
885
+
886
+ - ❌ Data without a valid reference (hash) to query by
887
+ - ❌ Data from disconnected clients (no IoPeer connection)
888
+ - ❌ Data that hasn't been imported/inserted anywhere in the network
889
+
890
+ **What Server can see:**
891
+
892
+ - ✅ Its own local Io data (priority 1)
893
+ - ✅ All connected clients' data (priority 2+) via IoPeerBridge
894
+ - ✅ **Server acts as aggregator** - sees union of all client data
895
+
896
+ ### Data Flow Guarantees
897
+
898
+ **Io Data (Tables):**
899
+
900
+ - Writes: Always to local Io only
901
+ - Reads: Priority 1 (local) → Priority 2 (server/peers)
902
+ - Consistency: Eventually consistent via pull
903
+ - References: Content-addressed by hash
904
+
905
+ **Bs Data (Blobs):**
906
+
907
+ - Writes: Always to local Bs only
908
+ - Reads: Priority 1 (local) → Priority 2 (server/peers)
909
+ - Deduplication: Same hash = same content
910
+ - References: Content-addressed by hash
911
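Content addressing — identical bytes always hashing to the identical reference — is what makes the deduplication guarantee above safe. A sketch using Node's `crypto`; the `contentRef` and `putBlob` helpers are hypothetical, and the package's actual hashing scheme may differ.

```typescript
import { createHash } from 'node:crypto';

// Derive a content-addressed reference: same content ⇒ same ref.
// Hypothetical helper — not the package's real hash function.
function contentRef(content: Buffer): string {
  return createHash('sha256').update(content).digest('hex');
}

// A dedup-aware blob store: storing the same bytes twice keeps one copy.
const blobs = new Map<string, Buffer>();
function putBlob(content: Buffer): string {
  const ref = contentRef(content);
  if (!blobs.has(ref)) blobs.set(ref, content); // repeat write is a no-op
  return ref;
}

const r1 = putBlob(Buffer.from('hello'));
const r2 = putBlob(Buffer.from('hello')); // same ref, no second copy stored
```

The same property makes refs safe to pass between peers: a ref either resolves to exactly the announced content or to nothing.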
+
912
+ **Tree Data:**
913
+
914
+ - Storage: In Io layer as special 'trees' type
915
+ - Queries: By root hash → returns all related nodes
916
+ - Structure: Hierarchical parent-child relationships
917
+ - References: Root hash identifies entire tree
918
+
919
+ ### Synchronization
920
+
921
+ **No automatic sync**: The system does not automatically replicate writes between clients.
922
+
923
+ **Pull-based sync patterns:**
924
+
925
+ 1. **Via References**: Client A broadcasts ref → Client B pulls data by ref
926
+ 2. **Via Server Proxy**: Client B queries → Server pulls from Client A on-demand
927
+ 3. **Via IoPeerBridge/BsPeerBridge**: Exposing local storage to server for reading
928
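All three pull patterns rest on the same wire primitive: a tiny ref announcement that the server fans out to everyone except the sender. A deterministic in-memory sketch (no sockets); `fanOut` and `FakeClient` are hypothetical, though the `{ o, r }` payload shape matches the sync protocol section below.

```typescript
// Each announcement carries the sender's ephemeral origin so the server
// can filter self-echoes — shape follows the { o, r } ConnectorPayload.
type RefAnnouncement = { o: string; r: string };

type FakeClient = { origin: string; received: string[] };

// Server-side fan-out: forward the ref to every client except its origin.
function fanOut(clients: FakeClient[], msg: RefAnnouncement): void {
  for (const client of clients) {
    if (client.origin === msg.o) continue; // self-echo filtering
    client.received.push(msg.r);          // only the hash travels, never the payload
  }
}

const a: FakeClient = { origin: 'origin-a', received: [] };
const b: FakeClient = { origin: 'origin-b', received: [] };
fanOut([a, b], { o: 'origin-a', r: 'ref-123' }); // A announces; only B is notified
```

On receipt, a client decides whether to pull the data for `ref-123` — the announcement itself carries no payload.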
+
929
+ **Key Differences from Push-based Sync:**
930
+
931
+ | Aspect | Pull-based (rljson) | Push-based (traditional) |
932
+ | --------------- | --------------------------- | ------------------------- |
933
+ | Data movement | On-demand via query | Automatic replication |
934
+ | Network traffic | Minimal (refs only) | High (all data) |
935
+ | Staleness | Always fresh (pulls latest) | Possible (stale replicas) |
936
+ | Storage | Single source of truth | Multiple copies |
937
+ | Bandwidth | Low (pull when needed) | High (push all changes) |
938
+ | Consistency | Eventually consistent | Strong/eventual |
939
+
940
+ ## Architecture Comparison: Io vs Bs vs Tree
941
+
942
+ | Feature | Io Data | Bs Data | Tree Data |
943
+ | ------------------ | ------------------------- | ------------------------- | ------------------------- |
944
+ | **Storage Layer** | IoMulti (Io + IoPeer[]) | BsMulti (Bs + BsPeer[]) | IoMulti (special type) |
945
+ | **Data Type** | Tables, rows, columns | Binary blobs | Hierarchical nodes |
946
+ | **Content Type** | 'cake', 'cell', custom | Raw bytes | 'trees' |
947
+ | **Query Method** | `db.get(route, where)` | `bs.get(hash)` | `db.get(route, {_hash})` |
948
+ | **Reference Type** | Row hash (`_hash`)        | Blob hash                 | Root node hash (`_hash`)  |
949
+ | **Write Target** | Priority 1 (local Io) | Priority 1 (local Bs) | Priority 1 (local Io) |
950
+ | **Read Priority** | 1: Local, 2: Server+Peers | 1: Local, 2: Server+Peers | 1: Local, 2: Server+Peers |
951
+ | **Deduplication** | By content hash | By content hash | By content hash |
952
+ | **Query Result** | Matching rows | Single blob | All related nodes |
953
+ | **Table Config** | `createTableCfg()` | N/A | `createTreesTableCfg()` |
954
+ | **Sync Pattern** | Pull by ref | Pull by ref | Pull by root ref |
955
+ | **Use Cases** | Structured data | Files, images, videos | JSON objects, configs |
956
+
957
+ ## Real-World Scenarios
958
+
959
+ ### Scenario 1: Collaborative Document Editing
960
+
961
+ ```text
962
+ Team working on shared documents:
963
+ - Each client has local document storage (Io data)
964
+ - Document edits create new versions (content-addressed)
965
+ - Editor broadcasts document ref to team
966
+ - Team members pull latest version by ref on-demand
967
+ - Server never stores documents (only proxies)
968
+ - Tree data represents document structure (headings, sections)
969
+ ```
970
+
971
+ ### Scenario 2: Media Sharing Application
972
+
973
+ ```text
974
+ Users sharing photos/videos:
975
+ - Photos stored in local Bs (Client A)
976
+ - Photo metadata in Io table (title, tags, blobRef)
977
+ - User A uploads → stores locally, broadcasts ref
978
+ - User B sees notification → pulls blob by ref
979
+ - User B caches blob locally (hot-swap)
980
+ - Server proxies blob from A to B (doesn't store)
981
+ ```
982
+
983
+ ### Scenario 3: Configuration Management
984
+
985
+ ```text
986
+ Application configuration distribution:
987
+ - Config as JSON object → converted to Tree
988
+ - Config stored on admin client (Client A)
989
+ - Root ref broadcast to all clients
990
+ - Clients pull config tree by root ref on-demand
991
+ - Changes create new tree → new root ref
992
+ - Clients update by pulling new root ref
993
+ ```
994
+
995
+ ## Lifecycle
996
+
997
+ ### Client Initialization
998
+
999
+ ```typescript
1000
+ // With route: Db and Connector are created automatically during init()
1001
+ const client = new Client(socket, localIo, localBs, route);
1002
+ await client.init(); // Sets up IoMulti, BsMulti, Db, and Connector
1003
+ await client.ready(); // Waits for IoMulti to be ready
1004
+
1005
+ const db = client.db!; // Db wrapping IoMulti
1006
+ const connector = client.connector!; // Connector wired to route + socket
1007
+
1008
+ // With logging:
1009
+ import { BufferedLogger } from '@rljson/server';
1010
+ const logger = new BufferedLogger();
1011
+ const loggedClient = new Client(socket, localIo, localBs, route, { logger });
1012
+ await loggedClient.init();
1013
+ // logger.entries now contains lifecycle events:
1014
+ // Constructing client, Initializing client, Setting up Io multi,
1015
+ // Io peer bridge started, Io multi ready, Setting up Bs multi, ...
1016
+
1017
+ // Without route (legacy): only IoMulti and BsMulti are created
1018
+ const legacyClient = new Client(socket, localIo, localBs);
1019
+ await legacyClient.init();
1020
+ const legacyDb = new Db(legacyClient.io!); // Caller creates Db manually
1021
+ ```
1022
+
1023
+ ### Server Initialization
1024
+
1025
+ ```typescript
1026
+ const server = new Server(route, serverIo, serverBs);
1027
+ await server.init(); // Sets up IoMulti and BsMulti
1028
+
1029
+ // When clients connect:
1030
+ await server.addSocket(socket); // Rebuilds multis with new IoPeer
1031
+ ```
1032
+
1033
+ ### Adding a Client
1034
+
1035
+ When `server.addSocket(socket)` is called:
1036
+
1037
+ 1. **Create IoPeer/BsPeer**: Establish connection to client
1038
+ 2. **Queue peers**: Add to `_ios` and `_bss` arrays
1039
+ 3. **Rebuild multis**: Recreate IoMulti/BsMulti with all peers
1040
+ 4. **Refresh servers**: Update IoServer/BsServer with new multis
1041
+ 5. **Setup multicast**: Register listeners for route broadcasting
1042
+
1043
+ Each step is logged at `info` level. Errors in any step are logged at `error` level and re-thrown.
1044
+
1045
+ ### Teardown
1046
+
1047
+ ```typescript
1048
+ await client.tearDown(); // Closes IoMulti, clears Db, Connector, and all state
1049
+ ```
1050
+
1051
+ ## Testing Patterns
1052
+
1053
+ ### Distributed Get Pattern (Server Data)
1054
+
1055
+ ```typescript
1056
+ // Use case: Server has shared reference data
1057
+ await server.createTables({ withInsertHistory: tableCfgs });
1058
+ await server.import(exampleData);
1059
+
1060
+ // Clients need table definitions
1061
+ await clientA.createTables({ withInsertHistory: tableCfgs });
1062
+ await clientB.createTables({ withInsertHistory: tableCfgs });
1063
+
1064
+ // Client A can read server data (priority 2)
1065
+ const dataFromA = await dbA.get(route, where);
1066
+
1067
+ // Client B can read the same server data (priority 2)
1068
+ const dataFromB = await dbB.get(route, where);
1069
+
1070
+ // Both see identical data from server
1071
+ ```
1072
+
1073
+ ### Client-to-Client Pattern (Pull via Server)
1074
+
1075
+ ```typescript
1076
+ // Setup: All parties need table definitions
1077
+ await server.createTables({ withInsertHistory: tableCfgs });
1078
+ await clientA.createTables({ withInsertHistory: tableCfgs });
1079
+ await clientB.createTables({ withInsertHistory: tableCfgs });
1080
+
1081
+ // Client A creates local data
1082
+ await clientA.import(localData);
1083
+
1084
+ // Client A sees its local data (priority 1)
1085
+ const dataFromA = await dbA.get(route, where);
1086
+ const ref = dataFromA.rljson.tableName._data[0]._hash;
1087
+
1088
+ // Client B CAN see Client A's data by reference
1089
+ // Server's IoMulti automatically cascades to Client A
1090
+ const dataFromB = await dbB.get(route, { _hash: ref });
1091
+ // Query: Client B → IoPeer → Server IoMulti → IoPeer[A] → Client A
1092
+ // Data flows back: Client A → Server → Client B
1093
+
1094
+ expect(dataFromB.rljson.tableName._data[0]._hash).toBe(ref);
1095
+ ```
1096
+
1097
+ ### Local-Only Pattern (No Reference Query)
1098
+
1099
+ ```typescript
1100
+ // Client A creates local data
1101
+ await clientA.createTables({ withInsertHistory: tableCfgs });
1102
+ await clientA.import(localData);
1103
+
1104
+ // Client B has no reference to query by
1105
+ await clientB.createTables({ withInsertHistory: tableCfgs });
1106
+
1107
+ // Client B cannot discover Client A's data without a reference
1108
+ // Broad queries won't automatically sync all data
1109
+ await expect(dbB.get(route, {})).rejects.toThrow();
1110
+ // Or returns empty result if table exists but no data locally
1111
+ ```
1112
+
1113
+ ## Key Design Decisions
1114
+
1115
+ ### Why Local-First?
1116
+
1117
+ - **Offline capability**: Clients work without server connection
1118
+ - **Low latency**: Read/write operations are fast (no network)
1119
+ - **Data ownership**: Clients control their own data
1120
+ - **Flexible sync**: Sync on-demand, not automatically
1121
+
1122
+ ### Why Read-Only Peers?
1123
+
1124
+ - **Simplicity**: No conflict resolution needed
1125
+ - **Safety**: Prevents accidental cross-client writes
1126
+ - **Clear semantics**: Local writes, remote reads
1127
+ - **Scalability**: Server doesn't manage write transactions
1128
+
1129
+ ### Why Priority-Based Multi?
1130
+
1131
+ - **Predictable behavior**: Always check local first
1132
+ - **Flexibility**: Add multiple data sources
1133
+ - **Performance**: Short-circuit on local hits
1134
+ - **Composability**: Easy to add new layers
1135
+
1136
+ ## Related Packages
1137
+
1138
+ - **@rljson/io**: Io, IoMulti, IoPeer, IoPeerBridge, IoServer
1139
+ - **@rljson/bs**: Bs, BsMulti, BsPeer, BsPeerBridge, BsServer
1140
+ - **@rljson/db**: Db operations (insert, get, join, etc.)
1141
+ - **@rljson/rljson**: Data structures (Route, TableCfg, etc.)
1142
+
1143
+ ## Observability
1144
+
1145
+ ### Structured Logging
1146
+
1147
+ Both `Server` and `Client` accept an optional `ServerLogger` via their options parameter. The logger is called at every significant lifecycle point, error boundary, and network traffic event.
1148
+
1149
+ **Logger interface:**
1150
+
1151
+ ```typescript
1152
+ interface ServerLogger {
1153
+ info(source: string, message: string, data?: Record<string, unknown>): void;
1154
+ warn(source: string, message: string, data?: Record<string, unknown>): void;
1155
+ error(source: string, message: string, error?: unknown, data?: Record<string, unknown>): void;
1156
+ traffic(direction: 'in' | 'out', source: string, event: string, data?: Record<string, unknown>): void;
1157
+ }
1158
+ ```
1159
+
1160
+ **What gets logged:**
1161
+
1162
+ | Phase | Source | Level | Events |
1163
+ | --------------- | ------------------------- | ------- | ------------------------------------------- |
1164
+ | Construction | `Server` / `Client` | info | Route, options |
1165
+ | Initialization | `Server` / `Client` | info | Start, success |
1166
+ | Io/Bs setup | `Client.Io` / `Client.Bs` | info | Multi creation, peer bridges, peer creation |
1167
+ | Peer creation | `Server.Io` / `Server.Bs` | info | Per-client peer setup |
1168
+ | Multi rebuild | `Server` | info | Peer count, rebuild success |
1169
+ | Server refresh | `Server` | info | Pending socket count, completion |
1170
+ | Multicast in | `Server.Multicast` | traffic | Ref, sender clientId |
1171
+ | Multicast out | `Server.Multicast` | traffic | Ref, sender clientId, receiver clientId |
1172
+ | Duplicate ref | `Server.Multicast` | warn | Ref, sender |
1173
+ | Loop prevention | `Server.Multicast` | warn | Ref, origin, sender |
1174
+ | Any failure | Various | error | Error object, context data |
1175
+ | TearDown | `Client` | info | Start, completion |
1176
+ | Socket removal | `Server` | info | Removing, rebuilding multis, removal done |
1177
+ | Server tearDown | `Server` | info | Tearing down, timer stop, completion |
1178
+ | Disconnect | `Server` | info | Client disconnected, auto-removal |
1179
+
1180
+ **Built-in implementations:**
1181
+
1182
+ - `NoopLogger` — zero overhead, used by default
1183
+ - `ConsoleLogger` — `console.log`/`warn`/`error` with formatted prefixes
1184
+ - `BufferedLogger` — in-memory array with `byLevel()`, `bySource()`, `clear()` helpers
1185
+ - `FilteredLogger` — wraps another logger, filters by `levels` and/or `sources`
1186
+
1187
+ **Production recommendation:** Use `FilteredLogger` wrapping your framework's logger, filtering to `['error', 'warn']` levels. Enable `traffic` level only for debugging multicast issues.
1188
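The `ServerLogger` interface is small enough to wrap by hand. Below is a hypothetical level filter in the spirit of `FilteredLogger` — not its actual implementation — forwarding only selected levels to an inner logger; `withLevels` and the in-memory `sink` are illustrative names.

```typescript
type Level = 'info' | 'warn' | 'error' | 'traffic';

// The interface from the section above.
interface ServerLogger {
  info(source: string, message: string, data?: Record<string, unknown>): void;
  warn(source: string, message: string, data?: Record<string, unknown>): void;
  error(source: string, message: string, error?: unknown, data?: Record<string, unknown>): void;
  traffic(direction: 'in' | 'out', source: string, event: string, data?: Record<string, unknown>): void;
}

// In-memory sink, like a minimal BufferedLogger.
const entries: { level: Level; source: string; message: string }[] = [];
const sink: ServerLogger = {
  info: (source, message) => void entries.push({ level: 'info', source, message }),
  warn: (source, message) => void entries.push({ level: 'warn', source, message }),
  error: (source, message) => void entries.push({ level: 'error', source, message }),
  traffic: (_dir, source, event) => void entries.push({ level: 'traffic', source, message: event }),
};

// Hypothetical level filter: drop everything not in `levels`.
function withLevels(inner: ServerLogger, levels: Level[]): ServerLogger {
  const on = (l: Level) => levels.includes(l);
  return {
    info: (s, m, d) => { if (on('info')) inner.info(s, m, d); },
    warn: (s, m, d) => { if (on('warn')) inner.warn(s, m, d); },
    error: (s, m, e, d) => { if (on('error')) inner.error(s, m, e, d); },
    traffic: (dir, s, ev, d) => { if (on('traffic')) inner.traffic(dir, s, ev, d); },
  };
}

const prodLogger = withLevels(sink, ['error', 'warn']); // production recommendation
prodLogger.info('Server', 'Initializing');       // filtered out
prodLogger.error('Server', 'Peer setup failed'); // kept
```

The same wrapper shape lets you route `error`/`warn` into a framework logger while discarding the chatty `info` and `traffic` events.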
+
1189
+ ## Sync Protocol (opt-in hardening)
1190
+
1191
+ The server supports an optional sync protocol that provides production-grade guarantees on top of the basic multicast mechanism. It is enabled by passing `syncConfig` in `ServerOptions`.
1192
+
1193
+ ### Architecture
1194
+
1195
+ ```text
1196
+ Client A (Connector) Server Client B (Connector)
1197
+ ──────────────────── ────── ────────────────────
1198
+ send(ref) →
1199
+ enriches payload:
1200
+ {o, r, c?, t?, seq?, p?}
1201
+ ────emit(route)───►
1202
+ ┌─ append to ref log
1203
+ ├─ setup ACK collection
1204
+ ├─ forward to Client B ──emit(route)──►
1205
+ │ processIncoming()
1206
+ │ ◄──ackClient──
1207
+ ├─ collect ackClient
1208
+ ├─ emit aggregated ACK
1209
+ ◄───ack────────────┘
1210
+ ```
1211
+
1212
+ ### Wire format reference
1213
+
1214
+ All sync payloads are JSON objects. The types are defined in `@rljson/rljson` (Layer 0) and used unchanged across all layers.
1215
+
1216
+ #### ConnectorPayload (bidirectional, event: `${route}`)
1217
+
1218
+ The main wire message between Connector and Server. Two required fields provide backward compatibility; optional fields activate based on `SyncConfig` flags.
1219
+
1220
+ | Field | Type | Required | Activated by | Purpose |
1221
+ | ------- | ----------------------- | -------- | ----------------------- | ---------------------------------------------------- |
1222
+ | `r` | `string` | ✅ | always | The ref (InsertHistory timeId) being announced |
1223
+ | `o` | `string` | ✅ | always | Ephemeral origin of the sender (self-echo filtering) |
1224
+ | `c` | `ClientId` | ❌ | `includeClientIdentity` | Stable client identity (survives reconnections) |
1225
+ | `t` | `number` | ❌ | `includeClientIdentity` | Client-side wall-clock timestamp (ms since epoch) |
1226
+ | `seq` | `number` | ❌ | `causalOrdering` | Monotonic counter per (client, route) pair |
1227
+ | `p` | `InsertHistoryTimeId[]` | ❌ | `causalOrdering` | Causal predecessor timeIds |
1228
+ | `cksum` | `string` | ❌ | — | Content checksum for ACK verification |
1229
+
1230
+ Minimal payload (no SyncConfig): `{ o: "...", r: "..." }`
1231
+
1232
+ Full payload (all flags): `{ o, r, c, t, seq, p }`
1233
+
1234
+ #### AckPayload (Server → Client, event: `${route}:ack`)
1235
+
1236
+ | Field | Type | Required | Purpose |
1237
+ | -------------- | --------- | -------- | ------------------------------------------------------------- |
1238
+ | `r` | `string` | ✅ | The ref being acknowledged |
1239
+ | `ok` | `boolean` | ✅ | `true` if all clients confirmed; `false` on timeout / partial |
1240
+ | `receivedBy` | `number` | ❌ | Count of clients that confirmed receipt |
1241
+ | `totalClients` | `number` | ❌ | Total receiver clients at broadcast time |
1242
+
1243
+ #### GapFillRequest (Client → Server, event: `${route}:gapfill:req`)
1244
+
1245
+ | Field | Type | Required | Purpose |
1246
+ | ------------- | --------------------- | -------- | ------------------------------------------ |
1247
+ | `route` | `string` | ✅ | The route for which refs are missing |
1248
+ | `afterSeq` | `number` | ✅ | Last seq the client successfully processed |
1249
+ | `afterTimeId` | `InsertHistoryTimeId` | ❌ | Alternative anchor if seq unavailable |
1250
+
1251
+ #### GapFillResponse (Server → Client, event: `${route}:gapfill:res`)
1252
+
1253
+ | Field | Type | Required | Purpose |
1254
+ | ------- | -------------------- | -------- | ----------------------------------------------- |
1255
+ | `route` | `string` | ✅ | The route this response corresponds to |
1256
+ | `refs` | `ConnectorPayload[]` | ✅ | Ordered list of missing payloads (oldest first) |
1257
+
1258
+ #### Event name derivation
1259
+
1260
+ All event names are route-specific, derived by `syncEvents(route)` from `@rljson/rljson`:
1261
+
1262
+ | Property | Derived name | Direction |
1263
+ | ------------ | ------------------------ | --------------- |
1264
+ | `ref` | `"${route}"` | Bidirectional |
1265
+ | `ack` | `"${route}:ack"` | Server → Client |
1266
+ | `ackClient` | `"${route}:ack:client"` | Client → Server |
1267
+ | `gapFillReq` | `"${route}:gapfill:req"` | Client → Server |
1268
+ | `gapFillRes` | `"${route}:gapfill:res"` | Server → Client |
1269
+ | `bootstrap` | `"${route}:bootstrap"` | Server → Client |
1270
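The derivation table can be reproduced in a few lines. This is an illustrative stand-in for `syncEvents(route)` from `@rljson/rljson`, consistent with the table above; `syncEventNames` is a hypothetical name.

```typescript
// Derive the six route-specific event names — mirrors the table above.
// Illustrative stand-in for syncEvents(route) from @rljson/rljson.
function syncEventNames(route: string) {
  return {
    ref: route,                          // bidirectional
    ack: `${route}:ack`,                 // Server → Client
    ackClient: `${route}:ack:client`,    // Client → Server
    gapFillReq: `${route}:gapfill:req`,  // Client → Server
    gapFillRes: `${route}:gapfill:res`,  // Server → Client
    bootstrap: `${route}:bootstrap`,     // Server → Client
  } as const;
}

const events = syncEventNames('orders');
```

Because every name is derived from the route, two routes on the same socket never collide on event names.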
+
1271
+ #### SyncConfig flag activation matrix
1272
+
1273
+ | SyncConfig flag | Payload fields activated | Events activated |
1274
+ | ----------------------- | ------------------------------ | ------------------------------ |
1275
+ | _(none / default)_ | `o`, `r` | `${route}` only |
1276
+ | `causalOrdering` | + `seq`, `p` | + `gapfill:req`, `gapfill:res` |
1277
+ | `requireAck` | _(no extra fields)_ | + `ack`, `ack:client` |
1278
+ | `includeClientIdentity` | + `c`, `t` | _(no extra events)_ |
1279
+ | All flags combined | `o`, `r`, `c`, `t`, `seq`, `p` | All 6 events |
1280
+ | `maxDedupSetSize` | _(Connector-only setting)_ | _(no events)_ |
1281
+ | `bootstrapHeartbeatMs` | _(no extra fields)_ | + `bootstrap` (periodic) |
1282
+
1283
+ #### ClientId format
1284
+
1285
+ A `ClientId` is a `"client_"` prefix followed by a 12-character nanoid (e.g. `client_V1StGXR8_Z5j`). Unlike the ephemeral `origin` (which changes per Connector instantiation), a `ClientId` persists across reconnections and should be stored by the application.
1286
+
1287
+ ### Ref log (ring buffer)
1288
+
1289
+ The server maintains a bounded ring buffer of recent `ConnectorPayload` entries. When the buffer exceeds `refLogSize` (default: 1000), the oldest entry is dropped. The ref log serves as the data source for gap-fill responses.
1290
+
1291
+ ### ACK aggregation
1292
+
1293
+ When `requireAck` is enabled:
1294
+
1295
+ 1. **Before broadcast**: The server registers `ackClient` listeners on all receiver sockets.
1296
+ 2. **During broadcast**: Payloads are forwarded to all other clients.
1297
+ 3. **After broadcast**: The server waits for individual `ackClient` events from each receiver.
1298
+ 4. **On completion or timeout**: An aggregated `AckPayload` is emitted back to the sender on the `ack` event.
1299
+
1300
+ The ACK includes `receivedBy` (count of confirmed receivers) and `totalClients` (total receiver count). If all receivers confirm, `ok: true`; if timeout fires first, `ok: false`.
1301
+
1302
+ ### Gap-fill responder
1303
+
1304
+ When `causalOrdering` is enabled:
1305
+
1306
+ 1. The server listens for `gapfill:req` events from each client.
1307
+ 2. On request, it filters the ref log for payloads with `seq > afterSeq`.
1308
+ 3. The matching payloads are sent back on the `gapfill:res` event.
1309
+
1310
+ ### Bootstrap (late joiner support)
1311
+
1312
+ The server tracks the most recent ref seen on `_latestRef` (updated in `_multicastRefs` on every broadcast). This enables two mechanisms:
1313
+
1314
+ **Immediate bootstrap on connect:**
1315
+
1316
+ When `addSocket()` completes, the server calls `_sendBootstrap(ioDown)` which emits a `ConnectorPayload` with `o: '__server__'` and `r: _latestRef` on the `${route}:bootstrap` event. The Connector's `_registerBootstrapHandler()` feeds this into `_processIncoming()`, triggering listen callbacks and applying dedup automatically.
1317
+
1318
+ **Periodic heartbeat (optional):**
1319
+
1320
+ When `bootstrapHeartbeatMs > 0` in `SyncConfig`, `_startBootstrapHeartbeat()` starts an interval timer that calls `_broadcastBootstrapHeartbeat()` to emit the latest ref to all connected clients. The timer calls `.unref()` so it doesn't keep the process alive. `tearDown()` clears the timer.
1321
+
1322
+ ```text
1323
+ addSocket(socketB)
1324
+
1325
+ ├─ setup IoPeer, BsPeer, multicast listeners
1326
+ ├─ _sendBootstrap(ioDown) → emit(bootstrap, { o: '__server__', r: latestRef })
1327
+ └─ _startBootstrapHeartbeat() → setInterval(broadcastBootstrapHeartbeat, ms)
1328
+ ```
1329
+
1330
+ **Design decisions:**
1331
+
1332
+ - `_events` is always initialized (even without `syncConfig`) because bootstrap needs event names regardless of sync config
1333
+ - Bootstrap uses a dedicated event (`${route}:bootstrap`) rather than the main `${route}` event to avoid interfering with multicast payload processing
1334
+ - The `'__server__'` origin ensures no Connector treats bootstrap as a self-echo
1335
+
1336
+ ### Event registration lifecycle
1337
+
1338
+ - `_multicastRefs()` sets up all sync listeners (ref, ackClient, gapFillReq) per client.
1339
+ - `_removeAllListeners()` tears down all sync listeners (route, ackClient, gapFillReq).
1340
+ - `addSocket()` and `removeSocket()` trigger rebuild of all listeners.
1341
+ - `tearDown()` clears the ref log in addition to existing cleanup.
1342
+
1343
+ ### Client-side integration
1344
+
1345
+ The `Client` class accepts `syncConfig`, `clientIdentity`, and `peerInitTimeoutMs` in `ClientOptions`.
1346
+
1347
+ - **`peerInitTimeoutMs`** (default 30 s, 0 = disable): Guards `IoPeer` and `BsPeer` initialization during `init()` with a `Promise.race`-based timeout. If the server is unreachable, `init()` rejects cleanly instead of hanging indefinitely. Uses the same `_withTimeout()` pattern as the server.
1348
+ - **`syncConfig`** + **`clientIdentity`**: When a route is provided, these are passed through to the `Connector` constructor, activating enriched payloads (sequence numbers, causal ordering, client identity) on the client side.
1349
+ - **`tearDown()`**: Calls `connector.tearDown()` to remove all socket listeners before clearing internal references. This prevents leaked listeners that would keep the socket alive after the client is disposed.
1350
+
1351
+ ## Future Considerations
1352
+
1353
+ - **Write replication**: Automatically sync writes to server
1354
+ - **Conflict resolution**: Handle concurrent writes
1355
+ - **Change detection**: Notify on data changes
1356
+ - **Batch operations**: Optimize bulk transfers
1357
+ - **Compression**: Reduce network payload size
1358
+ - **Authentication hooks**: Verify client identity in `addSocket()`
1359
+ - **Connection health introspection**: Query connected client state, connection time, etc.
1360
+ - **Backpressure / rate limiting**: Protect against misbehaving clients flooding multicast
1361
+ - **Metrics / counters**: Numeric counters (connected clients, refs/sec) for monitoring dashboards