@rljson/server 0.0.4 → 0.0.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -10,6 +10,19 @@ found in the LICENSE file in the root of this package.
 
  @rljson/server provides a lightweight client/server layer for Rljson storage. It wires Io (row/table data) and Bs (blob storage) over sockets so clients can read from server storage while still keeping their own local storage.
 
+ ## Prerequisites
+
+ - Node.js v22.14.0 or newer
+ - A socket runtime (examples use Socket.IO)
+ - `Io`/`Bs` implementations (in-memory examples use `IoMem` and `BsMem`)
+
+ ## Design pillars
+
+ - **Local-first, read-through**: Writes stay on the caller; reads walk priorities (local first, then peers via server).
+ - **Pull by reference**: Only references (hashes) travel over the wire; data is pulled on demand through `IoMulti`/`BsMulti`.
+ - **Server as proxy**: The server aggregates and multicasts refs, but does not duplicate client data unless you intentionally store it there.
+ - **Single abstraction surface**: `Client.io`/`Client.bs` and `Server.io`/`Server.bs` expose merged multis so you do not handle peers manually.
+
  ## What it does (quick overview)
 
  - **Server** hosts Io + Bs and exposes them over sockets.
@@ -22,6 +35,60 @@ found in the LICENSE file in the root of this package.
  pnpm add @rljson/server
  ```
 
+ ## Quick start (Socket.IO example)
+
+ Server setup:
+
+ ```ts
+ import { createServer } from 'node:http';
+ import { Server as SocketIoServer } from 'socket.io';
+ import { BsMem } from '@rljson/bs';
+ import { IoMem } from '@rljson/io';
+ import { Route } from '@rljson/rljson';
+ import { Server, SocketIoBridge } from '@rljson/server';
+
+ const httpServer = createServer();
+ const socketIo = new SocketIoServer(httpServer);
+
+ const route = Route.fromFlat('my.app');
+ const server = new Server(route, new IoMem(), new BsMem());
+ await server.init();
+
+ // Or with logging enabled:
+ // import { ConsoleLogger } from '@rljson/server';
+ // const server = new Server(route, new IoMem(), new BsMem(), { logger: new ConsoleLogger() });
+
+ socketIo.on('connection', async (socket) => {
+   await server.addSocket(new SocketIoBridge(socket));
+ });
+
+ httpServer.listen(3000); // listen on the port the client connects to below
+ ```
+
+ Client setup:
+
+ ```ts
+ import { io as socketIoClient } from 'socket.io-client';
+ import { BsMem } from '@rljson/bs';
+ import { IoMem } from '@rljson/io';
+ import { Route } from '@rljson/rljson';
+ import { Client, SocketIoBridge } from '@rljson/server';
+
+ const socket = socketIoClient('http://localhost:3000', { forceNew: true });
+
+ // Pass the same route as the server to automatically create Db and Connector
+ const route = Route.fromFlat('my.app');
+ const client = new Client(new SocketIoBridge(socket), new IoMem(), new BsMem(), route);
+ await client.init();
+
+ // Or with logging enabled:
+ // import { ConsoleLogger } from '@rljson/server';
+ // const client = new Client(new SocketIoBridge(socket), new IoMem(), new BsMem(), route, { logger: new ConsoleLogger() });
+
+ // client.db and client.connector are ready to use
+ // client.db.get/insert now cascade local ➜ server automatically
+ ```
+
  ## Basic usage
 
  ### Server
@@ -46,11 +113,12 @@ await server.init();
  // await server.addSocket(new SocketIoBridge(serverSocket));
  ```
 
- ### Client
+ ### Client API
 
  ```ts
  import { BsMem } from '@rljson/bs';
  import { IoMem } from '@rljson/io';
+ import { Route } from '@rljson/rljson';
 
  import { Client } from '@rljson/server';
 
@@ -60,14 +128,20 @@ await localIo.isReady();
 
  const localBs = new BsMem();
 
- const client = new Client(new SocketIoBridge(clientSocket), localIo, localBs);
+ // With route: Db and Connector are created automatically
+ const route = Route.fromFlat('my.app.route');
+ const client = new Client(new SocketIoBridge(clientSocket), localIo, localBs, route);
  await client.init();
 
  // Unified interfaces
- const io = client.io;
- const bs = client.bs;
+ const io = client.io; // IoMulti (local + server)
+ const bs = client.bs; // BsMulti (local + server)
+ const db = client.db; // Db (wraps IoMulti)
+ const connector = client.connector; // Connector (wired to route + socket)
  ```
 
+ The `route` parameter is optional. Without it, the client only sets up `io` and `bs`, and `db`/`connector` will be `undefined`.
+
  ## How the layering works
 
  Both client and server use a **multi-layer** approach:
@@ -81,19 +155,28 @@ This is implemented with `IoMulti` and `BsMulti` internally, but the public API
 
  ### Client
 
- - `init()` – builds Io/Bs multis and starts peer bridges
+ - `init()` – builds Io/Bs multis, starts peer bridges, and (if route was provided) creates Db and Connector
  - `ready()` – resolves once Io is ready
  - `tearDown()` – closes and clears local state
  - `io` – Io interface (multi-layer)
  - `bs` – Bs interface (multi-layer)
+ - `db` – Db instance wrapping IoMulti (available when route was provided)
+ - `connector` – Connector wired to the route and socket (available when route was provided)
+ - `route` – the Route passed to the constructor
+ - `logger` – the `ServerLogger` instance (defaults to `noopLogger`)
 
- ### Server
+ ### Server API
 
  - `init()` – initializes server multis
  - `ready()` – resolves when Io is ready
- - `addSocket(socket)` – registers a client socket and refreshes multis
+ - `addSocket(socket)` – registers a client socket, sets up disconnect handling, and refreshes multis
+ - `removeSocket(clientId)` – removes a client, cleans up peers/listeners, and rebuilds multis
+ - `tearDown()` – gracefully shuts down: stops timers, clears all clients, closes storage
  - `io` – Io interface used by server
  - `bs` – Bs interface used by server
+ - `clients` – `Map` of connected clients (keyed by internal clientId)
+ - `isTornDown` – whether the server has been shut down
+ - `logger` – the `ServerLogger` instance (defaults to `noopLogger`)
 
  ## Example
 
@@ -127,3 +210,513 @@ The same pattern is used for Bs (blob storage).
 
  - `Client.io` and `Client.bs` are already merged interfaces. No need to access multis directly.
  - `Server.addSocket()` batches refreshes to reduce rebuild overhead when multiple sockets connect.
+ - Multicast uses `__origin` markers plus a two-generation ref set to prevent echo loops (see the sketch after this list). Stale refs are automatically evicted (configurable via `refEvictionIntervalMs`).
+ - Disconnected sockets are auto-detected and cleaned up — dead peers are removed and multis rebuilt.
+ - Peer initialization is guarded by a configurable timeout (`peerInitTimeoutMs`, default 30 s) on both server and client. On the server it prevents `addSocket()` from hanging on unresponsive clients; on the client it prevents `init()` from hanging when the server is unreachable.
+ - Logging is opt-in via `{ logger }` options. Use `ConsoleLogger` for development, `BufferedLogger` for testing, `FilteredLogger` for production. Default is `NoopLogger` (zero overhead).
+
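+ The two-generation dedup mentioned above can be pictured with a small standalone sketch (an illustration of the technique, not the package's internal code):
+
+ ```ts
+ // A ref counts as "seen" if it appears in the current or the previous
+ // generation. On every eviction interval the current set becomes the previous
+ // one, so each ref is remembered for at least one and at most two intervals.
+ class TwoGenerationRefSet {
+   private current = new Set<string>();
+   private previous = new Set<string>();
+   private timer?: ReturnType<typeof setInterval>;
+
+   constructor(evictionIntervalMs: number) {
+     if (evictionIntervalMs > 0) {
+       this.timer = setInterval(() => this.rotate(), evictionIntervalMs);
+     }
+   }
+
+   /** True if the ref was already seen within the last two generations. */
+   has(ref: string): boolean {
+     return this.current.has(ref) || this.previous.has(ref);
+   }
+
+   /** Remember a ref so later echoes of it can be suppressed. */
+   add(ref: string): void {
+     this.current.add(ref);
+   }
+
+   /** Forget refs older than two generations. */
+   private rotate(): void {
+     this.previous = this.current;
+     this.current = new Set();
+   }
+
+   stop(): void {
+     if (this.timer) clearInterval(this.timer);
+   }
+ }
+ ```
+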
+ ## Logging
+
+ Both `Server` and `Client` support structured logging via an injectable `ServerLogger` interface. Logging is opt-in — by default a zero-overhead `NoopLogger` is used.
+
+ ### Logger implementations
+
+ | Class            | Purpose                                                       |
+ | ---------------- | ------------------------------------------------------------- |
+ | `NoopLogger`     | Default. All methods are empty — zero overhead in production. |
+ | `ConsoleLogger`  | Logs to `console.log`/`warn`/`error`. Good for development.   |
+ | `BufferedLogger` | Stores entries in memory. Ideal for test assertions.          |
+ | `FilteredLogger` | Wraps another logger, filtering by level and/or source.       |
+
+ ### Injecting a logger
+
+ ```ts
+ import { Server, Client, ConsoleLogger, BufferedLogger, FilteredLogger } from '@rljson/server';
+
+ // Console logging (development)
+ const server = new Server(route, io, bs, { logger: new ConsoleLogger() });
+ const client = new Client(socket, io, bs, route, { logger: new ConsoleLogger() });
+
+ // Buffered logging (testing)
+ const logger = new BufferedLogger();
+ const testServer = new Server(route, io, bs, { logger });
+ // After operations:
+ logger.entries; // All log entries
+ logger.byLevel('error'); // Only errors
+ logger.bySource('Server.Multicast'); // Only multicast entries
+ logger.clear(); // Reset
+
+ // Filtered logging (production — errors and warnings only)
+ const filtered = new FilteredLogger(new ConsoleLogger(), {
+   levels: ['error', 'warn'],
+ });
+ const prodServer = new Server(route, io, bs, { logger: filtered });
+
+ // Filtered by source (only multicast traffic)
+ const trafficOnly = new FilteredLogger(new ConsoleLogger(), {
+   levels: ['traffic'],
+   sources: ['Server.Multicast'],
+ });
+ ```
+
+ ### Log levels
+
+ | Level     | Method                                            | What it captures                                                              |
+ | --------- | ------------------------------------------------- | ----------------------------------------------------------------------------- |
+ | `info`    | `logger.info(source, message, data?)`             | Lifecycle events: construction, init, tearDown, peer creation, multi rebuilds |
+ | `warn`    | `logger.warn(source, message, data?)`             | Duplicate ref suppression, loop prevention                                    |
+ | `error`   | `logger.error(source, message, error?, data?)`    | Failures during init, peer creation, multicast, multi rebuilds                |
+ | `traffic` | `logger.traffic(direction, source, event, data?)` | Socket traffic: inbound refs from clients, outbound multicasts to clients     |
+
+ ### Log sources
+
+ Each log entry includes a `source` field identifying the component:
+
+ | Source             | Component                                             |
+ | ------------------ | ----------------------------------------------------- |
+ | `Server`           | Server lifecycle (init, addSocket, rebuild, refresh)  |
+ | `Server.Io`        | Server Io peer creation                               |
+ | `Server.Bs`        | Server Bs peer creation                               |
+ | `Server.Multicast` | Ref broadcasting between clients                      |
+ | `Client`           | Client lifecycle (init, tearDown, Db/Connector setup) |
+ | `Client.Io`        | Client Io multi setup, peer bridge, peer creation     |
+ | `Client.Bs`        | Client Bs multi setup, peer bridge, peer creation     |
+
+ ### Custom logger
+
+ Implement the `ServerLogger` interface to integrate with any logging framework:
+
+ ```ts
+ import type { ServerLogger } from '@rljson/server';
+
+ const myLogger: ServerLogger = {
+   info(source, message, data?) { /* your logging framework */ },
+   warn(source, message, data?) { /* ... */ },
+   error(source, message, error?, data?) { /* ... */ },
+   traffic(direction, source, event, data?) { /* ... */ },
+ };
+ ```
+
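+ For example, a thin adapter could forward the four methods to an existing logger such as pino (a sketch, not part of the package):
+
+ ```ts
+ import pino from 'pino';
+ import type { ServerLogger } from '@rljson/server';
+
+ const log = pino();
+
+ // Map ServerLogger calls onto pino's level methods, carrying the structured
+ // fields (source, event, data) as the merge object.
+ const pinoLogger: ServerLogger = {
+   info: (source, message, data) => log.info({ source, data }, message),
+   warn: (source, message, data) => log.warn({ source, data }, message),
+   error: (source, message, error, data) => log.error({ source, err: error, data }, message),
+   traffic: (direction, source, event, data) =>
+     log.debug({ direction, source, event, data }, 'traffic'),
+ };
+
+ // const server = new Server(route, io, bs, { logger: pinoLogger });
+ ```
+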
+ ## Server options
+
+ `ServerOptions` configures production behavior:
+
+ ```ts
+ const server = new Server(route, io, bs, {
+   logger: new ConsoleLogger(), // Structured logging (default: NoopLogger)
+   refEvictionIntervalMs: 60_000, // Ref dedup sweep interval (default: 60 s, 0 = disable)
+   peerInitTimeoutMs: 30_000, // Peer handshake timeout (default: 30 s, 0 = disable)
+ });
+ ```
+
+ | Option                  | Default    | Description                                                                                                                             |
+ | ----------------------- | ---------- | --------------------------------------------------------------------------------------------------------------------------------------- |
+ | `logger`                | NoopLogger | Structured logger for lifecycle, traffic, and error events.                                                                             |
+ | `refEvictionIntervalMs` | 60 000     | Two-generation sweep interval for multicast ref dedup. Refs older than two intervals are forgotten, preventing unbounded memory growth. |
+ | `peerInitTimeoutMs`     | 30 000     | Maximum time `addSocket()` waits for a peer to initialize. Prevents hanging on unresponsive clients.                                    |
+ | `syncConfig`            | undefined  | Sync protocol configuration (see below). Enables ACK aggregation, gap-fill, and enriched payloads.                                      |
+ | `refLogSize`            | 1 000      | Maximum number of recent payloads retained in the ref log for gap-fill responses.                                                       |
+ | `ackTimeoutMs`          | 10 000     | Timeout for collecting individual client ACKs before emitting the aggregated ACK. Falls back to `syncConfig.ackTimeoutMs`.              |
+
+ ## Client options
+
+ `ClientOptions` configures the client:
+
+ ```ts
+ const client = new Client(socket, io, bs, route, {
+   logger: new ConsoleLogger(), // Structured logging (default: NoopLogger)
+   peerInitTimeoutMs: 30_000, // Peer handshake timeout (default: 30 s, 0 = disable)
+   syncConfig, // Sync protocol config (default: undefined)
+   clientIdentity: 'my-client-id', // Stable client identity (default: auto-generated)
+ });
+ ```
+
+ | Option              | Default    | Description                                                                                                                         |
+ | ------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+ | `logger`            | NoopLogger | Structured logger for lifecycle, traffic, and error events.                                                                         |
+ | `peerInitTimeoutMs` | 30 000     | Maximum time `init()` waits for Io/Bs peers to initialize. Prevents hanging when the server is unreachable. Set to 0 to disable.    |
+ | `syncConfig`        | undefined  | Sync protocol configuration (see below). Passed through to the Connector for enriched payloads.                                     |
+ | `clientIdentity`    | undefined  | Stable client identity passed to the Connector. Auto-generated when `syncConfig.includeClientIdentity` is true and this is omitted. |
+
+ ## Sync protocol
+
+ The sync protocol is **opt-in** and backward-compatible. When `syncConfig` is provided to the server (and/or client), the system activates enriched payload forwarding, ACK aggregation, and gap-fill support.
+
+ ### Enabling sync
+
+ ```ts
+ import { Server, Client, ConsoleLogger } from '@rljson/server';
+ import type { SyncConfig } from '@rljson/server';
+
+ const syncConfig: SyncConfig = {
+   causalOrdering: true, // Attach seq numbers, detect gaps, serve gap-fill
+   requireAck: true, // Collect client ACKs, emit aggregated AckPayload
+   includeClientIdentity: true, // Attach stable ClientId and timestamp to payloads
+   ackTimeoutMs: 5_000, // Per-ref ACK timeout (default: 10 s)
+   maxDedupSetSize: 10_000, // Max refs per dedup generation (default: 10 000)
+   bootstrapHeartbeatMs: 30_000, // Periodic bootstrap heartbeat interval (optional)
+ };
+
+ // Server — enables ref log, ACK aggregation, gap-fill responder
+ const server = new Server(route, io, bs, { syncConfig });
+ await server.init();
+
+ // Client — passes SyncConfig to the Connector for enriched payloads
+ const client = new Client(socket, localIo, localBs, route, { syncConfig });
+ await client.init();
+ ```
+
+ ### What each flag does
+
+ | Flag                    | Server effect                                    | Client (Connector) effect                                    |
+ | ----------------------- | ------------------------------------------------ | ------------------------------------------------------------ |
+ | `causalOrdering`        | Stores payloads in ref log; responds to gap-fill | Attaches `seq` + `p`; detects gaps; requests gap-fill         |
+ | `requireAck`            | Collects per-client ACKs; emits aggregated ACK   | Awaits ACK via `sendWithAck()`; emits client ACK              |
+ | `includeClientIdentity` | Forwards `c` and `t` transparently               | Attaches stable `ClientId` and wall-clock timestamp           |
+ | `ackTimeoutMs`          | Controls server-side ACK collection timeout      | Controls client-side ACK wait timeout                         |
+ | `maxDedupSetSize`       | —                                                | Caps dedup set size per generation (two-generation eviction)  |
+ | `bootstrapHeartbeatMs`  | Sends latest ref to all clients periodically     | — (handled server-side only)                                  |
+
+ ### ACK flow
+
+ ```text
+ Client A ──emit(ref)──► Server ──forward──► Client B, C
+ Server ◄──ackClient── Client B
+ Server ◄──ackClient── Client C
+ Client A ◄──ack──────── Server (ok: true, receivedBy: 2, totalClients: 2)
+ ```
+
+ If not all clients ACK within the timeout, the server emits a partial ACK (`ok: false`).
+
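+ The aggregation step can be sketched independently of the package internals (field names follow the `AckPayload` table below; the actual implementation may differ):
+
+ ```ts
+ interface AckPayload {
+   r: string;
+   ok: boolean;
+   receivedBy?: number;
+   totalClients?: number;
+ }
+
+ // Collect one ACK per receiving client for a given ref, or give up after
+ // ackTimeoutMs and report a partial result back to the original sender.
+ function aggregateAcks(ref: string, expected: number, ackTimeoutMs: number) {
+   let received = 0;
+   let settle!: (ack: AckPayload) => void;
+   const done = new Promise<AckPayload>((resolve) => (settle = resolve));
+
+   const timer = setTimeout(() => {
+     settle({ r: ref, ok: false, receivedBy: received, totalClients: expected });
+   }, ackTimeoutMs);
+
+   // Call once per `${route}:ack:client` event received for this ref.
+   const onClientAck = () => {
+     received += 1;
+     if (received >= expected) {
+       clearTimeout(timer);
+       settle({ r: ref, ok: true, receivedBy: received, totalClients: expected });
+     }
+   };
+
+   return { onClientAck, done };
+ }
+ ```
+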
+ ### Gap-fill flow
+
+ ```text
+ Client B detects seq gap (expected 6, got 8)
+ Client B ──gapfill:req──► Server (afterSeq: 5)
+ Client B ◄──gapfill:res── Server (refs with seq 6, 7)
+ ```
+
+ The server maintains a bounded ref log (ring buffer) of recent payloads. When a client detects a sequence gap, it requests the missing refs. The server filters the ref log and responds with matching entries.
+
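+ A simplified sketch of such a bounded ref log (it ignores that `seq` is scoped per client, and it is not the package's internal code):
+
+ ```ts
+ interface LoggedPayload {
+   o: string;    // origin
+   r: string;    // ref
+   seq?: number; // present when causalOrdering is enabled
+ }
+
+ class RefLog {
+   private entries: LoggedPayload[] = [];
+
+   // maxSize mirrors the refLogSize option (default 1 000).
+   constructor(private readonly maxSize = 1_000) {}
+
+   push(payload: LoggedPayload): void {
+     this.entries.push(payload);
+     if (this.entries.length > this.maxSize) this.entries.shift(); // drop oldest
+   }
+
+   /** Payloads newer than the client's last processed seq, oldest first. */
+   after(afterSeq: number): LoggedPayload[] {
+     return this.entries.filter((e) => e.seq !== undefined && e.seq > afterSeq);
+   }
+ }
+
+ // A gap-fill request would then be answered with something like:
+ // socket.emit(`${route}:gapfill:res`, { route, refs: refLog.after(req.afterSeq) });
+ ```
+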
+ ### Bootstrap (late joiner support)
+
+ When a new client connects, the server immediately sends the **latest known ref** via the `${route}:bootstrap` event. This ensures late-joining clients catch up without waiting for the next write.
+
+ ```text
+ Client A ──emit(ref)──► Server (Server tracks latestRef)
+ ... time passes ...
+ Client B ──addSocket──► Server
+ Client B ◄──bootstrap── Server ({ o: '__server__', r: latestRef })
+ Client B ──db.get(ref)──► Server ► A (pull data on demand)
+ ```
+
+ For additional resilience, configure `bootstrapHeartbeatMs` in `SyncConfig` to periodically broadcast the latest ref to all clients. Each client's dedup pipeline automatically filters refs it has already seen.
+
+ ```ts
+ const syncConfig: SyncConfig = {
+   bootstrapHeartbeatMs: 30_000, // Send latest ref to all clients every 30s
+ };
+ ```
+
+ **Key details:**
+
+ - Bootstrap payload uses origin `'__server__'` (not a real client)
+ - The Connector's `_processIncoming()` handles dedup automatically
+ - If no ref has been seen yet (empty server), no bootstrap is sent
+ - Heartbeat timer calls `.unref()` so it doesn't keep the process alive
+
+ ### Sync event names
+
+ All sync events are route-specific, generated by `syncEvents(route)`:
+
+ | Event                  | Direction       | Purpose                                    |
+ | ---------------------- | --------------- | ------------------------------------------ |
+ | `${route}`             | Bidirectional   | Ref broadcast (existing)                   |
+ | `${route}:ack`         | Server → Client | Aggregated delivery acknowledgment         |
+ | `${route}:ack:client`  | Client → Server | Individual client receipt confirmation     |
+ | `${route}:gapfill:req` | Client → Server | Request missing refs after detected gap    |
+ | `${route}:gapfill:res` | Server → Client | Supply missing refs from ref log           |
+ | `${route}:bootstrap`   | Server → Client | Latest ref on connect / periodic heartbeat |
+
+ ### Wire format reference
+
+ All payloads are JSON objects transmitted via socket events. The two required fields (`o`, `r`) provide backward-compatible self-echo filtering and ref identification. All other fields activate only when the corresponding `SyncConfig` flags are set.
+
+ #### ConnectorPayload
+
+ The main message transmitted between Connector and Server. Sent on event `${route}`.
+
+ | Field   | Type                    | Required | Activated by            | Purpose                                              |
+ | ------- | ----------------------- | -------- | ----------------------- | ---------------------------------------------------- |
+ | `r`     | `string`                | ✅       | always                  | The ref (InsertHistory timeId) being announced       |
+ | `o`     | `string`                | ✅       | always                  | Ephemeral origin of the sender (self-echo filtering) |
+ | `c`     | `ClientId`              | ❌       | `includeClientIdentity` | Stable client identity (survives reconnections)      |
+ | `t`     | `number`                | ❌       | `includeClientIdentity` | Client-side wall-clock timestamp (ms since epoch)    |
+ | `seq`   | `number`                | ❌       | `causalOrdering`        | Monotonic counter per (client, route) pair           |
+ | `p`     | `InsertHistoryTimeId[]` | ❌       | `causalOrdering`        | Causal predecessor timeIds                           |
+ | `cksum` | `string`                | ❌       | —                       | Content checksum for ACK verification                |
+
+ **Minimal** (backward-compatible, no SyncConfig):
+
+ ```json
+ { "o": "1700000000000:AbCd", "r": "1700000000001:EfGh" }
+ ```
+
+ **Fully populated** (all SyncConfig flags enabled):
+
+ ```json
+ {
+   "o": "1700000000000:AbCd",
+   "r": "1700000000001:EfGh",
+   "c": "client_V1StGXR8_Z5j",
+   "t": 1700000000001,
+   "seq": 42,
+   "p": ["1700000000000:XyZw"]
+ }
+ ```
+
+ #### AckPayload
+
+ Server → Client acknowledgment. Sent on event `${route}:ack` after the server has collected individual client ACKs (or after a timeout).
+
+ | Field          | Type      | Required | Purpose                                                       |
+ | -------------- | --------- | -------- | ------------------------------------------------------------- |
+ | `r`            | `string`  | ✅       | The ref being acknowledged                                    |
+ | `ok`           | `boolean` | ✅       | `true` if all clients confirmed; `false` on timeout / partial |
+ | `receivedBy`   | `number`  | ❌       | Count of clients that confirmed receipt                       |
+ | `totalClients` | `number`  | ❌       | Total receiver clients at broadcast time                      |
+
+ **Full ACK example:**
+
+ ```json
+ { "r": "1700000000001:EfGh", "ok": true, "receivedBy": 3, "totalClients": 3 }
+ ```
+
+ **Partial / timed-out ACK:**
+
+ ```json
+ { "r": "1700000000001:EfGh", "ok": false, "receivedBy": 1, "totalClients": 3 }
+ ```
+
+ #### GapFillRequest
+
+ Client → Server request for missing refs. Sent on event `${route}:gapfill:req` when a Connector detects a sequence gap.
+
+ | Field         | Type                  | Required | Purpose                                                |
+ | ------------- | --------------------- | -------- | ------------------------------------------------------ |
+ | `route`       | `string`              | ✅       | The route for which refs are missing                   |
+ | `afterSeq`    | `number`              | ✅       | Last sequence number the client successfully processed |
+ | `afterTimeId` | `InsertHistoryTimeId` | ❌       | Alternative anchor if sequence numbers are unavailable |
+
+ ```json
+ { "route": "/sharedTree", "afterSeq": 5, "afterTimeId": "1700000000000:AbCd" }
+ ```
+
+ #### GapFillResponse
+
+ Server → Client response containing missing refs. Sent on event `${route}:gapfill:res`, ordered chronologically (oldest first).
+
+ | Field   | Type                 | Required | Purpose                                         |
+ | ------- | -------------------- | -------- | ----------------------------------------------- |
+ | `route` | `string`             | ✅       | The route this response corresponds to          |
+ | `refs`  | `ConnectorPayload[]` | ✅       | Ordered list of missing payloads (oldest first) |
+
+ ```json
+ {
+   "route": "/sharedTree",
+   "refs": [
+     { "o": "1700000000000:AbCd", "r": "1700000000006:MnOp", "seq": 6 },
+     { "o": "1700000000000:AbCd", "r": "1700000000007:QrSt", "seq": 7 }
+   ]
+ }
+ ```
+
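+ Collected as TypeScript types, the shapes above look roughly as follows (names and optionality mirror the tables in this reference; the package exports its own type definitions, which may differ):
+
+ ```ts
+ type ClientId = string;            // e.g. 'client_V1StGXR8_Z5j'
+ type InsertHistoryTimeId = string; // e.g. '1700000000001:EfGh'
+
+ interface ConnectorPayload {
+   r: string;                 // announced ref
+   o: string;                 // ephemeral origin (self-echo filtering)
+   c?: ClientId;              // with includeClientIdentity
+   t?: number;                // with includeClientIdentity
+   seq?: number;              // with causalOrdering
+   p?: InsertHistoryTimeId[]; // with causalOrdering
+   cksum?: string;            // optional content checksum
+ }
+
+ interface AckPayload {
+   r: string;
+   ok: boolean;
+   receivedBy?: number;
+   totalClients?: number;
+ }
+
+ interface GapFillRequest {
+   route: string;
+   afterSeq: number;
+   afterTimeId?: InsertHistoryTimeId;
+ }
+
+ interface GapFillResponse {
+   route: string;
+   refs: ConnectorPayload[];
+ }
+ ```
+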
+ #### SyncConfig flag → field activation summary
+
+ | SyncConfig flag         | Payload fields activated       | Events activated               |
+ | ----------------------- | ------------------------------ | ------------------------------ |
+ | _(none / default)_      | `o`, `r`                       | `${route}` only                |
+ | `causalOrdering`        | + `seq`, `p`                   | + `gapfill:req`, `gapfill:res` |
+ | `requireAck`            | _(no extra fields)_            | + `ack`, `ack:client`          |
+ | `includeClientIdentity` | + `c`, `t`                     | _(no extra events)_            |
+ | All flags combined      | `o`, `r`, `c`, `t`, `seq`, `p` | All 5 events                   |
+
+ #### ClientId format
+
+ A `ClientId` is a 12-character [nanoid](https://github.com/ai/nanoid) prefixed with `"client_"` for easy identification in logs:
+
+ ```
+ client_V1StGXR8_Z5j
+ ```
+
+ Unlike a Connector's ephemeral `origin` (which changes on every instantiation), a `ClientId` should be generated once and stored (e.g. in localStorage) so it persists across reconnections.
+
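+ One way to do that in a browser (a sketch, not part of the package API; the storage key is arbitrary):
+
+ ```ts
+ import { nanoid } from 'nanoid';
+
+ // Reuse a previously stored id if present, otherwise generate a fresh
+ // 12-character nanoid with the 'client_' prefix described above.
+ function getOrCreateClientId(storageKey = 'rljson-client-id'): string {
+   const existing = localStorage.getItem(storageKey);
+   if (existing) return existing;
+   const id = `client_${nanoid(12)}`;
+   localStorage.setItem(storageKey, id);
+   return id;
+ }
+
+ // Then pass it via ClientOptions:
+ // const client = new Client(socket, io, bs, route, { clientIdentity: getOrCreateClientId(), syncConfig });
+ ```
+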
+ ## Lifecycle management
+
+ ### Graceful shutdown
+
+ ```ts
+ // Server
+ await server.tearDown();
+ // Stops eviction timer, removes all listeners, clears clients, closes IoMulti.
+ console.log(server.isTornDown); // true
+
+ // Client
+ await client.tearDown();
+ // Calls connector.tearDown() (removes socket listeners),
+ // closes IoMulti, clears Bs references, resets Db/Connector.
+ ```
+
+ ### Removing a client
+
+ ```ts
+ // Manual removal by clientId
+ const clientIds = Array.from(server.clients.keys());
+ await server.removeSocket(clientIds[0]);
+
+ // Automatic: clients are removed when their socket emits 'disconnect'
+ ```
+
+ ## Architecture Overview
+
+ ### Pull-Based Reference Architecture
+
+ @rljson/server implements a **pull-based architecture** where data is retrieved on-demand using content-addressed references (hashes), not automatically pushed between clients. This fundamentally differs from traditional sync systems:
+
+ ### Key principle: references flow, data is pulled
+
+ ```text
+ Reference Flow: Client A → Server → Client B (broadcast)
+ Pull Request:   Client A ← Server ← Client B (on query)
+ Data Flow:      Client A → Server → Client B (returned to the caller)
+ ```
+
+ ### How It Works
+
+ 1. **Client A stores data locally** (writes to priority 1 layer)
+
+    ```ts
+    const results = await db.insert(route, [data]);
+    const ref = results[0]._hash;
+    ```
+
+ 2. **Client A broadcasts reference** (not the data)
+
+    ```ts
+    socket.emit(route.flat, ref);
+    ```
+
+ 3. **Client B receives reference** via multicast
+
+    ```ts
+    socket.on(route.flat, (ref) => { /* ... */ });
+    ```
+
+ 4. **Client B queries by reference**
+
+    ```ts
+    const result = await db.get(route, { _hash: ref });
+    ```
+
+ 5. **Server automatically pulls from Client A**
+    - Client B's query goes to its IoMulti (priority 1: local, not found)
+    - Falls back to IoPeer → Server (priority 2)
+    - Server's IoMulti cascades: priority 1 (local cache), then priority 2 (IoPeer[Client A])
+    - Data flows back: Client A → Server → Client B
+
+ **This cascade happens automatically** - no explicit pull operation needed.
+
+ ### Three Storage Types
+
+ #### 1. Io Data (Tables)
+
+ - **What**: Relational tables (Cake, Cell, custom content types)
+ - **Storage**: IoMulti (local Io + IoPeer instances)
+ - **Query**: `db.get(route, { _hash: ref })`
+ - **Use Cases**: Structured data, records, metadata
+
+ #### 2. Bs Data (Blobs)
+
+ - **What**: Binary blobs (files, images, videos)
+ - **Storage**: BsMulti (local Bs + BsPeer instances)
+ - **Query**: `bs.get(blobHash)` after getting ref from Io table
+ - **Use Cases**: Large files, media content
+ - **Pattern**: Store blob → get hash → store hash in Io table → others query by hash (sketched below)
+
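+ A rough sketch of that pattern (the blob-write call `bs.set()` and its returned hash are assumptions; only `bs.get(blobHash)` is named above):
+
+ ```ts
+ const bytes = new Uint8Array([/* file content */]);
+
+ // 1. Store the blob locally; assume the write returns its content hash.
+ const blobHash = await clientA.bs.set(bytes);
+
+ // 2. Record the hash in an Io table so peers can discover it by reference.
+ const rows = await dbA.insert(route, [{ fileName: 'photo.png', blobHash }]);
+ clientA.socket.emit(route.flat, rows[0]._hash);
+
+ // 3. Another client resolves the row by ref, reads blobHash from it,
+ //    and pulls the binary data through its own merged Bs interface:
+ // const blob = await clientB.bs.get(blobHash);
+ ```
+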
+ #### 3. Tree Data (Hierarchical)
+
+ - **What**: JSON objects converted to tree structures
+ - **Storage**: In Io layer with special 'trees' content type
+ - **Conversion**: `treeFromObject(jsObject)` creates Tree[] array
+ - **Root**: Last element in array (`trees[trees.length - 1]._hash`)
+ - **Query**: `db.get(route, { _hash: rootHash })` returns ALL related nodes (see the sketch below)
+ - **Use Cases**: Configuration objects, nested data structures
+
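+ A sketch of that flow (which package exports `treeFromObject`, and the exact insert route for tree nodes, are assumptions):
+
+ ```ts
+ import { treeFromObject } from '@rljson/rljson'; // assumed export location
+
+ const config = { theme: { color: 'dark', fontSize: 14 }, locale: 'en' };
+
+ // Convert the object into tree nodes; the root node is the last element.
+ const trees = treeFromObject(config);
+ const rootHash = trees[trees.length - 1]._hash;
+
+ // Store all nodes locally, then share only the root hash.
+ await db.insert(route, trees);
+ socket.emit(route.flat, rootHash);
+
+ // A receiver can later rebuild the whole object from the root:
+ // const result = await db.get(route, { _hash: rootHash });
+ ```
+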
+ ### Client-to-Client Communication
+
+ #### Pattern: Insert on Client A, Get on Client B
+
+ ```ts
+ // Setup: All parties create table definitions
+ await server.createTables({ withInsertHistory: [tableCfg] });
+ await clientA.createTables({ withInsertHistory: [tableCfg] });
+ await clientB.createTables({ withInsertHistory: [tableCfg] });
+
+ // Client A: Insert data locally
+ const result = await dbA.insert(route, [{ name: 'Tesla', model: 'Model S' }]);
+ const ref = result[0]._hash;
+
+ // Client A: Broadcast reference
+ clientA.socket.emit(route.flat, ref);
+
+ // Client B: Listen and pull data
+ clientB.socket.on(route.flat, async (ref) => {
+   // This query automatically cascades through server to Client A
+   const data = await dbB.get(route, { _hash: ref });
+   console.log(data.rljson.cars._data[0]); // { name: 'Tesla', ... }
+ });
+ ```
+
+ **Server never stores the car data** - it only proxies the query from Client B to Client A.
+
+ ### Why Pull-Based?
+
+ | Aspect              | Pull-Based (@rljson/server)   | Push-Based (Traditional)     |
+ | ------------------- | ----------------------------- | ---------------------------- |
+ | **Network Traffic** | Minimal (only refs)           | High (all data replicated)   |
+ | **Data Freshness**  | Always latest (pull on query) | Can be stale (cached copies) |
+ | **Storage**         | Single source of truth        | Multiple copies to sync      |
+ | **Bandwidth**       | Low (on-demand only)          | High (push all changes)      |
+ | **Offline**         | Works fully offline           | Needs sync when reconnected  |
+ | **Conflicts**       | None (read from source)       | Requires resolution logic    |
+
+ ### When to Use @rljson/server
+
+ ✅ **Good fit:**
+
+ - Local-first applications with occasional sharing
+ - Collaborative tools where users own their data
+ - Media sharing apps (store locally, share by reference)
+ - Configuration management (pull config by root hash)
+ - Document collaboration (pull latest version by ref)
+
+ ❌ **Not ideal for:**
+
+ - Real-time collaborative editing (character-by-character)
+ - Systems requiring strong consistency guarantees
+ - Centralized storage where server must have all data
+ - Automatic background sync without references
+
+ ### Key Design Principles
+
+ 1. **Local-First**: All writes go to local storage only
+ 2. **Content-Addressed**: Everything referenced by hash
+ 3. **Reference-Based Discovery**: Need a reference to query data
+ 4. **Automatic Cascade**: IoMulti/BsMulti handle priority traversal
+ 5. **Server as Proxy**: Server doesn't store client data, only routes queries
+ 6. **Pull on Demand**: Data retrieved only when explicitly queried
+
+ ### Next Steps
+
+ - See [README.architecture.md](README.architecture.md) for detailed architecture documentation
+ - See [test/server.spec.ts](test/server.spec.ts) for comprehensive integration examples
+ - See [src/example.ts](src/example.ts) for a basic usage example