svelte-realtime 0.4.13 → 0.4.14

Files changed (2)
  1. package/README.md +65 -9
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1595,6 +1595,18 @@ export const message = createMessage({ platform: (p) => bus.wrap(p) });
 
  No changes needed in your live modules. `ctx.publish` delegates to whatever platform was passed in, so Redis wrapping is transparent.
 
+ If you already run Postgres and don't need Redis, you can use the [LISTEN/NOTIFY bridge](#postgres-notify) instead for cross-instance pub/sub.
+
+ ### What the extensions handle
+
+ When you add the Redis extensions from [svelte-adapter-uws-extensions](https://github.com/lanteanio/svelte-adapter-uws-extensions), you get (a wiring sketch follows the list):
+
+ - **Cross-instance pub/sub** with echo suppression (messages from the same instance are dropped on receive) and microtask-batched Redis pipelines (multiple publishes in one event loop tick become a single Redis roundtrip)
+ - **Distributed presence** with heartbeat-based zombie cleanup -- dead sockets are detected by probing `getBufferedAmount()`, and stale Redis entries are cleaned server-side by a Lua script after a configurable TTL (default 90s)
+ - **Replay buffers** with atomic sequence numbering via Lua `INCR` + sorted sets -- per-topic ordering is strict, and gap detection triggers a truncation event before replaying what's available
+ - **Cross-instance rate limiting** via atomic Lua scripts that use `redis.call('TIME')` to avoid clock skew between app servers
+ - **Circuit breakers** with a three-state machine (healthy / broken / probing) -- when Redis goes down, the breaker trips after a configurable failure threshold, local delivery continues, and a single probe request tests recovery before resuming full traffic
+
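+ A minimal wiring sketch, reusing the `bus.wrap(platform)` pattern from the example above. The constructor name and option names below are illustrative assumptions, not the extension package's documented API -- check its README for the real exports:
+
+ ```js
+ // Sketch only: `createRedisBus` and its options are assumptions for
+ // illustration, not the extensions package's documented API.
+ import { createRedisBus } from 'svelte-adapter-uws-extensions';
+
+ const bus = createRedisBus({
+   url: process.env.REDIS_URL,                       // assumed option name
+   breaker: { failures: 5, resetAfterMs: 30_000 },   // defaults described below
+   presence: { heartbeatMs: 30_000, ttlMs: 90_000 }, // zombie cleanup (default 90s TTL)
+   replay: { size: 1000 },                           // per-topic replay buffer
+ });
+
+ // Same pattern as above: the bus wraps whatever platform the module
+ // receives, so `ctx.publish` fans out through Redis transparently.
+ export const message = createMessage({ platform: (p) => bus.wrap(p) });
+ ```
+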
  ### Combined: Redis + rate limiting
 
  ```js
@@ -1653,9 +1665,37 @@ export const orders = live.stream('orders', async (ctx) => {
 
  ---
 
+ ## Failure modes
+
+ ### Redis goes down
+
+ All Redis extensions accept an optional circuit breaker. The breaker trips after a configurable number of consecutive failures (default 5). Once broken, cross-instance pub/sub, presence writes, replay buffering, and distributed rate limiting are skipped entirely -- no retries, no queuing, no thundering herd. Local delivery continues normally: `ctx.publish()` still reaches subscribers on the same instance and across workers. After a configurable timeout (default 30s), the breaker enters a probing state where a single request is allowed through. If it succeeds, the breaker resets to healthy and all extensions resume.
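+
+ The breaker cycle itself is small. Here is a minimal JS illustration of the healthy / broken / probing machine described above -- not the library's internal code:
+
+ ```js
+ // Minimal illustration of the described breaker -- not the library's source.
+ function createBreaker({ threshold = 5, resetMs = 30_000 } = {}) {
+   let state = 'healthy'; // 'healthy' | 'broken' | 'probing'
+   let failures = 0;
+   let brokeAt = 0;
+
+   return {
+     // Should this Redis call be attempted at all?
+     allow() {
+       if (state === 'healthy') return true;
+       if (state === 'broken' && Date.now() - brokeAt >= resetMs) {
+         state = 'probing'; // exactly one request gets through as the probe
+         return true;
+       }
+       return false; // still cooling down, or a probe is already in flight
+     },
+     success() {
+       state = 'healthy'; // probe succeeded: all extensions resume
+       failures = 0;
+     },
+     failure() {
+       if (state === 'probing' || ++failures >= threshold) {
+         state = 'broken'; // trip (or re-trip after a failed probe)
+         brokeAt = Date.now();
+         failures = 0;
+       }
+     },
+   };
+ }
+ ```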
+
+ ### Instance crashes mid-session
+
+ The distributed presence extension runs a heartbeat cycle (default 30s) that probes each tracked WebSocket with `getBufferedAmount()`. Under mass disconnect, the runtime may drop close events entirely -- the heartbeat catches these and triggers a synchronous leave. On the Redis side, stale presence entries are cleaned by a server-side Lua script that scans the hash and removes fields older than the configurable TTL (default 90s). The `LEAVE_SCRIPT` atomically checks whether the same user is still connected on another instance before broadcasting a leave event, so users don't appear to leave and rejoin when a single instance restarts.
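+
+ The probe works because uWS throws on any method call against an already-closed socket, so a cheap read doubles as a liveness check. A sketch of the idea -- `tracked` and `leave` are illustrative names, not the extension's API:
+
+ ```js
+ // Sketch of the heartbeat cycle -- not the extension's actual code.
+ setInterval(() => {
+   for (const [ws, user] of tracked) {
+     try {
+       ws.getBufferedAmount(); // throws if the socket is already gone
+     } catch {
+       tracked.delete(ws);
+       leave(user); // synchronous leave, as described above
+     }
+   }
+ }, 30_000); // default heartbeat cycle
+ ```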
+
+ ### Client reconnects after a long disconnect
+
+ Reconnection uses up to three tiers depending on what's available and how large the gap is. The replay buffer (configurable, default 1000 messages per topic) fills small gaps with strict per-topic ordering via atomic Lua sequence numbering. If the gap is too large for replay, delta sync kicks in -- the client sends its last known version, and the server returns only the changes since that version (or `{unchanged: true}` if nothing changed). If neither replay nor delta sync can cover the gap, the client falls back to a full refetch of the init function. All three paths are automatic and require no client-side code changes.
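+
+ In rough pseudocode, the server-side tier selection looks like this (the helper names are illustrative, not the library's internals):
+
+ ```js
+ // Illustrative tier selection -- helper names are hypothetical.
+ async function resume(topic, client) {
+   const missed = (await currentSeq(topic)) - client.lastSeq;
+
+   // Tier 1: small gap -- replay from the buffer (strict per-topic ordering).
+   if (missed <= 1000) {
+     return { tier: 'replay', messages: await replayFrom(topic, client.lastSeq) };
+   }
+
+   // Tier 2: delta sync against the client's last known version.
+   const delta = await diffSince(topic, client.lastVersion);
+   if (delta) {
+     return delta.changed
+       ? { tier: 'delta', changes: delta.changes }
+       : { unchanged: true };
+   }
+
+   // Tier 3: full refetch -- re-run the init function.
+   return { tier: 'refetch', data: await runInit(topic) };
+ }
+ ```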
+
+ ### Send buffer overflow
+
+ Each WebSocket connection has a send buffer limit (default 1MB, configurable via `maxBackpressure` in the adapter). When the buffer is full, messages are silently dropped. In dev mode, `handleRpc` logs a warning when a response fails to deliver. For streams that produce high-frequency output, wrap the source with `live.breaker()` or use `live.throttle()` / `live.debounce()` to control the publish rate.
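+
+ For example, throttling a high-frequency cursor stream. This assumes `live.throttle` takes a millisecond window as its first argument -- the exact signature may differ, and `source` is a stand-in event emitter:
+
+ ```js
+ // Hedged sketch: `source` is a stand-in, and the throttle signature
+ // is assumed, not confirmed.
+ export const cursors = live.stream('cursors', async (ctx) => {
+   source.on('move', live.throttle(50, (pos) => ctx.publish(pos)));
+ });
+ ```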
+
+ ### Batch and queue limits
+
+ A single `batch()` call is capped at 50 RPC calls -- the client rejects before sending, and the server enforces the same cap as a safety net. The adapter's client-side send queue holds up to 1000 messages; when full, the oldest item is dropped. The adapter rate-limits WebSocket upgrades per IP with a sliding window (default 10 per 10s) to prevent connection floods.
+
+ ---
+
  ## Clustering
 
- svelte-realtime works with the adapter's `CLUSTER_WORKERS` mode.
+ svelte-realtime works with the adapter's `CLUSTER_WORKERS` mode. The adapter spawns N worker threads (default: number of CPUs). On Linux, workers share the port via `SO_REUSEPORT` and the kernel distributes incoming connections. On macOS and Windows, a primary thread accepts connections and routes them to workers via uWS child app descriptors.
+
+ Cross-worker `ctx.publish()` calls are batched via microtask coalescing -- all publishes within one event loop tick are bundled into a single `postMessage` to the primary thread, which fans them out to other workers. This keeps IPC overhead constant regardless of publish volume.
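+
+ The coalescing itself is the standard queue-then-flush-in-a-microtask pattern. A generic sketch of the technique -- not the adapter's source:
+
+ ```js
+ // Generic microtask coalescing -- not the adapter's actual implementation.
+ import { parentPort } from 'node:worker_threads';
+
+ const pending = [];
+ let scheduled = false;
+
+ function publishCrossWorker(msg) {
+   pending.push(msg);
+   if (scheduled) return;
+   scheduled = true;
+   queueMicrotask(() => {
+     scheduled = false;
+     // One postMessage per event loop tick, however many publishes queued up.
+     parentPort.postMessage(pending.splice(0));
+   });
+ }
+ ```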
+
+ Workers are health-checked every 10 seconds. If a worker fails to respond within 30 seconds, it is terminated and restarted with exponential backoff (starting at 100ms, max 5s, up to 50 restart attempts before the process exits). On graceful shutdown (`SIGTERM` / `SIGINT`), the primary stops accepting connections, sends a shutdown signal to all workers, and waits for them to drain in-flight requests and close WebSocket connections with code 1001 (Going Away) so clients reconnect to another instance.
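+
+ One plausible reading of "exponential backoff starting at 100ms, max 5s" is plain doubling -- the exact growth factor is the adapter's choice:
+
+ ```js
+ // Assumed doubling: 100ms, 200ms, 400ms, ... capped at 5s, up to 50 attempts.
+ const restartDelay = (attempt) => Math.min(100 * 2 ** attempt, 5_000);
+ ```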
 
  | Method | Cross-worker? | Safe in `live()`? |
  |---|---|---|
@@ -1669,24 +1709,40 @@ svelte-realtime works with the adapter's `CLUSTER_WORKERS` mode.
 
  ---
 
- ## Limits and gotchas
+ ## Production limits
 
  ### maxPayloadLength (default: 16KB)
 
- If an RPC request exceeds this, the adapter closes the connection silently (uWS behavior). If your app sends large payloads, increase `maxPayloadLength` in the adapter's websocket config.
+ Maximum size of a single WebSocket message. If an RPC request exceeds this, the adapter closes the connection (uWS behavior). Increase `maxPayloadLength` in the adapter's websocket config if your app sends large payloads.
 
  ### maxBackpressure (default: 1MB)
 
- If a connection's send buffer exceeds this, messages are silently dropped. `handleRpc` checks the return value of `platform.send()` and warns in dev mode if a response was not delivered.
+ Per-connection send buffer. When exceeded, messages are silently dropped. `handleRpc` checks the return value of `platform.send()` and warns in dev mode when a response is not delivered.
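+
+ Both knobs are standard uWS `WebSocketBehavior` options. Where they nest in the adapter config is an assumption here -- check svelte-adapter-uws's own README for the real shape:
+
+ ```js
+ // svelte.config.js -- sketch; the `websocket` nesting is assumed.
+ import adapter from 'svelte-adapter-uws';
+
+ export default {
+   kit: {
+     adapter: adapter({
+       websocket: {
+         maxPayloadLength: 64 * 1024,      // raise from the 16KB default
+         maxBackpressure: 2 * 1024 * 1024, // raise from the 1MB default
+       },
+     }),
+   },
+ };
+ ```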
 
- ### sendQueue cap (client-side, max 1000)
+ ### Client send queue (max 1000)
 
- The adapter's `sendQueued()` drops the oldest item if the queue exceeds 1000. Unlikely in practice, but worth knowing for offline-heavy apps.
+ The adapter's `sendQueued()` drops the oldest item when the queue exceeds 1000 messages. This queue buffers messages while the WebSocket is reconnecting.
 
- ### Batch size (max 50 calls)
+ ### Batch size (max 50)
 
  A single `batch()` call is limited to 50 RPC calls. The client rejects before sending if the limit is exceeded, and the server enforces the same limit as a safety net. Split into multiple `batch()` calls if you need more.
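+
+ Splitting is mechanical: chunk the calls and await each batch in turn (sequential here so results keep their order). This assumes `batch()` accepts an array and resolves to an array of results -- adapt to the batch API shown earlier:
+
+ ```js
+ // Generic chunking helper -- `batch` is the client API described above;
+ // its exact call shape is assumed.
+ async function batchAll(calls, size = 50) {
+   const results = [];
+   for (let i = 0; i < calls.length; i += size) {
+     results.push(...(await batch(calls.slice(i, i + size))));
+   }
+   return results;
+ }
+ ```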
 
+ ### Presence refs (max 10,000)
+
+ The server tracks presence join/leave refcounts in memory. When the map reaches 10,000 entries, suspended entries (those with a pending leave timer) are evicted first. If the map is still full after eviction, the join is dropped silently.
+
+ ### Rate-limit identities (max 5,000)
+
+ Per-function rate limiting (`live.rateLimit()`) tracks sliding-window buckets in memory. When the bucket map reaches 5,000 entries, stale buckets are swept first. If still full, new identities are rejected with a `RATE_LIMITED` error. Existing identities are unaffected.
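+
+ For intuition, the bucket semantics match a plain in-memory sliding window like the one below. The distributed version runs the same logic atomically in a Lua script and takes its timestamps from `redis.call('TIME')` so app-server clocks never disagree:
+
+ ```js
+ // Generic in-memory sliding window -- illustrates the semantics only;
+ // the real extension runs atomically in Redis via Lua.
+ const buckets = new Map();
+
+ function allow(identity, limit, windowMs) {
+   const now = Date.now();
+   const hits = (buckets.get(identity) ?? []).filter((t) => now - t < windowMs);
+   if (hits.length >= limit) {
+     buckets.set(identity, hits);
+     return false; // over the limit inside the current window
+   }
+   hits.push(now);
+   buckets.set(identity, hits);
+   return true;
+ }
+ ```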
+
+ ### Throttle/debounce timers (max 5,000)
+
+ The server tracks active throttle and debounce entries globally. When at capacity, new entries bypass the timer and publish immediately so data is never silently dropped.
+
+ ### Topic length (max 256 characters)
+
+ The adapter rejects topic names longer than 256 characters or containing control characters (byte value < 32). This applies to subscribe, unsubscribe, and batch-subscribe messages.
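+
+ The rule is simple to mirror client-side before subscribing -- a sketch of the same check:
+
+ ```js
+ // Mirrors the described server-side rule: <= 256 chars, no control bytes.
+ function isValidTopic(topic) {
+   if (typeof topic !== 'string' || topic.length === 0 || topic.length > 256) {
+     return false;
+   }
+   for (let i = 0; i < topic.length; i++) {
+     if (topic.charCodeAt(i) < 32) return false; // control character
+   }
+   return true;
+ }
+ ```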
+
  ### ws.subscribe() vs the subscribe hook
1691
1747
 
1692
1748
  `live.stream()` calls `ws.subscribe(topic)` server-side, bypassing the adapter's `subscribe` hook entirely. This is correct -- stream topics are gated by `guard()`, not the subscribe hook.
@@ -1939,7 +1995,7 @@ The plugin resolves `$live/chat` to `src/live/chat.js`, generates client stubs,
 
  ## Benchmarks
 
- The benchmark suite measures overhead added by svelte-realtime on top of raw WebSocket messaging.
+ The benchmark suite measures the full-stack overhead added by svelte-realtime on top of raw WebSocket messaging: JSON serialization, RPC path resolution, registry lookup, context construction, handler execution, and response encoding. These run in-process with mock objects and isolate the framework cost from network latency.
 
  Run with:
 
@@ -1962,7 +2018,7 @@ With high-frequency streams (e.g. 1000 cursors at 20 updates/sec), this reduces
 
  In Node/SSR (tests, `__directCall`, etc.), events apply synchronously -- no batching overhead.
 
- These benchmarks run in-process with mock objects (no real network). They isolate the framework overhead from network latency. See [bench/rpc.js](bench/rpc.js) for the full source.
+ See [bench/rpc.js](bench/rpc.js) for the full source.
 
  ---
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "svelte-realtime",
- "version": "0.4.13",
+ "version": "0.4.14",
  "description": "Realtime RPC and reactive subscriptions for SvelteKit, built on svelte-adapter-uws",
  "author": "Kevin Radziszewski",
  "license": "MIT",