svelte-adapter-uws 0.4.7 → 0.4.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +39 -4
- package/files/handler.js +2 -1
- package/index.d.ts +1 -1
- package/package.json +1 -1
- package/vite.js +1 -1
package/README.md
CHANGED
@@ -404,6 +404,16 @@ adapter({
 })
 ```
 
+### Backpressure and connection limits
+
+These options control how the server handles misbehaving or slow clients at the WebSocket level:
+
+**`maxPayloadLength`** (default: 16 KB) -- the maximum size of a single incoming WebSocket message. If a client sends a message larger than this, uWS closes the connection immediately (not just the message -- the entire connection is dropped). Set this based on the largest message your application expects to receive.
+
+**`maxBackpressure`** (default: 1 MB) -- the per-connection outbound send buffer. When a client reads slower than the server writes, messages queue up in this buffer. Once it overflows, subsequent `send()` and `publish()` calls for that connection silently drop the message. The `drain` hook fires when the buffer empties again. Lower this if you expect many slow consumers, to avoid per-connection memory bloat.
+
+**`upgradeRateLimit`** (default: 10 per 10s window) -- sliding-window rate limit on WebSocket upgrade requests per client IP. Clients exceeding the limit get a `429 Too Many Requests` response. The IP rate map is capped at 10,000 entries with LRU eviction by activity score, so sustained connection floods from many IPs don't cause unbounded memory growth.
+
 ### Static file behavior
 
 All static assets (from the `client/` and `prerendered/` output directories) are loaded once at startup and served directly from RAM. Each response automatically includes:
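The three limits added above are adapter options; a hypothetical tuning sketch follows. The option names come from the diff, but their exact placement in `svelte.config.js` (top-level adapter options) is an assumption:

```javascript
// Hypothetical sketch: tuning the WebSocket limits described above.
// Option placement is assumed; values shown are the documented defaults.
import adapter from 'svelte-adapter-uws';

export default {
  kit: {
    adapter: adapter({
      maxPayloadLength: 16 * 1024,  // messages over 16 KB drop the connection
      maxBackpressure: 1024 * 1024, // outbound buffer before send()/publish() drop
      upgradeRateLimit: 10          // upgrades per IP per 10s window before 429
    })
  }
};
```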
@@ -610,6 +620,22 @@ export function drain(ws, { platform }) {
 }
 ```
 
+### Message protocol
+
+The adapter uses a JSON envelope format for all pub/sub messages: `{ topic, event, data }`. Control messages from the client store (`subscribe`, `unsubscribe`, `subscribe-batch`) use `{ type, topic }` or `{ type, topics }`.
+
+To avoid JSON-parsing every incoming message, the handler uses a byte-prefix discriminator: control messages start with `{"type"` (byte 3 is `y`), while user envelopes start with `{"topic"` (byte 3 is `o`). A single byte comparison skips `JSON.parse` entirely for user messages. Messages over 8 KB are also skipped (generous ceiling for `subscribe-batch` with many topics, well above any realistic control message).
+
+### Topic validation
+
+Topics submitted by clients are validated before being accepted:
+
+- Must be between 1 and 256 characters
+- Must not contain control characters (code points below 32)
+- `subscribe-batch` accepts at most 256 topics per message (the client only sends what it was subscribed to before a reconnect)
+
+Topics prefixed with `__` are reserved for adapter plugins (presence uses `__presence:*`, replay uses `__replay:*`). They are not blocked at the protocol level because plugins subscribe to them from the client, but application code should not use the `__` prefix for its own topics.
+
 ### Explicit handler path
 
 If your handler is somewhere other than `src/hooks.ws.js`:
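The byte-prefix discriminator and the topic-validation rules added above can be sketched in a few lines. Function names here are illustrative, not the adapter's internal API:

```javascript
// Sketch of the byte-prefix discriminator: control messages ({"type"...})
// have 'y' (0x79) at byte index 3, user envelopes ({"topic"...}) have 'o'.
// Anything over 8 KB is treated as a user message without parsing.
function isControlMessage(buf) {
  return buf.length <= 8 * 1024 && buf[3] === 0x79;
}

// Sketch of the validation rules listed above.
function isValidTopic(topic) {
  if (typeof topic !== 'string') return false;
  if (topic.length < 1 || topic.length > 256) return false;
  for (let i = 0; i < topic.length; i++) {
    if (topic.charCodeAt(i) < 32) return false; // no control characters
  }
  return true;
}
```

A single array index and comparison replaces a `JSON.parse` per message on the pub/sub hot path, which is why the discriminator is worth the rigid prefix requirement.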
@@ -630,7 +656,7 @@ The `upgrade` function receives an `UpgradeContext`:
 {
   headers: { 'cookie': '...', 'host': 'localhost:3000', ... }, // all lowercase
   cookies: { session_id: 'abc123', theme: 'dark' }, // parsed from Cookie header
-  url: '/ws',
+  url: '/ws?token=abc', // request path + query string
   remoteAddress: '127.0.0.1' // client IP
 }
 ```
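Since `url` now carries the query string, an upgrade handler can recover parameters with the standard WHATWG `URL` parser. A sketch, where the function name and the dummy base origin are assumptions (the base is needed only because `ctx.url` is path-relative):

```javascript
// Sketch: extracting a token from the upgrade URL shown above.
function tokenFromUpgrade(ctx) {
  const { searchParams } = new URL(ctx.url, 'http://internal');
  return searchParams.get('token'); // null when absent
}
```

For example, `tokenFromUpgrade({ url: '/ws?token=abc' })` yields `'abc'`.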
@@ -2292,7 +2318,9 @@ CLUSTER_WORKERS=4 node build
 CLUSTER_WORKERS=auto PORT=8080 ORIGIN=https://example.com node build
 ```
 
-If a worker crashes, it is automatically restarted with exponential backoff. On `SIGTERM`/`SIGINT`, the primary tells all workers to drain in-flight requests and shut down gracefully.
+If a worker crashes, it is automatically restarted with exponential backoff (100ms initial, doubling up to 5s, max 50 attempts before the primary exits). On `SIGTERM`/`SIGINT`, the primary tells all workers to drain in-flight requests and shut down gracefully.
+
+The primary thread monitors worker health with a 10-second heartbeat interval. If a worker fails to acknowledge a heartbeat within 30 seconds (stuck event loop, deadlock), the primary terminates it and the restart policy kicks in.
 
 ### Clustering modes
 
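The restart policy spelled out in the new text maps to a simple delay schedule. This is a sketch of the stated numbers, not the adapter's actual code:

```javascript
// Sketch of the documented backoff: 100ms initial, doubling, capped at 5s,
// up to 50 attempts before the primary gives up. Function name is illustrative.
function restartDelay(attempt) {
  if (attempt >= 50) return null; // primary exits instead of restarting
  return Math.min(100 * 2 ** attempt, 5000);
}
```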
@@ -2311,14 +2339,14 @@ Setting `CLUSTER_MODE=reuseport` on non-Linux platforms is an error (SO_REUSEPOR
 
 ### WebSocket + clustering
 
-`platform.publish()` is automatically relayed across all workers via the primary thread, so subscribers on any worker receive the message. This is built in -- no external pub/sub needed.
+`platform.publish()` is automatically relayed across all workers via the primary thread, so subscribers on any worker receive the message. This is built in -- no external pub/sub needed. The relay is microtask-batched: a SvelteKit action that calls `publish()` multiple times sends a single IPC message per microtask instead of one per call.
 
 If you add your own cross-process messaging (Redis, Postgres LISTEN/NOTIFY, etc.), pass `{ relay: false }` to prevent duplicate delivery -- your external source already fans out to every worker, so the built-in relay would double it.
 
 Per-worker limitations (acceptable for most apps):
 - `platform.connections` - returns the count for the local worker only
 - `platform.subscribers(topic)` - returns the count for the local worker only
-- `platform.sendTo(filter, ...)` -
+- `platform.sendTo(filter, ...)` - iterates the local worker's connections only, no cross-worker relay
 
 ### Docker / multi-process deployments (Linux)
 
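The microtask batching added to the relay description above can be sketched as a small queue that schedules one flush per microtask. Class and method names are illustrative, not the adapter's internals:

```javascript
// Sketch of microtask-batched relaying: N publish() calls in one synchronous
// stretch of code produce a single batched send, flushed on the next microtask.
class RelayBatcher {
  constructor(send) {
    this.send = send;       // e.g. process.send in a cluster worker
    this.queue = [];
    this.scheduled = false;
  }
  publish(topic, data) {
    this.queue.push({ topic, data });
    if (!this.scheduled) {
      this.scheduled = true;
      queueMicrotask(() => this.flush()); // one flush per microtask
    }
  }
  flush() {
    this.scheduled = false;
    if (this.queue.length === 0) return;
    this.send(this.queue.splice(0)); // single message carrying the whole batch
  }
}
```

Three `publish()` calls inside one action therefore cost one IPC round trip carrying a three-element batch, instead of three separate messages.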
@@ -2498,6 +2526,13 @@ uWS native pub/sub delivered 3.5M messages/s with exact 50x fan-out. The adapter
 - No per-request stream allocation for static files (in-memory Buffer, not `fs.createReadStream`)
 - No Node.js `http.IncomingMessage` shim (we construct `Request` directly from uWS)
 
+### Internal optimizations
+
+The adapter applies several allocation and caching strategies to stay off the GC's radar on the hot path:
+
+- **Request state pooling** -- SSR requests need a `{ aborted: false }` state object. Instead of allocating one per request, the adapter maintains a pool of up to 256 reusable state objects. Eliminates young-gen GC churn under sustained load.
+- **Envelope prefix cache** -- `platform.publish()` and `platform.send()` wrap data in a `{"topic":"...","event":"...","data":...}` envelope. The prefix up to `"data":` is cached in a 256-entry LRU map keyed by topic+event. Repeated publishes to the same topic/event (the common case) skip 4 string concatenations and the character validation scan. The cache is trimmed every 60 seconds to reclaim stale entries from shifted traffic patterns.
+
 ### SSR request deduplication
 
 When multiple concurrent requests arrive for the same anonymous (no cookie/auth) GET or HEAD URL, only one is dispatched to SvelteKit. The others wait for the result and reconstruct their own response from the shared buffer. This prevents redundant rendering work during traffic spikes, a common pattern when a post goes viral or a cron job hits a popular page at the same time as real users.
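The deduplication described above amounts to coalescing concurrent callers onto one in-flight promise. A minimal sketch, assuming a key of method + URL and ignoring the cookie/auth checks and buffer-sharing details:

```javascript
// Sketch of SSR request coalescing: concurrent calls with the same key
// share the promise of the first render instead of rendering again.
const inflight = new Map();

function dedupedRender(key, render) {
  const pending = inflight.get(key);
  if (pending) return pending; // piggyback on the render already running
  const p = render().finally(() => inflight.delete(key));
  inflight.set(key, p);
  return p;
}
```

Because the entry is removed only when the render settles, every request arriving while it is pending gets the exact same promise object; the next request after settlement triggers a fresh render.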
package/files/handler.js
CHANGED
@@ -1471,7 +1471,8 @@ if (WS_ENABLED) {
 }
 
 // -- User upgrade handler path (may be async) --
-const
+const query = req.getQuery();
+const url = query ? req.getUrl() + '?' + query : req.getUrl();
 
 let aborted = false;
 res.onAborted(() => {
package/index.d.ts
CHANGED
@@ -209,7 +209,7 @@ export interface UpgradeContext {
   headers: Record<string, string>;
   /** Parsed cookies from the Cookie header. */
   cookies: Record<string, string>;
-  /** The request URL path. */
+  /** The request URL path, including query string if present (e.g. '/ws?token=abc'). */
   url: string;
   /** Remote IP address. */
   remoteAddress: string;
package/package.json
CHANGED
package/vite.js
CHANGED