svelte-adapter-uws 0.3.5 → 0.3.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +73 -0
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -53,6 +53,7 @@ I've been loving Svelte and SvelteKit for a long time. I always wanted to expand
  **Deployment & scaling**
  - [Deploying with Docker](#deploying-with-docker)
  - [Clustering](#clustering)
+ - [OS tuning for production](#os-tuning-for-production)
  - [Performance](#performance)

  **Examples**
@@ -2174,6 +2175,78 @@ Per-worker limitations (acceptable for most apps):
  - `platform.subscribers(topic)` - returns the count for the local worker only
  - `platform.sendTo(filter, ...)` - only reaches connections on the local worker

+ ### Docker / multi-process deployments (Linux)
+
+ On Linux, `SO_REUSEPORT` is set on every `app.listen()` call -- including single-process mode. This means multiple independent `node build` processes can bind to the same port without any adapter-level clustering. The kernel distributes connections across them.
+
+ If you already have external pub/sub (Redis, Postgres LISTEN/NOTIFY) handling cross-process messaging, you do not need `CLUSTER_WORKERS` at all. Just run multiple replicas and let your infrastructure handle the rest:
+
+ ```yaml
+ # docker-compose.yml
+ services:
+   app:
+     build: .
+     command: node build
+     network_mode: host
+     environment:
+       - PORT=443
+       - SSL_CERT=/certs/cert.pem
+       - SSL_KEY=/certs/key.pem
+     deploy:
+       replicas: 4
+ ```
+
+ Each replica is a plain single-process `node build`. No coordinator thread, no built-in relay. Docker handles restarts, Redis handles cross-process messaging, the kernel handles port sharing.
+
+ With `network_mode: host`, containers share the host network stack directly -- no port mapping needed, and services like Postgres and Redis are reachable via `127.0.0.1`. This avoids Docker bridge DNS and gives the best network performance.
+
+ **When to use what:**
+ - **`CLUSTER_WORKERS`** -- single-machine deployments without Docker/k8s/systemd managing processes for you
+ - **Docker replicas** -- production deployments where your infrastructure already handles process management and you have external pub/sub for cross-process messaging
+
+ ---
+
+ ## OS tuning for production
+
+ uWebSockets.js can handle hundreds of thousands of connections per process, but Linux defaults are conservative. For any deployment expecting more than a few hundred concurrent WebSocket connections, apply these settings on the host machine.
+
+ ### Kernel parameters
+
+ Add to `/etc/sysctl.conf` and run `sysctl -p`:
+
+ ```
+ net.ipv4.tcp_max_syn_backlog = 4096   # pending TCP connection queue
+ net.ipv4.tcp_tw_reuse = 1             # reuse TIME_WAIT sockets faster
+ net.core.somaxconn = 4096             # listen() backlog limit
+ fs.file-max = 1024000                 # system-wide file descriptor limit
+ ```
+
+ ### File descriptor limits
+
+ Add to `/etc/security/limits.conf` (takes effect on next login):
+
+ ```
+ * soft nofile 1024000
+ * hard nofile 1024000
+ ```
+
+ ### Docker
+
+ If running in Docker, the container also needs raised limits. Add to your `docker-compose.yml`:
+
+ ```yaml
+ services:
+   app:
+     ulimits:
+       nofile:
+         soft: 65536
+         hard: 65536
+ ```
+
+ Without these changes, each process is limited to 1024 file descriptors (the default). Each WebSocket connection uses one file descriptor, so the default caps you at roughly 1000 concurrent connections per process. The server CPU can be well under 50% and you will still hit this ceiling -- the bottleneck is the OS, not uWS or your application code.
2247
+
2248
+ For a deeper walkthrough, see [Millions of active WebSockets with Node.js](https://unetworkingab.medium.com/millions-of-active-websockets-with-node-js-7dc575746a01) from the uWebSockets.js authors.
2249
+
2177
2250
  ---
2178
2251
 
2179
2252
  ## Performance
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "svelte-adapter-uws",
-   "version": "0.3.5",
+   "version": "0.3.7",
    "description": "SvelteKit adapter for uWebSockets.js - high-performance C++ HTTP server with built-in WebSocket support",
    "author": "Kevin Radziszewski",
    "license": "MIT",