wse-client 1.4.3 → 2.0.0

Files changed (44)
  1. package/README.md +406 -336
  2. package/dist/constants.d.ts +1 -1
  3. package/dist/constants.d.ts.map +1 -1
  4. package/dist/constants.js +3 -1
  5. package/dist/constants.js.map +1 -1
  6. package/dist/hooks/useWSE.d.ts.map +1 -1
  7. package/dist/hooks/useWSE.js +53 -13
  8. package/dist/hooks/useWSE.js.map +1 -1
  9. package/dist/index.d.ts +1 -1
  10. package/dist/index.js +1 -1
  11. package/dist/protocols/compression.d.ts.map +1 -1
  12. package/dist/protocols/compression.js +19 -5
  13. package/dist/protocols/compression.js.map +1 -1
  14. package/dist/services/ConnectionManager.d.ts +0 -1
  15. package/dist/services/ConnectionManager.d.ts.map +1 -1
  16. package/dist/services/ConnectionManager.js +0 -22
  17. package/dist/services/ConnectionManager.js.map +1 -1
  18. package/dist/services/EventSequencer.d.ts.map +1 -1
  19. package/dist/services/EventSequencer.js +5 -3
  20. package/dist/services/EventSequencer.js.map +1 -1
  21. package/dist/services/MessageProcessor.d.ts.map +1 -1
  22. package/dist/services/MessageProcessor.js +39 -5
  23. package/dist/services/MessageProcessor.js.map +1 -1
  24. package/dist/services/NetworkMonitor.d.ts.map +1 -1
  25. package/dist/services/NetworkMonitor.js +3 -1
  26. package/dist/services/NetworkMonitor.js.map +1 -1
  27. package/dist/services/OfflineQueue.d.ts +2 -0
  28. package/dist/services/OfflineQueue.d.ts.map +1 -1
  29. package/dist/services/OfflineQueue.js +10 -3
  30. package/dist/services/OfflineQueue.js.map +1 -1
  31. package/dist/stores/useWSEStore.d.ts +4 -1
  32. package/dist/stores/useWSEStore.d.ts.map +1 -1
  33. package/dist/stores/useWSEStore.js +21 -0
  34. package/dist/stores/useWSEStore.js.map +1 -1
  35. package/dist/types.d.ts +10 -2
  36. package/dist/types.d.ts.map +1 -1
  37. package/dist/utils/logger.d.ts.map +1 -1
  38. package/dist/utils/logger.js +2 -0
  39. package/dist/utils/logger.js.map +1 -1
  40. package/dist/utils/security.d.ts +0 -1
  41. package/dist/utils/security.d.ts.map +1 -1
  42. package/dist/utils/security.js +6 -50
  43. package/dist/utils/security.js.map +1 -1
  44. package/package.json +1 -1
package/README.md CHANGED
@@ -1,469 +1,539 @@
1
- # WSE -- WebSocket Engine
1
+ # WSE - WebSocket Engine
2
2
 
3
- **A complete, out-of-the-box solution for real-time communication between React, Python, and backend services.**
4
-
5
- Three packages. Four lines of code. Your frontend and backend talk in real time.
6
-
7
- [![CI](https://github.com/silvermpx/wse/actions/workflows/ci.yml/badge.svg)](https://github.com/silvermpx/wse/actions/workflows/ci.yml)
8
3
  [![PyPI - Server](https://img.shields.io/pypi/v/wse-server)](https://pypi.org/project/wse-server/)
9
4
  [![PyPI - Client](https://img.shields.io/pypi/v/wse-client)](https://pypi.org/project/wse-client/)
10
5
  [![npm](https://img.shields.io/npm/v/wse-client)](https://www.npmjs.com/package/wse-client)
11
6
  [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
12
7
 
13
- ---
14
-
15
- ## Why WSE?
8
+ High-performance WebSocket server built in Rust with native clustering, message recovery, presence tracking, and real-time fan-out. Exposed to Python via PyO3 with zero GIL overhead on the data path.
9
+
10
+ ## Features
11
+
12
+ ### Server
13
+
14
+ | Feature | Details |
15
+ |---------|---------|
16
+ | **Rust core** | tokio async runtime, tungstenite WebSocket transport, dedicated thread pool, zero GIL on the data path |
17
+ | **JWT authentication** | Rust-native HS256 validation during handshake (0.01ms), cookie + Authorization header extraction |
18
+ | **Protocol negotiation** | `client_hello`/`server_hello` handshake with feature discovery, capability advertisement, version agreement |
19
+ | **Topic subscriptions** | Per-connection topic subscriptions with automatic cleanup on disconnect |
20
+ | **Pre-framed broadcast** | WebSocket frame built once, shared via Arc across all connections, single allocation per broadcast |
21
+ | **Vectored writes** | `write_vectored` (writev syscall) batches multiple frames per connection in a single kernel call |
22
+ | **Write coalescing** | Write task drains up to 256 pending frames per iteration via `recv_many` |
23
+ | **DashMap state** | Lock-free sharded concurrent hash maps for topics, rates, formats, activity tracking |
24
+ | **mimalloc allocator** | Global allocator optimized for multi-threaded workloads with frequent small allocations |
25
+ | **Deduplication** | 50,000-entry AHashSet with FIFO eviction, checked on every `send_event()` call |
26
+ | **Rate limiting** | Per-connection token bucket: 100K capacity, 10K/s refill, client warning at 20% remaining |
27
+ | **Zombie detection** | Server pings every 25s, force-closes connections with no activity for 60s |
28
+ | **Drain mode** | Lock-free crossbeam bounded channel, Python acquires GIL once per batch (not per event) |
29
+ | **Compression** | zlib for client-facing messages above threshold (default 1024 bytes) |
30
+ | **MessagePack** | Opt-in binary transport via `?format=msgpack`, roughly 2x faster serialization, 30% smaller |
31
+ | **Message signing** | Selective HMAC-SHA256 signing for critical operations, nonce-based replay prevention |
16
32
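The per-connection rate-limiting policy in the table above (100K token capacity, 10K/s refill, client warning at 20% remaining) can be sketched as a plain token bucket. This is an illustration of the documented numbers only; the real limiter is atomic and lives in the Rust core.

```python
import time

class TokenBucket:
    """Sketch of the documented policy: 100K capacity, 10K tokens/s
    refill, warn the client once fewer than 20% of tokens remain."""

    def __init__(self, capacity=100_000, refill_rate=10_000):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_consume(self, n=1):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens < n:
            return False, False           # rejected; nothing to warn about
        self.tokens -= n
        warn = self.tokens < 0.2 * self.capacity
        return True, warn                 # accepted, with optional warning
```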
 
17
- Building real-time features between React and Python is painful. You need WebSocket handling, reconnection logic, message ordering, authentication, encryption, offline support, health monitoring. That's weeks of work before you ship a single feature.
18
-
19
- **WSE gives you all of this out of the box.**
20
-
21
- Install `wse-server` on your backend, `wse-client` on your frontend (React or Python). Everything works immediately: auto-reconnection, message encryption, sequence ordering, offline queues, health monitoring. No configuration required for the defaults. Override what you need.
33
+ ### End-to-End Encryption
22
34
 
23
- The engine is Rust-accelerated via PyO3. **14M msg/s** JSON, **30M msg/s** binary, **2.1M deliveries/s** fan-out, **500K concurrent connections** with zero message loss -- benchmarked on AMD EPYC 7502P. Multi-instance horizontal scaling via Redis pub/sub. Sub-millisecond latency (0.38 ms) with Rust JWT authentication.
35
+ | Feature | Details |
36
+ |---------|---------|
37
+ | **Key exchange** | ECDH P-256 (per-connection keypair, automatic during handshake) |
38
+ | **Encryption** | AES-GCM-256 with unique 12-byte IV per message |
39
+ | **Key derivation** | HKDF-SHA256 (salt: `wse-encryption`, info: `aes-gcm-key`) |
40
+ | **Wire format** | `E:` prefix + 12-byte IV + AES-GCM ciphertext + 16-byte auth tag |
41
+ | **Key rotation** | Configurable rotation interval (default 1 hour), automatic renegotiation |
42
+ | **Replay prevention** | Nonce cache (10K entries, 5-minute TTL) on the client side |
43
+
44
+ ### Cluster Protocol
45
+
46
+ | Feature | Details |
47
+ |---------|---------|
48
+ | **Topology** | Full TCP mesh, direct peer-to-peer connections |
49
+ | **Wire format** | Custom binary frames: 8-byte header + topic + payload, 12 message types |
50
+ | **Interest routing** | SUB/UNSUB/RESYNC frames, messages forwarded only to peers with matching subscribers |
51
+ | **Gossip discovery** | PeerAnnounce/PeerList frames, new nodes need one seed address to join |
52
+ | **mTLS** | rustls + tokio-rustls, P-256 certificates, WebPkiClientVerifier for both sides |
53
+ | **Compression** | zstd level 1 for payloads above 256 bytes, capability-negotiated, output capped at 1 MB |
54
+ | **Heartbeat** | 5s ping interval, 15s timeout, dead peer detection |
55
+ | **Circuit breaker** | 10 failures to open, 60s reset, 3 half-open probe calls |
56
+ | **Dead letter queue** | 1000-entry ring buffer for failed cluster sends |
57
+ | **Presence sync** | PresenceUpdate/PresenceFull frames, CRDT last-write-wins conflict resolution |
58
+
59
+ ### Presence Tracking
60
+
61
+ | Feature | Details |
62
+ |---------|---------|
63
+ | **Per-topic tracking** | Which users are active in each topic, with custom metadata (status, avatar, etc.) |
64
+ | **User-level grouping** | Multiple connections from the same JWT `sub` share one presence entry |
65
+ | **Join/leave lifecycle** | `presence_join` on first connection, `presence_leave` on last disconnect |
66
+ | **O(1) stats** | `presence_stats()` returns member/connection counts without iteration |
67
+ | **Data updates** | `update_presence()` broadcasts to all topics where the user is present |
68
+ | **Cluster sync** | Synchronized across all nodes, CRDT last-write-wins resolution |
69
+ | **TTL sweep** | Background task every 30s removes entries from dead connections |
70
+
71
+ ### Message Recovery
72
+
73
+ | Feature | Details |
74
+ |---------|---------|
75
+ | **Ring buffers** | Per-topic, power-of-2 capacity, bitmask indexing (single AND instruction) |
76
+ | **Epoch+offset tracking** | Precise recovery positioning, epoch changes on buffer recreation |
77
+ | **Memory management** | Global budget (default 256 MB), TTL eviction, LRU eviction when over budget |
78
+ | **Zero-copy storage** | Recovery entries share `Bytes` (Arc) with the broadcast path |
79
+ | **Recovery on reconnect** | `subscribe_with_recovery()` replays missed messages automatically |
80
+
81
+ ### Client SDKs (Python + TypeScript/React)
82
+
83
+ | Feature | Details |
84
+ |---------|---------|
85
+ | **Auto-reconnection** | 4 strategies: exponential, linear, fibonacci, adaptive backoff with jitter |
86
+ | **Connection pool** | Multi-endpoint with health scoring, 3 load balancing strategies, automatic failover |
87
+ | **Circuit breaker** | CLOSED/OPEN/HALF_OPEN state machine, prevents connection storms |
88
+ | **Rate limiting** | Client-side token bucket, coordinates with server feedback |
89
+ | **E2E encryption** | Wire-compatible AES-GCM-256 + ECDH P-256 (both clients speak the same protocol) |
90
+ | **Event sequencing** | Duplicate detection (sliding window) + out-of-order buffering |
91
+ | **Network monitor** | Real-time latency, jitter, packet loss measurement, quality scoring |
92
+ | **Priority queues** | 5 levels from CRITICAL to BACKGROUND |
93
+ | **Offline queue** | IndexedDB persistence (TypeScript), replayed on reconnect |
94
+ | **Compression** | Automatic zlib for messages above threshold |
95
+ | **MessagePack** | Binary encoding for smaller payloads and faster serialization |
96
+ | **Message signing** | HMAC-SHA256 integrity verification |
97
+
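The four reconnection strategies in the table above can be sketched as delay functions with full jitter. This is illustrative only: the `adaptive` branch below is a hypothetical placeholder, and the SDK's actual base delays, caps, and jitter scheme may differ.

```python
import random

def backoff_delay(strategy: str, attempt: int, base=1.0, cap=30.0) -> float:
    """Delay before reconnect attempt N (1-based), with jitter in [50%, 100%].
    'adaptive' is a stand-in heuristic, not the SDK's real policy."""
    if strategy == "exponential":
        delay = base * (2 ** (attempt - 1))
    elif strategy == "linear":
        delay = base * attempt
    elif strategy == "fibonacci":
        a, b = 1, 1
        for _ in range(attempt - 1):
            a, b = b, a + b
        delay = base * a
    else:  # "adaptive" (hypothetical): grow with the failure streak, then plateau
        delay = base * min(attempt, 5) * 1.5
    delay = min(delay, cap)
    return delay * (0.5 + random.random() / 2)   # jittered delay
```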
98
+ ### Transport Security
99
+
100
+ | Feature | Details |
101
+ |---------|---------|
102
+ | **Origin validation** | `ALLOWED_ORIGINS` env var, rejects unlisted origins with close code 4403 |
103
+ | **Cookie auth** | `access_token` HTTP-only cookie with `Secure + SameSite=Lax` (OWASP recommended for browsers) |
104
+ | **Frame protection** | 1 MB max frame size, serde_json parsing (no eval), escaped user IDs in server_ready |
105
+ | **Cluster frame protection** | zstd decompression output capped at 1 MB (MAX_FRAME_SIZE), protocol version validation |
24
106
 
25
107
  ---
26
108
 
27
109
  ## Quick Start
28
110
 
29
- ### Server (Python) -- Router Mode
30
-
31
- Embed WSE into your existing FastAPI app on the same port:
111
+ ```bash
112
+ pip install wse-server
113
+ ```
32
114
 
33
115
  ```python
34
- from fastapi import FastAPI
35
- from wse_server import create_wse_router, WSEConfig
36
- import redis.asyncio as redis
116
+ from wse_server import RustWSEServer, rust_jwt_encode
117
+ import time, threading
37
118
 
38
- app = FastAPI()
119
+ server = RustWSEServer(
120
+ "0.0.0.0", 5007,
121
+ max_connections=10_000,
122
+ jwt_secret=b"replace-with-a-strong-secret-key!",
123
+ jwt_issuer="my-app",
124
+ jwt_audience="my-api",
125
+ )
126
+ server.enable_drain_mode()
127
+ server.start()
39
128
 
40
- redis_client = redis.Redis(host="localhost", port=6379, decode_responses=False)
129
+ def handle_events(srv):
130
+ while True:
131
+ for ev in srv.drain_inbound(256, 50):
132
+ if ev[0] == "auth_connect":
133
+ srv.subscribe_connection(ev[1], ["updates"])
134
+ elif ev[0] == "msg":
135
+ print(f"Message from {ev[1]}: {ev[2]}")
136
+ elif ev[0] == "disconnect":
137
+ print(f"Disconnected: {ev[1]}")
41
138
 
42
- wse = create_wse_router(WSEConfig(
43
- redis_client=redis_client,
44
- ))
139
+ threading.Thread(target=handle_events, args=(server,), daemon=True).start()
45
140
 
46
- app.include_router(wse, prefix="/wse")
141
+ while server.is_running():
142
+ time.sleep(1)
47
143
  ```
48
144
 
49
- Publish events from anywhere in your app via the PubSub bus:
145
+ Generate a test token:
50
146
 
51
147
  ```python
52
- bus = app.state.pubsub_bus
53
-
54
- await bus.publish(
55
- topic="notifications",
56
- event={"event_type": "order_shipped", "order_id": 42, "text": "Order shipped!"},
148
+ token = rust_jwt_encode(
149
+ {"sub": "user-1", "iss": "my-app", "aud": "my-api",
150
+ "exp": int(time.time()) + 3600, "iat": int(time.time())},
151
+ b"replace-with-a-strong-secret-key!",
57
152
  )
58
153
  ```
59
154
 
60
- ### Server (Python) -- Standalone Mode
155
+ ---
61
156
 
62
- Run the Rust WebSocket server on a dedicated port for maximum throughput:
157
+ ## Server Configuration
158
+
159
+ `RustWSEServer` constructor parameters:
160
+
161
+ | Parameter | Default | Description |
162
+ |-----------|---------|-------------|
163
+ | `host` | required | Bind address |
164
+ | `port` | required | Bind port |
165
+ | `max_connections` | 1000 | Maximum concurrent WebSocket connections |
166
+ | `jwt_secret` | None | HS256 secret for JWT validation (bytes, min 32 bytes). `None` disables authentication |
167
+ | `jwt_issuer` | None | Expected `iss` claim. Skipped if `None` |
168
+ | `jwt_audience` | None | Expected `aud` claim. Skipped if `None` |
169
+ | `max_inbound_queue_size` | 131072 | Drain mode bounded queue capacity |
170
+ | `recovery_enabled` | False | Enable per-topic message recovery buffers |
171
+ | `recovery_buffer_size` | 128 | Ring buffer slots per topic (rounded to power-of-2) |
172
+ | `recovery_ttl` | 300 | Buffer TTL in seconds before eviction |
173
+ | `recovery_max_messages` | 500 | Max messages returned per recovery response |
174
+ | `recovery_memory_budget` | 268435456 | Global memory limit for all recovery buffers (bytes, default 256 MB) |
175
+ | `presence_enabled` | False | Enable per-topic presence tracking |
176
+ | `presence_max_data_size` | 4096 | Max bytes for a user's presence metadata |
177
+ | `presence_max_members` | 0 | Max tracked members per topic (0 = unlimited) |
63
178
 
64
- ```python
65
- from wse_server._wse_accel import RustWSEServer
179
+ ---
66
180
 
67
- server = RustWSEServer(
68
- "0.0.0.0", 5006,
69
- max_connections=10000,
70
- jwt_secret=b"your-secret-key", # Rust JWT validation in handshake
71
- jwt_issuer="your-app",
72
- jwt_audience="your-api",
73
- )
74
- server.start()
181
+ ## API Reference
182
+
183
+ ### Lifecycle
75
184
 
76
- # Drain inbound events in a background thread
77
- while True:
78
- events = server.drain_inbound(256, 50) # batch size, timeout ms
79
- for event in events:
80
- handle(event)
185
+ ```python
186
+ server.start() # Start the server
187
+ server.stop() # Graceful shutdown
188
+ server.is_running() # Check server status (bool)
81
189
  ```
82
190
 
83
- Standalone mode gives you a dedicated Rust tokio runtime on its own port -- no FastAPI overhead, no GIL on the hot path. This is how WSE achieves 14M msg/s on JSON.
191
+ ### Event Handling
84
192
 
85
- ### Client (React)
193
+ **Drain mode** (recommended) - events are queued in a lock-free crossbeam channel. Python polls in batches, acquiring the GIL once per batch.
86
194
 
87
- ```tsx
88
- import { useWSE } from 'wse-client';
195
+ ```python
196
+ server.enable_drain_mode() # Switch to batch-polling mode (call before start)
197
+ events = server.drain_inbound(256, 50) # Poll up to 256 events, wait up to 50ms
198
+ ```
89
199
 
90
- function Dashboard() {
91
- const { isConnected, connectionHealth } = useWSE({
92
- topics: ['notifications', 'live_data'],
93
- endpoints: ['ws://localhost:8000/wse'],
94
- });
200
+ Each event is a tuple: `(event_type, conn_id, payload)`
95
201
 
96
- useEffect(() => {
97
- const handler = (e: CustomEvent) => {
98
- console.log('New notification:', e.detail);
99
- };
100
- window.addEventListener('notifications', handler);
101
- return () => window.removeEventListener('notifications', handler);
102
- }, []);
202
+ | Event Type | Trigger | Payload |
203
+ |------------|---------|---------|
204
+ | `"auth_connect"` | JWT-validated connection | user_id (string) |
205
+ | `"connect"` | Connection without JWT | cookies (string) |
206
+ | `"msg"` | Client sent WSE-prefixed JSON | parsed dict |
207
+ | `"raw"` | Client sent plain text | raw string |
208
+ | `"bin"` | Client sent binary frame | bytes |
209
+ | `"disconnect"` | Connection closed | None |
210
+ | `"presence_join"` | User's first connection joined a topic | dict with user_id, topic, data |
211
+ | `"presence_leave"` | User's last connection left a topic | dict with user_id, topic, data |
103
212
 
104
- return <div>Status: {connectionHealth}</div>;
105
- }
106
- ```
213
+ **Callback mode** - alternative to drain mode. Callbacks are invoked via `spawn_blocking` per event.
107
214
 
108
- That's it. Your React app receives real-time updates from your Python backend.
215
+ ```python
216
+ server.set_callbacks(on_connect, on_message, on_disconnect)
217
+ ```
109
218
 
110
- ### Client (Python)
219
+ ### Sending Messages
111
220
 
112
221
  ```python
113
- from wse_client import connect
114
-
115
- async with connect("ws://localhost:5006/wse", token="your-jwt") as client:
116
- await client.subscribe(["notifications", "live_data"])
117
- async for event in client:
118
- print(event.type, event.payload)
222
+ server.send(conn_id, text) # Send text to one connection
223
+ server.send_bytes(conn_id, data) # Send binary to one connection
224
+ server.send_event(conn_id, event_dict) # Send structured event (auto-serialized, deduped, rate-checked)
225
+
226
+ server.broadcast_all(text) # Send to every connected client (text)
227
+ server.broadcast_all_bytes(data) # Send to every connected client (binary)
228
+ server.broadcast_local(topic, text) # Fan-out to topic subscribers on this instance
229
+ server.broadcast(topic, text) # Fan-out to topic subscribers across all cluster nodes
119
230
  ```
120
231
 
121
- Same wire protocol, same features. Use the Python client for backend-to-backend communication, microservices, CLI tools, integration tests, or any non-browser use case.
232
+ | Method | Scope | Notes |
233
+ |--------|-------|-------|
234
+ | `send` | Single connection | Raw text frame |
235
+ | `send_bytes` | Single connection | Raw binary frame |
236
+ | `send_event` | Single connection | JSON-serialized, compressed if above threshold, deduplication via 50K-entry FIFO window |
237
+ | `broadcast_all` | All connections | Pre-framed, single frame build shared via Arc |
238
+ | `broadcast_local` | Topic (local) | Pre-framed, DashMap subscriber lookup, stored in recovery buffer if enabled |
239
+ | `broadcast` | Topic (all nodes) | Local fan-out + forwarded to cluster peers with matching interest |
122
240
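The 50K-entry FIFO dedup window that `send_event` consults can be sketched as a set paired with a deque: the set gives O(1) membership checks, the deque evicts the oldest IDs once the cap is hit. A Python illustration of the mechanism (the server's version uses a Rust AHashSet):

```python
from collections import deque

class DedupWindow:
    """Bounded seen-ID window: O(1) duplicate checks via a set,
    FIFO eviction via a deque once capacity is exceeded."""

    def __init__(self, capacity=50_000):
        self.capacity = capacity
        self.seen = set()
        self.order = deque()

    def is_duplicate(self, event_id) -> bool:
        if event_id in self.seen:
            return True
        self.seen.add(event_id)
        self.order.append(event_id)
        if len(self.order) > self.capacity:
            self.seen.discard(self.order.popleft())  # drop the oldest ID
        return False
```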
 
123
- ---
241
+ ### Topic Subscriptions
124
242
 
125
- ## What You Get Out of the Box
243
+ ```python
244
+ server.subscribe_connection(conn_id, ["prices", "news"]) # Subscribe to topics
245
+ server.subscribe_connection(conn_id, ["chat"], {"status": "online"}) # Subscribe with presence data
246
+ server.unsubscribe_connection(conn_id, ["news"]) # Unsubscribe from specific topics
247
+ server.unsubscribe_connection(conn_id, None) # Unsubscribe from all topics
248
+ server.get_topic_subscriber_count("prices") # Subscriber count for a topic
249
+ ```
126
250
 
127
- Everything listed below works the moment you install. No extra setup.
251
+ Subscriptions are cleaned up automatically on disconnect. In cluster mode, interest changes are propagated to peers via SUB/UNSUB frames.
128
252
 
129
- ### Reactive Interface
253
+ ### Presence Tracking
130
254
 
131
- Real-time data flow from Python to React. One hook (`useWSE`) on the client, one `publish()` call on the server. Events appear in your components instantly.
255
+ Requires `presence_enabled=True` in the constructor.
132
256
 
133
- ### Auto-Reconnection
257
+ ```python
258
+ # Query members in a topic
259
+ members = server.presence("chat-room")
260
+ # {"alice": {"data": {"status": "online"}, "connections": 2},
261
+ # "bob": {"data": {"status": "away"}, "connections": 1}}
134
262
 
135
- Exponential backoff with jitter. Connection drops? The client reconnects automatically. No lost messages -- offline queue with IndexedDB persistence stores messages while disconnected and replays them on reconnect.
263
+ # Lightweight counts (O(1), no iteration)
264
+ stats = server.presence_stats("chat-room")
265
+ # {"num_users": 2, "num_connections": 3}
136
266
 
137
- ### End-to-End Encryption
267
+ # Update a user's presence data across all their subscribed topics
268
+ server.update_presence(conn_id, {"status": "away"})
269
+ ```
138
270
 
139
- ECDH P-256 key exchange with AES-GCM-256 per connection. Each connection negotiates a unique session key during the handshake (server_ready / client_hello), then all sensitive messages are encrypted end-to-end. HMAC-SHA256 message signing for integrity. Pluggable encryption and token providers via Python protocols.
271
+ Presence is tracked at the user level (JWT `sub` claim). Multiple connections from the same user share a single presence entry. `presence_join` fires on first connection, `presence_leave` on last disconnect. In cluster mode, presence state is synchronized across all nodes using CRDT last-write-wins resolution.
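The last-write-wins resolution can be sketched as a merge over per-user entries carrying a version. The `(ts, node)` tuple below is an assumption for illustration; the actual cluster frames may version entries differently.

```python
def lww_merge(local: dict, remote: dict) -> dict:
    """Hypothetical LWW merge of presence maps keyed by user_id.
    The entry with the higher (timestamp, node_id) version wins;
    node_id breaks ties deterministically."""
    merged = dict(local)
    for user, entry in remote.items():
        cur = merged.get(user)
        if cur is None or (entry["ts"], entry["node"]) > (cur["ts"], cur["node"]):
            merged[user] = entry
    return merged
```

Because the comparison is deterministic, merging in either direction converges on the same state, which is what makes the scheme safe to gossip between nodes.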
140
272
 
141
- ### Message Ordering
273
+ ### Message Recovery
142
274
 
143
- Sequence numbers with gap detection and reordering buffer. Messages arrive in order even under high load or network instability. Out-of-order messages are buffered and delivered once the gap fills.
275
+ Requires `recovery_enabled=True` in the constructor.
144
276
 
145
- ### Authentication
277
+ ```python
278
+ result = server.subscribe_with_recovery(
279
+ conn_id, ["prices"],
280
+ recover=True,
281
+ epoch=client_epoch, # From previous session
282
+ offset=client_offset, # From previous session
283
+ )
284
+ # {"topics": {"prices": {"epoch": 123, "offset": 456, "recovered": True, "count": 12}}}
285
+ ```
146
286
 
147
- JWT-based with HS256, validated in Rust during the WebSocket handshake (zero GIL, 0.01ms decode). Per-connection, per-topic access control. Cookie-based token extraction for seamless browser auth. Fallback to Python auth handler when Rust JWT is not configured.
287
+ The server maintains per-topic ring buffers (power-of-2 capacity, bitmask indexing). Clients store the `epoch` and `offset` from their last received message. On reconnect, the server replays missed messages from the ring buffer. If the gap is too large or the epoch has changed, the client receives a `NotRecovered` status and should re-subscribe from scratch.
148
288
 
149
- ### Health Monitoring
289
+ Memory is managed with a global budget (default 256 MB), TTL eviction for idle buffers, and LRU eviction when over budget.
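The ring-buffer mechanics described above can be sketched in a few lines: with a power-of-2 capacity, the slot for an offset is `offset & mask` (a single AND instruction), and recovery either replays the gap or signals that it cannot. A simplified illustration, not the server's Rust implementation:

```python
class RecoveryRing:
    """Per-topic recovery buffer sketch: power-of-2 capacity with
    bitmask indexing, plus epoch/offset bookkeeping for replay."""

    def __init__(self, capacity=128):
        assert capacity & (capacity - 1) == 0, "capacity must be a power of 2"
        self.mask = capacity - 1
        self.slots = [None] * capacity
        self.epoch = 0          # bumped whenever the buffer is recreated
        self.next_offset = 0    # offset the next message will receive

    def push(self, msg):
        self.slots[self.next_offset & self.mask] = msg   # single AND to index
        self.next_offset += 1

    def recover(self, epoch, offset):
        # NotRecovered: epoch changed, or the gap exceeds buffer capacity.
        if epoch != self.epoch or self.next_offset - offset > self.mask + 1:
            return None
        return [self.slots[i & self.mask] for i in range(offset, self.next_offset)]
```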
150
290
 
151
- Connection quality scoring (excellent / good / fair / poor), latency tracking, jitter analysis, packet loss detection. Your UI knows when the connection is degraded and can react accordingly.
291
+ ### Cluster
152
292
 
153
- ### Scaling
293
+ ```python
294
+ # Join a cluster mesh with mTLS
295
+ server.connect_cluster(
296
+ peers=["10.0.0.2:9999", "10.0.0.3:9999"],
297
+ tls_ca="/etc/wse/ca.pem",
298
+ tls_cert="/etc/wse/node.pem",
299
+ tls_key="/etc/wse/node.key",
300
+ cluster_port=9999,
301
+ )
154
302
 
155
- Redis pub/sub for multi-instance fan-out. Run N server instances behind a load balancer -- publish on any instance, all subscribers receive the message regardless of which instance they're connected to. Pipelined PUBLISH (up to 64 per round-trip), circuit breaker, exponential backoff with jitter, dead letter queue for failed messages. Capacity scales linearly with instances: tested up to 1.04M deliveries/s per instance.
303
+ # With gossip discovery (only seed addresses needed)
304
+ server.connect_cluster(
305
+ peers=[],
306
+ seeds=["10.0.0.2:9999"],
307
+ cluster_addr="10.0.0.1:9999",
308
+ cluster_port=9999,
309
+ )
156
310
 
157
- ### Rust Performance
311
+ server.cluster_connected() # True if connected to at least one peer
312
+ server.cluster_peers_count() # Number of active peer connections
313
+ ```
158
314
 
159
- Compression, sequencing, filtering, rate limiting, and the WebSocket server itself are implemented in Rust via PyO3. Python API stays the same. Rust accelerates transparently.
315
+ Nodes form a full TCP mesh automatically. The cluster protocol uses a custom binary frame format with an 8-byte header, 12 message types, and capability negotiation during handshake. Features:
160
316
 
161
- ---
317
+ - **Interest-based routing** - SUB/UNSUB/RESYNC frames. Messages are only forwarded to peers with matching subscribers.
318
+ - **Gossip discovery** - PeerAnnounce/PeerList frames. New nodes need one seed address to join.
319
+ - **mTLS** - mutual TLS via rustls with P-256 certificates and WebPkiClientVerifier.
320
+ - **zstd compression** - payloads above 256 bytes compressed at level 1, capability-negotiated.
321
+ - **Circuit breaker** - 10 failures to open, 60s reset, 3 half-open probe calls.
322
+ - **Heartbeat** - 5s interval, 15s timeout, dead peer detection.
323
+ - **Dead letter queue** - 1000-entry ring buffer for failed cluster sends.
324
+ - **Presence sync** - PresenceUpdate/PresenceFull frames with CRDT conflict resolution.
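The binary frame format can be made concrete with a pack/unpack sketch. The README specifies an 8-byte header followed by topic and payload but not the field order, so the layout below (version, message type, topic length, payload length) is an assumption chosen purely to fill 8 bytes for illustration:

```python
import struct

# Hypothetical 8-byte header: version (1B) | msg_type (1B) |
# topic_len (2B) | payload_len (4B), big-endian.
HEADER = struct.Struct(">BBHI")

def pack_frame(version: int, msg_type: int, topic: bytes, payload: bytes) -> bytes:
    return HEADER.pack(version, msg_type, len(topic), len(payload)) + topic + payload

def unpack_frame(frame: bytes):
    version, msg_type, tlen, plen = HEADER.unpack_from(frame)
    topic = frame[8:8 + tlen]
    payload = frame[8 + tlen:8 + tlen + plen]
    return version, msg_type, topic, payload
```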
162
325
 
163
- ## Full Feature List
326
+ ### Health Monitoring
164
327
 
165
- ### Server (Python + Rust)
328
+ ```python
329
+ health = server.health_snapshot()
330
+ # {
331
+ # "connections": 150,
332
+ # "inbound_queue_depth": 0,
333
+ # "inbound_dropped": 0,
334
+ # "uptime_secs": 3600.5,
335
+ # "recovery_enabled": True,
336
+ # "recovery_topic_count": 5,
337
+ # "recovery_total_bytes": 1048576,
338
+ # "cluster_connected": True,
339
+ # "cluster_peer_count": 2,
340
+ # "cluster_messages_sent": 50000,
341
+ # "cluster_messages_delivered": 49950,
342
+ # "cluster_messages_dropped": 0,
343
+ # "cluster_bytes_sent": 1048576,
344
+ # "cluster_bytes_received": 1024000,
345
+ # "cluster_reconnect_count": 0,
346
+ # "cluster_unknown_message_types": 0,
347
+ # "cluster_dlq_size": 0,
348
+ # "presence_enabled": True,
349
+ # "presence_topics": 3,
350
+ # "presence_total_users": 25,
351
+ # }
352
+ ```
166
353
 
167
- | Feature | Description |
168
- |---------|-------------|
169
- | **Drain Mode** | Batch-polling inbound events from Rust. One GIL acquisition per batch (up to 256 messages) instead of per-message Python callbacks. Condvar-based wakeup for zero busy-wait. |
170
- | **Write Coalescing** | Outbound pipeline: `feed()` + batch `try_recv()` + single `flush()`. Reduces syscalls under load by coalescing multiple messages into one write. |
171
- | **Ping/Pong in Rust** | Heartbeat handled entirely in Rust with zero Python round-trips. Configurable intervals. TCP_NODELAY on accept for minimal latency. |
172
- | **5-Level Priority Queue** | CRITICAL(10), HIGH(8), NORMAL(5), LOW(3), BACKGROUND(1). Smart dropping under backpressure: lower-priority messages are dropped first. Batch dequeue ordered by priority. |
173
- | **Dead Letter Queue** | Redis-backed DLQ for failed messages. 7-day TTL, 1000-message cap per channel. Manual replay via `replay_dlq_message()`. Prometheus metrics for DLQ size and replay count. |
174
- | **MongoDB-like Filters** | 14 operators: `$eq`, `$ne`, `$gt`, `$lt`, `$gte`, `$lte`, `$in`, `$nin`, `$regex`, `$exists`, `$contains`, `$startswith`, `$endswith`. Logical: `$and`, `$or`. Dot-notation for nested fields (`payload.price`). Compiled regex cache. |
175
- | **Event Sequencer** | Monotonic sequence numbers with AHashSet dedup. Size-based and age-based eviction. Gap detection on both server and client. |
176
- | **Compression** | Flate2 zlib with configurable levels (1-9). Adaptive threshold -- only compress when payload exceeds size limit. Binary mode via msgpack (rmp-serde) for 30% smaller payloads. |
177
- | **Rate Limiter** | Atomic token-bucket rate limiter in Rust. Per-connection rate enforcement. 100K tokens capacity, 10K tokens/sec refill. |
178
- | **Message Deduplication** | AHashSet-backed dedup with bounded queue. Prevents duplicate delivery across reconnections and Redis fan-out. |
179
- | **Wire Envelope** | Protocol v1: `{t, id, ts, seq, p, v}`. Generic payload extraction with automatic type conversion (UUID, datetime, Enum, bytes to JSON-safe primitives). Latency tracking (`latency_ms` field). |
180
- | **Snapshot Provider** | Protocol for initial state delivery. Implement `get_snapshot(user_id, topics)` and clients receive current state immediately on subscribe -- no waiting for the next publish cycle. |
181
- | **Circuit Breaker** | Three-state machine (CLOSED / OPEN / HALF_OPEN). Sliding-window failure tracking. Automatic recovery probes. Prevents cascade failures when downstream services are unhealthy. |
182
- | **Message Categories** | `S` (snapshot), `U` (update), `WSE` (system). Category prefixing for client-side routing and filtering. |
183
- | **Multi-Instance Orchestration** | Horizontal scaling via Redis pub/sub. Publish on any instance, all subscribers receive the message. Pipelined PUBLISH (64 commands/batch, 3 retries), circuit breaker (10-fail threshold, 60s reset), exponential backoff with jitter, dead letter queue (1000-entry ring buffer). 1.04M deliveries/s per instance, linear scaling with N instances. |
184
- | **PubSub Bus** | Redis pub/sub with PSUBSCRIBE pattern matching. Glob wildcard topic routing (`user:*:events`). orjson fast-path serialization. Non-blocking handler invocation. |
185
- | **Pluggable Security** | `EncryptionProvider` and `TokenProvider` protocols. Built-in: AES-GCM-256 with ECDH P-256 key exchange, HMAC-SHA256 signing, selective message signing. Rust-accelerated crypto (SHA-256, HMAC, AES-GCM, ECDH). |
186
- | **Rust JWT Authentication** | HS256 JWT validation in Rust during the WebSocket handshake. Zero GIL acquisition on the connection critical path. 0.01ms decode (85x faster than Python). Cookie extraction and `server_ready` sent from Rust before Python runs. |
187
- | **Lock-Free Server Queries** | `get_connection_count()` uses `AtomicUsize` — zero GIL, zero blocking, safe to call from async Python handlers. No channel round-trip to the tokio runtime. |
188
- | **Inbound MsgPack** | Binary frames from msgpack clients are parsed in Rust via rmpv. Python receives pre-parsed dicts regardless of wire format. Zero Python overhead for msgpack connections. |
189
- | **Connection Metrics** | Prometheus-compatible stubs for: messages sent/received, publish latency, DLQ size, handler errors, circuit breaker state. Drop-in Prometheus integration or use the built-in stubs. |
190
-
191
- ### Client (React + TypeScript)
192
-
193
- | Feature | Description |
194
- |---------|-------------|
195
- | **useWSE Hook** | Single React hook for the entire WebSocket lifecycle. Accepts topics, endpoints, auth tokens. Returns `isConnected`, `connectionHealth`, connection controls. |
196
- | **Connection Pool** | Multi-endpoint support with health-scored failover. Three load-balancing strategies: weighted-random, least-connections, round-robin. Automatic health checks with latency tracking. |
197
- | **Adaptive Quality Manager** | Adjusts React Query defaults based on connection quality. Excellent: `staleTime: Infinity` (pure WebSocket). Poor: aggressive polling fallback. Dispatches `wse:quality-change` events. Optional QueryClient integration. |
198
- | **Offline Queue** | IndexedDB-backed persistent queue. Messages are stored when disconnected and replayed on reconnect, ordered by priority. Configurable max size and TTL. |
199
- | **Network Monitor** | Real-time latency, jitter, and packet-loss analysis. Determines connection quality (excellent / good / fair / poor). Generates diagnostic suggestions. |
200
- | **Event Sequencer** | Client-side sequence validation with gap detection. Out-of-order buffer for reordering. Duplicate detection via seen-ID window with age-based eviction. |
201
- | **Circuit Breaker** | Client-side circuit breaker for connection attempts. Prevents reconnection storms when the server is down. Configurable failure threshold and recovery timeout. |
202
- | **Compression + msgpack** | Client-side decompression (pako zlib) and msgpack decoding. Automatic detection of binary vs JSON frames. |
203
- | **Zustand Stores** | `useWSEStore` for connection state, latency history, diagnostics. `useMessageQueueStore` for message buffering with priority. Lightweight, no boilerplate. |
204
- | **Rate Limiter** | Client-side token-bucket rate limiter for outbound messages. Prevents flooding the server. |
205
- | **Security Manager** | Client-side HMAC verification and optional decryption. Validates message integrity before dispatching to handlers. |
206
-
207
- ### Client (Python)
208
-
209
- | Feature | Description |
210
- |---------|-------------|
211
- | **Async + Sync API** | `AsyncWSEClient` with async context manager and async iterator. `SyncWSEClient` wrapper for threaded/synchronous code. |
212
- | **Connection Manager** | Auto-reconnection with 4 strategies (exponential, linear, fibonacci, adaptive). Jitter, configurable max attempts. Heartbeat with PING/PONG. |
213
- | **Connection Pool** | Multi-endpoint support with health scoring. Weighted-random, least-connections, round-robin load balancing. |
214
- | **Circuit Breaker** | Three-state machine (CLOSED / OPEN / HALF_OPEN). Prevents reconnection storms. |
215
- | **Rate Limiter** | Client-side token-bucket rate limiter for outbound messages. |
216
- | **Event Sequencer** | Duplicate detection (10K ID window) and out-of-order reordering buffer. |
217
- | **Network Monitor** | Latency, jitter, packet loss analysis. Connection quality scoring. |
218
- | **Security** | ECDH P-256 key exchange, AES-GCM-256 encryption, HMAC-SHA256 signing. Wire-compatible with server and TypeScript client. |
219
- | **Compression + msgpack** | Zlib decompression and msgpack decoding. Automatic binary frame detection. |
354
+ ### Connection Management
355
+
356
+ ```python
357
+ server.get_connection_count() # Lock-free AtomicUsize read
358
+ server.get_connections() # List all connection IDs (snapshot)
359
+ server.disconnect(conn_id) # Force-disconnect a connection
360
+ server.inbound_queue_depth() # Events waiting to be drained
361
+ server.inbound_dropped_count() # Events dropped due to full queue
362
+ server.get_cluster_dlq_entries() # Retrieve failed cluster messages from dead letter queue
363
+ ```
220
364
 
221
365
  ---
222
366
 
223
- ## Performance
367
+ ## Security
224
368
 
225
- Rust-accelerated engine via PyO3. AMD EPYC 7502P (32 cores, 128 GB), Ubuntu 24.04, localhost.
369
+ ### JWT Authentication
226
370
 
227
- ### Highlights
371
+ Rust-native HS256 validation during the WebSocket handshake. Zero GIL acquisition, 0.01ms per decode.
228
372
 
229
- | Metric | Value | Details |
230
- |--------|-------|---------|
231
- | **Peak throughput** | **14.2M msg/s** | JSON, Rust client, 500 connections |
232
- | **Peak binary** | **30M msg/s** | MsgPack/compressed, Rust client |
233
- | **Fan-out broadcast** | **2.1M deliveries/s** | Single-instance, zero message loss |
234
- | **Max connections** | **500,000** | Zero errors, zero gaps at every tier |
235
- | **Connection latency** | **0.38 ms** median | Rust JWT auth in handshake |
236
- | **Accept rate** | **15,020 conn/s** | Sustained connection establishment |
237
- | **Memory per conn** | **4.4 KB** | Rust core static overhead |
373
+ Token delivery:
374
+ - **Browser clients**: `access_token` HTTP-only cookie (set by your login endpoint, attached automatically by the browser)
375
+ - **Backend clients**: `Authorization: Bearer <token>` header and/or `access_token` cookie
376
+ - **API clients**: `Authorization: Bearer <token>` header
238
377
 
239
- ### Point-to-Point (Rust Native Client)
378
+ Required claims: `sub` (user ID), `exp` (expiration), `iat` (issued at). Optional: `iss`, `aud` (validated if configured).
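The checks the Rust validator performs can be sketched in pure standard-library Python (illustrative only; `encode_hs256` and `validate_hs256` are hypothetical names, not the server API, and the real path runs in Rust without the GIL):

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def encode_hs256(claims: dict, secret: bytes) -> str:
    """Build an HS256 JWT (test helper, hypothetical)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def validate_hs256(token: str, secret: bytes, now=None) -> dict:
    """Verify signature, required claims, and expiry; return the claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(_b64url_decode(payload_b64))
    for required in ("sub", "exp", "iat"):  # required claims from the text above
        if required not in claims:
            raise ValueError(f"missing claim: {required}")
    if (time.time() if now is None else now) >= claims["exp"]:
        raise ValueError("token expired")
    return claims
```

The constant-time `hmac.compare_digest` comparison matters here: a naive `==` on signatures leaks timing information.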
240
379
 
241
- | Payload | Throughput | Bandwidth |
242
- |---------|-----------|-----------|
243
- | 64 bytes | **19.4M msg/s** | 1.2 GB/s |
244
- | 256 bytes | **12.5M msg/s** | 3.1 GB/s |
245
- | 1 KB | **10.3M msg/s** | 10.0 GB/s |
246
- | 16 KB | **1.2M msg/s** | **19.9 GB/s** |
247
- | 64 KB | **284K msg/s** | **18.2 GB/s** |
380
+ ### End-to-End Encryption
248
381
 
249
- ### Point-to-Point (Python Multi-Process, 64 Workers)
382
+ Per-connection session keys via ECDH P-256 key exchange, AES-GCM-256 encryption, HKDF-SHA256 key derivation.
250
383
 
251
- | Metric | JSON | MsgPack |
252
- |--------|------|---------|
253
- | **Sustained** | **2,045,000 msg/s** | **2,072,000 msg/s** |
254
- | **Burst** | 1,557,000 msg/s | 1,836,000 msg/s |
255
- | **64KB messages** | 256K msg/s (16.0 GB/s) | -- |
256
- | **Ping RTT** | 0.26 ms median | -- |
384
+ Wire format: `E:` prefix + 12-byte IV + AES-GCM ciphertext + 16-byte auth tag.
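The byte layout of that envelope can be sketched as follows (framing only; the AES-GCM encryption itself is done by the crypto providers, and these helper names are hypothetical):

```python
IV_LEN = 12   # 96-bit AES-GCM nonce
TAG_LEN = 16  # 128-bit authentication tag

def pack_encrypted_frame(iv: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    """Assemble `E:` + 12-byte IV + ciphertext + 16-byte tag, per the wire format above."""
    if len(iv) != IV_LEN or len(tag) != TAG_LEN:
        raise ValueError("bad IV or tag length")
    return b"E:" + iv + ciphertext + tag

def unpack_encrypted_frame(frame: bytes):
    """Split a received `E:` frame back into (iv, ciphertext, tag)."""
    if not frame.startswith(b"E:") or len(frame) < 2 + IV_LEN + TAG_LEN:
        raise ValueError("not an encrypted frame")
    body = frame[2:]
    return body[:IV_LEN], body[IV_LEN:-TAG_LEN], body[-TAG_LEN:]
```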
257
385
 
258
- ### Point-to-Point (TypeScript/Node.js, 64 Processes)
386
+ Enable on the client side; the server handles key exchange automatically during the handshake.
259
387
 
260
- | Format | Throughput | Scaling |
261
- |--------|-----------|---------|
262
- | JSON | **7.0M msg/s** | 97% linear |
263
- | MsgPack | **7.9M msg/s** | +13% over JSON |
388
+ ### Rate Limiting
264
389
 
265
- ### Fan-out Broadcast (Single-Instance)
390
+ Per-connection token bucket: 100,000-token capacity, 10,000 tokens/second refill. Clients receive a `rate_limit_warning` at 20% remaining capacity and a `RATE_LIMITED` error once the bucket is exhausted.
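The bucket's behavior can be modeled like this (an illustrative pure-Python sketch using the numbers above, not the server's Rust implementation; the class name is hypothetical):

```python
class TokenBucket:
    """Per-connection token bucket sketch: refill on each call, then spend."""

    CAPACITY = 100_000      # maximum tokens
    REFILL_PER_S = 10_000   # tokens restored per second
    WARN_FRACTION = 0.20    # warn once 20% or less of capacity remains

    def __init__(self, now: float = 0.0) -> None:
        self.tokens = float(self.CAPACITY)
        self.last_refill = now

    def consume(self, cost: int, now: float) -> str:
        """Return "ok", "rate_limit_warning", or "RATE_LIMITED"."""
        elapsed = now - self.last_refill
        self.tokens = min(self.CAPACITY, self.tokens + elapsed * self.REFILL_PER_S)
        self.last_refill = now
        if self.tokens < cost:
            return "RATE_LIMITED"     # bucket exhausted, message rejected
        self.tokens -= cost
        if self.tokens <= self.CAPACITY * self.WARN_FRACTION:
            return "rate_limit_warning"
        return "ok"
```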
266
391
 
267
- Server broadcasts to N subscribers. **Zero message loss at every tier.**
392
+ ### Deduplication
268
393
 
269
- | Subscribers | Deliveries/s | Bandwidth | p50 Latency |
270
- |-------------|-------------|-----------|-------------|
271
- | 10 | **2.1M** | 295 MB/s | 0.005 ms |
272
- | 1,000 | 1.4M | 185 MB/s | -- |
273
- | 10,000 | 1.2M | 163 MB/s | -- |
274
- | 100,000 | 1.7M | 234 MB/s | -- |
275
- | **500,000** | **1.4M** | 128 MB/s | -- |
394
+ `send_event()` maintains a 50,000-entry AHashSet with FIFO eviction. Duplicate message IDs are dropped before serialization.
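The same FIFO-evicting window can be sketched in Python with a `set` for O(1) lookup plus a `deque` for eviction order (illustrative; the server side uses a Rust AHashSet, and the class name is hypothetical):

```python
from collections import deque

class DedupWindow:
    """Bounded seen-ID window with FIFO eviction."""

    def __init__(self, capacity: int = 50_000) -> None:
        self.capacity = capacity
        self._order = deque()  # IDs in arrival order (oldest first)
        self._seen = set()     # fast membership check

    def accept(self, msg_id: str) -> bool:
        """Return True if the ID is new; False if it is a duplicate."""
        if msg_id in self._seen:
            return False
        if len(self._order) >= self.capacity:
            self._seen.discard(self._order.popleft())  # FIFO eviction
        self._order.append(msg_id)
        self._seen.add(msg_id)
        return True
```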
276
395
 
277
- ### Rust Acceleration vs Python
396
+ ### Zombie Detection
278
397
 
279
- | Component | Speedup | Rust | Python |
280
- |-----------|---------|------|--------|
281
- | JWT decode (HS256) | **85x** | 10 us | 850 us |
282
- | Msgpack parsing | **~50x** | -- | -- |
283
- | Rate limiter | **40x** | -- | -- |
284
- | HMAC-SHA256 | **22x** | -- | -- |
285
- | Compression (zlib) | **6.7x** | -- | -- |
398
+ Server pings every connected client every 25 seconds. Connections with no activity for 60 seconds are force-closed.
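The idle check amounts to comparing each connection's last-activity timestamp against the 60-second cutoff (a sketch; `find_zombies` is a hypothetical helper, not server API):

```python
import time

PING_INTERVAL_S = 25  # server pings each connected client this often
IDLE_TIMEOUT_S = 60   # no frames or pongs for this long => force-close

def find_zombies(last_activity, now=None):
    """Return IDs of connections idle longer than IDLE_TIMEOUT_S.

    `last_activity` maps connection ID -> monotonic timestamp of the last
    frame or pong received on that connection.
    """
    if now is None:
        now = time.monotonic()
    return [cid for cid, ts in last_activity.items() if now - ts > IDLE_TIMEOUT_S]
```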
286
399
 
287
- See benchmark docs for full results: [Overview](docs/BENCHMARKS.md) | [Rust Client](docs/BENCHMARKS_RUST_CLIENT.md) | [TypeScript Client](docs/BENCHMARKS_TS_CLIENT.md) | [Python Client](docs/BENCHMARKS_PYTHON_CLIENT.md) | [Fan-out](docs/BENCHMARKS_FANOUT.md)
400
+ Full security documentation: [docs/SECURITY.md](docs/SECURITY.md)
288
401
 
289
402
  ---
290
403
 
291
- ## Use Cases
404
+ ## Wire Protocol
292
405
 
293
- ### Live Dashboards
406
+ WSE uses a custom wire protocol with category-prefixed messages:
294
407
 
295
- Push price updates, sensor data, or analytics to the browser in real time.
408
+ **Text frames:** a category prefix followed by a JSON envelope: `WSE{...}` (system), `S{...}` (snapshot), `U{...}` (update)
296
409
 
297
- ```python
298
- # Server: push price updates
299
- await bus.publish(
300
- topic="prices",
301
- event={"event_type": "price_update", "symbol": "AAPL", "price": 187.42},
302
- )
303
- ```
410
+ **Binary frames:** `C:` (zlib compressed), `M:` (MessagePack), `E:` (AES-GCM encrypted), raw zlib (0x78 magic byte)
304
411
 
305
- ```tsx
306
- // React: consume them
307
- window.addEventListener('price_update', (e: CustomEvent) => {
308
- updateChart(e.detail.symbol, e.detail.price);
309
- });
310
- ```
412
+ **MessagePack transport:** opt-in per connection via `?format=msgpack` query parameter. Roughly 2x faster serialization and 30% smaller payloads.
311
413
 
312
- ### Notifications
414
+ **Protocol negotiation:** `client_hello`/`server_hello` handshake with feature discovery, capability advertisement, and version agreement.
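Putting the prefixes above together, a receiver's first dispatch step can be sketched as (illustrative; the function name is hypothetical):

```python
def classify_frame(frame) -> str:
    """Route an incoming WSE frame by its category prefix (see list above)."""
    if isinstance(frame, str):          # text frames carry JSON envelopes
        if frame.startswith("WSE{"):
            return "system"
        if frame.startswith("S{"):
            return "snapshot"
        if frame.startswith("U{"):
            return "update"
        return "unknown"
    if frame.startswith(b"C:"):         # zlib-compressed
        return "compressed"
    if frame.startswith(b"M:"):         # MessagePack
        return "msgpack"
    if frame.startswith(b"E:"):         # AES-GCM encrypted
        return "encrypted"
    if frame[:1] == b"\x78":            # raw zlib magic byte
        return "raw-zlib"
    return "unknown"
```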
313
415
 
314
- Push order updates, alerts, or system events to specific users.
416
+ Full protocol specification: [docs/PROTOCOL.md](docs/PROTOCOL.md)
315
417
 
316
- ```python
317
- # Server: notify a specific user
318
- await bus.publish(
319
- topic=f"user:{user_id}:events",
320
- event={"event_type": "order_shipped", "order_id": 42, "text": "Your order shipped!"},
321
- )
322
- ```
418
+ ---
323
419
 
324
- ```tsx
325
- // React: show a toast
326
- window.addEventListener('order_shipped', (e: CustomEvent) => {
327
- showToast(e.detail.text);
328
- });
329
- ```
420
+ ## Compression
330
421
 
331
- ### Chat and Messaging
422
+ Two compression layers:
332
423
 
333
- Group chats, DMs, typing indicators, read receipts. Enable encryption for privacy.
424
+ - **Client-facing:** zlib for messages above the configurable threshold (default 1024 bytes). Applied automatically by `send_event()`.
425
+ - **Inter-peer (cluster):** zstd level 1 for payloads above 256 bytes. Capability-negotiated during handshake. Decompression output capped at 1 MB (MAX_FRAME_SIZE).
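The client-facing layer reduces to a size check plus a `C:` prefix, as sketched below (assuming the `C:` framing and 1024-byte default described above; helper names are hypothetical):

```python
import zlib

DEFAULT_THRESHOLD = 1024  # bytes; payloads at or below this pass through unchanged

def maybe_compress(payload: bytes, threshold: int = DEFAULT_THRESHOLD) -> bytes:
    """Compress with zlib and add the `C:` prefix only above the threshold."""
    if len(payload) <= threshold:
        return payload
    return b"C:" + zlib.compress(payload)

def decode_frame(frame: bytes) -> bytes:
    """Undo `maybe_compress` on the receiving side."""
    if frame.startswith(b"C:"):
        return zlib.decompress(frame[2:])
    return frame
```

A production decoder would additionally cap decompressed output, as the inter-peer path does with its 1 MB limit, to guard against decompression bombs.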
334
426
 
335
- ```python
336
- # Server: broadcast a chat message
337
- await bus.publish(
338
- topic=f"chat:{channel_id}",
339
- event={"event_type": "message_sent", "text": body.text, "author": user.name},
340
- )
427
+ ---
428
+
429
+ ## Client SDKs
430
+
431
+ ### Python
432
+
433
+ ```bash
434
+ pip install wse-client
341
435
  ```
342
436
 
437
+ Full-featured async and sync client with connection pool, circuit breaker, auto-reconnect, E2E encryption, and msgpack binary transport.
438
+
343
439
  ```python
344
- # Python client: listen for messages
345
- async with connect("ws://localhost:5006/wse", token=jwt) as client:
346
- await client.subscribe([f"chat:{channel_id}"])
440
+ from wse_client import connect
441
+
442
+ async with connect("ws://localhost:5007/wse", token="<jwt>") as client:
443
+ await client.subscribe(["updates"])
347
444
  async for event in client:
348
- print(f"{event.payload['author']}: {event.payload['text']}")
445
+ print(event.type, event.payload)
349
446
  ```
350
447
 
351
- ### Other Use Cases
448
+ **Sync interface:**
352
449
 
353
- - **Collaborative editing** -- shared cursors, document changes, conflict detection via sequence numbers
354
- - **IoT and telemetry** -- device status, sensor metrics, command and control. Use LOW priority for telemetry, HIGH for alerts
355
- - **Gaming** -- game state sync, leaderboards, matchmaking. Use msgpack binary protocol for lower latency
356
-
357
- ---
450
+ ```python
451
+ from wse_client import SyncWSEClient
358
452
 
359
- ## Installation
453
+ client = SyncWSEClient("ws://localhost:5007/wse", token="<jwt>")
454
+ client.connect()
455
+ client.subscribe(["updates"])
360
456
 
361
- ```bash
362
- # Server (Python) -- includes prebuilt Rust engine
363
- pip install wse-server
364
-
365
- # Client (React/TypeScript)
366
- npm install wse-client
457
+ @client.on("updates")
458
+ def handle(event):
459
+ print(event.payload)
367
460
 
368
- # Client (Python) -- pure Python, no Rust required
369
- pip install wse-client
461
+ client.run_forever()
370
462
  ```
371
463
 
372
- Server: prebuilt wheels for Linux (x86_64, aarch64), macOS (Intel, Apple Silicon), and Windows. Python 3.12+ (ABI3 stable -- one wheel per platform).
373
-
374
- Python client: pure Python, Python 3.11+. Optional extras: `pip install wse-client[crypto]` for encryption, `pip install wse-client[all]` for everything.
464
+ Key features: 4 reconnect strategies (exponential, linear, fibonacci, adaptive), connection pool with health scoring and 3 load balancing strategies, circuit breaker, token bucket rate limiter, event sequencer with dedup and reorder buffering, network quality monitoring (latency/jitter/packet loss).
375
465
 
376
- ---
466
+ See [python-client/README.md](python-client/README.md) for full API reference.
377
467
 
378
- ## Architecture
468
+ ### TypeScript / React
379
469
 
470
+ ```bash
471
+ npm install wse-client
380
472
  ```
381
- React Client (TypeScript) Python Client Server (Python + Rust)
382
- ======================== ======================== ========================
383
-
384
- useWSE hook AsyncWSEClient FastAPI Router (/wse)
385
- | | -- OR --
386
- v v RustWSEServer (:5006)
387
- ConnectionPool ConnectionPool |
388
- | (multi-endpoint, | (multi-endpoint, v
389
- | health scoring) | health scoring) Rust Engine (PyO3)
390
- v v |
391
- ConnectionManager ConnectionManager v
392
- | (auto-reconnect, | (auto-reconnect, EventTransformer
393
- | circuit breaker) | circuit breaker) |
394
- v v v
395
- MessageProcessor MessageCodec PriorityQueue
396
- | (decompress, verify, | (decompress, |
397
- | sequence, dispatch) | sequence, dedup) v
398
- v v Sequencer + Dedup
399
- AdaptiveQualityManager NetworkMonitor |
400
- | | (quality scoring, v
401
- v | latency, jitter) Compression + Rate Limiter
402
- Zustand Store v |
403
- | Event handlers / v
404
- v async iterator PubSub Bus (Redis)
405
- React Components |
406
- v
407
- Dead Letter Queue
408
- ```
409
473
 
410
- **Wire format (v1):**
474
+ Single React hook (`useWSE`) for connection lifecycle, subscriptions, and message dispatch.
475
+
476
+ ```tsx
477
+ import { useWSE } from 'wse-client';
411
478
 
412
- ```json
413
- {
414
- "v": 1,
415
- "id": "019503a1-...",
416
- "t": "price_update",
417
- "ts": "2026-01-15T10:30:00Z",
418
- "seq": 42,
419
- "p": { "symbol": "AAPL", "price": 187.42 }
479
+ function App() {
480
+ const { sendMessage, connectionHealth } = useWSE(
481
+ '<jwt-token>',
482
+ ['updates'],
483
+ { endpoints: ['ws://localhost:5007/wse'] },
484
+ );
485
+
486
+ return <div>Status: {connectionHealth}</div>;
420
487
  }
421
488
  ```
422
489
 
490
+ Key features: offline queue with IndexedDB persistence, adaptive quality management, connection pool with health scoring, E2E encryption (Web Crypto API), message batching, 5 priority levels, Zustand store for external state access.
491
+
492
+ See [client/README.md](client/README.md) for full API reference.
493
+
423
494
  ---
424
495
 
425
- ## Packages
496
+ ## Performance
497
+
498
+ Benchmarked on AMD EPYC 7502P (64 cores, 128 GB RAM), Ubuntu 24.04.
499
+
500
+ | Mode | Peak Throughput | Connections | Message Loss |
501
+ |------|----------------|-------------|--------------|
502
+ | Standalone (fan-out) | 5.0M deliveries/s | 500K | 0% |
503
+ | Standalone (inbound JSON) | 14.7M msg/s | 500K | 0% |
504
+ | Standalone (inbound msgpack) | 30M msg/s | 500K | 0% |
505
+ | Cluster (2 nodes) | 9.5M deliveries/s | 20K per node | 0% |
426
506
 
427
- | Package | Registry | Language | Install |
428
- |---------|----------|----------|---------|
429
- | `wse-server` | [PyPI](https://pypi.org/project/wse-server/) | Python + Rust | `pip install wse-server` |
430
- | `wse-client` | [npm](https://www.npmjs.com/package/wse-client) | TypeScript + React | `npm install wse-client` |
431
- | `wse-client` | [PyPI](https://pypi.org/project/wse-client/) | Python | `pip install wse-client` |
507
+ Sub-millisecond latency. Median 0.38ms with JWT authentication. Connection handshake: 0.53ms median (Rust JWT path).
432
508
 
433
- All packages are standalone. No shared dependencies between server and clients.
509
+ Detailed results: [Benchmarks](docs/BENCHMARKS.md) | [Fan-out](docs/BENCHMARKS_FANOUT.md) | [Rust Client](docs/BENCHMARKS_RUST_CLIENT.md) | [Python Client](docs/BENCHMARKS_PYTHON_CLIENT.md) | [TypeScript Client](docs/BENCHMARKS_TS_CLIENT.md)
434
510
 
435
511
  ---
436
512
 
437
- ## Documentation
513
+ ## Examples
438
514
 
439
- | Document | Description |
440
- |----------|-------------|
441
- | [Protocol Spec](docs/PROTOCOL.md) | Wire format, versioning, encryption |
442
- | [Architecture](docs/ARCHITECTURE.md) | System design, data flow |
443
- | [Benchmarks](docs/BENCHMARKS.md) | Methodology, results, comparisons |
444
- | [Security Model](docs/SECURITY.md) | Encryption, auth, threat model |
445
- | [Integration Guide](docs/INTEGRATION.md) | FastAPI setup, Redis, deployment |
515
+ Working examples in the [`examples/`](examples/) directory:
516
+
517
+ | Example | Description |
518
+ |---------|-------------|
519
+ | [`standalone_basic.py`](examples/standalone_basic.py) | Basic server with JWT auth and echo |
520
+ | [`standalone_broadcast.py`](examples/standalone_broadcast.py) | Topic-based pub/sub with broadcasting |
521
+ | [`standalone_presence.py`](examples/standalone_presence.py) | Per-topic presence tracking with join/leave events |
522
+ | [`standalone_recovery.py`](examples/standalone_recovery.py) | Message recovery on reconnect with epoch+offset |
446
523
 
447
524
  ---
448
525
 
449
- ## Technology Stack
450
-
451
- | Component | Technology | Purpose |
452
- |-----------|-----------|---------|
453
- | Rust engine | PyO3 + maturin | Compression, sequencing, filtering, rate limiting, WebSocket server |
454
- | Server framework | FastAPI + Starlette | ASGI WebSocket handling |
455
- | Serialization | orjson (Rust) | Zero-copy JSON |
456
- | Binary protocol | msgpack (rmp-serde) | 30% smaller payloads |
457
- | Encryption | AES-GCM-256 + ECDH P-256 (Rust) | Per-connection E2E encryption with key exchange |
458
- | Message signing | HMAC-SHA256 (Rust) | Per-message integrity verification |
459
- | Authentication | Rust JWT (HS256) | Zero-GIL token validation in handshake |
460
- | Pub/Sub backbone | Redis Pub/Sub | Multi-process fan-out |
461
- | Dead Letter Queue | Redis Lists | Failed message recovery |
462
- | Client state | Zustand | Lightweight React store |
463
- | Client hooks | React 18+ | useWSE hook with TypeScript |
464
- | Offline storage | IndexedDB | Persistent offline queue |
465
- | Python client | websockets + cryptography | Async/sync WebSocket client |
466
- | Build system | maturin | Rust+Python hybrid wheels |
526
+ ## Documentation
527
+
528
+ - [Architecture](docs/ARCHITECTURE.md) - server internals, PyO3 bridge, concurrency model
529
+ - [Integration Guide](docs/INTEGRATION.md) - complete setup and API reference
530
+ - [Wire Protocol](docs/PROTOCOL.md) - message format specification
531
+ - [Cluster Protocol](docs/CLUSTER_PROTOCOL.md) - TCP mesh, frame format, gossip, interest routing
532
+ - [Security](docs/SECURITY.md) - JWT, encryption, mTLS, rate limiting, circuit breaker
533
+ - [Deployment](docs/DEPLOYMENT.md) - production setup, Docker, Kubernetes, cluster configuration
534
+ - [Migration Guide](docs/MIGRATION.md) - upgrading from v1.x to v2.0
535
+ - [Contributing](CONTRIBUTING.md) - development setup and coding standards
536
+ - [Changelog](CHANGELOG.md) - version history
467
537
 
468
538
  ---
469
539