@smoothglue/sync-whiteboard 0.1.0 → 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +65 -24
- package/dist/assets.js +29 -20
- package/dist/logger.js +13 -0
- package/dist/rooms.js +64 -35
- package/dist/server.js +34 -26
- package/package.json +2 -1
package/README.md
CHANGED

@@ -1,6 +1,8 @@
-##
+## @smoothglue/sync-whiteboard
+
+A real-time collaborative whiteboard server designed to provide a realtime, collaborative whiteboard experience for frontends integrated with [tldraw](https://tldraw.dev/). It functions as a microservice, handling websocket connections to sync shared whiteboard edits in real-time using tldraw's [sync-core](https://www.npmjs.com/package/@tldraw/sync-core) library.
 
-
+## Overview
 
 The core responsibilities of this service include:
 
@@ -16,49 +18,88 @@ The core responsibilities of this service include:
 
 ## Architecture
 
-This service is designed as a NodeJS microservice
+This service is designed as a NodeJS microservice that connects to client applications using tldraw via WebSockets for real-time data synchronization. It relies on external services (configurable via environment variables) for persisting snapshots and storing assets.
+
+## Installation
+
+```bash
+npm install @smoothglue/sync-whiteboard
+# or
+yarn add @smoothglue/sync-whiteboard
+```
+
+## Configuration
+The @smoothglue/sync-whiteboard server is configured entirely through the environment variables listed below:
+
+| Environment Variable | Description | Default Value | Required |
+| :--- | :--- | :--- | :--- |
+| `SWB_PORT` | The port on which the server will listen. | `5858` | No |
+| `SWB_HOST` | The host address the server will bind to. | `0.0.0.0` | No |
+| `SWB_LOG_LEVEL` | The minimum log level for messages (e.g., `fatal`, `error`, `warn`, `info`, `debug`). | `info` | No |
+| `SWB_SNAPSHOT_STORAGE_URL` | **Crucial:** The base URL for the service responsible for persisting tldraw room states. | (None) | **Yes** |
+| `SWB_ASSET_STORAGE_URL` | **Crucial:** The base URL for the service responsible for storing and retrieving tldraw assets (e.g., images). | (None) | **Yes** |
+| `SWB_SAVE_INTERVAL_MS` | The interval (in milliseconds) the server periodically checks for and saves updated room snapshots. | `5000` | No |
+
+
+## Usage
+
+This package provides a server that can be integrated into your application or run as a standalone service.
+
+### Integrating with your Application
+You can `require`/`import` the package directly into your Node.js application. Upon import, the server will attempt to start, binding to the host and port configured via environment variables.
+
+```javascript
+// Example application's entry point: (e.g., app.js or index.js)
+// Imports the package, which initializes and starts the sync server.
+require('@smoothglue/sync-whiteboard');
+```
+
+### Running as a Standalone Service (e.g., in Docker)
+To run the `@smoothglue/sync-whiteboard` server as a standalone process, execute its main entry point. This is typically done within a Docker container where environment variables are easily managed.
+
+From your containerized Node.js environment with dependencies installed, your CMD or ENTRYPOINT in the Dockerfile would look like this:
+
+```dockerfile
+# Dockerfile snippet example
+# ... (setting up Node.js, copying package.json, running npm install)
+# The 'dist/server.js' file is the compiled entry point of the @smoothglue/sync-whiteboard package.
+CMD ["node", "node_modules/@smoothglue/sync-whiteboard/dist/server.js"]
+```
 
 ## Development Environment (Docker)
 
 This project includes a Docker-based development environment configured in `dev-env/docker-compose.yml`. This makes it easy to run the `sync-whiteboard` server along with its dependencies (mock backend API, database, object storage) and a mock frontend client.
 
-
+### Prerequisites:
 
 - Docker ([https://www.docker.com/get-started](https://www.docker.com/get-started))
 - Docker Compose ([https://docs.docker.com/compose/install/](https://docs.docker.com/compose/install/))
 
-
+### Configuration (Optional):
+
+If you wish to alter the default sync-whiteboard server configuration, refer to the following steps.
 
 1. Navigate to the `dev-env` directory:
    ```bash
   cd sync-whiteboard/dev-env
   ```
-2.
-   - `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD` for the database.
-   - `MINIO_ROOT_USER`, `MINIO_ROOT_PASSWORD`, `MINIO_BUCKET` for the object storage.
-   - The `SNAPSHOT_STORAGE_URL` and `ASSET_STORAGE_URL` environment variables for the `sync-whiteboard` service itself are set within the `docker-compose.yml` to point to the `mock-backend` service.
+2. Copy `env.example` to `.env` and adjust settings as needed. This file configures services like the database and object storage services expressed within `docker-compose.yml`. For `sync-whiteboard` specific settings, see the [Configuration](#configuration) section above.
 
-
+### Running the Environment:
 
 1. From the `dev-env` directory, run:
    ```bash
   docker-compose up --build
   ```
 
-
+### Services:
 
 The Docker Compose setup starts the following services:
 
-
-
-
-
-
-
-
-- **`minio`**: MinIO object storage for the mock backend.
-  - Data is persisted in a Docker volume (`minio_data`).
-  - API accessible at `http://localhost:9000`.
-  - Console accessible at `http://localhost:9001`.
-- **`mock-frontend`**: A simple Vite+React frontend client for testing.
-  - Accessible at `http://localhost:8080`.
+| Service Name | Description | Accessible At (Host) |
+| :--- | :--- | :--- |
+| `sync-whiteboard` | The main Node.js sync server for real-time whiteboard collaboration. | `ws://localhost:5858` |
+| `mock-backend` | A Flask-based API simulating backend storage for snapshots & assets. | [http://localhost:5001](http://localhost:5001) |
+| `postgres` | PostgreSQL database for the mock backend's snapshot storage. | `localhost:5433` |
+| `minio` | MinIO object storage for the mock backend's asset storage. | [http://localhost:9001](http://localhost:9001) |
+| `mock-frontend` | A simple Vite+React client for testing the whiteboard functionality. | [http://localhost:8080](http://localhost:8080) |
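The configuration table above maps directly onto `process.env` reads in the compiled sources (see `dist/server.js` and `dist/rooms.js` below). As a minimal sketch of how the documented defaults resolve — `resolveConfig` is an illustrative helper, not part of the package — an empty environment yields exactly the table's default column:

```javascript
// Sketch: how the documented SWB_* defaults resolve (mirrors the patterns in dist/server.js and dist/rooms.js).
function resolveConfig(env) {
    return {
        port: parseInt(env.SWB_PORT || "5858", 10),
        host: env.SWB_HOST || "0.0.0.0",
        logLevel: env.SWB_LOG_LEVEL ?? "info",
        saveIntervalMs: parseInt(env.SWB_SAVE_INTERVAL_MS || "5000", 10),
    };
}

// With no SWB_* variables set, the table's defaults apply:
console.log(resolveConfig({}));
```

Note that the required variables (`SWB_SNAPSHOT_STORAGE_URL`, `SWB_ASSET_STORAGE_URL`) have no fallback; the diffs below show the server calling `process.exit(1)` when they are missing.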
package/dist/assets.js
CHANGED

@@ -1,16 +1,20 @@
 "use strict";
+var __importDefault = (this && this.__importDefault) || function (mod) {
+    return (mod && mod.__esModule) ? mod : { "default": mod };
+};
 Object.defineProperty(exports, "__esModule", { value: true });
 exports.storeAsset = storeAsset;
 exports.loadAsset = loadAsset;
 const stream_1 = require("stream");
+const logger_1 = __importDefault(require("./logger"));
 // --- Configuration ---
-const ASSET_STORAGE_URL = process.env.
+const ASSET_STORAGE_URL = process.env.SWB_ASSET_STORAGE_URL;
 if (!ASSET_STORAGE_URL) {
     // Critical configuration missing, exit the process.
-
+    logger_1.default.fatal("FATAL ERROR: ASSET_STORAGE_URL environment variable is not set.");
     process.exit(1);
 }
-
+logger_1.default.info({ assetStorageUrl: ASSET_STORAGE_URL }, `[ASSETS] Using Asset Storage URL`);
 // --- End Configuration ---
 /**
  * Stores an asset by proxying a PUT request to the configured asset storage backend.
@@ -23,10 +27,10 @@ console.log(`[ASSETS] Using Asset Storage URL: ${ASSET_STORAGE_URL}`);
  */
 async function storeAsset(id, fileStream, contentType = "application/octet-stream", originalFilename) {
     const url = `${ASSET_STORAGE_URL}/${id}`;
-
+    logger_1.default.debug({ assetId: id, filename: originalFilename, targetUrl: url }, `[ASSETS] Storing asset`);
     // Ensure we have a readable stream
     if (!(fileStream instanceof stream_1.Readable)) {
-
+        logger_1.default.error({ assetId: id, receivedType: typeof fileStream }, "[ASSETS] Error: storeAsset received a non-readable stream type.");
         throw new Error("Invalid stream type provided to storeAsset.");
     }
     let webStream = null;
@@ -45,24 +49,26 @@ async function storeAsset(id, fileStream, contentType = "application/octet-strea
             // @ts-ignore - duplex: 'half' is required for streaming request bodies with Node fetch
             duplex: "half",
         });
-
+        logger_1.default.debug({ assetId: id, status: response.status }, `[ASSETS] Backend PUT response status`);
         // Handle backend errors
         if (!response.ok) {
             const errorBody = await response.text();
-
+            const err = new Error(`Backend failed to store asset ${id}. Status: ${response.status}. Body: ${errorBody}`);
+            logger_1.default.error({ err, assetId: id, responseStatus: response.status, responseStatusText: response.statusText, responseBody: errorBody }, `[ASSETS] Error response from backend storing asset`);
             // Ensure streams are closed on error
             if (webStream) {
                 await webStream
                     .cancel()
-                    .catch((
+                    .catch((cancelErr) => // Changed variable name to avoid shadowing
+                    logger_1.default.error({ err: cancelErr, assetId: id, stage: 'cancel_upload_after_failed_fetch' }, `[ASSETS] Error cancelling upload webStream`));
             }
-            throw
+            throw err;
         }
-
+        logger_1.default.debug({ assetId: id, targetUrl: url }, `[ASSETS] Successfully proxied storage for asset`);
         return id; // Return the ID, confirming success
     }
     catch (error) {
-
+        logger_1.default.error({ err: error, assetId: id, targetUrl: url, operation: 'storeAsset' }, `[ASSETS] Network or fetch error storing asset`);
         // Clean up streams on error
         if (fileStream instanceof stream_1.Readable && !fileStream.destroyed) {
             fileStream.destroy(error instanceof Error ? error : new Error(String(error)));
@@ -70,7 +76,8 @@ async function storeAsset(id, fileStream, contentType = "application/octet-strea
         if (webStream) {
             await webStream
                 .cancel()
-                .catch((
+                .catch((cancelErr) => // Changed variable name
+                logger_1.default.error({ err: cancelErr, assetId: id, stage: 'cancel_upload_during_error_handling' }, `[ASSETS] Error cancelling upload webStream`));
         }
         throw error; // Re-throw error for the server handler
     }
@@ -83,34 +90,36 @@ async function storeAsset(id, fileStream, contentType = "application/octet-strea
  */
 async function loadAsset(id) {
     const url = `${ASSET_STORAGE_URL}/${id}`;
-
+    logger_1.default.debug({ assetId: id, targetUrl: url }, `[ASSETS] Loading asset`);
     try {
         // Make the GET request to the actual asset storage backend
         const response = await fetch(url, {
             method: "GET",
         });
-
+        logger_1.default.debug({ assetId: id, status: response.status }, `[ASSETS] Backend GET response status`);
         // Handle backend errors (like 404 Not Found)
         if (!response.ok) {
             if (response.status === 404) {
-
+                logger_1.default.warn({ assetId: id, targetUrl: url, status: 404 }, `[ASSETS] Asset not found at backend (404)`);
                 const notFoundError = new Error(`Asset ${id} not found.`);
                 notFoundError.code = "ENOENT"; // Mimic filesystem error code
                 throw notFoundError;
             }
             // Handle other non-OK statuses
             const errorBody = await response.text();
-
-
+            const err = new Error(// better logging context
+            `Backend failed to load asset ${id}. Status: ${response.status}. Body: ${errorBody}`);
+            logger_1.default.error({ err, assetId: id, responseStatus: response.status, responseStatusText: response.statusText, responseBody: errorBody }, `[ASSETS] Error response from backend loading asset`);
+            throw err;
         }
         // Ensure response body exists
         if (!response.body) {
-
+            logger_1.default.error({ assetId: id, targetUrl: url }, `[ASSETS] No response body received from backend for asset`);
             throw new Error(`No response body received for asset ${id}.`);
         }
         // Get the Content-Type header provided by the backend
         const contentType = response.headers.get("Content-Type") || "application/octet-stream";
-
+        logger_1.default.debug({ assetId: id, contentType: contentType }, `[ASSETS] Received Content-Type from backend`);
         // Convert the Web Standard stream from fetch response to a Node.js stream
         const nodeStream = stream_1.Readable.fromWeb(response.body);
         // Return the stream and content type for the server handler to use
@@ -119,7 +128,7 @@ async function loadAsset(id) {
     catch (error) {
         // Avoid double-logging known 'Not Found' errors
         if (error.code !== "ENOENT") {
-
+            logger_1.default.error({ err: error, assetId: id, targetUrl: url, operation: 'loadAsset' }, `[ASSETS] Network or fetch error loading asset`);
         }
         throw error; // Re-throw error for the server handler
     }
package/dist/logger.js
ADDED

@@ -0,0 +1,13 @@
+"use strict";
+var __importDefault = (this && this.__importDefault) || function (mod) {
+    return (mod && mod.__esModule) ? mod : { "default": mod };
+};
+Object.defineProperty(exports, "__esModule", { value: true });
+exports.loggerConfig = void 0;
+const pino_1 = __importDefault(require("pino"));
+exports.loggerConfig = {
+    level: process.env.SWB_LOG_LEVEL ?? "info",
+    timestamp: pino_1.default.stdTimeFunctions.isoTime,
+};
+const logger = (0, pino_1.default)(exports.loggerConfig);
+exports.default = logger;
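The new module exports both the pino instance (default export) and the raw `loggerConfig`, which `server.js` below reuses so Fastify's request logging shares the same level and timestamp format. The level resolution itself can be sketched without pino (assumption: only the `??` fallback matters here — `??` falls back only when the variable is truly unset, unlike `||`):

```javascript
// Sketch: how dist/logger.js resolves its log level from the environment.
function resolveLevel(env) {
    return env.SWB_LOG_LEVEL ?? "info"; // unset -> "info"; any set value passes through
}

console.log(resolveLevel({}));                          // falls back to the documented default
console.log(resolveLevel({ SWB_LOG_LEVEL: "debug" }));  // explicit value wins
```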
package/dist/rooms.js
CHANGED

@@ -1,19 +1,21 @@
 "use strict";
+var __importDefault = (this && this.__importDefault) || function (mod) {
+    return (mod && mod.__esModule) ? mod : { "default": mod };
+};
 Object.defineProperty(exports, "__esModule", { value: true });
 exports.getOrCreateRoom = getOrCreateRoom;
 const sync_core_1 = require("@tldraw/sync-core");
 const schema_1 = require("./schema");
+const logger_1 = __importDefault(require("./logger"));
 // --- Configuration ---
-const SNAPSHOT_STORAGE_URL = process.env.
+const SNAPSHOT_STORAGE_URL = process.env.SWB_SNAPSHOT_STORAGE_URL;
 if (!SNAPSHOT_STORAGE_URL) {
-
+    logger_1.default.fatal("FATAL ERROR: SNAPSHOT_STORAGE_URL environment variable is not set.");
     process.exit(1);
 }
-
-const SAVE_INTERVAL_MS = process.env.
-
-    : 5000;
-console.log(`[ROOMS] Snapshot save interval: ${SAVE_INTERVAL_MS}ms`);
+logger_1.default.info({ snapshotStorageUrl: SNAPSHOT_STORAGE_URL }, `[ROOMS] Using Snapshot Storage URL`);
+const SAVE_INTERVAL_MS = parseInt(process.env.SWB_SAVE_INTERVAL_MS || '5000', 10);
+logger_1.default.info({ saveIntervalMs: SAVE_INTERVAL_MS }, `[ROOMS] Snapshot save interval`);
 // In-memory map holding active room states, keyed by roomId
 const rooms = new Map();
 // Mutex to prevent race conditions when multiple requests try to create the same room simultaneously
@@ -27,7 +29,7 @@ let createRoomMutex = Promise.resolve(undefined);
  */
 async function readSnapshotFromBackend(roomId) {
     const url = `${SNAPSHOT_STORAGE_URL}/${roomId}`;
-
+    logger_1.default.debug({ roomId, url }, `[ROOMS] Loading snapshot for room ${roomId} from ${url}`);
     try {
         const response = await fetch(url, {
             method: "GET",
@@ -37,22 +39,24 @@ async function readSnapshotFromBackend(roomId) {
         });
         if (response.ok) {
             const snapshot = await response.json();
-
+            logger_1.default.debug({ roomId, snapshotSize: JSON.stringify(snapshot).length }, `[ROOMS] Snapshot loaded successfully for room ${roomId}`);
             return snapshot;
         }
         else if (response.status === 404) {
-
+            logger_1.default.info({ roomId, url, status: 404 }, `[ROOMS] No existing snapshot found for room ${roomId} (404)`);
             return undefined; // Expected case for a new room
         }
         else {
             // Handle unexpected errors from the backend
             const errorBody = await response.text();
-
-
+            const err = new Error(// better logging context
+            `Backend failed to load snapshot for ${roomId}. Status: ${response.status}. Body: ${errorBody}`);
+            logger_1.default.error({ err, roomId, url, responseStatus: response.status, responseBody: errorBody }, `[ROOMS] Error loading snapshot for room ${roomId}`);
+            throw err;
         }
     }
     catch (error) {
-
+        logger_1.default.error({ err: error, roomId, url }, `[ROOMS] Network or fetch error loading snapshot for room ${roomId}`);
         throw error; // Propagate error to getOrCreateRoom
     }
 }
@@ -64,7 +68,7 @@ async function readSnapshotFromBackend(roomId) {
 async function saveSnapshotToBackend(roomId, room) {
     const url = `${SNAPSHOT_STORAGE_URL}/${roomId}`;
     const snapshot = room.getCurrentSnapshot();
-
+    logger_1.default.debug({ roomId, url, snapshotSize: JSON.stringify(snapshot).length }, `[ROOMS] Saving snapshot for room ${roomId} to ${url}`);
     try {
         const response = await fetch(url, {
             method: "POST",
@@ -75,15 +79,16 @@ async function saveSnapshotToBackend(roomId, room) {
         });
         if (!response.ok) {
             const errorBody = await response.text();
-
+            logger_1.default.warn({ roomId, url, responseStatus: response.status, responseBody: errorBody }, // No err: new Error() here, just context
+            `[ROOMS] Error saving snapshot for room ${roomId}: ${response.status} ${response.statusText}`);
             // Log error but don't throw, to avoid breaking the save interval
         }
         else {
-
+            logger_1.default.debug({ roomId }, `[ROOMS] Snapshot saved successfully for room ${roomId}`);
         }
     }
     catch (error) {
-
+        logger_1.default.error({ err: error, roomId, url }, `[ROOMS] Network or fetch error saving snapshot for room ${roomId}`);
         // Log error but don't throw
     }
 }
@@ -102,20 +107,38 @@ async function getOrCreateRoom(roomId) {
         if (rooms.has(roomId)) {
             const existingRoomState = rooms.get(roomId);
             if (!existingRoomState.room.isClosed()) {
+                logger_1.default.debug({ roomId }, "[ROOMS] Active room instance found in memory.");
                 return; // Room exists and is active
             }
             else {
-
+                logger_1.default.info({ roomId }, `[ROOMS] Found closed room ${roomId}, removing before creating new one.`);
                 rooms.delete(roomId); // Clean up closed room reference
             }
         }
-
+        logger_1.default.info({ roomId }, `[ROOMS] Creating or recreating room: ${roomId}`);
        // Fetch initial state from the backend API (can throw error)
        const initialSnapshot = await readSnapshotFromBackend(roomId);
-        // Define logger for the tldraw room instance
-        const
-
-
+        // Define child logger for the tldraw room instance
+        const tldrawInstanceLogger = logger_1.default.child({ tldrawRoomId: roomId, component: 'tldraw-sync-core' });
+        const tldrawLogAdapter = {
+            warn: (...args) => {
+                const msg = args.find(arg => typeof arg === 'string') || 'tldraw room warning';
+                const details = args.filter(arg => typeof arg !== 'string');
+                tldrawInstanceLogger.warn(details.length ? { details } : {}, msg);
+            },
+            error: (...args) => {
+                const errorArg = args.find(arg => arg instanceof Error);
+                if (errorArg) {
+                    const msg = args.filter(arg => typeof arg === 'string' && arg !== errorArg.message).join(' ') || errorArg.message || 'tldraw room error';
+                    const details = args.filter(arg => arg !== errorArg && typeof arg !== 'string');
+                    tldrawInstanceLogger.error({ err: errorArg, details: details.length ? details : undefined }, msg);
+                }
+                else {
+                    const msg = args.find(arg => typeof arg === 'string') || 'tldraw room error (no Error instance)';
+                    const details = args.filter(arg => typeof arg !== 'string');
+                    tldrawInstanceLogger.error(details.length ? { details } : {}, msg);
+                }
+            }
        };
        // Create the new room state object
        const newRoomState = {
@@ -125,20 +148,20 @@ async function getOrCreateRoom(roomId) {
            room: new sync_core_1.TLSocketRoom({
                schema: schema_1.whiteboardSchema, // Our defined tldraw schema
                initialSnapshot, // Initial state from backend (or undefined)
-                log:
+                log: tldrawLogAdapter, // Logger for internal tldraw messages
                /** Callback when a user session is removed (e.g., disconnects/times out) */
-                onSessionRemoved(
-
+                onSessionRemoved(roomInstance, args) {
+                    logger_1.default.debug({ roomId, remainingSessions: args.numSessionsRemaining }, `[ROOMS] Session removed from room ${roomId}. Remaining: ${args.numSessionsRemaining}`);
                    // If last user leaves, trigger a final save and close the room
                    if (args.numSessionsRemaining === 0) {
-
+                        logger_1.default.info({ roomId }, `[ROOMS] Last user left room ${roomId}. Triggering final save.`);
                        // Ensure any pending periodic save completes first
                        const savePromise = newRoomState.persistPromise ?? Promise.resolve();
                        savePromise.finally(() => {
-
-                            saveSnapshotToBackend(roomId,
-
-
+                            logger_1.default.info({ roomId }, `[ROOMS] Performing final save for room ${roomId}...`);
+                            saveSnapshotToBackend(roomId, roomInstance).finally(() => {
+                                logger_1.default.info({ roomId }, `[ROOMS] Closing room ${roomId} after final save.`);
+                                roomInstance.close(); // Mark the tldraw room as closed
                            });
                        });
                    }
@@ -152,7 +175,7 @@ async function getOrCreateRoom(roomId) {
        };
        // Store the new room state in our map
        rooms.set(roomId, newRoomState);
-
+        logger_1.default.info({ roomId }, `[ROOMS] Room ${roomId} created successfully.`);
    });
    // Wait for the mutex-protected operation (lookup/creation) to complete
    await createRoomMutex;
@@ -160,7 +183,7 @@ async function getOrCreateRoom(roomId) {
    const roomState = rooms.get(roomId);
    if (!roomState || roomState.room.isClosed()) {
        // Defensive check in case something went wrong
-
+        logger_1.default.error({ roomId }, `[ROOMS] Failed to get or create a valid room instance for ${roomId} after mutex.`);
        throw new Error(`Failed to retrieve valid room instance for ${roomId}`);
    }
    // Return the tldraw room object
@@ -169,27 +192,33 @@ async function getOrCreateRoom(roomId) {
 // --- Periodic Persistence ---
 // Saves snapshots for rooms marked as `needsPersist` at regular intervals.
 setInterval(() => {
+    logger_1.default.debug("[ROOMS] Periodic persistence check initiated.");
+    let updatedRoomCount = 0;
     for (const roomState of rooms.values()) {
         // Clean up closed rooms from memory
         if (roomState.room.isClosed()) {
-
+            logger_1.default.info({ roomId: roomState.id }, `[ROOMS] Removing closed room ${roomState.id} during periodic check.`);
             rooms.delete(roomState.id);
             continue;
         }
         // If room has changes and isn't already saving, start a save operation
         if (roomState.needsPersist && !roomState.persistPromise) {
             roomState.needsPersist = false; // Reset flag
+            updatedRoomCount++;
             // Track the save operation promise
             roomState.persistPromise = saveSnapshotToBackend(roomState.id, roomState.room)
                 .catch((error) => {
                 // Log errors from periodic save but don't stop the interval
-
+                logger_1.default.error({ err: error, roomId: roomState.id }, // Pass error object
+                `[ROOMS] Periodic save failed for room ${roomState.id}`);
             })
                 .finally(() => {
                 // Clear the promise tracker when done
                 roomState.persistPromise = null;
+                logger_1.default.debug({ roomId: roomState.id }, "[ROOMS] Persistence promise cleared.");
             });
         }
     }
+    logger_1.default.debug({ roomsChecked: rooms.size, roomsUpdatedThisInterval: updatedRoomCount }, "[ROOMS] Periodic persistence check completed.");
 }, SAVE_INTERVAL_MS);
 // --- End Periodic Persistence ---
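The room-creation path above serializes concurrent `getOrCreateRoom` calls by chaining each critical section onto `createRoomMutex` (visible in the hunk context). A reduced sketch of that promise-chain mutex pattern, with the snapshot fetch replaced by a plain factory (all names here are illustrative, not the package's API):

```javascript
// Sketch: serializing async critical sections by chaining onto a promise "mutex".
let mutex = Promise.resolve();
const created = new Map();

function getOrCreate(id, factory) {
    // Each caller appends its section to the chain, so sections run strictly one at a time.
    mutex = mutex.then(async () => {
        if (!created.has(id)) {
            created.set(id, await factory(id));
        }
    });
    // Resolve with the (now guaranteed) entry once this caller's turn has completed.
    return mutex.then(() => created.get(id));
}

// Two concurrent calls for the same id run the factory only once and share one instance.
let factoryRuns = 0;
const factory = async (id) => { factoryRuns++; return { id }; };
Promise.all([getOrCreate("room-1", factory), getOrCreate("room-1", factory)])
    .then(([a, b]) => console.log(factoryRuns, a === b));
```

This is why the diff's `rooms.has(roomId)` check is safe without locks: the check and the insert always execute inside the same chained section.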
package/dist/server.js
CHANGED

@@ -4,57 +4,61 @@ var __importDefault = (this && this.__importDefault) || function (mod) {
 };
 Object.defineProperty(exports, "__esModule", { value: true });
 const fastify_1 = __importDefault(require("fastify"));
+const stream_1 = require("stream");
 const websocket_1 = __importDefault(require("@fastify/websocket"));
 const cors_1 = __importDefault(require("@fastify/cors"));
-const rooms_1 = require("./rooms");
 const assets_1 = require("./assets");
-const
+const logger_1 = require("./logger");
+const rooms_1 = require("./rooms");
 // Configuration
-const
-const
+const parseCorsWhitelist = (cors) => {
+    const normalized = (cors || "").replace(/\s/g, '');
+    return normalized === "*" ? normalized : normalized.split(',').filter(x => x.length > 0);
+};
+const PORT = parseInt(process.env.SWB_PORT || "5858", 10);
+const HOST = process.env.SWB_HOST || "0.0.0.0"; // Listen on all interfaces by default
+const CORS_WHITELIST = parseCorsWhitelist(process.env.SWB_CORS_WHITELIST);
 // Initialize Fastify app with logging
-const app = (0, fastify_1.default)({ logger:
+const app = (0, fastify_1.default)({ logger: logger_1.loggerConfig });
 // --- Register Plugins ---
 app.register(websocket_1.default); // Enable WebSocket support
 app.register(cors_1.default, {
     // Configure CORS
-    origin:
+    origin: CORS_WHITELIST,
     methods: ["GET", "PUT", "POST", "DELETE", "OPTIONS"], // Allowed HTTP methods
     allowedHeaders: ["Content-Type", "Authorization", "X-Original-Filename"], // Allowed headers
 });
 // --- Define Routes ---
-app.register(async (
+app.register(async (svc) => {
     // Health check endpoint
-
+    svc.get("/", async () => ({
        status: "sync-whiteboard is running",
        time: new Date().toISOString(),
    }));
    // WebSocket connection endpoint for tldraw sync
-
+    svc.get("/connect/:roomId", { websocket: true }, async (socket, req) => {
        const { roomId } = req.params;
        const sessionId = req.query?.sessionId;
        // Client provides sessionId via query param, handled by TLSocketRoom
        try {
            // Get or create the room instance (loads/creates state)
            const room = await (0, rooms_1.getOrCreateRoom)(roomId);
-
-            // Connect the client's socket to the tldraw room handler
+            req.log.debug(`[SERVER] Handling WebSocket connection for room ${roomId}`);
            room.handleSocketConnect({ sessionId, socket });
        }
        catch (error) {
-
-            // Close socket with error code if room initialization fails
+            req.log.error({ err: error, roomId: roomId }, `[SERVER] Error initializing room`);
            socket.close(1011, "Internal server error during room initialization");
        }
    });
    // --- Asset Handling ---
    // Allow raw body parsing for asset uploads
-
+    svc.addContentTypeParser("*", (_, __, done) => done(null));
    /**
     * Handles asset uploads (PUT /assets/:id).
     * Proxies the request body stream to the asset storage backend via storeAsset.
     */
-
+    svc.put("/assets/:id", async (req, reply) => {
        const { id } = req.params;
        const contentType = req.headers["content-type"] || "application/octet-stream";
        // Extract original filename from custom header
@@ -69,17 +73,21 @@ app.register(async (app) => {
                originalFilename = decodeURIComponent(originalFilenameHeader);
            }
            catch (e) {
-
+                req.log.warn({ headerValue: originalFilenameHeaderRaw }, `[SERVER] Failed to decode X-Original-Filename header`);
                originalFilename = "decode_error";
            }
        }
        else {
-
+            req.log.warn({ headerValue: originalFilenameHeaderRaw }, `[SERVER] X-Original-Filename header missing or invalid`);
        }
-
+        req.log.debug({
+            assetId: id,
+            contentType: contentType,
+            originalFilename: originalFilename,
+        }, `[SERVER] PUT /assets/:id`);
        // Validate request body is a stream
        if (!(req.raw instanceof stream_1.Readable)) {
-
+            req.log.error({ assetId: id }, `[SERVER] Error: Request raw body is not a Readable stream`);
            return reply.code(500).send({
                success: false,
                error: "Internal server error: Invalid request body stream.",
@@ -88,12 +96,12 @@ app.register(async (app) => {
        try {
            // Call the asset storage logic (which proxies to the backend)
            await (0, assets_1.storeAsset)(id, req.raw, contentType, originalFilename);
-
+            req.log.debug({ assetId: id }, `[SERVER] Asset stored successfully.`);
            reply.code(200).send({ success: true });
        }
        catch (error) {
-
-            const statusCode = error?.code === "ENOENT" ? 404 : 500;
+            req.log.error({ err: error, assetId: id }, `[SERVER] Error storing asset`);
+            const statusCode = error?.code === "ENOENT" ? 404 : 500;
            reply.code(statusCode).send({
                success: false,
                error: error.message || "Failed to store asset",
@@ -106,17 +114,17 @@ app.register(async (app) => {
     */
    app.get("/assets/:id", async (req, reply) => {
        const { id } = req.params;
-
+        req.log.debug({ assetId: id }, `[SERVER] GET /assets/:id`);
        try {
            // Call the asset loading logic (which proxies to the backend)
            const { stream: dataStream, contentType } = await (0, assets_1.loadAsset)(id);
-
+            req.log.debug({ assetId: id, contentType: contentType }, `[SERVER] Asset loaded. Sending reply...`);
            // Set the correct Content-Type header and send the stream
            reply.header("Content-Type", contentType);
            reply.send(dataStream);
        }
        catch (error) {
-
+            req.log.error({ err: error, assetId: id }, `[SERVER] Error loading asset`);
            if (error.code === "ENOENT") {
                // Asset not found by the backend
                reply.code(404).send({ success: false, error: "Asset not found" });
@@ -140,7 +148,7 @@ const start = async () => {
        app.log.info(`Sync Whiteboard server running on http://${HOST}:${PORT}`);
    }
    catch (err) {
-        app.log.
+        app.log.fatal({ err: err }, "Server failed to start");
        process.exit(1); // Exit if server fails to start
    }
 };
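The new `parseCorsWhitelist` helper in `dist/server.js` is small enough to exercise directly. Copied verbatim from the diff above, it strips whitespace, passes `"*"` through unchanged, and drops empty entries, so an unset `SWB_CORS_WHITELIST` yields an empty array (no allowed origins):

```javascript
// Copied from the dist/server.js diff: normalizes SWB_CORS_WHITELIST
// into the value handed to @fastify/cors as its `origin` option.
const parseCorsWhitelist = (cors) => {
    const normalized = (cors || "").replace(/\s/g, '');
    return normalized === "*" ? normalized : normalized.split(',').filter(x => x.length > 0);
};

console.log(parseCorsWhitelist("*"));                            // "*"
console.log(parseCorsWhitelist("http://a.test, http://b.test")); // [ 'http://a.test', 'http://b.test' ]
console.log(parseCorsWhitelist(undefined));                      // []
```

Note that `SWB_CORS_WHITELIST` appears in the code but not in the README's configuration table; that looks like a documentation gap in this release.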
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@smoothglue/sync-whiteboard",
-  "version": "0.1.0",
+  "version": "1.0.0",
   "main": "dist/server.js",
   "scripts": {
     "dev": "ts-node-dev --respawn --transpile-only src/server.ts",
@@ -37,6 +37,7 @@
     "@tldraw/sync-core": "^3.12.0",
     "@tldraw/tlschema": "^3.12.0",
     "fastify": "^5.3.0",
+    "pino": "^9.7.0",
     "ws": "^8.18.1"
   }
 }