spooder 5.1.12 → 6.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -10,6 +10,9 @@
10
10
 
11
11
  The design goal behind `spooder` is not to provide a full-featured web server, but to expand the Bun runtime with a set of APIs and utilities that make it easy to develop servers with minimal overhead.
12
12
 
13
+ ### spooderverse
14
+ In addition to the core API provided here, you can also find [spooderverse](https://github.com/Kruithne/spooderverse) which is a collection of drop-in modules designed for spooder with minimal overhead and zero dependencies.
15
+
13
16
  > [!NOTE]
14
17
  > If you think `spooder` is missing a feature, consider opening an issue with your use-case. The goal behind `spooder` is to provide APIs that are useful for a wide range of use-cases, not to provide bespoke features better suited for userland.
15
18
 
@@ -38,14 +41,16 @@ Below is a full map of the available configuration options in their default stat
38
41
  "spooder": {
39
42
 
40
43
  // see CLI > Usage
41
- "run": "bun run index.ts",
44
+ "run": "",
42
45
  "run_dev": "",
43
46
 
44
47
  // see CLI > Auto Restart
45
- "auto_restart": true,
46
- "auto_restart_max": 30000,
47
- "auto_restart_attempts": 10,
48
- "auto_restart_grace": 30000,
48
+ "auto_restart": {
49
+ "enabled": false,
50
+ "backoff_max": 300000,
51
+ "backoff_grace": 30000,
52
+ "max_attempts": -1
53
+ },
49
54
 
50
55
  // see CLI > Auto Update
51
56
  "update": [
@@ -55,6 +60,7 @@ Below is a full map of the available configuration options in their default stat
55
60
 
56
61
  // see CLI > Canary
57
62
  "canary": {
63
+ "enabled": false,
58
64
  "account": "",
59
65
  "repository": "",
60
66
  "labels": [],
@@ -79,6 +85,7 @@ The `CLI` component of `spooder` is a global command-line tool for running serve
79
85
  - [CLI > Dev Mode](#cli-dev-mode)
80
86
  - [CLI > Auto Restart](#cli-auto-restart)
81
87
  - [CLI > Auto Update](#cli-auto-update)
88
+ - [CLI > Instancing](#cli-instancing)
82
89
  - [CLI > Canary](#cli-canary)
83
90
  - [CLI > Canary > Crash](#cli-canary-crash)
84
91
  - [CLI > Canary > Sanitization](#cli-canary-sanitization)
@@ -90,6 +97,7 @@ The `CLI` component of `spooder` is a global command-line tool for running serve
90
97
 
91
98
  - [API > Cheatsheet](#api-cheatsheet)
92
99
  - [API > Logging](#api-logging)
100
+ - [API > IPC](#api-ipc)
93
101
  - [API > HTTP](#api-http)
94
102
  - [API > HTTP > Directory Serving](#api-http-directory)
95
103
  - [API > HTTP > Server-Sent Events (SSE)](#api-http-sse)
@@ -103,10 +111,8 @@ The `CLI` component of `spooder` is a global command-line tool for running serve
103
111
  - [API > Cache Busting](#api-cache-busting)
104
112
  - [API > Git](#api-git)
105
113
  - [API > Database](#api-database)
114
+ - [API > Database > Utilities](#api-database-utilities)
106
115
  - [API > Database > Schema](#api-database-schema)
107
- - [API > Database > Interface](#api-database-interface)
108
- - [API > Database > Interface > SQLite](#api-database-interface-sqlite)
109
- - [API > Database > Interface > MySQL](#api-database-interface-mysql)
110
116
  - [API > Utilities](#api-utilities)
111
117
 
112
118
  # CLI
@@ -122,7 +128,7 @@ cd /var/www/my-website-about-fish.net/
122
128
  spooder
123
129
  ```
124
130
 
125
- `spooder` will launch your server either by executing the `run` command provided in the configuration, or by executing `bun run index.ts` by default.
131
+ `spooder` will launch your server by executing the `run` command provided in the configuration. If this is not defined, an error will be thrown.
126
132
 
127
133
  ```json
128
134
  {
@@ -155,7 +161,7 @@ The following differences will be observed when running in development mode:
155
161
 
156
162
  - If `run_dev` is configured, it will be used instead of the default `run` command.
157
163
  - Update commands defined in `spooder.update` will not be executed when starting a server.
158
- - If the server crashes and `auto_restart` is enabled, the server will not be restarted, and spooder will exit with the same exit code as the server.
164
+ - If the server crashes and `auto_restart` is configured, the server will not be restarted, and spooder will exit with the same exit code as the server.
159
165
  - If canary is configured, reports will not be dispatched to GitHub and instead be printed to the console; this includes crash reports.
160
166
 
161
167
  It is possible to detect in userland if a server is running in development mode by checking the `SPOODER_ENV` environment variable.
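  Such a check might be sketched as follows (assumption: the exact value `spooder` assigns to `SPOODER_ENV` in development mode is not specified here, so the `'dev'` comparison value is illustrative):

  ```typescript
  // Sketch: detect dev mode from the environment.
  // NOTE: the 'dev' comparison value is an assumption for illustration;
  // check the value spooder actually sets in your environment.
  function is_dev_mode(env: Record<string, string | undefined>): boolean {
      return env.SPOODER_ENV === 'dev';
  }

  const dev = is_dev_mode(process.env);
  ```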
@@ -188,27 +194,35 @@ You can configure a different command to run when in development mode using the
188
194
  > [!NOTE]
189
195
  > This feature is not enabled by default.
190
196
 
191
- In the event that the server process exits with a non-zero exit code, `spooder` can automatically restart it using an exponential backoff strategy. To enable this feature set `auto_restart` to `true` in the configuration.
197
+ In the event that the server process exits, `spooder` can automatically restart it.
198
+
199
+ If the server exits with a non-zero exit code, this will be considered an **unexpected shutdown**. The process will be restarted using an [exponential backoff strategy](https://en.wikipedia.org/wiki/Exponential_backoff).
192
200
 
193
201
  ```json
194
202
  {
195
203
  "spooder": {
196
- "auto_restart": true,
197
- "auto_restart_max": 30000,
198
- "auto_restart_attempts": 10,
199
- "auto_restart_grace": 30000
204
+ "auto_restart": {
205
+ "enabled": true,
206
+
207
+ // max restarts before giving up
208
+ "max_attempts": -1, // default (unlimited)
209
+
210
+ // max delay (ms) between restart attempts
211
+ "backoff_max": 300000, // default 5 min
212
+
213
+ // stable period after which the backoff protocol resets
214
+ "backoff_grace": 30000 // default 30s
215
+ }
200
216
  }
201
217
  }
202
218
  ```
203
219
 
204
- ### Configuration Options
220
+ If the server exits with a `0` exit code, this will be considered an **intentional shutdown** and `spooder` will execute the update commands before restarting the server.
205
221
 
206
- - **`auto_restart`** (boolean, default: `false`): Enable or disable the auto-restart feature
207
- - **`auto_restart_max`** (number, default: `30000`): Maximum delay in milliseconds between restart attempts
208
- - **`auto_restart_attempts`** (number, default: `-1`): Maximum number of restart attempts before giving up. Set to `-1` for unlimited attempts
209
- - **`auto_restart_grace`** (number, default: `30000`): Period of time after which the backoff protocol disables if the server remains stable.
222
+ > [!TIP]
223
+ > An **intentional shutdown** can be useful for auto-updating in response to events, such as webhooks.
210
224
 
211
- If the server exits with a zero exit code (successful termination), auto-restart will not trigger.
225
+ If the server exits with `42` (`EXIT_CODE.SPOODER_AUTO_UPDATE`), the update commands will **not** be executed before restarting the server. [See Auto Update for information](#cli-auto-update).
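+ The restart behaviour described above can be sketched as a small decision function (a sketch only; the constant name is illustrative, but the exit code `42` is taken from the text):
+
+ ```typescript
+ // Illustrative sketch of how the host decides what to do on child exit.
+ const EXIT_AUTO_UPDATE = 42; // exit code described in the text above
+
+ type RestartAction = 'update_then_restart' | 'restart_without_update' | 'backoff_restart';
+
+ function restart_action(exit_code: number): RestartAction {
+     if (exit_code === 0)
+         return 'update_then_restart'; // intentional shutdown
+     if (exit_code === EXIT_AUTO_UPDATE)
+         return 'restart_without_update'; // updates already applied
+     return 'backoff_restart'; // unexpected shutdown
+ }
+ ```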
212
226
 
213
227
  <a id="cli-auto-update"></a>
214
228
  ## CLI > Auto Update
@@ -238,22 +252,106 @@ Each command should be a separate entry in the array and will be executed in seq
238
252
 
239
253
  If a command in the sequence fails, the remaining commands will not be executed; however, the server will still be started. This is preferred over entering a restart loop or failing to start the server at all.
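  For example, a typical update sequence might look like this (the commands shown are illustrative, not defaults):

  ```json
  "spooder": {
      "update": [
          "git pull",
          "bun install"
      ]
  }
  ```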
240
254
 
241
- You can utilize this to automatically update your server in response to a webhook by exiting the process.
255
+ You can combine this with [Auto Restart](#cli-auto-restart) to automatically update your server in response to a webhook by exiting the process.
242
256
 
243
257
  ```ts
244
258
  server.webhook(process.env.WEBHOOK_SECRET, '/webhook', payload => {
245
259
  setImmediate(async () => {
246
260
  await server.stop(false);
247
- process.exit();
261
+ process.exit(0);
248
262
  });
249
263
  return HTTP_STATUS_CODE.OK_200;
250
264
  });
251
265
  ```
252
266
 
267
+ ### Multi-Instance Auto Update
268
+
269
+ See [Instancing](#cli-instancing) for instructions on how to use [Auto Update](#cli-auto-update) with multiple instances.
270
+
253
271
  ### Skip Updates
254
272
 
255
273
  In addition to being skipped in [dev mode](#cli-dev-mode), updates can also be skipped in production mode by passing the `--no-update` flag.
256
274
 
275
+ <a id="cli-instancing"></a>
276
+ ## CLI > Instancing
277
+
278
+ > [!NOTE]
279
+ > This feature is not enabled by default.
280
+
281
+ By default, `spooder` will start and manage a single process as defined by the `run` and `run_dev` configuration properties. In some scenarios, you may want multiple processes for a single codebase, such as variant sub-domains.
282
+
283
+ This can be configured in `spooder` using the `instances` array, with each entry defining a unique instance.
284
+
285
+ ```json
286
+ "spooder": {
287
+ "instances": [
288
+ {
289
+ "id": "dev01",
290
+ "run": "bun run --env-file=.env.a index.ts",
291
+ "run_dev": "bun run --env-file=.env.a.dev index.ts --inspect"
292
+ },
293
+ {
294
+ "id": "dev02",
295
+ "run": "bun run --env-file=.env.b index.ts",
296
+ "run_dev": "bun run --env-file=.env.b.dev index.ts --inspect"
297
+ }
298
+ ]
299
+ }
300
+ ```
301
+
302
+ Instances will be managed individually in the same manner that a single process would be, including auto-restarting and other functionality.
303
+
304
+ ### Instance Stagger
305
+
306
+ By default, instances are all launched instantly. This behavior can be configured with the `instance_stagger_interval` configuration property, which defines an interval between instance launches in milliseconds.
307
+
308
+ This interval affects server start-up, auto-restarting and crash recovery. No two instances will be launched within that interval regardless of the reason.
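+ A configuration sketch (assumption: `instance_stagger_interval` sits alongside the `instances` array; the value is illustrative):
+
+ ```json
+ "spooder": {
+     "instance_stagger_interval": 5000,
+     "instances": [
+         { "id": "dev01", "run": "bun run --env-file=.env.a index.ts" },
+         { "id": "dev02", "run": "bun run --env-file=.env.b index.ts" }
+     ]
+ }
+ ```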
309
+
310
+ ### Canary
311
+
312
+ The [canary](#cli-canary) feature functions the same for multiple instances as it would for a single instance, with the caveat that the `instance` object as defined in the configuration is included in the crash report for diagnostics.
313
+
314
+ This allows you to define custom properties on the instance which will be included as part of the crash report.
315
+
316
+ ```json
317
+ {
318
+ "id": "dev01",
319
+ "run": "bun run --env-file=.env.a index.ts",
320
+ "sub_domain": "dev01.spooder.dev" // custom, for diagnostics
321
+ }
322
+ ```
323
+
324
+ > [!IMPORTANT]
325
+ > For this reason, you should not include sensitive or confidential credentials in your instance configuration. These should always be handled using environment variables or credential storage.
326
+
327
+ ### Multi-Instance Auto Update
328
+
329
+ Combining [Auto Restart](#cli-auto-restart) and [Auto Update](#cli-auto-update), when a server process exits with a zero exit code, the update commands will be run as the server restarts. This is suitable for a single-instance setup.
330
+
331
+ In the event of multiple instances, this does not work. One server instance would receive the webhook and exit, resulting in the update commands being run and that instance being restarted, leaving the other instances still running.
332
+
333
+ A solution might be to send the webhook to every instance, but then each instance restarts individually, running the update commands unnecessarily and potentially causing conflicts if they run at the same time. In addition, instances in spooder operate from a single codebase, which makes sending multiple webhooks a challenge - so don't do this.
334
+
335
+ The solution is to use the [IPC API](#api-ipc) to instruct the host process to handle this.
336
+
337
+ ```ts
338
+ server.webhook(process.env.WEBHOOK_SECRET, '/webhook', payload => {
339
+ setImmediate(async () => {
340
+ ipc_send(IPC_TARGET.SPOODER, IPC_OP.CMSG_TRIGGER_UPDATE);
341
+ });
342
+ return HTTP_STATUS_CODE.OK_200;
343
+ });
344
+
345
+ ipc_register(IPC_OP.SMSG_UPDATE_READY, async () => {
346
+ await server.stop(false);
347
+ process.exit(EXIT_CODE.SPOODER_AUTO_UPDATE);
348
+ });
349
+ ```
350
+
351
+ In this scenario, the instance that receives the webhook instructs the host process to apply the updates. Once the update commands have been run, all instances are sent the `SMSG_UPDATE_READY` event, indicating they can restart.
352
+
353
+ Exiting with the `SPOODER_AUTO_UPDATE` exit code instructs spooder that we're exiting as part of this process, and prevents auto-update from running on restart.
354
+
257
355
  <a id="cli-canary"></a>
258
356
  ## CLI > Canary
259
357
 
@@ -291,6 +389,7 @@ Each server that intends to use the canary feature will need to have the private
291
389
  ```json
292
390
  "spooder": {
293
391
  "canary": {
392
+ "enabled": true,
294
393
  "account": "<GITHUB_ACCOUNT_NAME>",
295
394
  "repository": "<GITHUB_REPOSITORY>",
296
395
  "labels": ["some-label"]
@@ -537,12 +636,30 @@ caution(err_message_or_obj: string | object, ...err: object[]): Promise<void>;
537
636
  panic(err_message_or_obj: string | object, ...err: object[]): Promise<void>;
538
637
  safe(fn: Callable): Promise<void>;
539
638
 
540
- // worker
541
- worker_event_pipe(worker: Worker, options?: WorkerEventPipeOptions): WorkerEventPipe;
542
- pipe.send(id: string, data?: object): void;
543
- pipe.on(event: string, callback: (data: object) => void | Promise<void>): void;
544
- pipe.once(event: string, callback: (data: object) => void | Promise<void>): void;
545
- pipe.off(event: string): void;
639
+ // worker (main thread)
640
+ worker_pool(options: WorkerPoolOptions): Promise<WorkerPool>;
641
+ pool.id: string;
642
+ pool.send: (peer: string, id: string, data?: WorkerMessageData, expect_response?: boolean) => void | Promise<WorkerMessage>;
643
+ pool.broadcast: (id: string, data?: WorkerMessageData) => void;
644
+ pool.on: (event: string, callback: (message: WorkerMessage) => Promise<void> | void) => void;
645
+ pool.once: (event: string, callback: (message: WorkerMessage) => Promise<void> | void) => void;
646
+ pool.off: (event: string) => void;
+ pool.respond: (message: WorkerMessage, data?: WorkerMessageData) => void;
647
+
648
+ type WorkerPoolOptions = {
649
+ id?: string;
650
+ worker: string | string[];
651
+ size?: number;
+ response_timeout?: number;
652
+ auto_restart?: boolean | AutoRestartConfig;
653
+ };
654
+
655
+ type AutoRestartConfig = {
656
+ backoff_max?: number; // default: 5 * 60 * 1000 (5 min)
657
+ backoff_grace?: number; // default: 30000 (30 seconds)
658
+ max_attempts?: number; // default: 5, -1 for unlimited
659
+ };
660
+
661
+ // worker (worker thread)
662
+ worker_connect(peer_id?: string, response_timeout?: number): WorkerPool;
546
663
 
547
664
  // templates
548
665
  Replacements = Record<string, string | Array<string> | object | object[]> | ReplacerFn | AsyncReplaceFn;
@@ -556,46 +673,20 @@ cache_bust_get_hash_table(): Record<string, string>;
556
673
 
557
674
  // git
558
675
  git_get_hashes(length: number): Promise<Record<string, string>>;
559
- git_get_hashes_sync(length: number): Record<string, string>
560
-
561
- // database interface
562
- db_sqlite(filename: string, options: number|object): db_sqlite;
563
- db_mysql(options: ConnectionOptions, pool: boolean): Promise<MySQLDatabaseInterface>;
564
- db_cast_set<T extends string>(set: string | null): Set<T>;
565
- db_serialize_set<T extends string>(set: Set<T> | null): string;
566
-
567
- // db_sqlite
568
- update_schema(db_dir: string, schema_table?: string): Promise<void>
569
- insert(sql: string, ...values: any): number;
570
- insert_object(table: string, obj: Record<string, any>): number;
571
- execute(sql: string, ...values: any): number;
572
- get_all<T>(sql: string, ...values: any): T[];
573
- get_single<T>(sql: string, ...values: any): T | null;
574
- get_column<T>(sql: string, column: string, ...values: any): T[];
575
- get_paged<T>(sql: string, values?: any[], page_size?: number): AsyncGenerator<T[]>;
576
- count(sql: string, ...values: any): number;
577
- count_table(table_name: string): number;
578
- exists(sql: string, ...values: any): boolean;
579
- transaction(scope: (transaction: SQLiteDatabaseInterface) => void | Promise<void>): boolean;
580
-
581
- // db_mysql
582
- update_schema(db_dir: string, schema_table?: string): Promise<void>
583
- insert(sql: string, ...values: any): Promise<number>;
584
- insert_object(table: string, obj: Record<string, any>): Promise<number>;
585
- execute(sql: string, ...values: any): Promise<number>;
586
- get_all<T>(sql: string, ...values: any): Promise<T[]>;
587
- get_single<T>(sql: string, ...values: any): Promise<T | null>;
588
- get_column<T>(sql: string, column: string, ...values: any): Promise<T[]>;
589
- call<T>(func_name: string, ...args: any): Promise<T[]>;
590
- get_paged<T>(sql: string, values?: any[], page_size?: number): AsyncGenerator<T[]>;
591
- count(sql: string, ...values: any): Promise<number>;
592
- count_table(table_name: string): Promise<number>;
593
- exists(sql: string, ...values: any): Promise<boolean>;
594
- transaction(scope: (transaction: MySQLDatabaseInterface) => void | Promise<void>): Promise<boolean>;
676
+ git_get_hashes_sync(length: number): Record<string, string>;
677
+
678
+ // database utilities
679
+ db_set_cast<T extends string>(set: string | null): Set<T>;
680
+ db_set_serialize<T extends string>(set: Iterable<T> | null): string;
595
681
 
596
682
  // database schema
597
- db_update_schema_sqlite(db: Database, schema_dir: string, schema_table?: string): Promise<void>;
598
- db_update_schema_mysql(db: Connection, schema_dir: string, schema_table?: string): Promise<void>;
683
+ type SchemaOptions = {
684
+ schema_table: string;
685
+ recursive: boolean;
686
+ };
687
+
688
+ db_get_schema_revision(db: SQL): Promise<number|null>;
689
+ db_schema(db: SQL, schema_path: string, options?: SchemaOptions): Promise<boolean>;
599
690
 
600
691
  // caching
601
692
  cache_http(options?: CacheOptions);
@@ -604,10 +695,19 @@ cache.request(req: Request, cache_key: string, content_generator: () => string |
604
695
 
605
696
  // utilities
606
697
  filesize(bytes: number): string;
698
+ BiMap: class BiMap<K, V>;
699
+
700
+ // ipc
701
+ ipc_register(op: number, callback: IPC_Callback);
702
+ ipc_send(target: string, op: number, data?: object);
607
703
 
608
704
  // constants
609
705
  HTTP_STATUS_TEXT: Record<number, string>;
610
706
  HTTP_STATUS_CODE: { OK_200: 200, NotFound_404: 404, ... };
707
+ EXIT_CODE: Record<string, number>;
708
+ EXIT_CODE_NAMES: Record<number, string>;
709
+ IPC_TARGET: Record<string, string>;
710
+ IPC_OP: Record<string, number>;
611
711
  ```
612
712
 
613
713
  <a id="api-logging"></a>
@@ -621,6 +721,13 @@ log('Hello, {world}!');
621
721
  // > [info] Hello, world!
622
722
  ```
623
723
 
724
+ Tagged template literals are also supported and automatically highlight values without the brace syntax.
725
+
726
+ ```ts
727
+ const user = 'Fred';
728
+ log`Hello ${user}!`;
729
+ ```
730
+
624
731
  Formatting parameters are supported using standard console logging formatters.
625
732
 
626
733
  ```ts
@@ -667,6 +774,60 @@ log(`Fruit must be one of ${fruit.map(e => `{${e}}`).join(', ')}`);
667
774
  log(`Fruit must be one of ${log_list(fruit)}`);
668
775
  ```
669
776
 
777
+ <a id="api-ipc"></a>
778
+ ## API > IPC
779
+
780
+ `spooder` provides a way to send/receive messages between different instances via IPC. See [CLI > Instancing](#cli-instancing) for documentation on instances.
781
+
782
+ ```ts
783
+ // listen for a message
784
+ ipc_register(0x1, msg => {
785
+ // msg.peer, msg.op, msg.data
786
+ console.log(msg.data.foo); // 42
787
+ });
788
+
789
+ // send a message to dev02
790
+ ipc_send('dev02', 0x1, { foo: 42 });
791
+
792
+ // send a message to all other instances
793
+ ipc_send(IPC_TARGET.BROADCAST, 0x1, { foo: 42 });
794
+ ```
795
+
796
+ This can also be used to communicate with the host process for certain functionality, such as [auto-restarting](#cli-auto-restart).
797
+
798
+ #### OpCodes
799
+
800
+ When sending/receiving IPC messages, the message will include an opcode. When communicating with the host process, that will be one of the following:
801
+
802
+ ```ts
803
+ IPC_OP.CMSG_TRIGGER_UPDATE = -1;
804
+ IPC_OP.SMSG_UPDATE_READY = -2;
805
+ IPC_OP.CMSG_REGISTER_LISTENER = -3; // used internally by ipc_register
806
+ ```
807
+
808
+ When sending/receiving your own messages, you can define and use your own opcode schema. To prevent conflicts with internal opcodes, always use positive values; `spooder` internal opcodes will always be negative.
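+ For example, a userland opcode table might look like this (the names and values are illustrative, not part of spooder):
+
+ ```typescript
+ // Hypothetical userland opcodes; positive values cannot collide with
+ // spooder's internal opcodes, which are always negative.
+ const OP = {
+     MSG_PING: 1,
+     MSG_PONG: 2,
+     MSG_RELOAD_CONFIG: 3
+ } as const;
+ ```
+
+ These could then be passed to `ipc_register` and `ipc_send` in place of raw numbers.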
809
+
810
+ ### `ipc_register(op: number, callback: IPC_Callback)`
811
+
812
+ Register a listener for IPC events. The callback will receive an object with this structure:
813
+
814
+ ```ts
815
+ type IPC_Message = {
816
+ op: number; // opcode received
817
+ peer: string; // sender
818
+ data?: object; // payload data (optional)
819
+ };
820
+ ```
821
+
822
+ ### `ipc_send(peer: string, op: number, data?: object)`
823
+
824
+ Send an IPC event. The target can either be the ID of another instance (such as the `peer` ID from an `IPC_Message`) or one of the following constants.
825
+
826
+ ```ts
827
+ IPC_TARGET.SPOODER; // communicate with the host
828
+ IPC_TARGET.BROADCAST; // broadcast to all other instances
829
+ ```
830
+
670
831
  <a id="api-http"></a>
671
832
  ## API > HTTP
672
833
 
@@ -1688,78 +1849,251 @@ await safe(() => {
1688
1849
  <a id="api-workers"></a>
1689
1850
  ## API > Workers
1690
1851
 
1691
- ### 🔧 `worker_event_pipe(worker: Worker, options?: WorkerEventPipeOptions): WorkerEventPipe`
1852
+ ### 🔧 `worker_pool(options: WorkerPoolOptions): Promise<WorkerPool>` (Main Thread)
1692
1853
 
1693
- Create an event-based communication pipe between host and worker processes. This function works both inside and outside of workers and provides a simple event system on top of the native `postMessage` API.
1854
+ Create a worker pool with an event-based communication system between the main thread and one or more workers. This provides a networked event system on top of the native `postMessage` API.
1694
1855
 
1695
1856
  ```ts
1696
- // main thread
1697
- const worker = new Worker('./some_file.ts');
1698
- const pipe = worker_event_pipe(worker);
1857
+ // with a single worker (id defaults to 'main')
1858
+ const pool = await worker_pool({
1859
+ worker: './worker.ts'
1860
+ });
1861
+
1862
+ // with multiple workers and custom ID
1863
+ const pool = await worker_pool({
1864
+ id: 'main',
1865
+ worker: ['./worker_a.ts', './worker_b.ts']
1866
+ });
1867
+
1868
+ // spawn multiple instances of the same worker
1869
+ const pool = await worker_pool({
1870
+ worker: './worker.ts',
1871
+ size: 5 // spawns 5 instances
1872
+ });
1873
+
1874
+ // with custom response timeout
1875
+ const pool = await worker_pool({
1876
+ worker: './worker.ts',
1877
+ response_timeout: 10000 // 10 seconds (default: 5000ms, use -1 to disable)
1878
+ });
1699
1879
 
1700
- pipe.on('bar', data => console.log('Received from worker:', data));
1701
- pipe.send('foo', { x: 42 });
1880
+ // with auto-restart enabled (boolean)
1881
+ const pool = await worker_pool({
1882
+ worker: './worker.ts',
1883
+ auto_restart: true // uses default settings
1884
+ });
1885
+
1886
+ // with custom auto-restart configuration
1887
+ const pool = await worker_pool({
1888
+ worker: './worker.ts',
1889
+ auto_restart: {
1890
+ backoff_max: 5 * 60 * 1000, // 5 min (default)
1891
+ backoff_grace: 30000, // 30 seconds (default)
1892
+ max_attempts: 5 // -1 for unlimited (default: 5)
1893
+ }
1894
+ });
1895
+ ```
1896
+
1897
+ ### 🔧 `worker_connect(peer_id?: string, response_timeout?: number): WorkerPool` (Worker Thread)
1898
+
1899
+ Connect a worker thread to the worker pool. This should be called from within a worker thread to establish communication with the main thread and other workers.
1702
1900
 
1901
+ **Parameters:**
1902
+ - `peer_id` - Optional worker ID (defaults to `worker-UUID`)
1903
+ - `response_timeout` - Optional timeout in milliseconds for request-response patterns (default: 5000ms, use -1 to disable)
1904
+
1905
+ ```ts
1703
1906
  // worker thread
1704
- import { worker_event_pipe } from 'spooder';
1907
+ const pool = worker_connect('my_worker'); // defaults to worker-UUID, 5000ms timeout
1908
+ pool.on('test', msg => {
1909
+ console.log(`Received ${msg.data.foo} from ${msg.peer}`);
1910
+ });
1705
1911
 
1706
- const pipe = worker_event_pipe(globalThis as unknown as Worker);
1912
+ // with custom timeout
1913
+ const pool = worker_connect('my_worker', 10000); // 10 second timeout
1914
+ const pool = worker_connect('my_worker', -1); // no timeout
1915
+ ```
1916
+
1917
+ ### Basic Usage
1707
1918
 
1708
- pipe.on('foo', data => {
1709
- console.log('Received from main:', data); // { x: 42 }
1710
- pipe.send('bar', { response: 'success' });
1919
+ ```ts
1920
+ // main thread
1921
+ const pool = await worker_pool({
1922
+ id: 'main',
1923
+ worker: './worker.ts'
1924
+ });
1925
+
1926
+ pool.send('my_worker', 'test', { foo: 42 });
1927
+
1928
+ // worker thread (worker.ts)
1929
+ const pool = worker_connect('my_worker');
1930
+ pool.on('test', msg => {
1931
+ console.log(`Received ${msg.data.foo} from ${msg.peer}`);
1932
+ // > Received 42 from main
1933
+ });
1934
+ ```
1935
+
1936
+ ### Cross-Worker Communication
1937
+
1938
+ ```ts
1939
+ // main thread
1940
+ const pool = await worker_pool({
1941
+ id: 'main',
1942
+ worker: ['./worker_a.ts', './worker_b.ts']
1711
1943
  });
1944
+
1945
+ pool.send('worker_a', 'test', { foo: 42 }); // send to just worker_a
1946
+ pool.broadcast('test', { foo: 50 }); // send to all workers
1947
+
1948
+ // worker_a.ts
1949
+ const pool = worker_connect('worker_a');
1950
+ // send from worker_a to worker_b
1951
+ pool.send('worker_b', 'test', { foo: 500 });
1712
1952
  ```
1713
1953
 
1714
- ### WorkerEventPipeOptions
1954
+ ### 🔧 `pool.send(peer: string, id: string, data?: Record<string, any>, expect_response?: boolean): void | Promise<WorkerMessage>`
1715
1955
 
1716
- The second parameter of `worker_event_pipe` accepts an object of options.
1956
+ Send a message to a specific peer in the pool, which can be the main host or another worker.
1717
1957
 
1718
- Currently the only available option is `use_canary_reporting`. If enabled, the event pipe will call `caution()` when it encounters errors such as malformed payloads.
1958
+ When `expect_response` is `false` (default), the function returns `void`. When `true`, it returns a `Promise<WorkerMessage>` that resolves when the peer responds using `pool.respond()`.
1719
1959
 
1720
- ### 🔧 `pipe.send(id: string, data?: object): void`
1960
+ ```ts
1961
+ // Fire-and-forget (default behavior)
1962
+ pool.send('main', 'user_update', { user_id: 123, name: 'John' });
1963
+ pool.send('worker_b', 'simple_event');
1964
+
1965
+ // Request-response pattern
1966
+ const response = await pool.send('worker_b', 'calculate', { value: 42 }, true);
1967
+ console.log('Result:', response.data);
1968
+ ```
1721
1969
 
1722
- Send a message to the other side of the worker pipe with the specified event ID and optional data payload.
1970
+ > [!NOTE]
1971
+ > When using `expect_response: true`, the promise will reject with a timeout error if no response is received within the configured timeout (default: 5000ms). You can configure this timeout in `worker_pool()` options or `worker_connect()` parameters, or disable it entirely by setting it to `-1`.
1972
+
1973
+ ### 🔧 `pool.broadcast(id: string, data?: Record<string, any>): void`
1974
+
1975
+ Broadcast a message to all peers in the pool.
1723
1976
 
1724
1977
  ```ts
1725
- pipe.send('user_update', { user_id: 123, name: 'John' });
1726
- pipe.send('simple_event'); // data defaults to {}
1978
+ pool.broadcast('test_event', { foo: 42 });
1727
1979
  ```
1728
1980
 
1729
- ### 🔧 `pipe.on(event: string, callback: (data: object) => void | Promise<void>): void`
1981
+ ### 🔧 `pool.on(event: string, callback: (message: WorkerMessage) => void | Promise<void>): void`
1730
1982
 
1731
1983
  Register an event handler for messages with the specified event ID. The callback can be synchronous or asynchronous.
1732
1984
 
1733
1985
  ```ts
1734
- pipe.on('process_data', async (data) => {
1735
- const result = await processData(data);
1736
- pipe.send('data_processed', { result });
1737
- });
1738
-
1739
- pipe.on('log_message', (data) => {
1740
- console.log(data.message);
1986
+ pool.on('process_data', async msg => {
1987
+ // msg.peer
1988
+ // msg.id
1989
+ // msg.data
1741
1990
  });
1742
1991
  ```
1743
1992
 
1744
1993
  > [!NOTE]
1745
1994
  > There can only be one event handler for a specific event ID. Registering a new handler for an existing event ID will overwrite the previous handler.
1746
1995
 
1747
- ### 🔧 `pipe.once(event: string, callback: (data: object) => void | Promise<void>): void`
1996
+ ### 🔧 `pool.once(event: string, callback: (data: Record<string, any>) => void | Promise<void>): void`
1748
1997
 
1749
- Register an event handler for messages with the specified event ID. This is the same as `pipe.on`, except the handler is automatically removed once it is fired.
1998
+ Register an event handler for messages with the specified event ID. This is the same as `pool.on`, except the handler is automatically removed once it is fired.
1750
1999
 
1751
2000
  ```ts
1752
- pipe.once('one_time_event', async (data) => {
2001
+ pool.once('one_time_event', async msg => {
1753
2002
  // this will only fire once
1754
2003
  });
1755
2004
  ```
1756
2005
 
1757
- ### 🔧 `pipe.off(event: string): void`
2006
+ ### 🔧 `pool.off(event: string): void`
1758
2007
 
1759
2008
  Unregister an event handler for events with the specified event ID.
1760
2009
 
1761
2010
  ```ts
1762
- pipe.off('event_name');
2011
+ pool.off('event_name');
2012
+ ```
2013
+
2014
+ ### 🔧 `pool.respond(message: WorkerMessage, data?: Record<string, any>): void`
2015
+
2016
+ Respond to a message that was sent with `expect_response: true`. This allows implementing request-response patterns between peers.
2017
+
2018
+ ```ts
2019
+ pool.on('calculate', msg => {
2020
+ const result = msg.data.value * 2;
2021
+ pool.respond(msg, { result });
2022
+ });
2023
+
2024
+ const response = await pool.send('worker_a', 'calculate', { value: 42 }, true);
2025
+ console.log(response.data.result); // 84
2026
+ ```
2027
+
2028
+ **Message Structure:**
2029
+ - `message.id` - The event ID
2030
+ - `message.peer` - The sender's peer ID
2031
+ - `message.data` - The message payload
2032
+ - `message.uuid` - Unique identifier for this message
2033
+ - `message.response_to` - UUID of the message being responded to (only present in responses)
2034
+
2035
+ ### Request-Response Example
2036
+
2037
+ ```ts
2038
+ // main.ts
2039
+ const pool = await worker_pool({
2040
+ id: 'main',
2041
+ worker: './worker.ts'
2042
+ });
2043
+
2044
+ const response = await pool.send('worker_a', 'MSG_REQUEST', { value: 42 }, true);
2045
+ console.log(`Got response ${response.data.value} from ${response.peer}`);
2046
+
2047
+ // worker.ts
2048
+ const pool = worker_connect('worker_a');
2049
+
2050
+ pool.on('MSG_REQUEST', msg => {
2051
+ console.log(`Received request with value: ${msg.data.value}`);
2052
+ pool.respond(msg, { value: msg.data.value * 2 });
2053
+ });
2054
+ ```
2055
+
2056
+ ### Auto-Restart
2057
+
2058
+ The `worker_pool` function supports automatic worker restart when workers crash or close unexpectedly. This feature includes an exponential backoff protocol to prevent restart loops.
2059
+
2060
+ #### Configuration:
2061
+ - `auto_restart`: `boolean | AutoRestartConfig` - Enable auto-restart (optional)
2062
+ - If `true`, uses default settings
2063
+ - If an object, allows customization of restart behavior
2064
+
2065
+ #### AutoRestartConfig
2066
+ - `backoff_max`: `number` - Maximum delay between restart attempts in milliseconds (default: `5 * 60 * 1000` = 5 minutes)
2067
+ - `backoff_grace`: `number` - Time in milliseconds a worker must run successfully before restart attempts are reset (default: `30000` = 30 seconds)
2068
+ - `max_attempts`: `number` - Maximum number of restart attempts before giving up (default: `5`, use `-1` for unlimited)
2069
+
2070
+ #### Backoff Protocol
2071
+ 1. Initial restart delay starts at 100ms
2072
+ 2. Each subsequent restart doubles the delay
2073
+ 3. Delay is capped at `backoff_max`
2074
+ 4. If a worker runs successfully for `backoff_grace` milliseconds, the delay and attempt counter reset
2075
+ 5. After `max_attempts` failures, auto-restart stops for that worker
2076
+
2077
+ **Example:**
2078
+ ```ts
2079
+ const pool = await worker_pool({
2080
+ worker: './worker.ts',
2081
+ auto_restart: {
2082
+ backoff_max: 5 * 60 * 1000, // cap at 5 minutes
2083
+ backoff_grace: 30000, // reset after 30 seconds of successful operation
2084
+ max_attempts: 5 // give up after 5 failed attempts
2085
+ }
2086
+ });
2087
+ ```
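The delay produced by the backoff protocol can be expressed as a one-liner. This is an illustrative sketch of the documented curve (start at 100ms, double per failed attempt, cap at `backoff_max`), not spooder's actual implementation:

```ts
// Illustrative sketch of the documented backoff curve: the delay starts
// at 100ms, doubles with each failed restart, and is capped at backoff_max.
function restart_delay(failed_attempts: number, backoff_max = 5 * 60 * 1000): number {
	return Math.min(100 * 2 ** failed_attempts, backoff_max);
}

restart_delay(0); // 100 (first restart)
restart_delay(4); // 1600
restart_delay(20, 300000); // 300000 (capped)
```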
2088
+
2089
+ #### Graceful Exit
2090
+
2091
+ Workers can exit gracefully without triggering an auto-restart by using the `WORKER_EXIT_NO_RESTART` exit code (42):
2092
+
2093
+ ```ts
2094
+ // worker thread
2095
+ import { WORKER_EXIT_NO_RESTART } from 'spooder';
2096
+ process.exit(WORKER_EXIT_NO_RESTART); // exits without auto-restart
1763
2097
  ```
1764
2098
 
1765
2099
  > [!IMPORTANT]
@@ -2176,538 +2510,160 @@ const full_hashes = await git_get_hashes(40);
2176
2510
 
2177
2511
 
2178
2512
  <a id="api-database"></a>
2179
- <a id="api-database-interface"></a>
2180
2513
  ## API > Database
2181
2514
 
2182
- ### 🔧 ``db_cast_set<T extends string>(set: string | null): Set<T>``
2515
+ Before `v6.0.0`, spooder provided a database API for `sqlite` and `mysql` while they were not available natively in `bun`.
2183
2516
 
2184
- Takes a database SET string and returns a `Set<T>` where `T` is a provided enum.
2517
+ Now that `bun` provides a native API for these, we've dropped our API in favor of those as it aligns with the mission of minimalism.
2185
2518
 
2186
- ```ts
2187
- enum ExampleRow {
2188
- OPT_A = 'OPT_A',
2189
- OPT_B = 'OPT_B',
2190
- OPT_C = 'OPT_C'
2191
- };
2519
+ You can see the documentation for the [Bun SQL API here.](https://bun.com/docs/runtime/sql)
2192
2520
 
2193
- const set = db_cast_set<ExampleRow>('OPT_A,OPT_B');
2194
- if (set.has(ExampleRow.OPT_B)) {
2195
- // ...
2196
- }
2197
- ```
2521
+ <a id="api-database-utilities"></a>
2522
+ ## API > Database > Utilities
2198
2523
 
2199
- ### 🔧 ``db_serialize_set<T extends string>(set: Set<T> | null): string``
2524
+ ### 🔧 ``db_set_cast<T extends string>(set: string | null): Set<T>``
2200
2525
 
2201
- Takes a `Set<T>` and returns a database SET string. If the set is empty or `null`, it returns an empty string.
2526
+ Takes a database SET string and returns a `Set<T>` where `T` is a provided enum.
2202
2527
 
2203
2528
  ```ts
2204
- enum ExampleRow {
2205
- OPT_A = 'OPT_A',
2206
- OPT_B = 'OPT_B',
2207
- OPT_C = 'OPT_C'
2529
+ enum Fruits {
2530
+ Apple = 'Apple',
2531
+ Banana = 'Banana',
2532
+ Lemon = 'Lemon'
2208
2533
  };
2209
2534
 
2210
- const set = new Set<ExampleRow>([ExampleRow.OPT_A, ExampleRow.OPT_B]);
2211
-
2212
- const serialized = db_serialize_set(set);
2213
- // > 'OPT_A,OPT_B'
2214
- ```
2215
-
2216
- <a id="api-database-interface-sqlite"></a>
2217
- ## API > Database > Interface > SQLite
2218
-
2219
- `spooder` provides a simple **SQLite** interface that acts as a wrapper around the Bun SQLite API. The construction parameters match the underlying API.
2220
-
2221
- ```ts
2222
- // see: https://bun.sh/docs/api/sqlite
2223
- const db = db_sqlite(':memory:', { create: true });
2224
- db.instance; // raw access to underlying sqlite instance.
2225
- ```
2226
-
2227
- ### Error Reporting
2228
-
2229
- In the event of an error from SQLite, an applicable value will be returned from interface functions, rather than the error being thrown.
2535
+ const [row] = await sql`SELECT * FROM some_table`;
2536
+ const set = db_set_cast<Fruits>(row.fruits);
2230
2537
 
2231
- ```ts
2232
- const result = await db.get_single('BROKEN QUERY');
2233
- if (result !== null) {
2234
- // do more stuff with result
2538
+ if (set.has(Fruits.Apple)) {
2539
+ // we have an apple in the set
2235
2540
  }
2236
2541
  ```
2237
2542
 
2238
- If you have configured the canary reporting feature in spooder, you can instruct the database interface to report errors using this feature with the `use_canary_reporting` parameter.
2239
-
2240
- ```ts
2241
- const db = db_sqlite(':memory', { ... }, true);
2242
- ```
2243
-
2244
- ### 🔧 ``db_sqlite.update_schema(schema_dir: string, schema_table: string): Promise<void>``
2245
-
2246
- `spooder` offers a database schema management system. The `update_schema()` function is a shortcut to call this on the underlying database.
2543
+ ### 🔧 ``db_set_serialize<T extends string>(set: Iterable<T> | null): string``
2247
2544
 
2248
- See [API > Database > Schema](#api-database-schema) for information on how schema updating works.
2545
+ Takes an `Iterable<T>` and returns a database SET string. If the set is empty or `null`, it returns an empty string.
2249
2546
 
2250
2547
  ```ts
2251
- // without interface
2252
- import { db_sqlite, db_update_schema_sqlite } from 'spooder';
2253
- const db = db_sqlite('./my_database.sqlite');
2254
- await db_update_schema_sqlite(db.instance, './schema');
2255
-
2256
- // with interface
2257
- import { db_sqlite } from 'spooder';
2258
- const db = db_sqlite('./my_database.sqlite');
2259
- await db.update_schema('./schema');
2260
- ```
2261
-
2262
- ### 🔧 ``db_sqlite.insert(sql: string, ...values: any): number``
2263
-
2264
- Executes a query and returns the `lastInsertRowid`. Returns `-1` in the event of an error or if `lastInsertRowid` is not provided.
2265
-
2266
- ```ts
2267
- const id = db.insert('INSERT INTO users (name) VALUES(?)', 'test');
2268
- ```
2269
-
2270
- ### 🔧 ``db_sqlite.insert_object(table: string, obj: Record<string, any>): number``
2271
-
2272
- Executes an insert query using object key/value mapping and returns the `lastInsertRowid`. Returns `-1` in the event of an error.
2273
-
2274
- ```ts
2275
- const id = db.insert_object('users', { name: 'John', email: 'john@example.com' });
2276
- ```
2277
-
2278
- ### 🔧 ``db_sqlite.execute(sql: string, ...values: any): number``
2279
-
2280
- Executes a query and returns the number of affected rows. Returns `-1` in the event of an error.
2281
-
2282
- ```ts
2283
- const affected = db.execute('UPDATE users SET name = ? WHERE id = ?', 'Jane', 1);
2284
- ```
2285
-
2286
- ### 🔧 ``db_sqlite.get_all<T>(sql: string, ...values: any): T[]``
2287
-
2288
- Returns the complete query result set as an array. Returns empty array if no rows found or if query fails.
2289
-
2290
- ```ts
2291
- const users = db.get_all<User>('SELECT * FROM users WHERE active = ?', true);
2292
- ```
2293
-
2294
- ### 🔧 ``db_sqlite.get_single<T>(sql: string, ...values: any): T | null``
2295
-
2296
- Returns the first row from a query result set. Returns `null` if no rows found or if query fails.
2297
-
2298
- ```ts
2299
- const user = db.get_single<User>('SELECT * FROM users WHERE id = ?', 1);
2300
- ```
2301
-
2302
- ### 🔧 ``db_sqlite.get_column<T>(sql: string, column: string, ...values: any): T[]``
2303
-
2304
- Returns the query result as a single column array. Returns empty array if no rows found or if query fails.
2305
-
2306
- ```ts
2307
- const names = db.get_column<string>('SELECT name FROM users', 'name');
2308
- ```
2309
-
2310
- ### 🔧 ``db_sqlite.get_paged<T>(sql: string, values?: any[], page_size?: number): AsyncGenerator<T[]>``
2311
-
2312
- Returns an async iterator that yields pages of database rows. Each page contains at most `page_size` rows (default 1000).
2313
-
2314
- ```ts
2315
- for await (const page of db.get_paged<User>('SELECT * FROM users', [], 100)) {
2316
- console.log(`Processing ${page.length} users`);
2317
- }
2318
- ```
2319
-
2320
- ### 🔧 ``db_sqlite.count(sql: string, ...values: any): number``
2321
-
2322
- Returns the value of `count` from a query. Returns `0` if query fails.
2323
-
2324
- ```ts
2325
- const user_count = db.count('SELECT COUNT(*) AS count FROM users WHERE active = ?', true);
2326
- ```
2327
-
2328
- ### 🔧 ``db_sqlite.count_table(table_name: string): number``
2329
-
2330
- Returns the total count of rows from a table. Returns `0` if query fails.
2331
-
2332
- ```ts
2333
- const total_users = db.count_table('users');
2334
- ```
2335
-
2336
- ### 🔧 ``db_sqlite.exists(sql: string, ...values: any): boolean``
2337
-
2338
- Returns `true` if the query returns any results. Returns `false` if no results found or if query fails.
2339
-
2340
- ```ts
2341
- const has_active_users = db.exists('SELECT 1 FROM users WHERE active = ? LIMIT 1', true);
2342
- ```
2343
-
2344
- ### 🔧 ``db_sqlite.transaction(scope: (transaction: SQLiteDatabaseInterface) => void | Promise<void>): boolean``
2345
-
2346
- Executes a callback function within a database transaction. The callback receives a transaction object with all the same database methods available. Returns `true` if the transaction was committed successfully, `false` if it was rolled back due to an error.
2347
-
2348
- ```ts
2349
- const success = db.transaction(async (tx) => {
2350
- const user_id = tx.insert('INSERT INTO users (name) VALUES (?)', 'John');
2351
- tx.insert('INSERT INTO user_profiles (user_id, bio) VALUES (?, ?)', user_id, 'Hello world');
2352
- });
2353
-
2354
- if (success) {
2355
- console.log('Transaction completed successfully');
2356
- } else {
2357
- console.log('Transaction was rolled back');
2358
- }
2359
- ```
2360
-
2361
- <a id="api-database-interface-mysql"></a>
2362
- ## API > Database > Interface > MySQL
2363
-
2364
- `spooder` provides a simple **MySQL** interface that acts as a wrapper around the `mysql2` API. The connection options match the underlying API.
2365
-
2366
- > [!IMPORTANT]
2367
- > MySQL requires the optional dependency `mysql2` to be installed - this is not automatically installed with spooder. This will be replaced when bun:sql supports MySQL natively.
2368
-
2369
- ```ts
2370
- // see: https://github.com/mysqljs/mysql#connection-options
2371
- const db = await db_mysql({
2372
- // ...
2373
- });
2374
- db.instance; // raw access to underlying mysql2 instance.
2375
- ```
2376
-
2377
- ### Error Reporting
2378
-
2379
- In the event of an error from MySQL, an applicable value will be returned from interface functions, rather than the error being thrown.
2380
-
2381
- ```ts
2382
- const result = await db.get_single('BROKEN QUERY');
2383
- if (result !== null) {
2384
- // do more stuff with result
2385
- }
2386
- ```
2387
-
2388
- If you have configured the canary reporting feature in spooder, you can instruct the database interface to report errors using this feature with the `use_canary_reporting` parameter.
2389
-
2390
- ```ts
2391
- const db = await db_mysql({ ... }, false, true);
2392
- ```
2393
-
2394
- ### Pooling
2395
-
2396
- MySQL supports connection pooling. This can be configured by providing `true` to the `pool` parameter.
2397
-
2398
- ```ts
2399
- const pool = await db_mysql({ ... }, true);
2400
- ```
2401
-
2402
- ### 🔧 ``db_mysql.update_schema(schema_dir: string, schema_table: string): Promise<void>``
2403
-
2404
- `spooder` offers a database schema management system. The `update_schema()` function is a shortcut to call this on the underlying database.
2405
-
2406
- See [API > Database > Schema](#api-database-schema) for information on how schema updating works.
2407
-
2408
- ```ts
2409
- // without interface
2410
- import { db_mysql, db_update_schema_mysql } from 'spooder';
2411
- const db = await db_mysql({ ... });
2412
- await db_update_schema_mysql(db.instance, './schema');
2413
-
2414
- // with interface
2415
- import { db_mysql } from 'spooder';
2416
- const db = await db_mysql({ ... });
2417
- await db.update_schema('./schema');
2418
- ```
2419
-
2420
- ### 🔧 ``db_mysql.insert(sql: string, ...values: any): Promise<number>``
2421
-
2422
- Executes a query and returns the `LAST_INSERT_ID`. Returns `-1` in the event of an error or if `LAST_INSERT_ID` is not provided.
2423
-
2424
- ```ts
2425
- const id = await db.insert('INSERT INTO tbl (name) VALUES(?)', 'test');
2426
- ```
2427
-
2428
- ### 🔧 ``db_mysql.insert_object(table: string, obj: Record<string, any>): Promise<number>``
2429
-
2430
- Executes an insert query using object key/value mapping and returns the `LAST_INSERT_ID`. Returns `-1` in the event of an error.
2431
-
2432
- ```ts
2433
- const id = await db.insert_object('users', { name: 'John', email: 'john@example.com' });
2434
- ```
2435
-
2436
- ### 🔧 ``db_mysql.execute(sql: string, ...values: any): Promise<number>``
2437
-
2438
- Executes a query and returns the number of affected rows. Returns `-1` in the event of an error.
2439
-
2440
- ```ts
2441
- const affected = await db.execute('UPDATE users SET name = ? WHERE id = ?', 'Jane', 1);
2442
- ```
2443
-
2444
- ### 🔧 ``db_mysql.get_all<T>(sql: string, ...values: any): Promise<T[]>``
2445
-
2446
- Returns the complete query result set as an array. Returns empty array if no rows found or if query fails.
2447
-
2448
- ```ts
2449
- const users = await db.get_all<User>('SELECT * FROM users WHERE active = ?', true);
2450
- ```
2451
-
2452
- ### 🔧 ``db_mysql.get_single<T>(sql: string, ...values: any): Promise<T | null>``
2453
-
2454
- Returns the first row from a query result set. Returns `null` if no rows found or if query fails.
2455
-
2456
- ```ts
2457
- const user = await db.get_single<User>('SELECT * FROM users WHERE id = ?', 1);
2458
- ```
2459
-
2460
- ### 🔧 ``db_mysql.get_column<T>(sql: string, column: string, ...values: any): Promise<T[]>``
2461
-
2462
- Returns the query result as a single column array. Returns empty array if no rows found or if query fails.
2463
-
2464
- ```ts
2465
- const names = await db.get_column<string>('SELECT name FROM users', 'name');
2466
- ```
2467
-
2468
- ### 🔧 ``db_mysql.call<T>(func_name: string, ...args: any): Promise<T[]>``
2469
-
2470
- Calls a stored procedure and returns the result set as an array. Returns empty array if no rows found or if query fails.
2471
-
2472
- ```ts
2473
- const results = await db.call<User>('get_active_users', true, 10);
2474
- ```
2475
-
2476
- ### 🔧 ``db_mysql.get_paged<T>(sql: string, values?: any[], page_size?: number): AsyncGenerator<T[]>``
2477
-
2478
- Returns an async iterator that yields pages of database rows. Each page contains at most `page_size` rows (default 1000).
2479
-
2480
- ```ts
2481
- for await (const page of db.get_paged<User>('SELECT * FROM users', [], 100)) {
2482
- console.log(`Processing ${page.length} users`);
2483
- }
2484
- ```
2485
-
2486
- ### 🔧 ``db_mysql.count(sql: string, ...values: any): Promise<number>``
2487
-
2488
- Returns the value of `count` from a query. Returns `0` if query fails.
2489
-
2490
- ```ts
2491
- const user_count = await db.count('SELECT COUNT(*) AS count FROM users WHERE active = ?', true);
2492
- ```
2548
+ enum Fruits {
2549
+ Apple = 'Apple',
2550
+ Banana = 'Banana',
2551
+ Lemon = 'Lemon'
2552
+ };
2493
2553
 
2494
- ### 🔧 ``db_mysql.count_table(table_name: string): Promise<number>``
2554
+ // edit existing set
2555
+ const [row] = await sql`SELECT * FROM some_table`;
2556
+ const fruits = db_set_cast<Fruits>(row.fruits);
2495
2557
 
2496
- Returns the total count of rows from a table. Returns `0` if query fails.
2558
+ if (!fruits.has(Fruits.Lemon))
2559
+ fruits.add(Fruits.Lemon);
2497
2560
 
2498
- ```ts
2499
- const total_users = await db.count_table('users');
2500
- ```
2561
+ await sql`UPDATE some_table SET fruits = ${sql(db_set_serialize(fruits))} WHERE id = ${row.id}`;
2501
2562
 
2502
- ### 🔧 ``db_mysql.exists(sql: string, ...values: any): Promise<boolean>``
2503
-
2504
- Returns `true` if the query returns any results. Returns `false` if no results found or if query fails.
2505
-
2506
- ```ts
2507
- const has_active_users = await db.exists('SELECT 1 FROM users WHERE active = ? LIMIT 1', true);
2508
- ```
2509
-
2510
- ### 🔧 ``db_mysql.transaction(scope: (transaction: MySQLDatabaseInterface) => void | Promise<void>): Promise<boolean>``
2511
-
2512
- Executes a callback function within a database transaction. The callback receives a transaction object with all the same database methods available. Returns `true` if the transaction was committed successfully, `false` if it was rolled back due to an error.
2513
-
2514
- ```ts
2515
- const success = await db.transaction(async (tx) => {
2516
- const user_id = await tx.insert('INSERT INTO users (name) VALUES (?)', 'John');
2517
- await tx.insert('INSERT INTO user_profiles (user_id, bio) VALUES (?, ?)', user_id, 'Hello world');
2518
- });
2519
-
2520
- if (success) {
2521
- console.log('Transaction completed successfully');
2522
- } else {
2523
- console.log('Transaction was rolled back');
2524
- }
2563
+ // new set from iterable
2564
+ await sql`UPDATE some_table SET fruits = ${sql(db_set_serialize([Fruits.Apple, Fruits.Lemon]))}`;
2525
2565
  ```
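Both helpers round-trip MySQL `SET` columns, which surface as comma-delimited strings. A minimal sketch of equivalent logic (hypothetical `set_cast`/`set_serialize` names, not spooder's implementation):

```ts
// MySQL SET columns arrive as comma-delimited strings ('' when empty).
function set_cast<T extends string>(set: string | null): Set<T> {
	return new Set((set ? set.split(',') : []) as T[]);
}

function set_serialize<T extends string>(set: Iterable<T> | null): string {
	return set === null ? '' : Array.from(set).join(',');
}

const fruits = set_cast<'Apple' | 'Banana' | 'Lemon'>('Apple,Lemon');
fruits.has('Apple'); // true
set_serialize(fruits); // 'Apple,Lemon'
set_serialize(null); // ''
```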
2526
2566
 
2527
2567
  <a id="api-database-schema"></a>
2528
2568
  ## API > Database > Schema
2529
2569
 
2530
- `spooder` provides a straightforward API to manage database schema in revisions through source control.
2531
-
2532
- ```ts
2533
- // sqlite
2534
- db_update_schema_sqlite(db: Database, schema_dir: string, schema_table?: string): Promise<void>;
2535
-
2536
- // mysql
2537
- db_update_schema_mysql(db: Connection, schema_dir: string, schema_table?: string): Promise<void>;
2538
- ```
2539
-
2540
- ```ts
2541
- // sqlite example
2542
- import { db_update_schema_sqlite } from 'spooder';
2543
- import { Database } from 'bun:sqlite';
2570
+ ### 🔧 ``db_schema(db: SQL, schema_path: string, options?: SchemaOptions): Promise<boolean>``
2544
2571
 
2545
- const db = new Database('./database.sqlite');
2546
- await db_update_schema_sqlite(db, './schema');
2547
- ```
2572
+ `db_schema` executes all revisioned `.sql` files in a given directory, applying them to the database incrementally.
2548
2573
 
2549
2574
  ```ts
2550
- // mysql example
2551
- import { db_update_schema_mysql } from 'spooder';
2552
- import mysql from 'mysql2';
2553
-
2554
- const db = await mysql.createConnection({
2555
- // connection options
2556
- // see https://github.com/mysqljs/mysql#connection-options
2557
- });
2558
- await db_update_schema_mysql(db, './schema');
2575
+ const db = new SQL('db:pw@localhost:3306/test');
2576
+ await db_schema(db, './db/revisions');
2559
2577
  ```
2560
2578
 
2561
- > [!IMPORTANT]
2562
- > MySQL requires the optional dependency `mysql2` to be installed - this is not automatically installed with spooder. This will be replaced when bun:sql supports MySQL natively.
2563
-
2564
- ### Interface API
2565
-
2566
- If you are already using the [database interface API](#api-database-interface) provided by `spooder`, you can call `update_schema()` directly on the interface.
2579
+ The above example will **recursively** search the `./db/revisions` directory for all `.sql` files that begin with a positive numeric identifier.
2567
2580
 
2568
2581
  ```ts
2569
- const db = await db_mysql({ ... });
2570
- await db.update_schema('./schema');
2582
+ db/revisions/000_invalid.sql // no: 0 is not valid
2583
+ db/revisions/001_valid.sql // yes: revision 1
2584
+ db/revisions/25-valid.sql // yes: revision 25
2585
+ db/revisions/005_not.txt // no: .sql extension missing
2586
+ db/revisions/invalid_500.sql // no: must begin with a revision number
2571
2587
  ```
2572
2588
 
2573
- ### Schema Files
2589
+ Revisions are applied in **numerical order**, rather than the file sorting order from the operating system. Invalid files are **skipped** without throwing an error.
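The selection and ordering rules can be sketched as follows. The exact pattern spooder uses is not documented here, so the regex below is an assumption that merely reproduces the listed examples:

```ts
// A file qualifies if its name starts with a positive integer and has a
// .sql extension; everything else is skipped. (Assumed rule, matching the
// examples above.)
function revision_of(filename: string): number | null {
	const match = filename.match(/^(\d+).*\.sql$/i);
	if (match === null) return null;
	const rev = parseInt(match[1], 10);
	return rev > 0 ? rev : null;
}

const files = ['25-valid.sql', '001_valid.sql', '000_invalid.sql', '005_not.txt', 'invalid_500.sql'];
const ordered = files
	.filter(file => revision_of(file) !== null)
	.sort((a, b) => revision_of(a)! - revision_of(b)!);

console.log(ordered); // ['001_valid.sql', '25-valid.sql']
```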
2574
2590
 
2575
- The schema directory is expected to contain an SQL file for each table in the database with the file name matching the name of the table.
2576
-
2577
- > [!NOTE]
2578
- > The schema directory is searched recursively and files without the `.sql` extension (case-insensitive) will be ignored.
2579
-
2580
- ```
2581
- - database.sqlite
2582
- - schema/
2583
- - users.sql
2584
- - posts.sql
2585
- - comments.sql
2586
- ```
2591
+ By default, schema revisions are tracked in a table called `db_schema`. The name of this table can be customized with the `.schema_table` option.
2587
2592
 
2588
2593
  ```ts
2589
- import { db_update_schema_sqlite } from 'spooder';
2590
- import { Database } from 'bun:sqlite';
2591
-
2592
- const db = new Database('./database.sqlite');
2593
- await db_update_schema_sqlite(db, './schema');
2594
+ await db_schema(db, './db/revisions', { schema_table: 'alt_table_name' });
2594
2595
  ```
2595
2596
 
2596
- Each of the SQL files should contain all of the revisions for the table, with the first revision being table creation and subsequent revisions being table modifications.
2597
-
2598
- ```sql
2599
- -- [1] Table creation.
2600
- CREATE TABLE users (
2601
- id INTEGER PRIMARY KEY,
2602
- username TEXT NOT NULL,
2603
- password TEXT NOT NULL
2604
- );
2605
-
2606
- -- [2] Add email column.
2607
- ALTER TABLE users ADD COLUMN email TEXT;
2597
+ The revision folder is enumerated recursively by default. This can be disabled by passing `false` to `.recursive`, which will only scan the top level of the specified directory.
2608
2598
 
2609
- -- [3] Cleanup invalid usernames.
2610
- DELETE FROM users WHERE username = 'admin';
2611
- DELETE FROM users WHERE username = 'root';
2599
+ ```ts
2600
+ await db_schema(db, './db/revisions', { recursive: false });
2612
2601
  ```
2613
2602
 
2614
- Each revision should be clearly marked with a comment containing the revision number in square brackets. Anything proceeding the revision number is treated as a comment and ignored.
2615
-
2616
- >[!NOTE]
2617
- > The exact revision header syntax is `^--\s*\[(\d+)\]`.
2618
-
2619
- Everything following a revision header is considered part of that revision until the next revision header or the end of the file, allowing for multiple SQL statements to be included in a single revision.
2603
+ Each revision file is executed within a transaction; if an error occurs, that transaction is rolled back. Revision files that completed **before** the error are **not** rolled back, and revision files **after** it are **not** executed.
2620
2604
 
2621
- When calling `db_update_schema_*`, unapplied revisions will be applied in ascending order (regardless of order within the file) until the schema is up-to-date.
2605
+ > [!CAUTION]
2606
+ > Statements that cause an implicit commit, such as DDL statements, cannot be rolled back inside a transaction.
2607
+ >
2608
+ > It is recommended to include only one implicit-commit statement per revision file. If a revision contains several and an error occurs, the rollback will not undo statements that were already implicitly committed, leaving your database in a partial state.
2609
+ >
2610
+ > See [MySQL 8.4 Reference Manual // 15.3.3 Statements That Cause an Implicit Commit](https://dev.mysql.com/doc/refman/8.4/en/implicit-commit.html) for more information.
2622
2611
 
2623
- It is acceptable to omit keys. This can be useful to prevent repitition when managing stored procedures, views or functions.
2624
2612
 
2625
- ```sql
2626
- -- example of repetitive declaration
2627
-
2628
- -- [1] create view
2629
- CREATE VIEW `view_test` AS SELECT * FROM `table_a` WHERE col = 'foo';
2613
+ ```ts
2614
+ type SchemaOptions = {
2615
+ schema_table: string;
2616
+ recursive: boolean;
2617
+ };
2630
2618
 
2631
- -- [2] change view
2632
- DROP VIEW IF EXISTS `view_test`;
2633
- CREATE VIEW `view_test` AS SELECT * FROM `table_b` WHERE col = 'foo';
2619
+ db_get_schema_revision(db: SQL): Promise<number|null>;
2620
+ db_schema(db: SQL, schema_path: string, options?: SchemaOptions): Promise<boolean>;
2634
2621
  ```
2635
- Instead of unnecessarily including each full revision of a procedure, view or function in the schema file, simply store the most up-to-date one and increment the version.
2636
- ```sql
2637
- -- [2] create view
2638
- CREATE OR REPLACE VIEW `view_test` AS SELECT * FROM `table_b` WHERE col = 'foo';
2639
- ```
2640
-
2641
2622
 
2642
- Schema revisions are tracked in a table called `db_schema` which is created automatically if it does not exist with the following schema.
2623
+ <a id="api-utilities"></a>
2624
+ ## API > Utilities
2643
2625
 
2644
- ```sql
2645
- CREATE TABLE db_schema (
2646
- db_schema_table_name TEXT PRIMARY KEY,
2647
- db_schema_version INTEGER
2648
- );
2649
- ```
2626
+ ### 🔧 ``filesize(bytes: number): string``
2650
2627
 
2651
- The table used for schema tracking can be changed if necessary by providing an alternative table name as the third paramater.
2628
+ Returns a human-readable string representation of a file size in bytes.
2652
2629
 
2653
2630
  ```ts
2654
- await db_update_schema_sqlite(db, './schema', 'my_schema_table');
2631
+ filesize(512); // > "512 bytes"
2632
+ filesize(1024); // > "1 kb"
2633
+ filesize(1048576); // > "1 mb"
2634
+ filesize(1073741824); // > "1 gb"
2635
+ filesize(1099511627776); // > "1 tb"
2655
2636
  ```
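The conversion behaves roughly like the sketch below. This is an approximation that reproduces the documented outputs; the real helper may format fractional sizes differently:

```ts
// Approximation of filesize(): divide by 1024 until the value drops
// below the next unit boundary. (Assumed internals.)
function filesize(bytes: number): string {
	if (bytes < 1024) return `${bytes} bytes`;
	const units = ['kb', 'mb', 'gb', 'tb'];
	let unit = -1;
	do {
		bytes /= 1024;
		unit++;
	} while (bytes >= 1024 && unit < units.length - 1);
	return `${bytes} ${units[unit]}`;
}
```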
2656
2637
 
2657
- >[!IMPORTANT]
2658
- > The entire process is transactional. If an error occurs during the application of **any** revision for **any** table, the entire process will be rolled back and the database will be left in the state it was before the update was attempted.
2638
+ ### 🔧 ``BiMap<K, V>``
2659
2639
 
2660
- >[!IMPORTANT]
2661
- > `db_update_schema_*` will throw an error if the revisions cannot be parsed or applied for any reason. It is important you catch and handle appropriately.
2640
+ A bidirectional map that maintains a two-way relationship between keys and values, allowing efficient lookups in both directions.
2662
2641
 
2663
2642
  ```ts
2664
- try {
2665
- const db = new Database('./database.sqlite');
2666
- await db_update_schema_sqlite(db, './schema');
2667
- } catch (e) {
2668
- // panic (crash) or gracefully continue, etc.
2669
- await panic(e);
2670
- }
2671
- ```
2672
-
2673
- ### Schema Dependencies
2674
- By default, schema files are executed in the order they are provided by the operating system (generally alphabetically). Individual revisions within files are always executed in ascending order.
2675
-
2676
- If a specific revision depends on one or more other schema files to be executed before it (for example, when adding foreign keys), you can specify dependencies at the revision level.
2677
-
2678
- ```sql
2679
- -- [1] create table_a (no dependencies)
2680
- CREATE TABLE table_a (
2681
- id INTEGER PRIMARY KEY,
2682
- name TEXT NOT NULL
2683
- );
2684
-
2685
- -- [2] add foreign key to table_b
2686
- -- [deps] table_b_schema.sql
2687
- ALTER TABLE table_a ADD COLUMN table_b_id INTEGER REFERENCES table_b(id);
2688
- ```
2689
-
2690
- When a revision specifies dependencies, all revisions of the dependent schema files will be executed before that specific revision runs. This allows you to create tables independently and then add dependencies in later revisions.
2643
+ const users = new BiMap<number, string>();
2691
2644
 
2692
- >[!IMPORTANT]
2693
- > Dependencies are specified per-revision, not per-file. A `-- [deps]` line applies only to the revision it appears in.
2645
+ // Set key-value pairs
2646
+ users.set(1, "Alice");
2647
+ users.set(2, "Bob");
2648
+ users.set(3, "Charlie");
2694
2649
 
2695
- >[!IMPORTANT]
2696
- > Cyclic or missing dependencies will throw an error.
2650
+ // Lookup by key
2651
+ users.getByKey(1); // > "Alice"
2697
2652
 
2698
- <a id="api-utilities"></a>
2699
- ## API > Utilities
2653
+ // Lookup by value
2654
+ users.getByValue("Bob"); // > 2
2700
2655
 
2701
- ### 🔧 ``filesize(bytes: number): string``
2656
+ // Check existence
2657
+ users.hasKey(1); // > true
2658
+ users.hasValue("Charlie"); // > true
2702
2659
 
2703
- Returns a human-readable string representation of a file size in bytes.
2660
+ // Delete by key or value
2661
+ users.deleteByKey(1); // > true
2662
+ users.deleteByValue("Bob"); // > true
2704
2663
 
2705
- ```ts
2706
- filesize(512); // > "512 bytes"
2707
- filesize(1024); // > "1 kb"
2708
- filesize(1048576); // > "1 mb"
2709
- filesize(1073741824); // > "1 gb"
2710
- filesize(1099511627776); // > "1 tb"
2664
+ // Other operations
2665
+ users.size; // > 1
2666
+ users.clear();
2711
2667
  ```
2712
2668
 
2713
2669
  ## Legal