pgsql-seed 0.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,734 @@
# pgsql-test

<p align="center" width="100%">
  <img height="250" src="https://raw.githubusercontent.com/constructive-io/constructive/refs/heads/main/assets/outline-logo.svg" />
</p>

<p align="center" width="100%">
  <a href="https://github.com/constructive-io/constructive/actions/workflows/run-tests.yaml">
    <img height="20" src="https://github.com/constructive-io/constructive/actions/workflows/run-tests.yaml/badge.svg" />
  </a>
  <a href="https://github.com/constructive-io/constructive/blob/main/LICENSE">
    <img height="20" src="https://img.shields.io/badge/license-MIT-blue.svg"/>
  </a>
  <a href="https://www.npmjs.com/package/pgsql-test">
    <img height="20" src="https://img.shields.io/github/package-json/v/constructive-io/constructive?filename=postgres%2Fpgsql-test%2Fpackage.json"/>
  </a>
</p>

`pgsql-test` gives you instant, isolated PostgreSQL databases for each test — with automatic transaction rollbacks, context switching, and clean seeding. Forget flaky tests and brittle environments. Write real SQL. Get real coverage. Stay fast.

## Install

```sh
npm install pgsql-test
```

## Features

* ⚡ **Instant test DBs** — each one seeded, isolated, and UUID-named
* 🔄 **Per-test rollback** — every test runs in its own transaction or savepoint
* 🛡️ **RLS-friendly** — test with role-based auth via `.setContext()`
* 🌱 **Flexible seeding** — run `.sql` files, programmatic seeds, or even load fixtures
* 🧪 **Compatible with any async runner** — works with `Jest`, `Mocha`, etc.
* 🧹 **Auto teardown** — no residue, no reboots, just clean exits

### Tutorials

📚 **[Learn how to test PG with pgsql-test →](https://constructive.io/learn/e2e-postgres-testing)**

### Using with Supabase

If you're writing tests for Supabase, check out [`supabase-test`](https://www.npmjs.com/package/supabase-test) for Supabase-optimized defaults.

### pgpm migrations

Part of the [pgpm](https://pgpm.io) ecosystem, `pgsql-test` is built to pair seamlessly with our TypeScript-based package manager and migration tool. `pgpm` gives you modular Postgres packages, deterministic plans, and tag-aware releases—perfect for authoring the migrations that `pgsql-test` runs.

## Table of Contents

1. [Install](#install)
2. [Features](#features)
3. [Quick Start](#-quick-start)
4. [`getConnections()` Overview](#getconnections-overview)
5. [PgTestClient API Overview](#pgtestclient-api-overview)
6. [Usage Examples](#usage-examples)
   * [Basic Setup](#-basic-setup)
   * [Role-Based Context](#-role-based-context)
   * [Seeding System](#-seeding-system)
     * [SQL File Seeding](#-sql-file-seeding)
     * [Programmatic Seeding](#-programmatic-seeding)
     * [CSV Seeding](#️-csv-seeding)
     * [JSON Seeding](#️-json-seeding)
     * [pgpm Seeding](#-pgpm-seeding)
7. [`getConnections` Options](#getconnections-options)
8. [Disclaimer](#disclaimer)

## ✨ Quick Start

```ts
import { getConnections } from 'pgsql-test';

let db, teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections());
  await db.query(`SELECT 1`); // ✅ Ready to run queries
});

afterAll(() => teardown());
```

## `getConnections()` Overview

```ts
import { getConnections } from 'pgsql-test';

// Complete object destructuring
const { pg, db, admin, teardown, manager } = await getConnections();

// Most common pattern
const { db, teardown } = await getConnections();
```

The `getConnections()` helper sets up a fresh PostgreSQL test database and returns a structured object with:

* `pg`: a `PgTestClient` connected as the root or superuser — useful for administrative setup or introspection
* `db`: a `PgTestClient` connected as the app-level user — used for running tests with RLS and granted permissions
* `admin`: a `DbAdmin` utility for managing database state, extensions, roles, and templates
* `teardown()`: a function that shuts down the test environment and database pool
* `manager`: a shared connection pool manager (`PgTestConnector`) behind both clients

Together, these allow fast, isolated, role-aware test environments with per-test rollback and full control over setup and teardown.

The `PgTestClient` returned by `getConnections()` is a fully-featured wrapper around `pg.Pool`. It provides:

* Automatic transaction and savepoint management for test isolation
* Easy switching of role-based contexts for RLS testing
* A clean, high-level API for integration testing PostgreSQL systems

## `PgTestClient` API Overview

```ts
let pg: PgTestClient;
let teardown: () => Promise<void>;

beforeAll(async () => {
  ({ pg, teardown } = await getConnections());
});

beforeEach(() => pg.beforeEach());
afterEach(() => pg.afterEach());
afterAll(() => teardown());
```

The `PgTestClient` returned by `getConnections()` wraps a `pg.Pool` and provides convenient helpers for query execution, test isolation, and context switching.

### Common Methods

* `query(sql, values?)` – Run a raw SQL query and get the `QueryResult`
* `beforeEach()` – Begins a transaction and sets a savepoint (called at the start of each test)
* `afterEach()` – Rolls back to the savepoint and commits the outer transaction (cleans up test state)
* `setContext({ key: value })` – Sets PostgreSQL config variables (like `role`) to simulate RLS contexts
* `any`, `one`, `oneOrNone`, `many`, `manyOrNone`, `none`, `result` – Typed query helpers for specific result expectations

These methods make it easier to build expressive and isolated integration tests with strong typing and error handling.
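The typed helpers differ only in how many rows they accept. Here is a minimal sketch of the assumed row-count contracts (in the style of pg-promise result helpers); this is a hypothetical illustration, not the library's source:

```typescript
// Illustrative sketch: assumed row-count contracts of the typed helpers.
// In the real client these wrap a query; here they just check a rows array.
type Row = Record<string, unknown>;

function one(rows: Row[]): Row {
  // exactly one row, otherwise throw
  if (rows.length !== 1) throw new Error(`expected 1 row, got ${rows.length}`);
  return rows[0];
}

function oneOrNone(rows: Row[]): Row | null {
  // zero or one row
  if (rows.length > 1) throw new Error(`expected 0..1 rows, got ${rows.length}`);
  return rows[0] ?? null;
}

function many(rows: Row[]): Row[] {
  // at least one row
  if (rows.length === 0) throw new Error('expected at least 1 row');
  return rows;
}

function none(rows: Row[]): null {
  // no rows at all
  if (rows.length !== 0) throw new Error(`expected 0 rows, got ${rows.length}`);
  return null;
}

// `any` / `manyOrNone` accept any row count and return the rows unchanged.
```

Under these assumed semantics, `db.one('SELECT * FROM users WHERE id = $1', [1])` would reject a test where the row is missing, while `db.oneOrNone(...)` would resolve to `null`.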
## Usage Examples

### ⚡ Basic Setup

```ts
import { getConnections } from 'pgsql-test';

let db; // A fully wrapped PgTestClient using pg.Pool with savepoint-based rollback per test
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections());

  await db.query(`
    CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id SERIAL PRIMARY KEY, user_id INT REFERENCES users(id), content TEXT);

    INSERT INTO users (name) VALUES ('Alice'), ('Bob');
    INSERT INTO posts (user_id, content) VALUES (1, 'Hello world!'), (2, 'Graphile is cool!');
  `);
});

afterAll(() => teardown());

beforeEach(() => db.beforeEach());
afterEach(() => db.afterEach());

test('user count starts at 2', async () => {
  const res = await db.query('SELECT COUNT(*) FROM users');
  expect(res.rows[0].count).toBe('2');
});
```

### 🔐 Role-Based Context

The `pgsql-test` framework provides powerful tools to simulate authentication contexts during tests, which is particularly useful when testing Row-Level Security (RLS) policies.

#### Setting Test Context

Use `setContext()` to simulate different user roles and JWT claims:

```ts
db.setContext({
  role: 'authenticated',
  'jwt.claims.user_id': '123',
  'jwt.claims.org_id': 'acme'
});
```

This applies the settings using `SET LOCAL` statements, ensuring they persist only for the current transaction and maintain proper isolation between tests.
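Under the hood, the call above corresponds roughly to session-local statements of this shape (illustrative):

```sql
SET LOCAL role = 'authenticated';
SET LOCAL "jwt.claims.user_id" = '123';
SET LOCAL "jwt.claims.org_id" = 'acme';
```

Because `SET LOCAL` only lasts until the end of the current transaction, rolling back in `afterEach()` also discards the context.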

#### Testing Role-Based Access

```ts
describe('authenticated role', () => {
  beforeEach(async () => {
    db.setContext({ role: 'authenticated' });
    await db.beforeEach();
  });

  afterEach(() => db.afterEach());

  it('runs as authenticated', async () => {
    const res = await db.query(`SELECT current_setting('role', true) AS role`);
    expect(res.rows[0].role).toBe('authenticated');
  });
});
```

#### Database Connection Options

For non-superuser testing, use the connection options described in the [options](#getconnections-options) section. The `db.connection` property allows you to customize the non-privileged user account for your tests.

#### Common Testing Scenarios

This approach enables testing various access patterns:

* Authenticated vs. anonymous user access
* Per-user data filtering
* Admin privilege bypass behavior
* Custom claim-based restrictions (organization membership, admin status)

> **Note:** While this interface helps simulate RBAC for testing, your production server should manage user/role claims via secure authentication tokens, typically by setting values like `current_setting('jwt.claims.user_id')` through proper authentication middleware.

### 🌱 Seeding System

The second argument to `getConnections()` is an optional array of `SeedAdapter` objects:

```ts
const { db, teardown } = await getConnections(getConnectionOptions, seedAdapters);
```

This array lets you fully customize how your test database is seeded. You can compose multiple strategies:

* [`seed.sqlfile()`](#-sql-file-seeding) – Execute raw `.sql` files from disk
* [`seed.fn()`](#-programmatic-seeding) – Run JavaScript/TypeScript logic to programmatically insert data
* [`seed.csv()`](#️-csv-seeding) – Load tabular data from CSV files
* [`seed.json()`](#️-json-seeding) – Use in-memory objects as seed data
* [`seed.loadPgpm()`](#-pgpm-seeding) – Apply a pgpm project or set of packages (compatible with Sqitch)

> ✨ **Default Behavior:** If no `SeedAdapter[]` is passed, pgpm seeding is assumed. This makes `pgsql-test` zero-config for pgpm-based projects.

This composable system allows you to mix-and-match data setup strategies for flexible, realistic, and fast database tests.

#### Two Seeding Patterns

You can seed data using either approach:

**1. Adapter Pattern** (setup phase via `getConnections`)
```ts
const { db, teardown } = await getConnections({}, [
  seed.json({ 'users': [{ id: 1, name: 'Alice' }] })
]);
```

**2. Direct Load Methods** (runtime via `PgTestClient`)
```ts
await db.loadJson({ 'users': [{ id: 1, name: 'Alice' }] });
await db.loadCsv({ 'users': '/path/to/users.csv' });
await db.loadSql(['/path/to/schema.sql']);
```

> **Note:** `loadCsv()` and `loadPgpm()` do not apply RLS context (PostgreSQL limitation). Use `loadJson()` or `loadSql()` for RLS-aware seeding.

### 🔌 SQL File Seeding

**Adapter Pattern:**
```ts
const { db, teardown } = await getConnections({}, [
  seed.sqlfile(['schema.sql', 'fixtures.sql'])
]);
```

**Direct Load Method:**
```ts
await db.loadSql(['schema.sql', 'fixtures.sql']);
```

<details>
<summary>Full example</summary>

```ts
import path from 'path';
import { getConnections, seed } from 'pgsql-test';

const sql = (f: string) => path.join(__dirname, 'sql', f);

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    seed.sqlfile([
      sql('schema.sql'),
      sql('fixtures.sql')
    ])
  ]));
});

afterAll(async () => {
  await teardown();
});
```

</details>

### 🧠 Programmatic Seeding

**Adapter Pattern:**
```ts
const { db, teardown } = await getConnections({}, [
  seed.fn(async ({ pg }) => {
    await pg.query(`INSERT INTO users (name) VALUES ('Seeded User')`);
  })
]);
```

**Direct Load Method:**
```ts
// Use any PgTestClient method directly
await db.query(`INSERT INTO users (name) VALUES ('Seeded User')`);
```

<details>
<summary>Full example</summary>

```ts
import { getConnections, seed } from 'pgsql-test';

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    seed.fn(async ({ pg }) => {
      await pg.query(`
        INSERT INTO users (name) VALUES ('Seeded User');
      `);
    })
  ]));
});

afterAll(() => teardown());
```

</details>

### 🗃️ CSV Seeding

**Adapter Pattern:**
```ts
const { db, teardown } = await getConnections({}, [
  seed.csv({
    'users': '/path/to/users.csv',
    'posts': '/path/to/posts.csv'
  })
]);
```

**Direct Load Method:**
```ts
await db.loadCsv({
  'users': '/path/to/users.csv',
  'posts': '/path/to/posts.csv'
});
```

> **Note:** CSV loading uses PostgreSQL `COPY`, which does not support RLS context.

<details>
<summary>Full example</summary>

You can load tables from CSV files using `seed.csv({ ... })`. CSV headers must match the table column names exactly. This is useful for loading stable fixture data for integration tests or CI environments.
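For instance, a `users.csv` fixture matching the schema in the example below might look like this (illustrative data):

```csv
id,name
1,Alice
2,Bob
```

The header row (`id,name`) must match the column names of the target table exactly.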

```ts
import path from 'path';
import { getConnections, seed } from 'pgsql-test';

const csv = (file: string) => path.resolve(__dirname, '../csv', file);

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    // Create schema
    seed.fn(async ({ pg }) => {
      await pg.query(`
        CREATE TABLE users (
          id SERIAL PRIMARY KEY,
          name TEXT NOT NULL
        );

        CREATE TABLE posts (
          id SERIAL PRIMARY KEY,
          user_id INT REFERENCES users(id),
          content TEXT NOT NULL
        );
      `);
    }),
    // Load from CSV
    seed.csv({
      users: csv('users.csv'),
      posts: csv('posts.csv')
    }),
    // Adjust SERIAL sequences to avoid conflicts
    seed.fn(async ({ pg }) => {
      await pg.query(`SELECT setval(pg_get_serial_sequence('users', 'id'), (SELECT MAX(id) FROM users));`);
      await pg.query(`SELECT setval(pg_get_serial_sequence('posts', 'id'), (SELECT MAX(id) FROM posts));`);
    })
  ]));
});

afterAll(() => teardown());

it('has loaded rows', async () => {
  const res = await db.query('SELECT COUNT(*) FROM users');
  expect(+res.rows[0].count).toBeGreaterThan(0);
});
```

</details>

### 🗃️ JSON Seeding

**Adapter Pattern:**
```ts
const { db, teardown } = await getConnections({}, [
  seed.json({
    'custom.users': [
      { id: 1, name: 'Alice' },
      { id: 2, name: 'Bob' }
    ]
  })
]);
```

**Direct Load Method:**
```ts
await db.loadJson({
  'custom.users': [
    { id: 1, name: 'Alice' },
    { id: 2, name: 'Bob' }
  ]
});
```

<details>
<summary>Full example</summary>

You can seed tables using in-memory JSON objects. This is useful when you want fast, inline fixtures without managing external files.

```ts
import { getConnections, seed } from 'pgsql-test';

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    // Create schema
    seed.fn(async ({ pg }) => {
      await pg.query(`
        CREATE SCHEMA custom;
        CREATE TABLE custom.users (
          id SERIAL PRIMARY KEY,
          name TEXT NOT NULL
        );

        CREATE TABLE custom.posts (
          id SERIAL PRIMARY KEY,
          user_id INT REFERENCES custom.users(id),
          content TEXT NOT NULL
        );
      `);
    }),
    // Seed with in-memory JSON
    seed.json({
      'custom.users': [
        { id: 1, name: 'Alice' },
        { id: 2, name: 'Bob' }
      ],
      'custom.posts': [
        { id: 1, user_id: 1, content: 'Hello world!' },
        { id: 2, user_id: 2, content: 'Graphile is cool!' }
      ]
    }),
    // Fix SERIAL sequences
    seed.fn(async ({ pg }) => {
      await pg.query(`SELECT setval(pg_get_serial_sequence('custom.users', 'id'), (SELECT MAX(id) FROM custom.users));`);
      await pg.query(`SELECT setval(pg_get_serial_sequence('custom.posts', 'id'), (SELECT MAX(id) FROM custom.posts));`);
    })
  ]));
});

afterAll(() => teardown());

it('has loaded rows', async () => {
  const res = await db.query('SELECT COUNT(*) FROM custom.users');
  expect(+res.rows[0].count).toBeGreaterThan(0);
});
```

</details>

### 🚀 pgpm Seeding

**Zero Configuration (Default):**
```ts
// pgpm migrate is used automatically
const { db, teardown } = await getConnections();
```

**Adapter Pattern (Custom Path):**
```ts
const { db, teardown } = await getConnections({}, [
  seed.loadPgpm('/path/to/pgpm-workspace', true) // with cache
]);
```

**Direct Load Method:**
```ts
await db.loadPgpm('/path/to/pgpm-workspace', true); // with cache
```

> **Note:** pgpm deployment has its own client handling and does not apply RLS context.

<details>
<summary>Full example</summary>

If your project uses pgpm modules with a precompiled `pgpm.plan`, you can use `pgsql-test` with **zero configuration**. Just call `getConnections()` — and it *just works*:

```ts
import { getConnections } from 'pgsql-test';

let db, teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections()); // pgpm module is deployed automatically
});
```

pgpm uses Sqitch-compatible syntax with a TypeScript-based migration engine. By default, `pgsql-test` automatically deploys any pgpm module found in the current working directory (`process.cwd()`).

To specify a custom path to your pgpm module, use `seed.loadPgpm()` explicitly:

```ts
import path from 'path';
import { getConnections, seed } from 'pgsql-test';

const cwd = path.resolve(__dirname, '../path/to/pgpm-workspace');

let db, teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    seed.loadPgpm(cwd)
  ]));
});
```

</details>

## Why pgpm's Approach?

pgpm provides the best of both worlds:

1. **Sqitch Compatibility**: Keep your familiar Sqitch syntax and migration approach
2. **TypeScript Performance**: Our TS-rewritten deployment engine delivers up to 10x faster schema deployments
3. **Developer Experience**: Tight feedback loops with near-instant schema setup for tests
4. **CI Optimization**: Dramatically reduced test suite run times with optimized deployment

By maintaining Sqitch compatibility while supercharging performance, pgpm enables you to keep your existing migration patterns while enjoying the speed benefits of our TypeScript engine.

## `getConnections` Options

This table documents the available options for the `getConnections` function. The options are passed as a combination of `pg` and `db` configuration objects.

### `db` Options (PgTestConnectionOptions)

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `db.extensions` | `string[]` | `[]` | Array of PostgreSQL extensions to include in the test database |
| `db.cwd` | `string` | `process.cwd()` | Working directory used for pgpm or Sqitch projects |
| `db.connection.user` | `string` | `'app_user'` | User for simulating RLS via `setContext()` |
| `db.connection.password` | `string` | `'app_password'` | Password for RLS test user |
| `db.connection.role` | `string` | `'anonymous'` | Default role used during `setContext()` |
| `db.template` | `string` | `undefined` | Template database used for faster test DB creation |
| `db.rootDb` | `string` | `'postgres'` | Root database used for administrative operations (e.g., creating databases) |
| `db.prefix` | `string` | `'db-'` | Prefix used when generating test database names |

### `pg` Options (PgConfig)

Environment variables will override these options when available:

* `PGHOST`, `PGPORT`, `PGUSER`, `PGPASSWORD`, `PGDATABASE`
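For example, a CI job can point the suite at a different server by exporting these variables before invoking the test command (illustrative values):

```shell
# These take precedence over the `pg` options object
export PGHOST=localhost
export PGPORT=5432
export PGUSER=postgres
export PGPASSWORD=password

# then run the suite as usual, e.g. `npx jest`
echo "$PGHOST:$PGPORT"
```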

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `pg.user` | `string` | `'postgres'` | Superuser for PostgreSQL |
| `pg.password` | `string` | `'password'` | Password for the PostgreSQL superuser |
| `pg.host` | `string` | `'localhost'` | Hostname for PostgreSQL |
| `pg.port` | `number` | `5432` | Port for PostgreSQL |
| `pg.database` | `string` | `'postgres'` | Default database used when connecting initially |

### Usage

```ts
const { pg, db, teardown } = await getConnections({
  pg: { user: 'postgres', password: 'secret' },
  db: {
    extensions: ['uuid-ossp'],
    cwd: '/path/to/project',
    connection: { user: 'test_user', password: 'secret', role: 'authenticated' },
    template: 'test_template',
    prefix: 'test_',
    rootDb: 'postgres'
  }
});
```

## Snapshot Utilities

The `pgsql-test/utils` module provides utilities for sanitizing database query results for snapshot testing. These helpers replace dynamic values (IDs, UUIDs, dates, hashes) with stable placeholders, making snapshots deterministic.

```ts
import { snapshot } from 'pgsql-test/utils';

const result = await db.any('SELECT * FROM users');
expect(snapshot(result)).toMatchSnapshot();
```

### Available Functions

| Function | Description |
| --- | --- |
| `snapshot(obj)` | Recursively prunes all dynamic values from an object or array |
| `prune(obj)` | Applies all prune functions to a single object |
| `pruneDates(obj)` | Replaces `Date` objects and date strings (fields ending in `_at` or `At`) with `[DATE]` |
| `pruneIds(obj)` | Replaces `id` and `*_id` fields with `[ID]` |
| `pruneIdArrays(obj)` | Replaces `*_ids` array fields with `[UUIDs-N]` |
| `pruneUUIDs(obj)` | Replaces UUID strings in `uuid` and `queue_name` fields with `[UUID]` |
| `pruneHashes(obj)` | Replaces `*_hash` fields starting with `$` with `[hash]` |

### Example

```ts
import { snapshot, pruneIds, pruneDates } from 'pgsql-test/utils';

// Full sanitization
const users = await db.any('SELECT * FROM users');
expect(snapshot(users)).toMatchSnapshot();

// Selective sanitization
const row = await db.one('SELECT id, name, created_at FROM users WHERE id = $1', [1]);
const sanitized = pruneDates(pruneIds(row));
// { id: '[ID]', name: 'Alice', created_at: '[DATE]' }
```
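To make the placeholder behavior concrete, here is a minimal sketch of how two of the prune helpers might work. The implementations are hypothetical (the real module may differ), but they match the table above:

```typescript
// Hypothetical sketch of the pruning idea (not the library source):
// replace volatile fields with stable placeholders so snapshots are deterministic.
type Obj = Record<string, unknown>;

function pruneIds(obj: Obj): Obj {
  const out: Obj = { ...obj };
  for (const key of Object.keys(out)) {
    // `id` and any `*_id` field becomes a stable placeholder
    if (key === 'id' || key.endsWith('_id')) out[key] = '[ID]';
  }
  return out;
}

function pruneDates(obj: Obj): Obj {
  const out: Obj = { ...obj };
  for (const key of Object.keys(out)) {
    // Date values and date-like field names become a stable placeholder
    if (out[key] instanceof Date || key.endsWith('_at') || key.endsWith('At')) {
      out[key] = '[DATE]';
    }
  }
  return out;
}

const sanitized = pruneDates(pruneIds({ id: 42, name: 'Alice', created_at: new Date() }));
// sanitized.id === '[ID]', sanitized.name === 'Alice', sanitized.created_at === '[DATE]'
```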

---

## Education and Tutorials

1. 🚀 [Quickstart: Getting Up and Running](https://constructive.io/learn/quickstart)
   Get started with modular databases in minutes. Install prerequisites and deploy your first module.

2. 📦 [Modular PostgreSQL Development with Database Packages](https://constructive.io/learn/modular-postgres)
   Learn to organize PostgreSQL projects with pgpm workspaces and reusable database modules.

3. ✏️ [Authoring Database Changes](https://constructive.io/learn/authoring-database-changes)
   Master the workflow for adding, organizing, and managing database changes with pgpm.

4. 🧪 [End-to-End PostgreSQL Testing with TypeScript](https://constructive.io/learn/e2e-postgres-testing)
   Master end-to-end PostgreSQL testing with ephemeral databases, RLS testing, and CI/CD automation.

5. ⚡ [Supabase Testing](https://constructive.io/learn/supabase)
   Use TypeScript-first tools to test Supabase projects with realistic RLS, policies, and auth contexts.

6. 💧 [Drizzle ORM Testing](https://constructive.io/learn/drizzle-testing)
   Run full-stack tests with Drizzle ORM, including database setup, teardown, and RLS enforcement.

7. 🔧 [Troubleshooting](https://constructive.io/learn/troubleshooting)
   Common issues and solutions for pgpm, PostgreSQL, and testing.

## Related Constructive Tooling

### 🧪 Testing

* [pgsql-test](https://github.com/constructive-io/constructive/tree/main/postgres/pgsql-test): **📊 Isolated testing environments** with per-test transaction rollbacks—ideal for integration tests, complex migrations, and RLS simulation.
* [supabase-test](https://github.com/constructive-io/constructive/tree/main/postgres/supabase-test): **🧪 Supabase-native test harness** preconfigured for the local Supabase stack—per-test rollbacks, JWT/role context helpers, and CI/GitHub Actions ready.
* [graphile-test](https://github.com/constructive-io/constructive/tree/main/graphile/graphile-test): **🔐 Authentication mocking** for Graphile-focused test helpers, emulating row-level security contexts.
* [pg-query-context](https://github.com/constructive-io/constructive/tree/main/postgres/pg-query-context): **🔒 Session context injection** to add session-local context (e.g., `SET LOCAL`) into queries—ideal for setting `role`, `jwt.claims`, and other session settings.

### 🧠 Parsing & AST

* [pgsql-parser](https://www.npmjs.com/package/pgsql-parser): **🔄 SQL conversion engine** that interprets and converts PostgreSQL syntax.
* [libpg-query-node](https://www.npmjs.com/package/libpg-query): **🌉 Node.js bindings** for `libpg_query`, converting SQL into parse trees.
* [pg-proto-parser](https://www.npmjs.com/package/pg-proto-parser): **📦 Protobuf parser** for parsing PostgreSQL Protocol Buffers definitions to generate TypeScript interfaces, utility functions, and JSON mappings for enums.
* [@pgsql/enums](https://www.npmjs.com/package/@pgsql/enums): **🏷️ TypeScript enums** for the PostgreSQL AST, for safe and ergonomic parsing logic.
* [@pgsql/types](https://www.npmjs.com/package/@pgsql/types): **📝 Type definitions** for PostgreSQL AST nodes in TypeScript.
* [@pgsql/utils](https://www.npmjs.com/package/@pgsql/utils): **🛠️ AST utilities** for constructing and transforming PostgreSQL syntax trees.

### 🚀 API & Dev Tools

* [@constructive-io/graphql-server](https://github.com/constructive-io/constructive/tree/main/graphql/server): **⚡ Express-based API server** powered by PostGraphile to expose a secure, scalable GraphQL API over your Postgres database.
* [@constructive-io/graphql-explorer](https://github.com/constructive-io/constructive/tree/main/graphql/explorer): **🔎 Visual API explorer** with GraphiQL for browsing across all databases and schemas—useful for debugging, documentation, and API prototyping.

### 🔁 Streaming & Uploads

* [etag-hash](https://github.com/constructive-io/constructive/tree/main/streaming/etag-hash): **🏷️ S3-compatible ETags** created by streaming and hashing file uploads in chunks.
* [etag-stream](https://github.com/constructive-io/constructive/tree/main/streaming/etag-stream): **🔄 ETag computation** via Node stream transformer during upload or transfer.
* [uuid-hash](https://github.com/constructive-io/constructive/tree/main/streaming/uuid-hash): **🆔 Deterministic UUIDs** generated from hashed content, great for deduplication and asset referencing.
* [uuid-stream](https://github.com/constructive-io/constructive/tree/main/streaming/uuid-stream): **🌊 Streaming UUID generation** based on piped file content—ideal for upload pipelines.
* [@constructive-io/s3-streamer](https://github.com/constructive-io/constructive/tree/main/streaming/s3-streamer): **📤 Direct S3 streaming** for large files with support for metadata injection and content validation.
* [@constructive-io/upload-names](https://github.com/constructive-io/constructive/tree/main/streaming/upload-names): **📂 Collision-resistant filenames** utility for structured and unique file names for uploads.

### 🧰 CLI & Codegen

* [pgpm](https://github.com/constructive-io/constructive/tree/main/pgpm/pgpm): **🖥️ PostgreSQL Package Manager** for modular Postgres development. Works with database workspaces, scaffolding, migrations, seeding, and installing database packages.
* [@constructive-io/cli](https://github.com/constructive-io/constructive/tree/main/packages/cli): **🖥️ Command-line toolkit** for managing Constructive projects—supports database scaffolding, migrations, seeding, code generation, and automation.
* [@constructive-io/graphql-codegen](https://github.com/constructive-io/constructive/tree/main/graphql/codegen): **✨ GraphQL code generation** (types, operations, SDK) from schema/endpoint introspection.
* [@constructive-io/query-builder](https://github.com/constructive-io/constructive/tree/main/packages/query-builder): **🏗️ SQL constructor** providing a robust TypeScript-based query builder for dynamic generation of `SELECT`, `INSERT`, `UPDATE`, `DELETE`, and stored procedure calls—supports advanced SQL features like `JOIN`, `GROUP BY`, and schema-qualified queries.
* [@constructive-io/graphql-query](https://github.com/constructive-io/constructive/tree/main/graphql/query): **🧩 Fluent GraphQL builder** for PostGraphile schemas. ⚡ Schema-aware via introspection, 🧩 composable and ergonomic for building deeply nested queries.

## Credits

**🛠 Built by the [Constructive](https://constructive.io) team — creators of modular Postgres tooling for secure, composable backends. If you like our work, contribute on [GitHub](https://github.com/constructive-io).**

## Disclaimer

AS DESCRIBED IN THE LICENSES, THE SOFTWARE IS PROVIDED "AS IS", AT YOUR OWN RISK, AND WITHOUT WARRANTIES OF ANY KIND.

No developer or entity involved in creating this software will be liable for any claims or damages whatsoever associated with your use, inability to use, or your interaction with other users of the code, including any direct, indirect, incidental, special, exemplary, punitive or consequential damages, or loss of profits, cryptocurrencies, tokens, or anything else of value.