pgsql-seed 0.0.1 → 0.2.1
This diff shows the changes between publicly released versions of the package as they appear in their public registry, and is provided for informational purposes only.
- package/LICENSE +23 -0
- package/README.md +39 -625
- package/esm/index.d.ts +2 -0
- package/esm/index.js +12 -7
- package/esm/pgpm.d.ts +37 -0
- package/esm/pgpm.js +52 -0
- package/index.d.ts +2 -7
- package/index.js +20 -23
- package/package.json +27 -32
- package/pgpm.d.ts +37 -0
- package/pgpm.js +56 -0
- package/admin.d.ts +0 -26
- package/admin.js +0 -144
- package/connect.d.ts +0 -19
- package/connect.js +0 -95
- package/context-utils.d.ts +0 -8
- package/context-utils.js +0 -28
- package/esm/admin.js +0 -140
- package/esm/connect.js +0 -90
- package/esm/context-utils.js +0 -25
- package/esm/manager.js +0 -138
- package/esm/roles.js +0 -32
- package/esm/seed/adapters.js +0 -23
- package/esm/seed/csv.js +0 -108
- package/esm/seed/index.js +0 -14
- package/esm/seed/json.js +0 -36
- package/esm/seed/pgpm.js +0 -28
- package/esm/seed/sql.js +0 -15
- package/esm/seed/types.js +0 -1
- package/esm/stream.js +0 -96
- package/esm/test-client.js +0 -168
- package/esm/utils.js +0 -91
- package/manager.d.ts +0 -26
- package/manager.js +0 -142
- package/roles.d.ts +0 -17
- package/roles.js +0 -38
- package/seed/adapters.d.ts +0 -4
- package/seed/adapters.js +0 -28
- package/seed/csv.d.ts +0 -15
- package/seed/csv.js +0 -114
- package/seed/index.d.ts +0 -14
- package/seed/index.js +0 -31
- package/seed/json.d.ts +0 -12
- package/seed/json.js +0 -40
- package/seed/pgpm.d.ts +0 -10
- package/seed/pgpm.js +0 -32
- package/seed/sql.d.ts +0 -7
- package/seed/sql.js +0 -18
- package/seed/types.d.ts +0 -13
- package/seed/types.js +0 -2
- package/stream.d.ts +0 -33
- package/stream.js +0 -99
- package/test-client.d.ts +0 -55
- package/test-client.js +0 -172
- package/utils.d.ts +0 -17
- package/utils.js +0 -105
package/README.md
CHANGED
````diff
@@ -1,4 +1,4 @@
-# pgsql-test
+# pgsql-seed
 
 <p align="center" width="100%">
   <img height="250" src="https://raw.githubusercontent.com/constructive-io/constructive/refs/heads/main/assets/outline-logo.svg" />
@@ -11,652 +11,66 @@
   <a href="https://github.com/constructive-io/constructive/blob/main/LICENSE">
     <img height="20" src="https://img.shields.io/badge/license-MIT-blue.svg"/>
   </a>
-  <a href="https://www.npmjs.com/package/pgsql-test">
-    <img height="20" src="https://img.shields.io/github/package-json/v/constructive-io/constructive?filename=postgres%2Fpgsql-test%2Fpackage.json"/>
+  <a href="https://www.npmjs.com/package/pgsql-seed">
+    <img height="20" src="https://img.shields.io/github/package-json/v/constructive-io/constructive?filename=postgres%2Fpgsql-seed%2Fpackage.json"/>
   </a>
 </p>
 
-
+PostgreSQL seeding utilities with pgpm integration - batteries included.
 
-
+This package re-exports everything from [`pg-seed`](https://www.npmjs.com/package/pg-seed) and adds pgpm deployment functionality.
 
-
-npm install pgsql-test
-```
-
-## Features
-
-* ⚡ **Instant test DBs** — each one seeded, isolated, and UUID-named
-* 🔄 **Per-test rollback** — every test runs in its own transaction or savepoint
-* 🛡️ **RLS-friendly** — test with role-based auth via `.setContext()`
-* 🌱 **Flexible seeding** — run `.sql` files, programmatic seeds, or even load fixtures
-* 🧪 **Compatible with any async runner** — works with `Jest`, `Mocha`, etc.
-* 🧹 **Auto teardown** — no residue, no reboots, just clean exits
-
-### Tutorials
-
-📚 **[Learn how to test PG with pgsql-test →](https://constructive.io/learn/e2e-postgres-testing)**
-
-### Using with Supabase
-
-If you're writing tests for Supabase, check out [`supabase-test`](https://www.npmjs.com/package/supabase-test) for Supabase-optimized defaults.
-
-### pgpm migrations
-
-Part of the [pgpm](https://pgpm.io) ecosystem, `pgsql-test` is built to pair seamlessly with our TypeScript-based package manager and migration tool. `pgpm` gives you modular Postgres packages, deterministic plans, and tag-aware releases—perfect for authoring the migrations that `pgsql-test` runs.
-
-## Table of Contents
-
-1. [Install](#install)
-2. [Features](#features)
-3. [Quick Start](#-quick-start)
-4. [`getConnections()` Overview](#getconnections-overview)
-5. [PgTestClient API Overview](#pgtestclient-api-overview)
-6. [Usage Examples](#usage-examples)
-   * [Basic Setup](#-basic-setup)
-   * [Role-Based Context](#-role-based-context)
-   * [Seeding System](#-seeding-system)
-   * [SQL File Seeding](#-sql-file-seeding)
-   * [Programmatic Seeding](#-programmatic-seeding)
-   * [CSV Seeding](#️-csv-seeding)
-   * [JSON Seeding](#️-json-seeding)
-   * [pgpm Seeding](#-pgpm-seeding)
-7. [`getConnections() Options` ](#getconnections-options)
-8. [Disclaimer](#disclaimer)
-
-
-## ✨ Quick Start
-
-```ts
-import { getConnections } from 'pgsql-test';
-
-let db, teardown;
-
-beforeAll(async () => {
-  ({ db, teardown } = await getConnections());
-  await db.query(`SELECT 1`); // ✅ Ready to run queries
-});
-
-afterAll(() => teardown());
-```
-
-## `getConnections()` Overview
-
-```ts
-import { getConnections } from 'pgsql-test';
-
-// Complete object destructuring
-const { pg, db, admin, teardown, manager } = await getConnections();
-
-// Most common pattern
-const { db, teardown } = await getConnections();
-```
-
-The `getConnections()` helper sets up a fresh PostgreSQL test database and returns a structured object with:
-
-* `pg`: a `PgTestClient` connected as the root or superuser — useful for administrative setup or introspection
-* `db`: a `PgTestClient` connected as the app-level user — used for running tests with RLS and granted permissions
-* `admin`: a `DbAdmin` utility for managing database state, extensions, roles, and templates
-* `teardown()`: a function that shuts down the test environment and database pool
-* `manager`: a shared connection pool manager (`PgTestConnector`) behind both clients
-
-Together, these allow fast, isolated, role-aware test environments with per-test rollback and full control over setup and teardown.
-
-The `PgTestClient` returned by `getConnections()` is a fully-featured wrapper around `pg.Pool`. It provides:
-
-* Automatic transaction and savepoint management for test isolation
-* Easy switching of role-based contexts for RLS testing
-* A clean, high-level API for integration testing PostgreSQL systems
-
-## `PgTestClient` API Overview
-
-```ts
-let pg: PgTestClient;
-let teardown: () => Promise<void>;
-
-beforeAll(async () => {
-  ({ pg, teardown } = await getConnections());
-});
-
-beforeEach(() => pg.beforeEach());
-afterEach(() => pg.afterEach());
-afterAll(() => teardown());
-```
-
-The `PgTestClient` returned by `getConnections()` wraps a `pg.Client` and provides convenient helpers for query execution, test isolation, and context switching.
-
-### Common Methods
-
-* `query(sql, values?)` – Run a raw SQL query and get the `QueryResult`
-* `beforeEach()` – Begins a transaction and sets a savepoint (called at the start of each test)
-* `afterEach()` – Rolls back to the savepoint and commits the outer transaction (cleans up test state)
-* `setContext({ key: value })` – Sets PostgreSQL config variables (like `role`) to simulate RLS contexts
-* `any`, `one`, `oneOrNone`, `many`, `manyOrNone`, `none`, `result` – Typed query helpers for specific result expectations
-
-These methods make it easier to build expressive and isolated integration tests with strong typing and error handling.
-
-The `PgTestClient` returned by `getConnections()` is a fully-featured wrapper around `pg.Pool`. It provides:
-
-* Automatic transaction and savepoint management for test isolation
-* Easy switching of role-based contexts for RLS testing
-* A clean, high-level API for integration testing PostgreSQL systems
+## Installation
 
-
-
-
-
-```ts
-import { getConnections } from 'pgsql-test';
-
-let db; // A fully wrapped PgTestClient using pg.Pool with savepoint-based rollback per test
-let teardown;
-
-beforeAll(async () => {
-  ({ db, teardown } = await getConnections());
-
-  await db.query(`
-    CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT);
-    CREATE TABLE posts (id SERIAL PRIMARY KEY, user_id INT REFERENCES users(id), content TEXT);
-
-    INSERT INTO users (name) VALUES ('Alice'), ('Bob');
-    INSERT INTO posts (user_id, content) VALUES (1, 'Hello world!'), (2, 'Graphile is cool!');
-  `);
-});
-
-afterAll(() => teardown());
-
-beforeEach(() => db.beforeEach());
-afterEach(() => db.afterEach());
-
-test('user count starts at 2', async () => {
-  const res = await db.query('SELECT COUNT(*) FROM users');
-  expect(res.rows[0].count).toBe('2');
-});
-```
-
-### 🔐 Role-Based Context
-
-
-The `pgsql-test` framework provides powerful tools to simulate authentication contexts during tests, which is particularly useful when testing Row-Level Security (RLS) policies.
-
-#### Setting Test Context
-
-Use `setContext()` to simulate different user roles and JWT claims:
-
-```ts
-db.setContext({
-  role: 'authenticated',
-  'jwt.claims.user_id': '123',
-  'jwt.claims.org_id': 'acme'
-});
+```bash
+npm install pgsql-seed
+# or
+pnpm add pgsql-seed
 ```
 
-
-
-#### Testing Role-Based Access
-
-```ts
-describe('authenticated role', () => {
-  beforeEach(async () => {
-    db.setContext({ role: 'authenticated' });
-    await db.beforeEach();
-  });
+## Usage
 
-
+### All pg-seed utilities are available
 
-
-
-
-
-
+```typescript
+import {
+  loadCsv, loadCsvMap, exportCsv,  // CSV utilities
+  insertJson, insertJsonMap,       // JSON utilities
+  loadSql, loadSqlFiles, execSql   // SQL utilities
+} from 'pgsql-seed';
 ```
 
-
+See the [pg-seed documentation](https://www.npmjs.com/package/pg-seed) for details on these utilities.
 
-
+### pgpm Integration
 
-
+Deploy pgpm packages directly from your code:
 
-
+```typescript
+import { deployPgpm, loadPgpm } from 'pgsql-seed';
 
-
-
-
-
-
+const config = {
+  host: 'localhost',
+  port: 5432,
+  database: 'mydb',
+  user: 'postgres',
+  password: 'password'
+};
 
-
+// Deploy the pgpm package in the current directory
+await deployPgpm(config);
 
-
+// Deploy from a specific directory
+await deployPgpm(config, '/path/to/package');
 
-
-
-```ts
-const { db, teardown } = await getConnections(getConnectionOptions, seedAdapters);
+// With caching enabled
+await deployPgpm(config, undefined, true);
 ```
 
-
-
-* [`seed.sqlfile()`](#-sql-file-seeding) – Execute raw `.sql` files from disk
-* [`seed.fn()`](#-programmatic-seeding) – Run JavaScript/TypeScript logic to programmatically insert data
-* [`seed.csv()`](#️-csv-seeding) – Load tabular data from CSV files
-* [`seed.json()`](#️-json-seeding) – Use in-memory objects as seed data
-* [`seed.loadPgpm()`](#-pgpm-seeding) – Apply a pgpm project or set of packages (compatible with sqitch)
-
-> ✨ **Default Behavior:** If no `SeedAdapter[]` is passed, pgpm seeding is assumed. This makes `pgsql-test` zero-config for pgpm-based projects.
-
-This composable system allows you to mix-and-match data setup strategies for flexible, realistic, and fast database tests.
-
-#### Two Seeding Patterns
-
-You can seed data using either approach:
-
-**1. Adapter Pattern** (setup phase via `getConnections`)
-```ts
-const { db, teardown } = await getConnections({}, [
-  seed.json({ 'users': [{ id: 1, name: 'Alice' }] })
-]);
-```
-
-**2. Direct Load Methods** (runtime via `PgTestClient`)
-```ts
-await db.loadJson({ 'users': [{ id: 1, name: 'Alice' }] });
-await db.loadCsv({ 'users': '/path/to/users.csv' });
-await db.loadSql(['/path/to/schema.sql']);
-```
-
-> **Note:** `loadCsv()` and `loadPgpm()` do not apply RLS context (PostgreSQL limitation). Use `loadJson()` or `loadSql()` for RLS-aware seeding.
-
-### 🔌 SQL File Seeding
-
-**Adapter Pattern:**
-```ts
-const { db, teardown } = await getConnections({}, [
-  seed.sqlfile(['schema.sql', 'fixtures.sql'])
-]);
-```
-
-**Direct Load Method:**
-```ts
-await db.loadSql(['schema.sql', 'fixtures.sql']);
-```
-
-<details>
-<summary>Full example</summary>
-
-```ts
-import path from 'path';
-import { getConnections, seed } from 'pgsql-test';
-
-const sql = (f: string) => path.join(__dirname, 'sql', f);
-
-let db;
-let teardown;
-
-beforeAll(async () => {
-  ({ db, teardown } = await getConnections({}, [
-    seed.sqlfile([
-      sql('schema.sql'),
-      sql('fixtures.sql')
-    ])
-  ]));
-});
+## When to use pg-seed vs pgsql-seed
 
-
-
-});
-```
-
-</details>
-
-### 🧠 Programmatic Seeding
-
-**Adapter Pattern:**
-```ts
-const { db, teardown } = await getConnections({}, [
-  seed.fn(async ({ pg }) => {
-    await pg.query(`INSERT INTO users (name) VALUES ('Seeded User')`);
-  })
-]);
-```
-
-**Direct Load Method:**
-```ts
-// Use any PgTestClient method directly
-await db.query(`INSERT INTO users (name) VALUES ('Seeded User')`);
-```
-
-<details>
-<summary>Full example</summary>
-
-```ts
-import { getConnections, seed } from 'pgsql-test';
-
-let db;
-let teardown;
-
-beforeAll(async () => {
-  ({ db, teardown } = await getConnections({}, [
-    seed.fn(async ({ pg }) => {
-      await pg.query(`
-        INSERT INTO users (name) VALUES ('Seeded User');
-      `);
-    })
-  ]));
-});
-```
-
-</details>
-
-## 🗃️ CSV Seeding
-
-**Adapter Pattern:**
-```ts
-const { db, teardown } = await getConnections({}, [
-  seed.csv({
-    'users': '/path/to/users.csv',
-    'posts': '/path/to/posts.csv'
-  })
-]);
-```
-
-**Direct Load Method:**
-```ts
-await db.loadCsv({
-  'users': '/path/to/users.csv',
-  'posts': '/path/to/posts.csv'
-});
-```
-
-> **Note:** CSV loading uses PostgreSQL COPY which does not support RLS context.
-
-<details>
-<summary>Full example</summary>
-
-You can load tables from CSV files using `seed.csv({ ... })`. CSV headers must match the table column names exactly. This is useful for loading stable fixture data for integration tests or CI environments.
-
-```ts
-import path from 'path';
-import { getConnections, seed } from 'pgsql-test';
-
-const csv = (file: string) => path.resolve(__dirname, '../csv', file);
-
-let db;
-let teardown;
-
-beforeAll(async () => {
-  ({ db, teardown } = await getConnections({}, [
-    // Create schema
-    seed.fn(async ({ pg }) => {
-      await pg.query(`
-        CREATE TABLE users (
-          id SERIAL PRIMARY KEY,
-          name TEXT NOT NULL
-        );
-
-        CREATE TABLE posts (
-          id SERIAL PRIMARY KEY,
-          user_id INT REFERENCES users(id),
-          content TEXT NOT NULL
-        );
-      `);
-    }),
-    // Load from CSV
-    seed.csv({
-      users: csv('users.csv'),
-      posts: csv('posts.csv')
-    }),
-    // Adjust SERIAL sequences to avoid conflicts
-    seed.fn(async ({ pg }) => {
-      await pg.query(`SELECT setval(pg_get_serial_sequence('users', 'id'), (SELECT MAX(id) FROM users));`);
-      await pg.query(`SELECT setval(pg_get_serial_sequence('posts', 'id'), (SELECT MAX(id) FROM posts));`);
-    })
-  ]));
-});
-
-afterAll(() => teardown());
-
-it('has loaded rows', async () => {
-  const res = await db.query('SELECT COUNT(*) FROM users');
-  expect(+res.rows[0].count).toBeGreaterThan(0);
-});
-```
-
-</details>
-
-## 🗃️ JSON Seeding
-
-**Adapter Pattern:**
-```ts
-const { db, teardown } = await getConnections({}, [
-  seed.json({
-    'custom.users': [
-      { id: 1, name: 'Alice' },
-      { id: 2, name: 'Bob' }
-    ]
-  })
-]);
-```
-
-**Direct Load Method:**
-```ts
-await db.loadJson({
-  'custom.users': [
-    { id: 1, name: 'Alice' },
-    { id: 2, name: 'Bob' }
-  ]
-});
-```
-
-<details>
-<summary>Full example</summary>
-
-You can seed tables using in-memory JSON objects. This is useful when you want fast, inline fixtures without managing external files.
-
-```ts
-import { getConnections, seed } from 'pgsql-test';
-
-let db;
-let teardown;
-
-beforeAll(async () => {
-  ({ db, teardown } = await getConnections({}, [
-    // Create schema
-    seed.fn(async ({ pg }) => {
-      await pg.query(`
-        CREATE SCHEMA custom;
-        CREATE TABLE custom.users (
-          id SERIAL PRIMARY KEY,
-          name TEXT NOT NULL
-        );
-
-        CREATE TABLE custom.posts (
-          id SERIAL PRIMARY KEY,
-          user_id INT REFERENCES custom.users(id),
-          content TEXT NOT NULL
-        );
-      `);
-    }),
-    // Seed with in-memory JSON
-    seed.json({
-      'custom.users': [
-        { id: 1, name: 'Alice' },
-        { id: 2, name: 'Bob' }
-      ],
-      'custom.posts': [
-        { id: 1, user_id: 1, content: 'Hello world!' },
-        { id: 2, user_id: 2, content: 'Graphile is cool!' }
-      ]
-    }),
-    // Fix SERIAL sequences
-    seed.fn(async ({ pg }) => {
-      await pg.query(`SELECT setval(pg_get_serial_sequence('custom.users', 'id'), (SELECT MAX(id) FROM custom.users));`);
-      await pg.query(`SELECT setval(pg_get_serial_sequence('custom.posts', 'id'), (SELECT MAX(id) FROM custom.posts));`);
-    })
-  ]));
-});
-
-afterAll(() => teardown());
-
-it('has loaded rows', async () => {
-  const res = await db.query('SELECT COUNT(*) FROM custom.users');
-  expect(+res.rows[0].count).toBeGreaterThan(0);
-});
-```
-
-</details>
-
-## 🚀 pgpm Seeding
-
-**Zero Configuration (Default):**
-```ts
-// pgpm migrate is used automatically
-const { db, teardown } = await getConnections();
-```
-
-**Adapter Pattern (Custom Path):**
-```ts
-const { db, teardown } = await getConnections({}, [
-  seed.loadPgpm('/path/to/pgpm-workspace', true) // with cache
-]);
-```
-
-**Direct Load Method:**
-```ts
-await db.loadPpgm('/path/to/pgpm-workspace', true); // with cache
-```
-
-> **Note:** pgpm deployment has its own client handling and does not apply RLS context.
-
-<details>
-<summary>Full example</summary>
-
-If your project uses pgpm modules with a precompiled `pgpm.plan`, you can use `pgsql-test` with **zero configuration**. Just call `getConnections()` — and it *just works*:
-
-```ts
-import { getConnections } from 'pgsql-test';
-
-let db, teardown;
-
-beforeAll(async () => {
-  ({ db, teardown } = await getConnections()); // pgpm module is deployed automatically
-});
-```
-
-pgpm uses Sqitch-compatible syntax with a TypeScript-based migration engine. By default, `pgsql-test` automatically deploys any pgpm module found in the current working directory (`process.cwd()`).
-
-To specify a custom path to your pgpm module, use `seed.loadPgpm()` explicitly:
-
-```ts
-import path from 'path';
-import { getConnections, seed } from 'pgsql-test';
-
-const cwd = path.resolve(__dirname, '../path/to/pgpm-workspace');
-
-beforeAll(async () => {
-  ({ db, teardown } = await getConnections({}, [
-    seed.loadPgpm(cwd)
-  ]));
-});
-```
-
-</details>
-
-## Why pgpm's Approach?
-
-pgpm provides the best of both worlds:
-
-1. **Sqitch Compatibility**: Keep your familiar Sqitch syntax and migration approach
-2. **TypeScript Performance**: Our TS-rewritten deployment engine delivers up to 10x faster schema deployments
-3. **Developer Experience**: Tight feedback loops with near-instant schema setup for tests
-4. **CI Optimization**: Dramatically reduced test suite run times with optimized deployment
-
-By maintaining Sqitch compatibility while supercharging performance, pgpm enables you to keep your existing migration patterns while enjoying the speed benefits of our TypeScript engine.
-
-## `getConnections` Options
-
-This table documents the available options for the `getConnections` function. The options are passed as a combination of `pg` and `db` configuration objects.
-
-### `db` Options (PgTestConnectionOptions)
-
-| Option | Type | Default | Description |
-| ------------------------ | ---------- | ---------------- | --------------------------------------------------------------------------- |
-| `db.extensions` | `string[]` | `[]` | Array of PostgreSQL extensions to include in the test database |
-| `db.cwd` | `string` | `process.cwd()` | Working directory used for pgpm or Sqitch projects |
-| `db.connection.user` | `string` | `'app_user'` | User for simulating RLS via `setContext()` |
-| `db.connection.password` | `string` | `'app_password'` | Password for RLS test user |
-| `db.connection.role` | `string` | `'anonymous'` | Default role used during `setContext()` |
-| `db.template` | `string` | `undefined` | Template database used for faster test DB creation |
-| `db.rootDb` | `string` | `'postgres'` | Root database used for administrative operations (e.g., creating databases) |
-| `db.prefix` | `string` | `'db-'` | Prefix used when generating test database names |
-
-### `pg` Options (PgConfig)
-
-Environment variables will override these options when available:
-
-* `PGHOST`, `PGPORT`, `PGUSER`, `PGPASSWORD`, `PGDATABASE`
-
-| Option | Type | Default | Description |
-| ------------- | -------- | ------------- | ----------------------------------------------- |
-| `pg.user` | `string` | `'postgres'` | Superuser for PostgreSQL |
-| `pg.password` | `string` | `'password'` | Password for the PostgreSQL superuser |
-| `pg.host` | `string` | `'localhost'` | Hostname for PostgreSQL |
-| `pg.port` | `number` | `5423` | Port for PostgreSQL |
-| `pg.database` | `string` | `'postgres'` | Default database used when connecting initially |
-
-### Usage
-
-```ts
-const { conn, db, teardown } = await getConnections({
-  pg: { user: 'postgres', password: 'secret' },
-  db: {
-    extensions: ['uuid-ossp'],
-    cwd: '/path/to/project',
-    connection: { user: 'test_user', password: 'secret', role: 'authenticated' },
-    template: 'test_template',
-    prefix: 'test_',
-    rootDb: 'postgres'
-  }
-});
-```
-
-## Snapshot Utilities
-
-The `pgsql-test/utils` module provides utilities for sanitizing database query results for snapshot testing. These helpers replace dynamic values (IDs, UUIDs, dates, hashes) with stable placeholders, making snapshots deterministic.
-
-```ts
-import { snapshot } from 'pgsql-test/utils';
-
-const result = await db.any('SELECT * FROM users');
-expect(snapshot(result)).toMatchSnapshot();
-```
-
-### Available Functions
-
-| Function | Description |
-|----------|-------------|
-| `snapshot(obj)` | Recursively prunes all dynamic values from an object or array |
-| `prune(obj)` | Applies all prune functions to a single object |
-| `pruneDates(obj)` | Replaces `Date` objects and date strings (fields ending in `_at` or `At`) with `[DATE]` |
-| `pruneIds(obj)` | Replaces `id` and `*_id` fields with `[ID]` |
-| `pruneIdArrays(obj)` | Replaces `*_ids` array fields with `[UUIDs-N]` |
-| `pruneUUIDs(obj)` | Replaces UUID strings in `uuid` and `queue_name` fields with `[UUID]` |
-| `pruneHashes(obj)` | Replaces `*_hash` fields starting with `$` with `[hash]` |
-
-### Example
-
-```ts
-import { snapshot, pruneIds, pruneDates } from 'pgsql-test/utils';
-
-// Full sanitization
-const users = await db.any('SELECT * FROM users');
-expect(snapshot(users)).toMatchSnapshot();
-
-// Selective sanitization
-const row = await db.one('SELECT id, name, created_at FROM users WHERE id = $1', [1]);
-const sanitized = pruneDates(pruneIds(row));
-// { id: '[ID]', name: 'Alice', created_at: '[DATE]' }
-```
+- Use **`pg-seed`** if you only need CSV/JSON/SQL seeding and want a lightweight package with no pgpm dependency
+- Use **`pgsql-seed`** if you need pgpm deployment functionality or want a single package with all seeding utilities
 
 ---
 
````
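For orientation, the `deployPgpm` entry point introduced in the new README can be wired into a plain bootstrap or test-setup script next to the standard `pg` driver. The sketch below is illustrative only: it assumes the `deployPgpm(config, packageDir?, cache?)` call shape shown in the README diff above, a locally running Postgres, and a pgpm package in `./db`; the database name and path are placeholders.

```typescript
// Illustrative bootstrap: deploy a pgpm package, then sanity-check the schema.
// Assumes deployPgpm(config, packageDir?, cache?) as shown in the README diff;
// the database name and ./db path are placeholders.
import { Pool } from 'pg';
import { deployPgpm } from 'pgsql-seed';

const config = {
  host: 'localhost',
  port: 5432,
  database: 'myapp_test', // throwaway database created beforehand
  user: 'postgres',
  password: 'password'
};

async function bootstrap(): Promise<void> {
  // Deploy the pgpm package located in ./db into the target database
  await deployPgpm(config, './db');

  // Verify that tables were created, using the plain pg driver
  const pool = new Pool(config);
  const { rows } = await pool.query(
    `SELECT count(*)::int AS tables
       FROM information_schema.tables
      WHERE table_schema = 'public'`
  );
  console.log(`deployed ${rows[0].tables} tables`);
  await pool.end();
}

bootstrap().catch((err) => {
  console.error(err);
  process.exit(1);
});
```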
package/esm/index.d.ts
ADDED