@rawsql-ts/ztd-cli 0.13.2 → 0.14.0

@@ -0,0 +1,326 @@
1
+ # Appendix: Development Workflow Using Zero Table Dependency (ZTD)
2
+
3
+ This application uses **Zero Table Dependency (ZTD)** as an internal development workflow for writing, testing, and maintaining SQL logic.
4
+ ZTD is not part of the application's runtime behavior; rather, it provides a framework for:
5
+
6
+ - Maintaining consistent SQL across the project
7
+ - Keeping schema, domain specifications, and enums synchronized
8
+ - Ensuring deterministic SQL unit tests
9
+ - Enabling structured collaboration between humans and AI
10
+
11
+ This section documents how ZTD is used *inside this repository* as a development methodology.
12
+
13
+ ---
14
+
15
+ ## Generated files (important)
16
+
17
+ - `tests/generated/` is auto-generated and must never be committed.
18
+ - After cloning the repository (or in a clean environment), run `npx ztd ztd-config`.
19
+ - If TypeScript reports missing modules or type errors because `tests/generated/` is missing, run `npx ztd ztd-config`.
20
+
21
+ ## ZTD Implementation Guide (src/)
22
+
23
+ The `src/` directory should contain pure TypeScript logic that operates on the row interfaces generated in `tests/generated/ztd-row-map.generated.ts`. Tests should import the row map, repositories should import DTOs, and fixtures must stay under `tests/`. Keep production code decoupled from the generated row map to preserve the distinction between implementation and test scaffolding.
24
+
25
+ ### Repository Classes: What to Care About
26
+
27
+ #### Scope and Responsibility
28
+ - Repository classes are responsible for executing SQL and returning query results.
29
+ - Avoid embedding business logic, thresholds, or data reshaping inside repositories.
30
+ - Treat repositories as thin adapters over SQL.
31
+
32
+ #### SQL Management
33
+ - Prefer keeping SQL in separate `.sql` files.
34
+ - If no explicit instruction is given, separate SQL into files by default.
35
+ - Repository classes should reference SQL files by name, not inline long SQL strings.
36
+
37
+ #### Specifications and Documentation
38
+ - If a markdown file with the same base name as the repository or SQL exists, read it before implementation.
39
+ - Such files may contain repository-local specifications (e.g. decision tables, thresholds, request/response notes).
40
+ - Naming should be aligned (e.g. `FooRepository.ts`, `foo.sql`, `foo.md`).
41
+
42
+ #### Request and Response Contracts
43
+ - Be explicit about request parameters (types, nullability, constraints).
44
+ - Be explicit about response shape (columns, ordering, cardinality).
45
+ - Prefer documenting contracts in the repository-local markdown file rather than code comments when they are non-trivial.
46
+
47
+ ### SqlClient lifecycle policy (important)
48
+
49
+ - When using `src/db/sql-client.ts`, prefer a shared `SqlClient` per worker process (singleton).
50
+ - Avoid creating a new database connection for every query or test case.
51
+ - Do not share a live connection across parallel workers; each worker should own its own shared client or pool.
52
+ - If you need strict isolation, create a dedicated client for that scope and close it explicitly.
53
+
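The lifecycle rules above can be sketched as follows. The `SqlClient` shape here is a stand-in for illustration (the real interface lives in `src/db/sql-client.ts` and does not necessarily expose `close()`), and the factory parameter stands in for whatever driver-specific setup the project uses:

```typescript
// Minimal stand-in for illustration: the real interface lives in
// src/db/sql-client.ts; close() is added here only for the dedicated-scope case.
type SqlClient = {
  query<T>(text: string, values?: readonly unknown[]): Promise<T[]>;
  close(): Promise<void>;
};

let sharedClient: SqlClient | undefined;

/** Shared client per worker process: the factory runs once, later calls reuse it. */
export function getSharedSqlClient(create: () => SqlClient): SqlClient {
  sharedClient ??= create();
  return sharedClient;
}

/** Dedicated client for strict isolation: owned by one scope, closed explicitly. */
export async function withDedicatedClient<R>(
  create: () => SqlClient,
  run: (client: SqlClient) => Promise<R>
): Promise<R> {
  const client = create();
  try {
    return await run(client);
  } finally {
    await client.close();
  }
}
```

Because `sharedClient` is module-level state, each worker process naturally gets its own shared instance; nothing is shared across workers.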
54
+ ### Repository SQL and DTO policy (important)
55
+
56
+ - Repository SQL must return application-facing DTO shapes.
57
+ - SQL SELECT statements should alias columns to camelCase and match the repository return types.
58
+ - Do not introduce intermediate `*Row` types when SQL already returns DTO-compatible shapes.
59
+ - Define separate Row types only when SQL intentionally returns database-shaped (snake_case) rows, and always convert them explicitly.
60
+
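A minimal sketch of this policy. The table and column names (`users`, `user_id`, `display_name`) are illustrative assumptions, not names from this schema:

```typescript
// DTO shape the application consumes: camelCase, matching the SQL aliases below.
export type UserSummaryDto = {
  userId: number;
  displayName: string;
};

// The SELECT aliases snake_case columns to camelCase, so rows are DTO-compatible
// as returned and no intermediate *Row type or mapping layer is needed.
export const userSummarySql = `
  SELECT
    u.user_id      AS "userId",
    u.display_name AS "displayName"
  FROM users AS u
  WHERE u.user_id = $1
`;
```

Because the aliases already match `UserSummaryDto`, the repository can type its return as `Promise<UserSummaryDto[]>` directly.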
61
+ ### Sequence / identity column policy (important)
62
+
63
+ - Sequence / identity columns (auto-generated IDs) are infrastructure concerns.
64
+ - Do **not** explicitly assign values to sequence / identity columns in `INSERT` statements unless explicitly instructed.
65
+ - Repository method inputs should omit sequence / identity columns by default.
66
+ - Only treat an ID as input data when it represents a business rule (e.g. natural keys, externally assigned IDs).
67
+
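For example, a sketch with an illustrative `users` table (names are assumptions, not from this schema):

```typescript
// Input type omits the auto-generated ID: the sequence is an infrastructure concern.
export type CreateUserInput = {
  displayName: string;
};

// The INSERT never assigns user_id; the database identity/sequence produces it,
// and RETURNING hands the generated value back already in DTO shape.
export const createUserSql = `
  INSERT INTO users (display_name)
  VALUES ($1)
  RETURNING user_id AS "userId", display_name AS "displayName"
`;
```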
68
+ ### No test-driven fallbacks in production code (important)
69
+
70
+ - **Do not add fallbacks in `src/` that exist only to accommodate ZTD/testkit/rewriter limitations.**
71
+ - If a query fails to be rewritten into ZTD form (e.g. rawsql-ts parsing/rewrite failure), **do not “paper over” it** by changing runtime behavior, adding `max(id)+1`, or introducing alternative logic paths.
72
+ - Instead, stop and report the issue with evidence:
73
+ - The exact SQL that fails
74
+ - The error message / symptoms
75
+ - A minimal reproduction (smallest query that triggers the failure)
76
+ - The expected behavior (what ZTD/testkit should have produced)
77
+
78
+ Rationale: Production code must not diverge from the intended SQL semantics due to tooling constraints. Tooling issues should be fixed in the tooling layer (rawsql-ts / ztd / testkit), not by altering runtime logic.
79
+
80
+ ---
81
+
82
+ ## ZTD Test Guide (tests/)
83
+
84
+ Fixtures come from the `ztd/ddl/` definitions and power `pg-testkit`. Always import table types from `tests/generated/ztd-row-map.generated.ts` when constructing scenarios, and rerun `npx ztd ztd-config` whenever the schema changes to keep the fixtures and row map synchronized.
85
+
86
+ If you are working inside this repository's `ztd-playground`, regenerate the generated artifacts with `pnpm --filter ztd-playground exec ztd ztd-config`.
87
+
88
+ - Set `ZTD_EXECUTION_MODE=traditional` or pass `{ mode: 'traditional', traditional: { isolation: 'schema', cleanup: 'drop_schema' } }` to `createTestkitClient()` when you must exercise real Postgres semantics (locks, isolation, constraints).
+ - Traditional mode still runs the DDL inside `ztd/ddl/`, seeds the fixtures, executes any optional `setupSql`, and honors the configured `cleanup` strategy (`drop_schema` by default, `custom_sql`, or `none` for debugging) so the environment stays tidy.
+ - Use `isolation: 'none'` when your queries explicitly reference an existing schema and you cannot rely on schema-based isolation.
89
+
90
+
91
+ ### Tests: What to Care About
92
+
93
+ #### Test Intent
94
+ - Each test should have a single clear observation point.
95
+ - Decide whether the test verifies:
96
+ - Query semantics (input to output behavior), or
97
+ - Structural properties (ordering, filtering, boundary cases).
98
+
99
+ #### Fixtures
100
+ - Keep fixtures minimal and intention-revealing.
101
+ - Prefer small datasets over large ones.
102
+ - Avoid adding rows or columns that are not directly related to the test intent.
103
+
104
+ #### Assertions
105
+ - Assert only on relevant columns and values.
106
+ - Do not rely on implicit ordering; if order matters, make it explicit in SQL and assertions.
107
+ - Avoid custom verification logic; rely on query results and straightforward assertions.
108
+
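The assertion guidance above can be sketched with plain assertions rather than a specific test runner. The `OrderDto` shape and row values are illustrative, not from this schema:

```typescript
// Rows as a repository might return them; values are illustrative fixtures.
type OrderDto = { orderId: number; status: string; placedAt: string };

const rows: OrderDto[] = [
  { orderId: 2, status: 'shipped', placedAt: '2024-01-02' },
  { orderId: 1, status: 'shipped', placedAt: '2024-01-01' },
];

// Order matters here, so the SQL would carry an explicit ORDER BY placed_at DESC,
// and the assertion checks only the ordered column rather than entire rows.
const placedAt = rows.map((r) => r.placedAt);
if (JSON.stringify(placedAt) !== JSON.stringify(['2024-01-02', '2024-01-01'])) {
  throw new Error('expected rows ordered by placedAt DESC');
}
```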
109
+ #### Relationship to Repositories
110
+ - Tests for repositories should focus on observable behavior, not internal implementation details.
111
+ - Avoid duplicating repository logic inside tests.
112
+
113
+ ### Notes
114
+ - These guidelines are intentionally lightweight.
115
+ - They are expected to evolve based on actual usage and failure cases.
116
+ - If user instructions conflict with these guidelines, follow user instructions.
117
+
118
+ ### ID expectations in tests (important)
119
+
120
+ - Do not assert auto-generated ID values (sequence / identity), such as "the next id is 11".
121
+ - When creating rows, assert only that an ID exists and has the correct type, or that it differs from known existing IDs.
122
+ - Only assert specific ID values when the ID is part of a business rule (not infrastructure), e.g. a natural key or a fixed, meaningful identifier.
123
+
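For example (row shape and values are illustrative):

```typescript
// A row returned after an insert; the concrete id value is illustrative only.
const created = { userId: 7, displayName: 'Alice' };
const existingIds = [1, 2, 3];

// Assert existence and type of the generated ID, and distinctness from known
// rows -- never the concrete value the sequence happened to produce.
if (typeof created.userId !== 'number') {
  throw new Error('expected a numeric generated id');
}
if (existingIds.includes(created.userId)) {
  throw new Error('expected a fresh id distinct from known rows');
}
```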
124
+ ## Parallel test policy (important)
125
+
126
+ - ZTD tests should be safe to run in parallel against a single Postgres instance because pg-testkit rewrites CRUD into fixture-backed SELECT queries (no physical schema changes).
127
+ - Do not start multiple Postgres instances per test file/worker, and do not isolate tests by creating per-test databases or schemas. This is unnecessary for ZTD and adds failure modes.
128
+ - Prefer one shared Postgres instance + multiple connections, limited only by your DB resources.
129
+
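Under these rules, a test-runner configuration stays deliberately plain. A sketch for Vitest (the file name, the `tests/global-setup.ts` path, and the option placement are assumptions for illustration):

```typescript
// vitest.config.ts (sketch): one shared Postgres instance, parallel workers.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // globalSetup starts (or reuses) a single Postgres instance and publishes
    // DATABASE_URL to every worker. Workers run in parallel with their own
    // connections; no per-test databases or schemas are created.
    globalSetup: ['tests/global-setup.ts'],
  },
});
```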
130
+ ---
131
+
132
+ # ZTD Directory Layout
133
+
134
+ ```
135
+ /ztd
136
+ /ddl
137
+ *.sql <- physical schema definitions
138
+
139
+ /domain-specs
140
+ *.md <- one behavioral SELECT per file (one SQL block)
141
+
142
+ /enums
143
+ *.md <- one enum definition per file (one SQL block)
144
+
145
+ README.md <- documentation for the layout
146
+ AGENTS.md <- combined guidance for DDL, enums, and specs
147
+
148
+ /src <- application & repository logic
149
+ /tests <- ZTD tests, fixtures, row-maps
150
+ ```
151
+
152
+ The file `tests/generated/ztd-layout.generated.ts` ensures the ZTD CLI always points to the correct directories.
153
+
154
+ ---
155
+
156
+ # Principles of ZTD in This Repository
157
+
158
+ ### 1. Humans own the **definitions**
159
+ - Physical schema (DDL)
160
+ - Domain semantics (domain-specs)
161
+ - Enumerations (enums)
162
+ - Repository interfaces
163
+
164
+ ### 2. AI assists with **implementation**
165
+ - Generating repository SQL
166
+ - Updating fixtures
167
+ - Producing intermediate TypeScript structures
168
+ - Ensuring SQL adheres to DDL, enums, and domain-specs
169
+
170
+ ### 3. ZTD enforces **consistency**
171
+ ZTD tests verify that:
172
+ - SQL logic matches DDL shapes
173
+ - SQL semantics match domain-specs
174
+ - SQL values match enumerations
175
+
176
+ If anything diverges, ZTD failures surface immediately and deterministically.
177
+
178
+ ---
179
+
180
+ # Development Workflows
181
+
182
+ Different types of changes start from different entry points. Use the workflow appropriate for your situation.
183
+
184
+ ---
185
+
186
+ # Workflow A — Starting From *DDL Changes*
187
+ Modifying tables, columns, constraints, indexes.
188
+
189
+ 1. Edit DDL files in `ztd/ddl/`.
190
+ 2. Run:
191
+
192
+ ```bash
193
+ npx ztd ztd-config
194
+ ```
195
+
196
+ This regenerates `tests/generated/ztd-row-map.generated.ts` from the updated schema.
197
+
198
+ 3. Update repository SQL to match the new schema.
199
+ 4. Update fixtures if result shapes changed.
200
+ 5. Run tests.
201
+
202
+ **Flow:** DDL -> Repository SQL -> Fixtures/Tests -> Application
203
+
204
+ ---
205
+
206
+ # Workflow B — Starting From *Repository Interface Changes*
207
+ Changing method signatures, adding new repository methods, etc.
208
+
209
+ 1. Modify the repository interface or implementation in `/src`.
210
+ 2. Use AI assistance to generate or update the SQL implementation.
211
+ 3. If the generated SQL conflicts with domain-specs or enums, update definitions first.
212
+ 4. Run ZTD tests.
213
+ 5. Regenerate config if SQL output shape changed.
214
+
215
+ **Flow:** Interface -> SQL -> Specs (if needed) -> Tests
216
+
217
+ ---
218
+
219
+ # Workflow C — Starting From *Repository SQL Logic Changes*
220
+ Bug fixes, refactoring, rewriting queries.
221
+
222
+ 1. Edit SQL inside the repository.
223
+ 2. Run ZTD tests.
224
+ 3. If intended behavior changes, update the appropriate file in `ztd/domain-specs/`.
225
+ 4. Update fixtures as needed.
226
+ 5. Regenerate config if result shape changed.
227
+
228
+ **Flow:** SQL -> Domain-specs -> Tests
229
+
230
+ ---
231
+
232
+ # Workflow D — Starting From *Enum or Domain Specification Changes*
233
+ Business rule changes or conceptual model updates.
234
+
235
+ ## Editing enums:
236
+
237
+ 1. Update the relevant `.md` file under `ztd/enums/`.
238
+ 2. Run:
239
+
240
+ ```bash
241
+ npx ztd ztd-config
242
+ ```
243
+
244
+ 3. Update repository SQL referencing enum values.
245
+ 4. Update fixtures/tests.
246
+
247
+ ## Editing domain-specs:
248
+
249
+ 1. Update the relevant `.md` file under `ztd/domain-specs/`.
250
+ 2. Update repository SQL to reflect the new semantics.
251
+ 3. Update or add tests.
252
+ 4. Update DDL only if the new rules require schema changes.
253
+
254
+ **Flow:** Specs/Enums -> SQL -> Tests -> (DDL if required)
255
+
256
+ ---
257
+
258
+ # Combined Real-World Examples
259
+
260
+ - Adding a new contract state:
261
+ enums -> domain-spec -> SQL -> config -> tests
262
+
263
+ - Adding a new table:
264
+ DDL -> config -> SQL -> fixtures -> tests
265
+
266
+ - Fixing business logic:
267
+ SQL -> domain-spec -> tests
268
+
269
+ ZTD ensures that development always converges into a consistent, validated workflow.
270
+
271
+ ---
272
+
273
+ # Human Responsibilities
274
+
275
+ Humans maintain:
276
+
277
+ - Schema definitions (`ztd/ddl`)
278
+ - Domain logic definitions (`ztd/domain-specs`)
279
+ - Domain enumerations (`ztd/enums`)
280
+ - Repository interfaces and architectural decisions
281
+ - Acceptance/review of AI-generated patches
282
+
283
+ Humans decide **what is correct**.
284
+
285
+ ---
286
+
287
+ # AI Responsibilities
288
+
289
+ AI must:
290
+
291
+ - Use domain-specs as the semantic source of truth
292
+ - Use enums as the canonical vocabulary source
293
+ - Use DDL as the physical structure constraint
294
+ - Generate SQL consistent with all definitions
295
+ - Update fixtures when needed
296
+ - Never modify `ztd/AGENTS.md` or `ztd/README.md` without explicit instruction
297
+
298
+ AI decides **how to implement**, but not **what is correct**.
299
+
300
+ ---
301
+
302
+ # ZTD CLI Responsibilities
303
+
304
+ ZTD CLI:
305
+
306
+ - Parses DDL to compute schema shapes
307
+ - Rewrites SQL via CTE shadowing for testing
308
+ - Generates `ztd-row-map.generated.ts`
309
+ - Enables deterministic, parallel SQL unit tests
310
+
311
+ ZTD is the verification engine that validates correctness beyond static typing.
312
+
313
+ ---
314
+
315
+ # Summary
316
+
317
+ This appendix documents how ZTD is used strictly as an **internal implementation and maintenance guide**.
318
+ It does not affect the runtime behavior of the application.
319
+ Its purpose is to ensure:
320
+
321
+ - Schema integrity
322
+ - SQL correctness
323
+ - Domain consistency
324
+ - Reliable AI-assisted development
325
+
326
+ With ZTD, **humans define the meaning**, **AI writes the implementation**, and **tests guarantee correctness**.
@@ -0,0 +1,239 @@
1
+ # Zero Table Dependency Project
2
+
3
+ This project organizes all SQL‑related artifacts under the `ztd/` directory, separating concerns so both humans and AI can collaborate effectively without interfering with each other's responsibilities.
4
+
5
+ ```
6
+ /ztd
7
+ /ddl
8
+ *.sql <- schema definitions
9
+ /domain-specs
10
+ *.md <- one behavior per file (one SQL block)
11
+ /enums
12
+ *.md <- one enum per file (one SQL block)
13
+ README.md <- documentation for the layout
14
+ AGENTS.md <- combined guidance for people and agents
15
+
16
+ /src <- application & repository code
17
+ /tests <- ZTD tests, fixtures, generated maps
18
+ ```
19
+
20
+ ## Generated files (important)
21
+
22
+ `tests/generated/` is auto-generated and must never be committed to git.
23
+
24
+ After cloning the repository (or in a clean environment), run:
25
+
26
+ ```bash
27
+ npx ztd ztd-config
28
+ ```
29
+
30
+ If TypeScript reports missing modules or type errors because `tests/generated/` is missing, run `npx ztd ztd-config`.
31
+
32
+ `tests/generated/ztd-layout.generated.ts` declares the directories above so the CLI and your tests always point at the correct files.
33
+
34
+ ---
35
+
36
+ # Optional SqlClient seam
37
+
38
+ If this project was initialized with `npx ztd init --with-sqlclient`, you'll also have `src/db/sql-client.ts`.
39
+ It defines a minimal `SqlClient` interface that repositories can depend on:
40
+
41
+ - Use it for tutorials and greenfield projects to keep repository SQL decoupled from drivers.
42
+ - Skip it when you already have a database abstraction (Prisma, Drizzle, Kysely, custom adapters).
43
+ - For `pg`, adapt `client.query(...)` so it returns a plain `T[]` row array that matches the interface.
44
+ - Prefer a shared client per worker process so tests and scripts do not reconnect on every query.
45
+ - Do not share a live connection across parallel workers; each worker should own its own shared client.
46
+
47
+ Example (driver-agnostic):
48
+
49
+ ```ts
50
+ let sharedClient: SqlClient | undefined;
51
+
52
+ export function getSqlClient(): SqlClient {
53
+ if (!sharedClient) {
54
+ // Create the client once using your chosen driver (pg, mysql, etc.).
55
+ sharedClient = createSqlClientOnce();
56
+ }
57
+ return sharedClient;
58
+ }
59
+ ```
60
+
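The `pg` adaptation mentioned above could look like the following sketch. `PgLikeClient` is a structural stand-in for the relevant surface of pg's `Pool`/`Client` (whose `query` resolves to a result object with a `rows` array), and the `SqlClient` type mirrors `src/db/sql-client.ts`:

```typescript
// Structural stand-ins: SqlClient mirrors src/db/sql-client.ts, and
// PgLikeClient matches the relevant surface of pg's Pool/Client.
type SqlClient = {
  query<T extends Record<string, unknown> = Record<string, unknown>>(
    text: string,
    values?: readonly unknown[]
  ): Promise<T[]>;
};

type PgLikeClient = {
  query(text: string, values?: readonly unknown[]): Promise<{ rows: unknown[] }>;
};

// Normalize pg's { rows } result object into the plain T[] the interface expects.
export function toSqlClient(pg: PgLikeClient): SqlClient {
  return {
    async query<T extends Record<string, unknown>>(
      text: string,
      values?: readonly unknown[]
    ): Promise<T[]> {
      const result = await pg.query(text, values);
      return result.rows as T[];
    },
  };
}
```

Repositories then depend only on `SqlClient`, so swapping pg for another driver means writing a new adapter, not touching repository code.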
61
+ ---
62
+
63
+ # Principles
64
+
65
+ ### 1. Humans own the *definitions*
66
+ - DDL (physical schema)
67
+ - Domain specifications (business logic -> SQL semantics)
68
+ - Enums (canonical domain values)
69
+
70
+ ### 2. AI owns the *implementation*
71
+ - Repository SQL generation
72
+ - Test fixture updates
73
+ - Intermediate TypeScript structures
74
+ - SQL rewriting, parameter binding, shape resolution
75
+
76
+ ### 3. ZTD ensures these stay in sync
77
+ ZTD acts as the consistency layer ensuring:
78
+ - DDL ↔ SQL shape consistency
79
+ - domain-specs ↔ query logic consistency
80
+ - enums ↔ code‑level constants consistency
81
+
82
+ If any part diverges, ZTD tests fail deterministically.
83
+
84
+ ---
85
+
86
+ # Workflow Overview
87
+
88
+ Different tasks start from different entry points. Choose the workflow that matches what you want to change.
89
+
90
+ ---
91
+
92
+ # Workflow A — Starting From *DDL Changes*
93
+ (Adding tables/columns, changing constraints)
94
+
95
+ 1. Edit files under `ztd/ddl/`.
96
+ 2. Run:
97
+
98
+ ```bash
99
+ npx ztd ztd-config
100
+ ```
101
+
102
+ This regenerates `tests/generated/ztd-row-map.generated.ts` from the new schema.
103
+
104
+ 3. Update repository SQL so it matches the new schema.
105
+ 4. Update fixtures if shapes changed.
106
+ 5. Run tests. Any schema mismatch will fail fast.
107
+
108
+ **Flow:**
109
+ **DDL -> repository SQL -> fixtures/tests -> application**
110
+
111
+ ---
112
+
113
+ # Workflow B — Starting From *Repository Interface Changes*
114
+ (Adding a method, changing return types, etc.)
115
+
116
+ 1. Modify the repository interface or class in `/src`.
117
+ 2. Allow AI to generate the SQL needed to satisfy the interface.
118
+ 3. If the query contradicts domain-specs or enums, update specs first.
119
+ 4. Run ZTD tests to confirm logic is consistent.
120
+ 5. Regenerate ZTD config if result shapes changed.
121
+
122
+ **Flow:**
123
+ **repository interface -> SQL -> (update specs if needed) -> tests**
124
+
125
+ ---
126
+
127
+ # Workflow C — Starting From *Repository SQL Logic Changes*
128
+ (Fixing a bug, optimizing logic, rewriting a query)
129
+
130
+ 1. Edit SQL inside the repository.
131
+ 2. Run existing ZTD tests.
132
+ 3. If the intended behavior changes, update `ztd/domain-specs/`.
133
+ 4. Update fixtures if necessary.
134
+ 5. If SQL result shape changed, run:
135
+
136
+ ```bash
137
+ npx ztd ztd-config
138
+ ```
139
+
140
+ **Flow:**
141
+ **SQL -> domain-specs (if needed) -> fixtures/tests**
142
+
143
+ ---
144
+
145
+ # Workflow D — Starting From *Enums or Domain Spec Changes*
146
+ (Business rules change, new status added, new definition created)
147
+
148
+ ## For enums:
149
+
150
+ 1. Update the relevant `.md` file under `ztd/enums/`.
151
+ 2. Regenerate row-map:
152
+
153
+ ```bash
154
+ npx ztd ztd-config
155
+ ```
156
+
157
+ 3. Update SQL referencing enum values.
158
+ 4. Update domain-specs or repository SQL if behaviors change.
159
+ 5. Update fixtures and tests.
160
+
161
+ ## For domain-specs:
162
+
163
+ 1. Modify the `.md` spec in `ztd/domain-specs/`.
164
+ 2. Update SQL in `/src` to follow the new semantics.
165
+ 3. Update tests and fixtures.
166
+ 4. Update DDL only if the new behavior requires schema changes.
167
+
168
+ **Flow:**
169
+ **spec/enums -> SQL -> tests -> (DDL if required)**
170
+
171
+ ---
172
+
173
+ # Combined Real‑World Flow Examples
174
+
175
+ - **Add a new contract status**
176
+ enums -> domain-spec -> SQL -> config -> tests
177
+
178
+ - **Add a new table**
179
+ DDL -> config -> SQL -> fixtures -> tests
180
+
181
+ - **Fix business logic**
182
+ SQL -> domain-spec -> tests
183
+
184
+ ZTD ensures all changes converge into the same consistency pipeline.
185
+
186
+ ---
187
+
188
+ # Human Responsibilities
189
+
190
+ Humans maintain:
191
+
192
+ - Business logic definitions (`domain-specs`)
193
+ - Physical schema (`ddl`)
194
+ - Domain vocabularies (`enums`)
195
+ - High‑level repository interfaces
196
+ - Acceptance of AI-generated changes
197
+
198
+ Humans decide “what is correct.”
199
+
200
+ ---
201
+
202
+ # AI Responsibilities
203
+
204
+ AI must:
205
+
206
+ - Use domain-specs as the **semantic source of truth**
207
+ - Use enums as the **canonical vocabulary source**
208
+ - Use DDL as the **physical shape constraint**
209
+ - Generate repository SQL consistent with all three
210
+ - Regenerate fixtures and tests as instructed
211
+ - Never modify `ztd/AGENTS.md` or `ztd/README.md` unless explicitly asked
212
+
213
+ AI decides “how to implement” within those constraints.
214
+
215
+ ---
216
+
217
+ # ZTD CLI Responsibilities
218
+
219
+ ZTD CLI:
220
+
221
+ - Parses DDL files to build accurate table/column shapes
222
+ - Rewrites SQL with fixture-based CTE shadowing (via testkit adapters)
223
+ - Generates `ztd-row-map.generated.ts`
224
+ - Produces deterministic, parallelizable tests
225
+
226
+ ZTD is the verification engine guaranteeing correctness.
227
+
228
+ ## Traditional execution mode
229
+
230
+ - Set `ZTD_EXECUTION_MODE=traditional` or pass `{ mode: 'traditional', traditional: { isolation: 'schema', cleanup: 'drop_schema' } }` when you need to run the tests against a real Postgres schema (locking, isolation, constraints).
+ - The helper still applies the DDL inside `ztd/ddl/`, loads the fixture rows into the schema, optionally executes `setupSql`, and carries out the chosen cleanup strategy (`drop_schema`, `custom_sql`, or `none`).
231
+ - Use `isolation: 'none'` if you need to target a schema that is already defined or if your SQL embeds schema qualifiers explicitly.
232
+
233
+ ---
234
+
235
+ # Summary
236
+
237
+ ZTD enables a workflow where **humans define meaning**, **AI writes implementation**, and **tests guarantee correctness**.
238
+
239
+ The project layout and workflows above ensure long-term maintainability, clarity, and full reproducibility of SQL logic independent of physical database state.
@@ -0,0 +1,24 @@
1
+ /**
2
+ * Promise that resolves to the array of rows produced by an SQL query.
3
+ * @template T Shape of each row yielded by the SQL client.
4
+ * @example
5
+ * const rows: SqlQueryRows<{ id: number; name: string }> = client.query('SELECT id, name FROM users');
6
+ */
7
+ export type SqlQueryRows<T> = Promise<T[]>;
8
+
9
+ /**
10
+ * Minimal SQL client interface required by the repository layer.
11
+ *
12
+ * - Production: adapt `pg` (or other drivers) to normalize results into `T[]`
13
+ * - Tests: compatible with `pg-testkit` clients returned by `createTestkitClient()`
14
+ *
15
+ * Connection strategy note:
16
+ * - Prefer a shared client per worker process for performance.
17
+ * - Do not share a live client across parallel workers.
18
+ */
19
+ export type SqlClient = {
20
+ query<T extends Record<string, unknown> = Record<string, unknown>>(
21
+ text: string,
22
+ values?: readonly unknown[]
23
+ ): SqlQueryRows<T>;
24
+ };
@@ -0,0 +1,30 @@
1
+ import { PostgreSqlContainer } from '@testcontainers/postgresql';
2
+
3
+ /**
4
+ * Vitest global setup.
5
+ *
6
+ * ZTD tests are safe to run in parallel against a single Postgres instance because pg-testkit
7
+ * rewrites CRUD into fixture-backed SELECT queries (no physical tables are created/mutated).
8
+ *
9
+ * This setup starts exactly one disposable Postgres container when DATABASE_URL is not provided,
10
+ * and shares the resulting DATABASE_URL with all Vitest workers.
11
+ */
12
+ export default async function globalSetup() {
13
+ const configuredUrl = process.env.DATABASE_URL;
14
+ if (configuredUrl && configuredUrl.length > 0) {
15
+ return () => undefined;
16
+ }
17
+
18
+ const container = new PostgreSqlContainer('postgres:18-alpine')
19
+ .withDatabase('ztd_playground')
20
+ .withUsername('postgres')
21
+ .withPassword('postgres');
22
+
23
+ const started = await container.start();
24
+ process.env.DATABASE_URL = started.getConnectionUri();
25
+
26
+ return async () => {
27
+ await started.stop();
28
+ };
29
+ }
30
+