@rawsql-ts/sql-contract 0.1.0 → 0.3.1

package/README.md CHANGED
@@ -1,476 +1,617 @@
1
- # @rawsql-ts/sql-contract
2
-
3
- ## Overview
4
-
5
- @rawsql-ts/sql-contract is a lightweight library designed to reduce the repetitive, mechanical code commonly encountered when working with handwritten SQL.
6
-
7
- It improves the following aspects of the development experience:
8
-
9
- - Mapping query results to models
10
- - Writing simple INSERT, UPDATE, and DELETE statements
11
-
12
- ---
13
-
14
- ## Features
15
-
16
- * Zero runtime dependencies
17
- (pure JavaScript; no external packages required at runtime)
18
- * Zero DBMS dependency
19
- (tested with PostgreSQL, MySQL, SQL Server, and SQLite)
20
- * Zero database client dependency
21
- (works with any client that executes SQL and returns rows)
22
- * Zero framework and ORM dependency
23
- (fits into any application architecture that uses raw SQL)
24
- * No schema models or metadata required
25
- (tables, columns, and relationships are defined only in SQL)
26
- * Result mapping helpers that operate on any SQL returning rows
27
- (including SELECT queries and CUD statements with RETURNING or aggregate results)
28
- * Simple builders for common INSERT, UPDATE, and DELETE cases, without query inference
29
-
30
- ---
31
-
32
- ## Philosophy
33
-
34
- sql-contract treats SQL, especially SELECT statements, as a language for expressing domain requirements.
35
-
36
- In SQL development, it is essential to iterate quickly through the cycle of design, writing, verification, and refinement. To achieve this, a SQL client is indispensable. SQL must remain SQL, directly executable and verifiable; it cannot be adequately replaced by a DSL without breaking this feedback loop.
37
-
38
- Based on this philosophy, this library intentionally does not provide query construction features for SELECT statements. Queries should be written by humans, as raw SQL, and validated directly against the database.
39
-
40
- At the same time, writing SQL inevitably involves mechanical tasks. In particular, mapping returned rows to application-level models is not part of the domain logic, yet it often becomes verbose and error-prone. sql-contract focuses on reducing this burden.
41
-
42
- By contrast, write operations such as INSERT, UPDATE, and DELETE generally do not carry the same level of domain significance as SELECT statements. They are often repetitive, consisting of short and predictable patterns such as primary-key-based updates.
43
-
44
- To address this, the library provides minimal builder helpers for common cases only.
45
-
46
- It deliberately goes no further than this.
47
-
48
- ---
49
-
50
- ## Getting Started
51
-
52
- ### Installation
53
-
54
- ```sh
55
- pnpm add @rawsql-ts/sql-contract
56
- ```
57
- ### Minimal CRUD sample
58
-
59
- ```ts
60
- import { Pool } from 'pg'
61
- import { insert, update, remove } from '@rawsql-ts/sql-contract/writer'
62
- import {
63
- createMapperFromExecutor,
64
- mapperPresets,
65
- type QueryParams,
66
- } from '@rawsql-ts/sql-contract/mapper'
67
-
68
- type Customer = {
69
- customerId: number
70
- customerName: string
71
- customerStatus: string
72
- }
73
-
74
- async function main() {
75
- // Prepare an executor that runs SQL and returns rows.
76
- // sql-contract remains DBMS- and driver-agnostic by depending only on this function.
77
- const pool = new Pool({ connectionString: process.env.DATABASE_URL })
78
-
79
- const executor = async (sql: string, params: QueryParams) => {
80
- const result = await pool.query(sql, params as unknown[])
81
- return result.rows
82
- }
83
-
84
- // SELECT:
85
- // Map snake_case SQL columns to a typed DTO without writing per-column mapping code.
86
- const mapper = createMapperFromExecutor(executor, mapperPresets.appLike())
87
- const rows = await mapper.query<Customer>(
88
- `
89
- select
90
- customer_id,
91
- customer_name,
92
- customer_status
93
- from customers
94
- where customer_id = $1
95
- `,
96
- [42],
97
- )
98
-
99
- // INSERT:
100
- // Simplify repetitive SQL for common write operations.
101
- const insertResult = insert('customers', {
102
- name: 'alice',
103
- status: 'pending',
104
- })
105
- await executor(insertResult.sql, insertResult.params)
106
-
107
- // UPDATE:
108
- // Simplify repetitive SQL for common write operations.
109
- const updateResult = update(
110
- 'customers',
111
- { status: 'active' },
112
- { id: 42 },
113
- )
114
- await executor(updateResult.sql, updateResult.params)
115
-
116
- // DELETE:
117
- // Simplify repetitive SQL for common write operations.
118
- const deleteResult = remove('customers', { id: 17 })
119
- await executor(deleteResult.sql, deleteResult.params)
120
-
121
- await pool.end()
122
- void rows
123
- }
124
-
125
- void main()
126
- ```
127
-
128
- ---
129
-
130
- ## Executor: DBMS / Driver Integration
131
-
132
- `sql-contract` is designed as a reusable, DBMS-agnostic library.
133
- To integrate it with a specific database or driver, **you must define a small executor function**.
134
-
135
- An executor receives a SQL string and parameters, executes them using your DB driver, and returns the resulting rows as `Row[]`.
136
- By doing so, `sql-contract` can consume query results without knowing anything about the underlying database or driver.
137
-
138
- ```ts
139
- const executor = async (sql: string, params: QueryParams) => {
140
- const result = await pool.query(sql, params as unknown[])
141
- return result.rows
142
- }
143
- ```
144
-
145
- This function is the single integration point between `sql-contract` and the DBMS.
146
- Connection pooling, transactions, retries, error handling, and other DBMS- or driver-specific concerns should all be handled within the executor.
147
-
148
- The `params` argument uses the exported `QueryParams` type.
149
- It supports both positional arrays and named records, allowing executors to work with positional, anonymous, or named parameter styles depending on the driver.
150
-
151
- ---
152
- ## Mapper: Query Result Mapping (R)
153
-
154
- The mapper is responsible for projecting query results (`Row[]`) into DTOs.
155
-
156
- A typical read (R) flow looks like this:
157
-
158
- ```ts
159
- const reader = mapper.bind(customerMapping)
160
- await reader.one('SELECT ...', [42])
161
- ```
162
-
163
- In a typical application, a mapper is created once and reused across queries.
164
- It defines application-wide mapping behavior, while individual queries decide how results are projected.
165
-
166
- The mapper operates purely on returned rows and never inspects SQL, parameters, or execution behavior.
167
- To keep mapping predictable, it does not guess column semantics or relationships.
168
- All transformations are applied through explicit configuration.
169
-
170
- ```ts
171
- import {
172
- createMapperFromExecutor,
173
- mapperPresets,
174
- } from '@rawsql-ts/sql-contract/mapper'
175
-
176
- // `executor` is defined according to the Executor section above.
177
- const mapper = createMapperFromExecutor(
178
- executor,
179
- mapperPresets.appLike(),
180
- )
181
- ```
182
-
183
- This example shows a typical mapper setup.
184
-
185
- For read-heavy flows you can also call `createReader(executor)`, which aliases the same factory and applies `mapperPresets.appLike()` by default so you get camelCase mapping with minimal setup.
186
-
187
- `createMapperFromExecutor` binds an executor to a mapper and accepts optional mapping options.
188
- These options control how column names are normalized, how values are coerced, and how identifiers are treated.
189
-
190
- For convenience, `mapperPresets` provide reusable configurations for common scenarios:
191
-
192
- | Preset | Description |
193
- | ------------------------- | ----------------------------------------------------------------------------------------------------------- |
194
- | `mapperPresets.appLike()` | Applies common application-friendly defaults, such as snake_case to camelCase conversion and date coercion. |
195
- | `mapperPresets.safe()` | Leaves column names and values untouched, suitable for exploratory queries or legacy schemas. |
196
-
197
- When a specific query needs fine-grained control, you can also provide a custom options object.
198
- This allows localized adjustments without changing the preset used elsewhere.
199
-
200
- ```ts
201
- const mapper = createMapperFromExecutor(executor, {
202
- keyTransform: 'snake_to_camel',
203
- coerceDates: true,
204
- idKeysAsString: true,
205
- typeHints: {
206
- createdAt: 'date',
207
- },
208
- })
209
- ```
210
-
211
- This form mirrors `mapperPresets.appLike()` while allowing targeted overrides for a specific mapping.
212
-
213
- ---
214
-
215
- ### Duck-typed mapping (no model definitions)
216
-
217
- For lightweight or localized use cases, the mapper supports duck-typed projections without defining any schema or mapping models.
218
-
219
- In duck-typed mapping, the mapper applies no additional structural assumptions beyond its configured defaults.
220
- The shape of the result is defined locally at the query site, either by providing a TypeScript type or by relying on the raw row shape.
221
-
222
- ```ts
223
- // Explicitly typed projection
224
- const rows = await mapper.query<{ customerId: number }>(
225
- 'select customer_id from customers limit 1',
226
- )
227
- ```
228
-
229
- Although not recommended, you can omit the DTO type for quick exploration:
230
-
231
- ```ts
232
- const rows = await mapper.query(
233
- 'select customer_id from customers limit 1',
234
- )
235
- ```
236
-
237
- Duck-typed mapping is intentionally minimal and local.
238
- If the shape of the query results is important or reused throughout your application, consider moving to explicit row mapping.
239
-
240
- ---
241
-
242
- ### Mapping to a Single Model
243
-
244
- sql-contract allows you to map query results to typed DTOs.
245
-
246
- ```ts
247
- type Customer = {
248
- customerId: number
249
- customerName: string
250
- }
251
-
252
- const rows = await mapper.query<Customer>(
253
- `
254
- select
255
- customer_id,
256
- customer_name
257
- from customers
258
- where customer_id = $1
259
- `,
260
- [42],
261
- )
262
-
263
- // rows[0].customerName is type-safe
264
- ```
265
-
266
- The normalization rules applied during mapping are controlled by the selected mapper preset.
267
-
268
- You can also define one-off mapping rules using columnMap.
269
-
270
- ```ts
271
- const customerMapping = rowMapping<Customer>({
272
- columnMap: {
273
- customerId: 'customer_id',
274
- customerName: 'customer_name',
275
- },
276
- })
277
-
278
- const rows = await mapper.query<Customer>(
279
- `
280
- select
281
- customer_id,
282
- customer_name
283
- from customers
284
- where customer_id = $1
285
- `,
286
- [42],
287
- customerMapping, // explicitly specify the mapping rule
288
- )
289
-
290
- // rows[0].customerName is type-safe
291
- ```
292
-
293
- The `rowMapping()` helper replaces the previous `entity()` alias. The old name is still supported for now but is deprecated in favor of `rowMapping()`.
294
-
295
- When a query includes JOINs or relationships, explicit row mappings are required.
296
- The structure for such mappings is explained in the next section.
297
-
298
- ---
299
-
300
- ### Mapping to multiple models (joined rows)
301
-
302
- The mapper also supports mapping joined result sets into multiple related models.
303
-
304
- Relations are explicitly defined and never inferred.
305
-
306
- ```ts
307
- const orderMapping = rowMapping({
308
- name: 'order',
309
- key: 'orderId',
310
- prefix: 'order_',
311
- }).belongsTo('customer', customerMapping, 'customerId')
312
- ```
313
-
314
- Joined queries remain transparent and deterministic:
315
-
316
- ```ts
317
- const mapper = createMapperFromExecutor(executor)
318
-
319
- const rows = await mapper.query(
320
- `
321
- select
322
- o.order_id,
323
- o.order_total,
324
- c.customer_id,
325
- c.customer_name
326
- from orders o
327
- join customers c on c.customer_id = o.customer_id
328
- where o.order_id = $1
329
- `,
330
- [123],
331
- orderMapping,
332
- )
333
- ```
334
-
335
- ---
336
-
337
- ## Writer: emitting simple C / U / D statements
338
-
339
- The writer helpers provide a small, opinionated DSL for common
340
- INSERT, UPDATE, and DELETE statements.
341
-
342
- They accept table names and plain objects of column-value pairs, and deterministically emit `{ sql, params }`.
343
-
344
- The writer focuses on *construction*, not execution.
345
-
346
- ### Writer basics
347
-
348
- Writer helpers are intentionally limited:
349
-
350
- * `undefined` values are omitted
351
- * identifiers are validated against ASCII-safe patterns unless explicitly allowed
352
- * WHERE clauses are limited to equality-based AND fragments
353
- * no inference, no joins, no multi-table logic
354
-
355
- If `returning` is provided, a `RETURNING` clause is appended.
356
- Using `'all'` maps to `RETURNING *`; otherwise, column names are sorted alphabetically.
357
-
358
- The writer never checks backend support for `RETURNING`.
359
- It emits SQL exactly as specified so that success or failure remains observable at execution time.
360
-
361
- ### INSERT
362
-
363
- ```ts
364
- await writer.insert(
365
- 'projects',
366
- { name: 'Apollo', owner_id: 7 },
367
- { returning: ['project_id'] },
368
- )
369
- ```
370
-
371
- ### UPDATE
372
-
373
- ```ts
374
- await writer.update(
375
- 'projects',
376
- { name: 'Apollo' },
377
- { project_id: 1 },
378
- )
379
- ```
380
-
381
- ### DELETE
382
-
383
- ```ts
384
- await writer.remove(
385
- 'projects',
386
- { project_id: 1 },
387
- )
388
- ```
389
-
390
- Statements can also be built without execution:
391
-
392
- ```ts
393
- const built = writer.build.insert(
394
- 'projects',
395
- { name: 'Apollo', owner_id: 7 },
396
- { returning: ['project_id'] },
397
- )
398
- ```
399
-
400
- ### Writer presets and placeholder strategies
401
-
402
- Advanced usage flows through `createWriter` (aliasing the historic `createWriterFromExecutor`), which binds an executor to a concrete placeholder strategy.
403
-
404
- A writer preset defines:
405
-
406
- 1. how placeholders are formatted,
407
- 2. whether parameters are positional or named,
408
- 3. how parameters are ordered and bound.
409
-
410
- ```ts
411
- import { createWriter, writerPresets } from '@rawsql-ts/sql-contract/writer'
412
-
413
- const writer = createWriter(
414
- executor,
415
- writerPresets.named({
416
- formatPlaceholder: (paramName) => ':' + paramName,
417
- }),
418
- )
419
- ```
420
-
421
- ### Named placeholders
422
-
423
- Named presets derive parameter names from column names.
424
-
425
- Each bind increments a counter and produces deterministic names such as
426
- `name_1`, `owner_id_2`.
427
-
428
- ```ts
429
- await writer.insert('projects', { name: 'Apollo', owner_id: 7 })
430
- // SQL: INSERT INTO projects (name, owner_id) VALUES (:name_1, :owner_id_2)
431
- // params: { name_1: 'Apollo', owner_id_2: 7 }
432
- ```
433
-
434
- ---
435
-
436
- ## DBMS and driver differences
437
-
438
- `sql-contract` does not normalize SQL dialects or placeholder styles.
439
-
440
- Write SQL using the placeholder syntax required by your driver, and bind parameters exactly as that driver expects.
441
- Whether parameters are positional or named is a concern of the executor and driver, not `sql-contract`.
442
-
443
- Examples of common placeholder styles:
444
-
445
- | DBMS / driver | Placeholder style |
446
- | --------------------------------- | ------------------ |
447
- | PostgreSQL / Neon (node-postgres) | `$1`, `$2`, ... |
448
- | PostgreSQL / pg-promise | `$/name/` |
449
- | MySQL / SQLite | `?` |
450
- | SQL Server | `@p1`, `@p2`, ... |
451
- | Oracle | `:1`, `:name`, ... |
452
-
453
- ```ts
454
- await executor(
455
- 'select * from customers where customer_id = $1',
456
- [42],
457
- )
458
- ```
459
-
460
- ```ts
461
- await executor(
462
- 'select * from customers where customer_id = :customerId',
463
- { customerId: 42 },
464
- )
465
- ```
466
-
467
- All DBMS- and driver-specific concerns live in the executor.
468
- Both writer and mapper remain independent of these differences.
469
-
470
- ---
471
-
472
- ## Influences / Related Ideas
473
-
474
- sql-contract is inspired by minimal mapping libraries such as Dapper and other thin contracts that keep SQL visible while wiring rows to typed results. These projects demonstrate the value of stopping short of a full ORM and instead providing a predictable, testable layer for purely mechanical concerns.
475
-
476
- sql-contract adopts that lesson within the rawsql-ts ecosystem: SQL remains the domain language, and this package automates only the tedious bridging work around it.
1
+ # @rawsql-ts/sql-contract
2
+
3
+ ![npm version](https://img.shields.io/npm/v/@rawsql-ts/sql-contract)
4
+ ![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)
5
+
6
+ A lightweight library for mapping SQL query results into typed application models. It removes the repetitive, mechanical code around handwritten SQL while keeping SQL as the authoritative source for domain logic.
7
+
8
+ Inspired by minimal mapping libraries such as Dapper — stopping short of a full ORM and instead providing a predictable, transparent layer.
9
+
10
+ ## Features
11
+
12
+ - Zero runtime dependencies
13
+ - Works with any SQL executor returning rows (driver/DBMS agnostic)
14
+ - Automatic snake_case to camelCase column name conversion
15
+ - Single and multi-model mapping with `rowMapping`
16
+ - Validator-agnostic schema integration (Zod, ArkType, or any `parse`/`assert` compatible library)
17
+ - Scalar query helpers for COUNT / aggregate values
18
+
19
+ ## Installation
20
+
21
+ ```bash
22
+ npm install @rawsql-ts/sql-contract
23
+ ```
24
+
25
+ ## Quick Start
26
+
27
+ ```ts
28
+ import { createReader, type QueryParams } from '@rawsql-ts/sql-contract'
29
+
30
+ const executor = async (sql: string, params: QueryParams) => {
31
+ const result = await pool.query(sql, params as unknown[])
32
+ return result.rows
33
+ }
34
+
35
+ const reader = createReader(executor)
36
+
37
+ const customers = await reader.list(
38
+ 'SELECT customer_id, customer_name FROM customers WHERE status = $1',
39
+ ['active']
40
+ )
41
+ // [{ customerId: 1, customerName: 'Alice' }, ...]
42
+ ```
43
+
44
+ ## Reader API
45
+
46
+ ### Basic queries: `one` and `list`
47
+
48
+ ```ts
49
+ const customer = await reader.one(
50
+ 'SELECT customer_id, customer_name FROM customers WHERE customer_id = $1',
51
+ [1]
52
+ )
53
+
54
+ const customers = await reader.list(
55
+ 'SELECT customer_id, customer_name FROM customers'
56
+ )
57
+ ```
58
+
59
+ ### Column naming conventions
60
+
61
+ By default, Reader converts snake_case columns to camelCase properties automatically:
62
+
63
+ ```sql
64
+ SELECT customer_id, created_at FROM customers
65
+ ```
66
+
67
+ becomes:
68
+
69
+ ```ts
70
+ { customerId: number, createdAt: Date }
71
+ ```
72
+
73
+ Presets are available to change this behavior:
74
+
75
+ ```ts
76
+ import { mapperPresets } from '@rawsql-ts/sql-contract'
77
+
78
+ const reader = createReader(executor, mapperPresets.safe()) // no transformation
79
+ ```
80
+
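The key rewriting that the default preset performs can be pictured as a plain per-row transform. The sketch below is an illustration of that idea, not the library's internal code; `snakeToCamel` and `camelizeRow` are hypothetical names:

```ts
// Hypothetical sketch of the snake_case -> camelCase key mapping that the
// default reader applies to each row. Not the library's actual implementation.
const snakeToCamel = (key: string): string =>
  key.replace(/_([a-z0-9])/g, (_, ch: string) => ch.toUpperCase())

function camelizeRow(row: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(row)) {
    out[snakeToCamel(key)] = value
  }
  return out
}
```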
81
+ ### Custom mapping
82
+
83
+ For derived values or format conversion:
84
+
85
+ ```ts
86
+ const rows = await reader.map(
87
+ 'SELECT price, quantity FROM order_items',
88
+ (row) => ({ total: row.price * row.quantity })
89
+ )
90
+ ```
91
+
92
+ ### Row mapping with `rowMapping`
93
+
94
+ For reusable, explicit mappings where domain terms differ from column names:
95
+
96
+ ```ts
97
+ import { rowMapping } from '@rawsql-ts/sql-contract'
98
+
99
+ const orderSummaryMapping = rowMapping({
100
+ name: 'OrderSummary',
101
+ key: 'orderId',
102
+ columnMap: {
103
+ orderId: 'order_id',
104
+ customerLabel: 'customer_display_name',
105
+ totalAmount: 'grand_total',
106
+ },
107
+ })
108
+
109
+ const summaries = await reader
110
+ .bind(orderSummaryMapping)
111
+ .list('SELECT order_id, customer_display_name, grand_total FROM order_view')
112
+ ```
113
+
114
+ Keys can be composite (`key: ['col_a', 'col_b']`) or derived (`key: (row) => [row.col_a, row.col_b]`).
115
+
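One way to picture how the three key forms behave: each reduces to a comparable identity per row. This is a standalone sketch under that assumption (the `rowKey` helper is hypothetical, not part of the library):

```ts
// Hypothetical sketch: reduce a string, array, or function key spec
// to a single comparable string identity for a row.
type KeySpec<Row> = keyof Row | (keyof Row)[] | ((row: Row) => unknown[])

function rowKey<Row extends Record<string, unknown>>(
  spec: KeySpec<Row>,
  row: Row,
): string {
  if (typeof spec === 'function') return JSON.stringify(spec(row))
  if (Array.isArray(spec)) return JSON.stringify(spec.map((k) => row[k]))
  return JSON.stringify([row[spec]])
}
```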
116
+ ### Multi-model mapping
117
+
118
+ Map joined results into nested domain models with `belongsTo`:
119
+
120
+ ```ts
121
+ const customerMapping = rowMapping({
122
+ name: 'Customer',
123
+ key: 'customerId',
124
+ columnMap: {
125
+ customerId: 'customer_customer_id',
126
+ customerName: 'customer_customer_name',
127
+ },
128
+ })
129
+
130
+ const orderMapping = rowMapping({
131
+ name: 'Order',
132
+ key: 'orderId',
133
+ columnMap: {
134
+ orderId: 'order_order_id',
135
+ orderTotal: 'order_total',
136
+ customerId: 'order_customer_id',
137
+ },
138
+ }).belongsTo('customer', customerMapping, 'customerId')
139
+
140
+ const orders = await reader.bind(orderMapping).list(`
141
+ SELECT
142
+ c.id AS customer_customer_id,
143
+ c.name AS customer_customer_name,
144
+ o.id AS order_order_id,
145
+ o.total AS order_total,
146
+ o.customer_id AS order_customer_id
147
+ FROM customers c
148
+ JOIN orders o ON o.customer_id = c.id
149
+ `)
150
+ // [{ orderId: 1, orderTotal: 500, customerId: 3, customer: { customerId: 3, customerName: 'Alice' } }]
151
+ ```
152
+
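Conceptually, a `belongsTo` projection takes one flat joined row and nests the related model built from its prefixed columns. A minimal sketch of that shape (hand-written projection, not what the mapper generates):

```ts
// Hypothetical sketch of the nesting a belongsTo mapping produces from
// one flat joined row; the real mapper derives this from configuration.
type JoinedRow = {
  order_order_id: number
  order_total: number
  order_customer_id: number
  customer_customer_id: number
  customer_customer_name: string
}

function projectOrder(row: JoinedRow) {
  return {
    orderId: row.order_order_id,
    orderTotal: row.order_total,
    customerId: row.order_customer_id,
    customer: {
      customerId: row.customer_customer_id,
      customerName: row.customer_customer_name,
    },
  }
}
```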
153
+ ### Scalar queries
154
+
155
+ ```ts
156
+ const count = await reader.scalar(
157
+ 'SELECT count(*) FROM customers WHERE status = $1',
158
+ ['active']
159
+ )
160
+ ```
161
+
162
+ ## Validation Integration
163
+
164
+ The `.validator()` method accepts any object implementing `parse(value)` or `assert(value)`. This means **Zod**, **ArkType**, and other validation libraries work out of the box — no additional adapter packages required.
165
+
166
+ ### With Zod
167
+
168
+ ```ts
169
+ import { z } from 'zod'
170
+
171
+ const CustomerSchema = z.object({
172
+ customerId: z.number(),
173
+ customerName: z.string(),
174
+ })
175
+
176
+ const customers = await reader
177
+ .validator(CustomerSchema)
178
+ .list('SELECT customer_id, customer_name FROM customers')
179
+ ```
180
+
181
+ ### With ArkType
182
+
183
+ ```ts
184
+ import { type } from 'arktype'
185
+
186
+ const CustomerSchema = type({
187
+ customerId: 'number',
188
+ customerName: 'string',
189
+ })
190
+
191
+ const customers = await reader
192
+ .validator((value) => {
193
+ CustomerSchema.assert(value)
194
+ return value
195
+ })
196
+ .list('SELECT customer_id, customer_name FROM customers')
197
+ ```
198
+
199
+ Validators run after row mapping, so schema errors surface before application code relies on the result shape. Validators are also chainable: `.validator(v1).validator(v2)`.
200
+
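Because the contract is just "anything with `parse`" or a plain function, chaining reduces to running validators in order. A standalone sketch of that composition (hypothetical `runValidators` helper, not the library's internals):

```ts
// Hypothetical sketch: run parse-compatible validators in sequence,
// as chained .validator() calls conceptually do.
type Validator = { parse: (value: unknown) => unknown } | ((value: unknown) => unknown)

function runValidators(validators: Validator[], value: unknown): unknown {
  return validators.reduce(
    (acc, v) => (typeof v === 'function' ? v(acc) : v.parse(acc)),
    value,
  )
}
```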
201
+ ## Catalog Executor
202
+
203
+ For larger projects, `createCatalogExecutor` executes queries through a `QuerySpec` contract instead of raw SQL strings. A `QuerySpec` couples an SQL file, parameter shape, and output rules into a stable identity for debugging and observability.
204
+
205
+ ### QuerySpec
206
+
207
+ A `QuerySpec` is the core contract type:
208
+
209
+ ```ts
210
+ import type { QuerySpec } from '@rawsql-ts/sql-contract'
211
+ import { rowMapping } from '@rawsql-ts/sql-contract'
212
+
213
+ const activeCustomersSpec: QuerySpec<[], { customerId: number; customerName: string }> = {
214
+ id: 'customers.active',
215
+ sqlFile: 'customers/active.sql',
216
+ params: { shape: 'positional', example: [] },
217
+ metadata: {
218
+ material: ['active_customer_ids'],
219
+ scalarMaterial: ['active_customer_count'],
220
+ },
221
+ output: {
222
+ mapping: rowMapping({
223
+ name: 'Customer',
224
+ key: 'customerId',
225
+ columnMap: {
226
+ customerId: 'customer_id',
227
+ customerName: 'customer_name',
228
+ },
229
+ }),
230
+ example: { customerId: 1, customerName: 'Alice' },
231
+ },
232
+ tags: { domain: 'crm' },
233
+ }
234
+ ```
235
+
236
+ | Field | Description |
237
+ |-------|-------------|
238
+ | `id` | Unique identifier for debugging and observability |
239
+ | `sqlFile` | Path passed to the SQL loader |
240
+ | `params.shape` | `'positional'` (array) or `'named'` (record) |
241
+ | `params.example` | Example parameters (for documentation and testing) |
242
+ | `output.mapping` | Optional `rowMapping` applied before validation |
243
+ | `output.validate` | Optional function to validate/transform each row |
244
+ | `output.example` | Example output (for documentation and testing) |
245
+ | `notes` | Optional human-readable description |
246
+ | `tags` | Optional key-value metadata forwarded to observability events |
247
+ | `metadata.material` | Optional CTE names to materialize as temp tables at runtime |
248
+ | `metadata.scalarMaterial` | Optional CTE names to treat as scalar materializations at runtime |
249
+
250
+ ### QuerySpec metadata
251
+
252
+ Use `metadata` when runtime adapters need execution hints without changing the SQL asset itself:
253
+
254
+ ```ts
255
+ const monthlyReportSpec: QuerySpec<{ tenantId: string }, { value: number }> = {
256
+ id: 'reports.monthly',
257
+ sqlFile: 'reports/monthly.sql',
258
+ params: {
259
+ shape: 'named',
260
+ example: { tenantId: 'tenant-1' },
261
+ },
262
+ metadata: {
263
+ material: ['report_base'],
264
+ scalarMaterial: ['report_total'],
265
+ },
266
+ output: {
267
+ example: { value: 1 },
268
+ },
269
+ }
270
+ ```
271
+
272
+ The metadata remains available on `spec.metadata` inside rewriters and is also forwarded to runtime extensions through `ExecInput.metadata`.
273
+
274
+ ### Creating a CatalogExecutor
275
+
276
+ ```ts
277
+ import { createCatalogExecutor } from '@rawsql-ts/sql-contract'
278
+ import { readFile } from 'node:fs/promises'
279
+ import { resolve } from 'node:path'
280
+
281
+ function createFileSqlLoader(baseDir: string) {
282
+ return {
283
+ load(sqlFile: string) {
284
+ return readFile(resolve(baseDir, sqlFile), 'utf-8')
285
+ },
286
+ }
287
+ }
288
+
289
+ const catalog = createCatalogExecutor({
290
+ loader: createFileSqlLoader('sql'),
291
+ executor,
292
+ })
293
+ ```
294
+
295
+ The executor exposes three methods matching the Reader API:
296
+
297
+ ```ts
298
+ const customers = await catalog.list(activeCustomersSpec, [])
299
+ const customer = await catalog.one(customerByIdSpec, [42])
300
+ const count = await catalog.scalar(customerCountSpec, [])
301
+ ```
302
+
303
+ For larger applications, keeping the file-backed loader in one helper avoids
304
+ repeating the same `readFile(resolve(...))` wiring in every repository module.
305
+
306
+ ### Common catalog output patterns
307
+
308
+ The output pipeline for `list()` / `one()` is:
309
+
310
+ 1. raw SQL row
311
+ 2. `output.mapping` (optional)
312
+ 3. `output.validate` (optional)
313
+
314
+ That means validators should read the mapped DTO shape, not the raw SQL row.
315
+
316
+ For scalar queries, the pipeline is:
317
+
318
+ 1. raw SQL row
319
+ 2. single-column scalar extraction
320
+ 3. `output.validate` (optional)
321
+
322
+ That makes `count(*)` and `RETURNING id` contracts read more clearly when they
323
+ validate the extracted scalar directly instead of inventing a one-field DTO.
324
+
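The scalar extraction step can be pictured as taking the single column of the first row. The sketch below illustrates that behavior under simplified assumptions (error messages and edge handling are hypothetical):

```ts
// Hypothetical sketch of single-column scalar extraction, the step that
// runs before output.validate in the scalar pipeline.
function extractScalar(rows: Record<string, unknown>[]): unknown {
  if (rows.length === 0) throw new Error('scalar query returned no rows')
  const columns = Object.keys(rows[0])
  if (columns.length !== 1) {
    throw new Error(`expected exactly one column, got ${columns.length}`)
  }
  return rows[0][columns[0]]
}
```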
325
+ See [docs/recipes/sql-contract.md](../../docs/recipes/sql-contract.md) for
326
+ copy-paste-ready catalog examples covering:
327
+
328
+ - reusable file-backed loaders
329
+ - mapped DTO validation
330
+ - scalar contract patterns
331
+
332
+ ### Named parameters
333
+
334
+ Specs declaring `shape: 'named'` require either a `Binder` or an explicit opt-in:
335
+
336
+ ```ts
337
+ const catalog = createCatalogExecutor({
338
+ loader,
339
+ executor,
340
+ // Option A: provide a binder that converts named → positional
341
+ binders: [{
342
+ name: 'pg-named',
343
+ bind: ({ sql, params }) => {
344
+ // convert :name placeholders to $1, $2, ...
345
+ return { sql: boundSql, params: positionalArray }
346
+ },
347
+ }],
348
+ // Option B: pass named params directly to the executor
349
+ // allowNamedParamsWithoutBinder: true,
350
+ })
351
+ ```
352
+
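The named-to-positional conversion in Option A can be sketched with a simple placeholder scan. This is an illustrative, assumption-laden sketch: a production binder must also skip string literals and other dialect quirks that a single regex cannot handle.

```ts
// Hypothetical sketch of a binder converting :name placeholders to $1, $2, ...
// The (?<!:) guard leaves PostgreSQL casts such as ::text untouched, but a
// real binder would also need to skip quoted string literals.
function bindNamed(sql: string, params: Record<string, unknown>) {
  const positional: unknown[] = []
  const bound = sql.replace(
    /(?<!:):([a-zA-Z_][a-zA-Z0-9_]*)/g,
    (_, name: string) => {
      positional.push(params[name])
      return `$${positional.length}`
    },
  )
  return { sql: bound, params: positional }
}
```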
353
+ ### Mutation specs
354
+
355
+ Catalog specs can declare mutation metadata for `INSERT`, `UPDATE`, and `DELETE` assets:
356
+
357
+ ```ts
358
+ const createUserSpec: QuerySpec<
359
+ { id: string; display_name?: string | null; created_at?: string },
360
+ never
361
+ > = {
362
+ id: 'user.create',
363
+ sqlFile: 'user/create.sql',
364
+ params: {
365
+ shape: 'named',
366
+ example: {
367
+ id: 'user-1',
368
+ display_name: 'Alice',
369
+ created_at: '2026-03-05T00:00:00.000Z',
370
+ },
371
+ },
372
+ mutation: {
373
+ kind: 'insert',
374
+ },
375
+ output: {
376
+ example: undefined as never,
377
+ },
378
+ }
379
+ ```
380
+
381
+ The `insert` behavior is covered by `packages/sql-contract/tests/catalog.create.test.ts`.
382
+
383
+ ```ts
384
+ const updateUserSpec: QuerySpec<
385
+ { id: string; display_name?: string | null; bio?: string | null },
386
+ never
387
+ > = {
388
+ id: 'user.update-profile',
389
+ sqlFile: 'user/update-profile.sql',
390
+ params: {
391
+ shape: 'named',
392
+ example: { id: 'user-1', display_name: 'Alice', bio: null },
393
+ },
394
+ mutation: {
395
+ kind: 'update',
396
+ },
397
+ output: {
398
+ example: undefined as never,
399
+ },
400
+ }
401
+ ```
402
+
403
+ Phase 1 intentionally keeps the safety rules narrow:
404
+
405
+ - `INSERT` subtracts only direct `VALUES (:named_param)` entries when the key is missing or `undefined`.
406
+ - `UPDATE` and `DELETE` require a `WHERE` clause by default.
407
+ - `UPDATE` subtracts only simple `SET column = :param` assignments when the key is missing or `undefined`.
408
+ - `null` is preserved, so `SET column = :param` still executes and binds `NULL`.
409
+ - Mandatory parameter validation only inspects the `WHERE` clause because Phase 1 focuses on preventing accidental broad mutations first.
410
+
411
+ For example, the SQL asset below will drop `display_name = :display_name` when
412
+ `display_name` is omitted or `undefined`, but it keeps the fixed timestamp write:
413
+
414
+ ```sql
415
+ UPDATE public.user_account
416
+ SET display_name = :display_name,
417
+ bio = :bio,
418
+ updated_at = NOW()
419
+ WHERE id = :id
420
+ ```
421
+
422
+ Assignments with inline comments or more complex expressions stay untouched in
423
+ Phase 1. They remain visible in SQL and any unresolved placeholders still flow
424
+ through the configured binder/executor path.
425
+
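The Phase 1 subtraction rule for UPDATE can be illustrated with a standalone sketch over simple `SET column = :param` assignments. The `pruneSetClause` helper is hypothetical and deliberately as narrow as the rules above: complex expressions pass through, and `null` is kept.

```ts
// Hypothetical sketch: drop simple `column = :param` assignments whose
// parameter is missing or undefined, keeping fixed expressions like NOW().
// null is NOT dropped -- it still binds NULL, matching the Phase 1 rules.
function pruneSetClause(
  assignments: string[],
  params: Record<string, unknown>,
): string[] {
  return assignments.filter((assignment) => {
    const match = /^\s*\w+\s*=\s*:(\w+)\s*$/.exec(assignment)
    if (!match) return true // complex expressions stay untouched
    return params[match[1]] !== undefined
  })
}
```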
426
+ ### Rewriters
427
+
428
+ Rewriters apply semantic-preserving SQL transformations before execution:
429
+
430
+ ```ts
431
+ const catalog = createCatalogExecutor({
432
+ loader,
433
+ executor,
434
+ rewriters: [{
435
+ name: 'add-limit',
436
+ rewrite: ({ sql, params }) => ({
437
+ sql: `${sql} LIMIT 1000`,
438
+ params,
439
+ }),
440
+ }],
441
+ })
442
+ ```
443
+
444
+ The execution pipeline order is: **SQL load → rewriters → binders → executor**.
445
+
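That ordering can be sketched as folding rewriters, then binders, over the loaded SQL before handing the result to the executor. A simplified sketch with hypothetical types (the real pipeline carries more context, such as the spec and metadata):

```ts
// Hypothetical sketch of the pipeline order: rewriters, then binders,
// then the executor, each receiving the previous step's { sql, params }.
type SqlInput = { sql: string; params: unknown }
type Step = (input: SqlInput) => SqlInput

async function runPipeline(
  loaded: SqlInput,
  rewriters: Step[],
  binders: Step[],
  execute: (input: SqlInput) => Promise<unknown[]>,
): Promise<unknown[]> {
  const rewritten = rewriters.reduce((acc, step) => step(acc), loaded)
  const bound = binders.reduce((acc, step) => step(acc), rewritten)
  return execute(bound)
}
```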
446
+ Mutation specs apply one extra safety rule in Phase 1: every configured
447
+ rewriter must explicitly declare `mutationSafety: 'safe'`. This keeps mutation
448
+ preprocessing stable by rejecting rewriters that might alter `SET` or `WHERE`
449
+ structure.
450
+
451
+ ```ts
452
+ const auditCommentRewriter: Rewriter & { mutationSafety: 'safe' } = {
453
+ name: 'audit-comment',
454
+ mutationSafety: 'safe',
455
+ rewrite: ({ sql, params }) => ({
456
+ sql: `${sql} -- audit`,
457
+ params,
458
+ }),
459
+ }
460
+ ```
461
+
462
+ Rewriters without that explicit marker still work for non-mutation specs.
463
+
464
+ ### DELETE guards and `rowCount`
465
+
466
+ Physical deletes default to an affected-row guard of `exactly 1`. To evaluate
467
+ that guard safely, the configured executor must expose `rowCount` via
468
+ `{ rows, rowCount }` results.
469
+
470
+ ```ts
471
+ const executor = async (sql: string, params: QueryParams) => {
472
+ const result = await client.query(sql, params as unknown[])
473
+ return {
474
+ rows: result.rows,
475
+ rowCount: result.rowCount,
476
+ }
477
+ }
478
+ ```
479
+
480
+ If the executor does not expose `rowCount`, delete specs fail by default. You
481
+ may opt out per spec only when you intentionally want no guard:
482
+
483
+ ```ts
484
+ mutation: {
485
+ kind: 'delete',
486
+ delete: {
487
+ affectedRowsGuard: { mode: 'none' },
488
+ },
489
+ }
490
+ ```
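The guard semantics can be pictured with a small stand-alone sketch. The `Guard` shape and `checkAffectedRows` helper are assumptions for illustration, not the library's actual types:

```typescript
// Assumed guard shape for illustration; only 'none' appears verbatim in the
// spec examples, and the exact library types may differ.
type Guard = { mode: 'exactly'; count: number } | { mode: 'none' };

function checkAffectedRows(guard: Guard, rowCount: number | undefined): void {
  if (guard.mode === 'none') return; // opted out: no check at all
  if (rowCount === undefined) {
    // Executor did not expose rowCount, so the guard cannot be evaluated.
    throw new Error('executor did not expose rowCount');
  }
  if (rowCount !== guard.count) {
    throw new Error(`expected ${guard.count} affected row(s), got ${rowCount}`);
  }
}

checkAffectedRows({ mode: 'exactly', count: 1 }, 1); // passes
// checkAffectedRows({ mode: 'exactly', count: 1 }, 0) would throw
```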

For fixture-backed tests, `@rawsql-ts/testkit-core` provides `createCatalogRewriter()` so you can plug `SelectFixtureRewriter` into the catalog pipeline without writing an adapter:

```ts
import { createCatalogExecutor } from '@rawsql-ts/sql-contract'
import { createCatalogRewriter } from '@rawsql-ts/testkit-core'

const catalog = createCatalogExecutor({
  loader,
  executor,
  rewriters: [createCatalogRewriter({
    fixtures: [{
      tableName: 'users',
      rows: [{ id: 1, name: 'Alice' }],
      schema: {
        columns: {
          id: 'INTEGER',
          name: 'TEXT',
        },
      },
    }],
  })],
})
```

### Observability

When an `observabilitySink` is provided, the executor emits lifecycle events:

```ts
const catalog = createCatalogExecutor({
  loader,
  executor,
  observabilitySink: {
    emit(event) {
      // event.kind: 'query_start' | 'query_end' | 'query_error'
      // event.specId, event.sqlFile, event.execId, event.durationMs, ...
      console.log(`[${event.kind}] ${event.specId}`)
    },
  },
})
```

### Error handling

Catalog errors form a hierarchy rooted at `CatalogError`:

| Error class | Cause |
|-------------|-------|
| `SQLLoaderError` | SQL file could not be loaded |
| `RewriterError` | A rewriter threw during transformation |
| `BinderError` | A binder failed or returned invalid output |
| `ContractViolationError` | Parameter shape mismatch, unexpected row count, etc. |
| `CatalogExecutionError` | The underlying query executor failed |

All error classes expose `specId` and `cause` properties for structured logging.
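Because the classes form a hierarchy, callers can branch with `instanceof` and fall back to the root class. The class bodies below are stand-ins for illustration only; in real code, import the actual classes from `@rawsql-ts/sql-contract` rather than redeclaring them:

```typescript
// Stand-in class definitions mirroring the documented hierarchy; import the
// real classes from '@rawsql-ts/sql-contract' in application code.
class CatalogError extends Error {
  constructor(message: string, readonly specId: string, readonly cause?: unknown) {
    super(message);
  }
}
class ContractViolationError extends CatalogError {}
class CatalogExecutionError extends CatalogError {}

// Dispatch pattern: most specific subclass first, root class as the fallback.
function describeError(e: unknown): string {
  if (e instanceof ContractViolationError) return `contract violated in ${e.specId}`;
  if (e instanceof CatalogExecutionError) return `executor failed in ${e.specId}`;
  if (e instanceof CatalogError) return `catalog error in ${e.specId}`;
  return 'unknown error';
}

describeError(new ContractViolationError('row count mismatch', 'users.findById'));
// → 'contract violated in users.findById'
```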

## Execution Scope and Transaction Boundaries

sql-contract is responsible for **query definition and result mapping**. Transaction control (`BEGIN` / `COMMIT` / `ROLLBACK`) and connection lifecycle management are outside its scope — they remain the caller's execution concern.

### What sql-contract manages

- SQL loading and transformation (rewriters, binders)
- Parameter binding and placeholder conversion
- Result row mapping and validation
- Observability events for query execution

### What the caller manages

- Connection pooling and lifecycle (open, close, release)
- Transaction boundaries (`BEGIN` / `COMMIT` / `ROLLBACK`)
- Error recovery and retry policies
- Connection scoping (ensuring related queries share one connection)

### QueryExecutor and connection scoping

The `QueryExecutor` type assumes it runs within a **single connection scope**. When using a connection pool, each call to the executor may be dispatched to a different connection, which makes multi-statement transactions unsafe.

To execute transactional workflows, the caller should obtain a dedicated connection and build the executor from it:

```ts
// Acquire a dedicated connection from the pool
const client = await pool.connect();
try {
  await client.query('BEGIN');

  // Build an executor scoped to this connection
  const executor = async (sql: string, params: readonly unknown[]) => {
    const result = await client.query(sql, params as unknown[]);
    return result.rows;
  };

  const reader = createReader(executor);
  const user = await reader.one('SELECT ...', [userId]);
  // ... additional queries on the same connection ...

  await client.query('COMMIT');
} catch (e) {
  try {
    await client.query('ROLLBACK');
  } catch {
    // ignore secondary rollback failure
  }
  throw e;
} finally {
  client.release();
}
```

This separation keeps sql-contract focused on the mapping layer while leaving execution policy decisions — such as isolation level, retry logic, and savepoints — in the application layer where they belong.

## DBMS Differences

sql-contract does not normalize SQL dialects or placeholder styles. Use the syntax required by your driver:

```ts
// PostgreSQL ($1, $2, ...)
await executor('SELECT * FROM customers WHERE id = $1', [42])

// Named parameters (:id)
await executor('SELECT * FROM customers WHERE id = :id', { id: 42 })
```
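If a driver only accepts positional placeholders, the caller is responsible for any conversion. The `toPositional` helper below is a hypothetical caller-side sketch, not part of sql-contract, and deliberately ignores edge cases such as PostgreSQL casts (`::int`) and colons inside string literals:

```typescript
// Hypothetical caller-side helper: convert `:name` placeholders to
// PostgreSQL-style `$1, $2, ...` positional placeholders. Not part of
// sql-contract; the negative lookbehind skips the second colon of `::` casts.
function toPositional(
  sql: string,
  params: Record<string, unknown>,
): { sql: string; values: unknown[] } {
  const values: unknown[] = [];
  const converted = sql.replace(/(?<!:):(\w+)/g, (_m, name: string) => {
    values.push(params[name]);
    return `$${values.length}`;
  });
  return { sql: converted, values };
}

const { sql, values } = toPositional(
  'SELECT * FROM customers WHERE id = :id AND region = :region',
  { id: 42, region: 'EU' },
);
// sql    → 'SELECT * FROM customers WHERE id = $1 AND region = $2'
// values → [42, 'EU']
```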

## License

MIT