@rawsql-ts/sql-contract 0.2.0 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,582 +1,617 @@
1
- 
2
- # @rawsql-ts/sql-contract
3
-
4
- ## Overview
5
-
6
- `@rawsql-ts/sql-contract` is a lightweight library for reducing repetitive, mechanical code around handwritten SQL.
7
- It focuses on mapping query results into application-typed models and simplifying common CUD (Create / Update / Delete) statement construction.
8
-
9
- ## Getting Started
10
-
11
- ### Installation
12
-
13
- ```sh
14
- pnpm add @rawsql-ts/sql-contract
15
- ```
16
-
17
- ### Minimal CRUD sample
18
-
19
- ```ts
20
- import { Pool } from 'pg'
21
- import {
22
- createReader,
23
- createWriter,
24
- type QueryParams
25
- } from '@rawsql-ts/sql-contract'
26
-
27
- async function main() {
28
- const pool = new Pool({
29
- connectionString: process.env.DATABASE_URL
30
- })
31
-
32
- const executor = async (sql: string, params: QueryParams) => {
33
- const result = await pool.query(sql, params as unknown[])
34
- return result.rows
35
- }
36
-
37
- const reader = createReader(executor)
38
- const writer = createWriter(executor)
39
-
40
- const rows = await reader.list(
41
- `SELECT customer_id, customer_name FROM customers WHERE status = $1`,
42
- ['active']
43
- )
44
-
45
- await writer.insert('customers', { name: 'alice', status: 'active' })
46
- await writer.update(
47
- 'customers',
48
- { status: 'inactive' },
49
- { id: 42 }
50
- )
51
- await writer.remove('customers', { id: 17 })
52
-
53
- await pool.end()
54
-
55
- void rows
56
- }
57
- ```
58
-
59
- ## Features
60
-
61
- * Zero runtime dependencies
62
- * Zero DBMS and driver dependencies
63
- * Works with any SQL executor returning rows
64
- * Minimal mapping helpers for SELECT results
65
- * Simple builders for INSERT / UPDATE / DELETE
66
-
67
- ## Philosophy
68
-
69
- ### SQL as Domain Specification
70
-
71
- SQL is the primary language for expressing domain requirements—precise, unambiguous, and directly verifiable against the database.
72
-
73
- Mapping returned rows to typed domain models is mechanical and repetitive.
74
- This library removes that burden, letting domain logic remain in SQL.
75
-
76
- Write operations (INSERT/UPDATE/DELETE) are usually repetitive and predictable.
77
- So the library offers simple builders for common cases only.
78
-
79
- ## Concepts
80
-
81
- ### Executor: DBMS / Driver Integration
82
-
83
- To integrate with any database or driver, define a single executor:
84
-
85
- ```ts
86
- const executor = async (sql: string, params: QueryParams) => {
87
- const result = await pool.query(sql, params as unknown[])
88
- return result.rows
89
- }
90
- ```
91
-
92
- Connection pooling, retries, transactions, and error handling belong inside the executor.
93
-
94
- ### Reader: Query Execution and Result Mapping
95
-
96
- Reader executes SELECT queries and maps raw database rows into application-friendly structures.
97
-
98
- It supports multiple levels of mapping depending on your needs,
99
- from quick projections to fully validated domain models.
100
-
101
- ### Catalog executor: QuerySpec contract and observability
102
-
103
- Catalog executor executes queries through a `QuerySpec` instead of running raw
104
- SQL directly.
105
-
106
- A `QuerySpec` defines a stable query contract that couples an SQL file,
107
- parameter shape, and output rules. By executing queries through this contract,
108
- the executor can enforce parameter expectations, apply output mapping or
109
- validation, and provide a stable identity for debugging and observability.
110
-
111
- `createCatalogExecutor` is wired with a SQL loader and a concrete query executor,
112
- and can optionally apply rewriters, binders, SQL caching,
113
- `allowNamedParamsWithoutBinder`, extensions, or an `observabilitySink`.
114
-
115
- When observability is enabled, execution emits lifecycle events
116
- (`query_start`, `query_end`, `query_error`) including `spec.id`, `sqlFile`,
117
- and execution identifiers, allowing queries to be traced and debugged by
118
- specification rather than raw SQL strings.
119
-
120
- ---
121
-
122
- #### Basic result APIs: one and list
123
-
124
- Reader provides two primary methods:
125
-
126
- - one : returns a single row
127
- - list : returns multiple rows
128
-
129
- ```ts
130
- const customer = await reader.one(
131
- 'select customer_id, customer_name from customers where customer_id = $1',
132
- [1]
133
- )
134
-
135
- const customers = await reader.list(
136
- 'select customer_id, customer_name from customers'
137
- )
138
- ```
139
-
140
- These methods focus only on execution.
141
- Mapping behavior depends on how the reader is configured.
142
-
143
- ---
144
-
145
- #### Duck typing (minimal, disposable)
146
-
147
- For quick or localized queries, you can rely on structural typing without defining models.
148
-
149
- ```ts
150
- const rows = await reader.list<{ customerId: number }>(
151
- 'select customer_id from customers limit 1',
152
- )
153
- ```
154
-
155
- You can also omit the DTO type entirely:
156
-
157
- ```ts
158
- const rows = await reader.list(
159
- 'select customer_id from customers limit 1',
160
- )
161
- ```
162
-
163
- This approach is:
164
-
165
- * Fast to write
166
- * Suitable for one-off queries
167
- * No runtime validation
168
-
169
- ---
170
-
171
- #### Custom mapping
172
-
173
- Reader allows custom projection logic when structural mapping is insufficient.
174
-
175
- ```ts
176
- const rows = await reader.map(
177
- 'select price, quantity from order_items',
178
- (row) => ({
179
- total: row.price * row.quantity,
180
- })
181
- )
182
- ```
183
-
184
- This is useful for:
185
-
186
- * Derived values
187
- * Format conversion
188
- * Aggregated projections
189
-
190
- ---
191
-
192
-
193
- #### Column naming conventions (default behavior)
194
-
195
- Reader applies a default naming rule that converts snake_case database columns
196
- into camelCase JavaScript properties.
197
-
198
- This allows most queries to work without explicit mapping.
199
-
200
- Example:
201
-
202
- ```sql
203
- select customer_id, created_at from customers
204
- ```
205
-
206
- becomes:
207
-
208
- ```ts
209
- {
210
- customerId: number
211
- createdAt: Date
212
- }
213
- ```
214
-
215
- No mapping definition is required for this transformation.
216
-
217
- ---
218
-
219
- #### Mapper presets
220
-
221
- You can configure how column names are transformed.
222
-
223
- Example:
224
-
225
- ```ts
226
- const reader = createReader(executor, mapperPresets.safe())
227
- ```
228
-
229
- Common presets include:
230
-
231
- * appLike : snake_case → camelCase conversion
232
- * safe : no column name transformation
233
-
234
- Choose a preset based on how closely your domain models align with database naming.
235
-
236
- ---
237
-
238
- #### When explicit mapping is useful
239
-
240
- Even with automatic naming conversion, explicit mappings become valuable when:
241
-
242
- * Domain terms differ from column names
243
- * Multiple columns combine into one field
244
- * Queries are reused across modules
245
- * Schema stability should be decoupled from application models
246
-
247
- ---
248
-
249
- #### Single model mapping (reusable definition)
250
-
251
- Mapping models provide explicit control over how rows map to domain objects.
252
-
253
- Example:
254
-
255
- ```ts
256
- const orderSummaryMapping = rowMapping({
257
- name: 'OrderSummary',
258
- key: 'orderId',
259
- columnMap: {
260
- orderId: 'order_id',
261
- customerLabel: 'customer_display_name',
262
- totalAmount: 'grand_total',
263
- },
264
- })
265
-
266
- const summaries = await reader
267
- .bind(orderSummaryMapping)
268
- .list(`
269
- select
270
- order_id,
271
- customer_display_name,
272
- grand_total
273
- from order_view
274
- `)
275
- ```
276
-
277
- In this example:
278
-
279
- * Domain terminology differs from database naming
280
- * Mapping clarifies intent
281
- * The definition can be reused across queries
282
-
283
- Benefits:
284
-
285
- * Reusable mapping definitions
286
- * Explicit domain language alignment
287
- * Reduced accidental schema coupling
288
- * Better long-term maintainability
289
-
290
- ---
291
-
292
- #### Composite keys
293
-
294
- `rowMapping` keys can now be more than a single column without breaking existing consumers:
295
-
296
- * **Array-based composite keys** — pass the raw column names in SQL order (`key: ['col_a', 'col_b']`). These column values are extracted directly from the executor’s row, so `columnMap` / `prefix` rules are not involved.
297
- * **Derived keys** — supply a function, e.g. `key: (row) => [row.col_a, row.col_b]`, that returns strings/numbers/bigints or an array thereof. The library type-tags each component so `'1'` and `1` are never conflated, and order of the array is preserved.
298
-
299
- Both forms feed through a single normalization path, so you can combine mixed types safely and receive clear errors if a value is `null`, `undefined`, or missing. Creating a synthetic column inside SQL (e.g. `SELECT CONCAT(col_a, '|', col_b) AS composite_key`) still works as a workaround, but we recommend using the multi-column helpers because they keep the schema explicit and avoid delimiter collisions.
300
-
301
- `name` continues to serve as the user-visible label for error messages, independent of whether the key is scalar, composite, or derived.
302
-
303
- #### Multi-model mapping
304
-
305
- Reader supports mapping joined results into multiple domain models by composing `rowMapping` definitions.
306
-
307
- ```ts
308
- const customerMapping = rowMapping({
309
- name: 'Customer',
310
- key: 'customerId',
311
- columnMap: {
312
- customerId: 'customer_customer_id',
313
- customerName: 'customer_customer_name',
314
- },
315
- })
316
-
317
- const orderMapping = rowMapping<{
318
- orderId: number
319
- orderTotal: number
320
- customerId: number
321
- customer: { customerId: number; customerName: string }
322
- }>({
323
- name: 'Order',
324
- key: 'orderId',
325
- columnMap: {
326
- orderId: 'order_order_id',
327
- orderTotal: 'order_total',
328
- customerId: 'order_customer_id',
329
- },
330
- }).belongsTo('customer', customerMapping, 'customerId')
331
-
332
- const result = await reader
333
- .bind(orderMapping)
334
- .list(`
335
- select
336
- c.id as customer_customer_id,
337
- c.name as customer_customer_name,
338
- o.id as order_order_id,
339
- o.total as order_total,
340
- o.customer_id as order_customer_id
341
- from customers c
342
- join orders o on o.customer_id = c.customer_id
343
- `)
344
- ```
345
-
346
- `belongsTo` attaches each customer row to its owning order, so the mapped result exposes a nested `customer` object without duplicating join logic.
347
-
348
- This enables structured projections from complex joins.
349
-
350
- ---
351
-
352
- #### Validator-backed mapping (recommended)
353
-
354
- Runtime validation ensures data correctness.
355
- Zod integration is the recommended approach.
356
-
357
- ```ts
358
- import { z } from 'zod'
359
-
360
- const CustomerSchema = z.object({
361
- customerId: z.number(),
362
- customerName: z.string(),
363
- })
364
-
365
- const row = await reader
366
- .validator(CustomerSchema)
367
- .one(
368
- 'select customer_id, customer_name from customers where customer_id = $1',
369
- [1]
370
- )
371
- ```
372
-
373
- Benefits include:
374
-
375
- * Runtime safety
376
- * Explicit schema documentation
377
- * Refactoring confidence
378
- * AI-friendly feedback loops
379
-
380
- ---
381
-
382
- ### Scalar Queries
383
-
384
- Use scalar helpers when a query returns a single value.
385
-
386
- #### Basic scalar usage
387
-
388
- ```ts
389
- const count = await reader.scalar(
390
- 'select count(*) from customers where status = $1',
391
- ['active']
392
- )
393
- ```
394
-
395
- This is useful for:
396
-
397
- * COUNT queries
398
- * Aggregate values
399
- * Existence checks
400
-
401
- ---
402
-
403
- #### Typed scalar mapping
404
-
405
- You can explicitly define the expected scalar type.
406
-
407
- ```ts
408
- const count = await reader.scalar<number>(
409
- 'select count(*) from customers where status = $1',
410
- ['active']
411
- )
412
- ```
413
-
414
- This improves readability and helps prevent accidental misuse.
415
-
416
- ---
417
-
418
- #### Scalar validation with Zod (recommended)
419
-
420
- For stricter guarantees, scalar values can be validated at runtime.
421
-
422
- ```ts
423
- import { z } from 'zod'
424
-
425
- const count = await reader.scalar(
426
- z.number(),
427
- 'select count(*) from customers where status = $1',
428
- ['active']
429
- )
430
- ```
431
-
432
- This approach ensures:
433
-
434
- * Runtime type safety
435
- * Clear intent
436
- * Safer refactoring
437
-
438
- ---
439
-
440
- ### Zod integration and coercion helpers
441
-
442
- Reader integrates smoothly with Zod for runtime validation and safe type conversion.
443
-
444
- Zod validation helps ensure that query results match your domain expectations,
445
- especially when working with numeric or date values returned as strings by drivers.
446
-
447
- ---
448
-
449
- #### Row validation with Zod
450
-
451
- ```ts
452
- import { z } from 'zod'
453
-
454
- const CustomerSchema = z.object({
455
- customerId: z.number(),
456
- customerName: z.string(),
457
- })
458
-
459
- const row = await reader
460
- .validator(CustomerSchema)
461
- .one(
462
- 'select customer_id, customer_name from customers where customer_id = $1',
463
- [1]
464
- )
465
- ```
466
-
467
- This provides:
468
-
469
- * Runtime safety
470
- * Clear schema documentation
471
- * Refactoring confidence
472
-
473
- ---
474
-
475
- #### Scalar validation with Zod
476
-
477
- ```ts
478
- const count = await reader.scalar(
479
- z.number(),
480
- 'select count(*) from customers'
481
- )
482
- ```
483
-
484
- ---
485
-
486
- #### Coercion helpers for numeric values
487
-
488
- Some database drivers return numeric values as strings.
489
- The package provides helpers to safely convert them.
490
-
491
- Example:
492
-
493
- ```ts
494
- import { z } from 'zod'
495
- import {
496
- zNumberFromString,
497
- zBigIntFromString
498
- } from '@rawsql-ts/sql-contract-zod'
499
-
500
- const schema = z.object({
501
- totalAmount: zNumberFromString,
502
- largeCounter: zBigIntFromString,
503
- })
504
- ```
505
-
506
- These helpers:
507
-
508
- * Convert strings into numeric types
509
- * Fail fast when values are invalid
510
- * Reduce manual parsing logic
511
-
512
- ---
513
-
514
- #### When to use coercion
515
-
516
- Use coercion helpers when:
517
-
518
- * Working with NUMERIC / DECIMAL columns
519
- * Drivers return BIGINT as strings
520
- * You want runtime guarantees
521
-
522
- ---
523
-
524
- ### Writer: Simple CUD Helpers
525
-
526
- Writer helpers build simple INSERT/UPDATE/DELETE SQL:
527
-
528
- ```ts
529
- await writer.insert('projects', {
530
- name: 'Apollo',
531
- owner_id: 7
532
- })
533
-
534
- await writer.update(
535
- 'projects',
536
- { name: 'Apollo' },
537
- { project_id: 1 }
538
- )
539
-
540
- await writer.remove('projects', { project_id: 1 })
541
- ```
542
-
543
- You can also build statements without execution:
544
-
545
- ```ts
546
- const stmt = writer.build.insert(
547
- 'projects',
548
- { name: 'Apollo' }
549
- )
550
- ```
551
-
552
- ## Reducers (Coercion Helpers)
553
-
554
- The package exposes pure coercion helpers:
555
-
556
- * `decimalStringToNumberUnsafe`
557
- * `bigintStringToBigInt`
558
-
559
- They convert raw DB output strings into numbers or bigints when needed.
560
-
561
- ## DBMS Differences
562
-
563
- sql-contract does not normalize SQL dialects or placeholder styles.
564
- You must use the placeholder syntax required by your driver.
565
-
566
- Examples:
567
-
568
- ```ts
569
- await executor(
570
- 'select * from customers where id = $1',
571
- [42],
572
- )
573
- await executor(
574
- 'select * from customers where id = :id',
575
- { id: 42 },
576
- )
577
- ```
578
-
579
- ## Influences / Related Ideas
580
-
581
- sql-contract is inspired by minimal mapping libraries such as Dapper,
582
- stopping short of a full ORM and instead providing a predictable, transparent layer.
1
+ # @rawsql-ts/sql-contract
2
+
3
+ ![npm version](https://img.shields.io/npm/v/@rawsql-ts/sql-contract)
4
+ ![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)
5
+
6
+ A lightweight library for mapping SQL query results into typed application models. It removes the repetitive, mechanical code around handwritten SQL while keeping SQL as the authoritative source for domain logic.
7
+
8
+ Inspired by minimal mapping libraries such as Dapper — stopping short of a full ORM and instead providing a predictable, transparent layer.
9
+
10
+ ## Features
11
+
12
+ - Zero runtime dependencies
13
+ - Works with any SQL executor returning rows (driver/DBMS agnostic)
14
+ - Automatic snake_case to camelCase column name conversion
15
+ - Single and multi-model mapping with `rowMapping`
16
+ - Validator-agnostic schema integration (Zod, ArkType, or any `parse`/`assert` compatible library)
17
+ - Scalar query helpers for COUNT / aggregate values
18
+
19
+ ## Installation
20
+
21
+ ```bash
22
+ npm install @rawsql-ts/sql-contract
23
+ ```
24
+
25
+ ## Quick Start
26
+
27
+ ```ts
28
+ import { createReader, type QueryParams } from '@rawsql-ts/sql-contract'
29
+
30
+ // assumes an existing `pg` Pool instance named `pool`
+ const executor = async (sql: string, params: QueryParams) => {
31
+ const result = await pool.query(sql, params as unknown[])
32
+ return result.rows
33
+ }
34
+
35
+ const reader = createReader(executor)
36
+
37
+ const customers = await reader.list(
38
+ 'SELECT customer_id, customer_name FROM customers WHERE status = $1',
39
+ ['active']
40
+ )
41
+ // [{ customerId: 1, customerName: 'Alice' }, ...]
42
+ ```
43
+
44
+ ## Reader API
45
+
46
+ ### Basic queries: `one` and `list`
47
+
48
+ ```ts
49
+ const customer = await reader.one(
50
+ 'SELECT customer_id, customer_name FROM customers WHERE customer_id = $1',
51
+ [1]
52
+ )
53
+
54
+ const customers = await reader.list(
55
+ 'SELECT customer_id, customer_name FROM customers'
56
+ )
57
+ ```
58
+
59
+ ### Column naming conventions
60
+
61
+ By default, Reader converts snake_case columns to camelCase properties automatically:
62
+
63
+ ```sql
64
+ SELECT customer_id, created_at FROM customers
65
+ ```
66
+
67
+ becomes:
68
+
69
+ ```ts
70
+ { customerId: number, createdAt: Date }
71
+ ```
72
+
73
+ Presets are available to change this behavior:
74
+
75
+ ```ts
76
+ import { mapperPresets } from '@rawsql-ts/sql-contract'
77
+
78
+ const reader = createReader(executor, mapperPresets.safe()) // no transformation
79
+ ```
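The default `appLike` behavior can be pictured as a plain snake_case to camelCase rename of row keys. The sketch below is illustrative only, assuming simple lowercase column names; it is not the library's internal implementation:

```ts
// Illustrative only: mimics the default snake_case → camelCase rename rule
// applied to row keys; not the library's actual code.
function snakeToCamel(column: string): string {
  return column.replace(/_([a-z0-9])/g, (_match, ch: string) => ch.toUpperCase())
}

function renameRowKeys(row: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(row).map(([key, value]) => [snakeToCamel(key), value]),
  )
}

// renameRowKeys({ customer_id: 1, created_at: '2026-01-01' })
// → { customerId: 1, createdAt: '2026-01-01' }
```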
80
+
81
+ ### Custom mapping
82
+
83
+ For derived values or format conversion:
84
+
85
+ ```ts
86
+ const rows = await reader.map(
87
+ 'SELECT price, quantity FROM order_items',
88
+ (row) => ({ total: row.price * row.quantity })
89
+ )
90
+ ```
91
+
92
+ ### Row mapping with `rowMapping`
93
+
94
+ For reusable, explicit mappings where domain terms differ from column names:
95
+
96
+ ```ts
97
+ import { rowMapping } from '@rawsql-ts/sql-contract'
98
+
99
+ const orderSummaryMapping = rowMapping({
100
+ name: 'OrderSummary',
101
+ key: 'orderId',
102
+ columnMap: {
103
+ orderId: 'order_id',
104
+ customerLabel: 'customer_display_name',
105
+ totalAmount: 'grand_total',
106
+ },
107
+ })
108
+
109
+ const summaries = await reader
110
+ .bind(orderSummaryMapping)
111
+ .list('SELECT order_id, customer_display_name, grand_total FROM order_view')
112
+ ```
113
+
114
+ Keys can be composite (`key: ['col_a', 'col_b']`) or derived (`key: (row) => [row.col_a, row.col_b]`).
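Each key component is type-tagged so that `'1'` and `1` are never conflated. The standalone sketch below illustrates that normalization idea; it is not the library's actual implementation:

```ts
// Illustrative sketch: tag each key component with its runtime type so
// '1' (string) and 1 (number) never collide. Null/undefined components
// produce clear errors, mirroring the documented behavior.
function normalizeKey(parts: ReadonlyArray<unknown>): string {
  return parts
    .map((part) => {
      if (part === null || part === undefined) {
        throw new Error('key component must not be null or undefined')
      }
      const kind = typeof part
      if (kind !== 'string' && kind !== 'number' && kind !== 'bigint') {
        throw new Error(`unsupported key component type: ${kind}`)
      }
      return `${kind}:${String(part)}`
    })
    .join('|')
}

// normalizeKey([1, 'a'])   → 'number:1|string:a'
// normalizeKey(['1', 'a']) → 'string:1|string:a' (distinct from the line above)
```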
115
+
116
+ ### Multi-model mapping
117
+
118
+ Map joined results into nested domain models with `belongsTo`:
119
+
120
+ ```ts
121
+ const customerMapping = rowMapping({
122
+ name: 'Customer',
123
+ key: 'customerId',
124
+ columnMap: {
125
+ customerId: 'customer_customer_id',
126
+ customerName: 'customer_customer_name',
127
+ },
128
+ })
129
+
130
+ const orderMapping = rowMapping({
131
+ name: 'Order',
132
+ key: 'orderId',
133
+ columnMap: {
134
+ orderId: 'order_order_id',
135
+ orderTotal: 'order_total',
136
+ customerId: 'order_customer_id',
137
+ },
138
+ }).belongsTo('customer', customerMapping, 'customerId')
139
+
140
+ const orders = await reader.bind(orderMapping).list(`
141
+ SELECT
142
+ c.id AS customer_customer_id,
143
+ c.name AS customer_customer_name,
144
+ o.id AS order_order_id,
145
+ o.total AS order_total,
146
+ o.customer_id AS order_customer_id
147
+ FROM customers c
148
+ JOIN orders o ON o.customer_id = c.customer_id
149
+ `)
150
+ // [{ orderId: 1, orderTotal: 500, customerId: 3, customer: { customerId: 3, customerName: 'Alice' } }]
151
+ ```
152
+
153
+ ### Scalar queries
154
+
155
+ ```ts
156
+ const count = await reader.scalar(
157
+ 'SELECT count(*) FROM customers WHERE status = $1',
158
+ ['active']
159
+ )
160
+ ```
161
+
162
+ ## Validation Integration
163
+
164
+ The `.validator()` method accepts any object implementing `parse(value)` or `assert(value)`. This means **Zod**, **ArkType**, and other validation libraries work out of the box — no additional adapter packages required.
165
+
166
+ ### With Zod
167
+
168
+ ```ts
169
+ import { z } from 'zod'
170
+
171
+ const CustomerSchema = z.object({
172
+ customerId: z.number(),
173
+ customerName: z.string(),
174
+ })
175
+
176
+ const customers = await reader
177
+ .validator(CustomerSchema)
178
+ .list('SELECT customer_id, customer_name FROM customers')
179
+ ```
180
+
181
+ ### With ArkType
182
+
183
+ ```ts
184
+ import { type } from 'arktype'
185
+
186
+ const CustomerSchema = type({
187
+ customerId: 'number',
188
+ customerName: 'string',
189
+ })
190
+
191
+ const customers = await reader
192
+ .validator((value) => {
193
+ CustomerSchema.assert(value)
194
+ return value
195
+ })
196
+ .list('SELECT customer_id, customer_name FROM customers')
197
+ ```
198
+
199
+ Validators run after row mapping, so schema errors surface before application code relies on the result shape. Validators are also chainable: `.validator(v1).validator(v2)`.
200
+
201
+ ## Catalog Executor
202
+
203
+ For larger projects, `createCatalogExecutor` executes queries through a `QuerySpec` contract instead of raw SQL strings. A `QuerySpec` couples an SQL file, parameter shape, and output rules into a stable identity for debugging and observability.
204
+
205
+ ### QuerySpec
206
+
207
+ A `QuerySpec` is the core contract type:
208
+
209
+ ```ts
210
+ import type { QuerySpec } from '@rawsql-ts/sql-contract'
211
+ import { rowMapping } from '@rawsql-ts/sql-contract'
212
+
213
+ const activeCustomersSpec: QuerySpec<[], { customerId: number; customerName: string }> = {
214
+ id: 'customers.active',
215
+ sqlFile: 'customers/active.sql',
216
+ params: { shape: 'positional', example: [] },
217
+ metadata: {
218
+ material: ['active_customer_ids'],
219
+ scalarMaterial: ['active_customer_count'],
220
+ },
221
+ output: {
222
+ mapping: rowMapping({
223
+ name: 'Customer',
224
+ key: 'customerId',
225
+ columnMap: {
226
+ customerId: 'customer_id',
227
+ customerName: 'customer_name',
228
+ },
229
+ }),
230
+ example: { customerId: 1, customerName: 'Alice' },
231
+ },
232
+ tags: { domain: 'crm' },
233
+ }
234
+ ```
235
+
236
+ | Field | Description |
237
+ |-------|-------------|
238
+ | `id` | Unique identifier for debugging and observability |
239
+ | `sqlFile` | Path passed to the SQL loader |
240
+ | `params.shape` | `'positional'` (array) or `'named'` (record) |
241
+ | `params.example` | Example parameters (for documentation and testing) |
242
+ | `output.mapping` | Optional `rowMapping` applied before validation |
243
+ | `output.validate` | Optional function to validate/transform each row |
244
+ | `output.example` | Example output (for documentation and testing) |
245
+ | `notes` | Optional human-readable description |
246
+ | `tags` | Optional key-value metadata forwarded to observability events |
247
+ | `metadata.material` | Optional CTE names to materialize as temp tables at runtime |
248
+ | `metadata.scalarMaterial` | Optional CTE names to treat as scalar materializations at runtime |
249
+
250
+ ### QuerySpec metadata
251
+
252
+ Use `metadata` when runtime adapters need execution hints without changing the SQL asset itself:
253
+
254
+ ```ts
255
+ const monthlyReportSpec: QuerySpec<{ tenantId: string }, { value: number }> = {
256
+ id: 'reports.monthly',
257
+ sqlFile: 'reports/monthly.sql',
258
+ params: {
259
+ shape: 'named',
260
+ example: { tenantId: 'tenant-1' },
261
+ },
262
+ metadata: {
263
+ material: ['report_base'],
264
+ scalarMaterial: ['report_total'],
265
+ },
266
+ output: {
267
+ example: { value: 1 },
268
+ },
269
+ }
270
+ ```
271
+
272
+ The metadata remains available on `spec.metadata` inside rewriters and is also forwarded to runtime extensions through `ExecInput.metadata`.
273
+
274
+ ### Creating a CatalogExecutor
275
+
276
+ ```ts
277
+ import { createCatalogExecutor } from '@rawsql-ts/sql-contract'
278
+ import { readFile } from 'node:fs/promises'
279
+ import { resolve } from 'node:path'
280
+
281
+ function createFileSqlLoader(baseDir: string) {
282
+ return {
283
+ load(sqlFile: string) {
284
+ return readFile(resolve(baseDir, sqlFile), 'utf-8')
285
+ },
286
+ }
287
+ }
288
+
289
+ const catalog = createCatalogExecutor({
290
+ loader: createFileSqlLoader('sql'),
291
+ executor,
292
+ })
293
+ ```
294
+
295
+ The executor exposes three methods matching the Reader API:
296
+
297
+ ```ts
298
+ const customers = await catalog.list(activeCustomersSpec, [])
299
+ const customer = await catalog.one(customerByIdSpec, [42])
300
+ const count = await catalog.scalar(customerCountSpec, [])
301
+ ```
302
+
303
+ For larger applications, keeping the file-backed loader in one helper avoids
304
+ repeating the same `readFile(resolve(...))` wiring in every repository module.
305
+
306
+ ### Common catalog output patterns
307
+
308
+ The output pipeline for `list()` / `one()` is:
309
+
310
+ 1. raw SQL row
311
+ 2. `output.mapping` (optional)
312
+ 3. `output.validate` (optional)
313
+
314
+ That means validators should read the mapped DTO shape, not the raw SQL row.
315
+
316
+ For scalar queries, the pipeline is:
317
+
318
+ 1. raw SQL row
319
+ 2. single-column scalar extraction
320
+ 3. `output.validate` (optional)
321
+
322
+ That makes `count(*)` and `RETURNING id` contracts read more clearly when they
323
+ validate the extracted scalar directly instead of inventing a one-field DTO.
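The two pipelines can be modeled as a simple function composition. This conceptual sketch uses hypothetical names and is not the catalog executor's real code:

```ts
// Conceptual model of the documented list()/one() output pipeline:
// raw row → output.mapping (optional) → output.validate (optional).
function applyOutputPipeline<Raw, Mapped, Out>(
  rows: Raw[],
  mapping?: (row: Raw) => Mapped,
  validate?: (row: Mapped) => Out,
): Out[] {
  return rows.map((raw) => {
    // without a mapping, the raw row passes through unchanged
    const mapped = mapping ? mapping(raw) : (raw as unknown as Mapped)
    // validators therefore see the mapped DTO shape, not the raw row
    return validate ? validate(mapped) : (mapped as unknown as Out)
  })
}
```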
324
+
325
+ See [docs/recipes/sql-contract.md](../../docs/recipes/sql-contract.md) for
326
+ copy-paste-ready catalog examples covering:
327
+
328
+ - reusable file-backed loaders
329
+ - mapped DTO validation
330
+ - scalar contract patterns
331
+
332
+ ### Named parameters
333
+
334
+ Specs declaring `shape: 'named'` require either a `Binder` or an explicit opt-in:
335
+
336
+ ```ts
337
+ const catalog = createCatalogExecutor({
338
+ loader,
339
+ executor,
340
+ // Option A: provide a binder that converts named → positional
341
+ binders: [{
342
+ name: 'pg-named',
343
+ bind: ({ sql, params }) => {
344
+ // convert :name placeholders to $1, $2, ...
345
+ return { sql: boundSql, params: positionalArray }
346
+ },
347
+ }],
348
+ // Option B: pass named params directly to the executor
349
+ // allowNamedParamsWithoutBinder: true,
350
+ })
351
+ ```
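A minimal binder body for Option A might look like the following standalone sketch. It is an assumption-laden illustration: a production binder must also skip string literals, comments, and Postgres `::type` casts.

```ts
// Hypothetical binder body: converts :name placeholders into $1, $2, ...
// and collects parameter values in placeholder order. Repeated names
// reuse the same positional index.
function bindNamed(sql: string, params: Record<string, unknown>) {
  const order: string[] = []
  const boundSql = sql.replace(/(?<!:):([a-zA-Z_][a-zA-Z0-9_]*)/g, (_match, name: string) => {
    let index = order.indexOf(name)
    if (index === -1) {
      index = order.push(name) - 1
    }
    return `$${index + 1}`
  })
  return { sql: boundSql, params: order.map((name) => params[name]) }
}

// bindNamed('select * from customers where id = :id', { id: 42 })
// → { sql: 'select * from customers where id = $1', params: [42] }
```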
352
+
353
+ ### Mutation specs
354
+
355
+ Catalog specs can declare mutation metadata for `INSERT`, `UPDATE`, and `DELETE` assets:
356
+
357
+ ```ts
358
+ const createUserSpec: QuerySpec<
359
+ { id: string; display_name?: string | null; created_at?: string },
360
+ never
361
+ > = {
362
+ id: 'user.create',
363
+ sqlFile: 'user/create.sql',
364
+ params: {
365
+ shape: 'named',
366
+ example: {
367
+ id: 'user-1',
368
+ display_name: 'Alice',
369
+ created_at: '2026-03-05T00:00:00.000Z',
370
+ },
371
+ },
372
+ mutation: {
373
+ kind: 'insert',
374
+ },
375
+ output: {
376
+ example: undefined as never,
377
+ },
378
+ }
379
+ ```
380
+
381
+ The `insert` behavior is covered by `packages/sql-contract/tests/catalog.create.test.ts`.
382
+
383
+ ```ts
384
+ const updateUserSpec: QuerySpec<
385
+ { id: string; display_name?: string | null; bio?: string | null },
386
+ never
387
+ > = {
388
+ id: 'user.update-profile',
389
+ sqlFile: 'user/update-profile.sql',
390
+ params: {
391
+ shape: 'named',
392
+ example: { id: 'user-1', display_name: 'Alice', bio: null },
393
+ },
394
+ mutation: {
395
+ kind: 'update',
396
+ },
397
+ output: {
398
+ example: undefined as never,
399
+ },
400
+ }
401
+ ```
402
+
403
+ Phase 1 intentionally keeps the safety rules narrow:
404
+
405
+ - `INSERT` subtracts only direct `VALUES (:named_param)` entries when the key is missing or `undefined`.
406
+ - `UPDATE` and `DELETE` require a `WHERE` clause by default.
407
+ - `UPDATE` subtracts only simple `SET column = :param` assignments when the key is missing or `undefined`.
408
+ - `null` is preserved, so `SET column = :param` still executes and binds `NULL`.
409
+ - Mandatory parameter validation only inspects the `WHERE` clause because Phase 1 focuses on preventing accidental broad mutations first.
410
+
411
+ For example, the SQL asset below will drop `display_name = :display_name` when
412
+ `display_name` is omitted or `undefined`, but it keeps the fixed timestamp write:
413
+
414
+ ```sql
415
+ UPDATE public.user_account
416
+ SET display_name = :display_name,
417
+ bio = :bio,
418
+ updated_at = NOW()
419
+ WHERE id = :id
420
+ ```
421
+
422
+ Assignments with inline comments or more complex expressions stay untouched in
423
+ Phase 1. They remain visible in SQL and any unresolved placeholders still flow
424
+ through the configured binder/executor path.
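The assignment-subtraction rule above can be sketched as a small filter over `SET` entries. This is an illustrative model of the documented Phase 1 behavior, not the library's actual SQL parser:

```ts
// Drop simple `column = :param` assignments whose parameter is missing or
// undefined; `null` is preserved (it still binds NULL); anything more
// complex stays untouched.
function pruneSetAssignments(
  assignments: string[],
  params: Record<string, unknown>,
): string[] {
  return assignments.filter((assignment) => {
    const match = assignment.match(/^\s*\w+\s*=\s*:(\w+)\s*$/)
    if (!match) return true // e.g. `updated_at = NOW()` is kept as-is
    const name = match[1]
    return name in params && params[name] !== undefined
  })
}

// pruneSetAssignments(
//   ['display_name = :display_name', 'bio = :bio', 'updated_at = NOW()'],
//   { bio: null },
// )
// → ['bio = :bio', 'updated_at = NOW()']
```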
425
+
426
+ ### Rewriters
427
+
428
+ Rewriters apply semantic-preserving SQL transformations before execution:
429
+
430
+ ```ts
431
+ const catalog = createCatalogExecutor({
432
+ loader,
433
+ executor,
434
+ rewriters: [{
435
+ name: 'add-limit',
436
+ rewrite: ({ sql, params }) => ({
437
+ sql: `${sql} LIMIT 1000`,
438
+ params,
439
+ }),
440
+ }],
441
+ })
442
+ ```
443
+
444
+ The execution pipeline order is: **SQL load → rewriters → binders → executor**.
445
+
446
+ Mutation specs apply one extra safety rule in Phase 1: every configured
447
+ rewriter must explicitly declare `mutationSafety: 'safe'`. This keeps mutation
448
+ preprocessing stable by rejecting rewriters that might alter `SET` or `WHERE`
449
+ structure.
450
+
451
+ ```ts
452
+ const auditCommentRewriter: Rewriter & { mutationSafety: 'safe' } = {
453
+ name: 'audit-comment',
454
+ mutationSafety: 'safe',
455
+ rewrite: ({ sql, params }) => ({
456
+ sql: `${sql} -- audit`,
457
+ params,
458
+ }),
459
+ }
460
+ ```
461
+
462
+ Rewriters without that explicit marker still work for non-mutation specs.
463
+
464
+ ### DELETE guards and `rowCount`
465
+
466
+ Physical deletes default to an affected-row guard of `exactly 1`. To evaluate
467
+ that guard safely, the configured executor must expose `rowCount` via
468
+ `{ rows, rowCount }` results.
469
+
470
+ ```ts
471
+ const executor = async (sql: string, params: QueryParams) => {
472
+ const result = await client.query(sql, params as unknown[])
473
+ return {
474
+ rows: result.rows,
475
+ rowCount: result.rowCount,
476
+ }
477
+ }
478
+ ```
479
+
480
+ If the executor does not expose `rowCount`, delete specs fail by default. You
481
+ may opt out per spec only when you intentionally want no guard:
482
+
483
+ ```ts
484
+ mutation: {
485
+ kind: 'delete',
486
+ delete: {
487
+ affectedRowsGuard: { mode: 'none' },
488
+ },
489
+ }
490
+ ```
491

For fixture-backed tests, `@rawsql-ts/testkit-core` provides `createCatalogRewriter()` so you can plug `SelectFixtureRewriter` into the catalog pipeline without writing an adapter:

```ts
import { createCatalogExecutor } from '@rawsql-ts/sql-contract'
import { createCatalogRewriter } from '@rawsql-ts/testkit-core'

const catalog = createCatalogExecutor({
  loader,
  executor,
  rewriters: [createCatalogRewriter({
    fixtures: [{
      tableName: 'users',
      rows: [{ id: 1, name: 'Alice' }],
      schema: {
        columns: {
          id: 'INTEGER',
          name: 'TEXT',
        },
      },
    }],
  })],
})
```

### Observability

When an `observabilitySink` is provided, the executor emits lifecycle events:

```ts
const catalog = createCatalogExecutor({
  loader,
  executor,
  observabilitySink: {
    emit(event) {
      // event.kind: 'query_start' | 'query_end' | 'query_error'
      // event.specId, event.sqlFile, event.execId, event.durationMs, ...
      console.log(`[${event.kind}] ${event.specId}`)
    },
  },
})
```
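A sink is just an object with `emit`, so it can do more than log; for example, it can aggregate per-spec durations locally. A sketch using only the event fields listed above (`kind`, `specId`, `durationMs`), with a stand-in event type rather than the library's:

```ts
// Stand-in event shape: only the fields this sink reads.
type CatalogEvent = { kind: string; specId: string; durationMs?: number }

// Accumulate total duration per spec from 'query_end' events.
function createDurationSink() {
  const totals = new Map<string, number>()
  return {
    totals,
    emit(event: CatalogEvent) {
      if (event.kind === 'query_end' && event.durationMs !== undefined) {
        totals.set(event.specId, (totals.get(event.specId) ?? 0) + event.durationMs)
      }
    },
  }
}
```

Passing such a sink as `observabilitySink` gives a cheap per-spec latency profile without touching the executor.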

### Error handling

Catalog errors form a hierarchy rooted at `CatalogError`:

| Error class | Cause |
|-------------|-------|
| `SQLLoaderError` | SQL file could not be loaded |
| `RewriterError` | A rewriter threw during transformation |
| `BinderError` | A binder failed or returned invalid output |
| `ContractViolationError` | Parameter shape mismatch, unexpected row count, etc. |
| `CatalogExecutionError` | The underlying query executor failed |

All error classes expose `specId` and `cause` properties for structured logging.
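The hierarchy lets callers catch broadly with `instanceof CatalogError` or narrowly per class. The sketch below uses stand-in class definitions mirroring the table so it is self-contained; in real code these classes would come from `@rawsql-ts/sql-contract` instead.

```ts
// Stand-in definitions mirroring the table above (illustrative only).
class CatalogError extends Error {
  constructor(public specId: string, message: string) { super(message) }
}
class ContractViolationError extends CatalogError {}
class CatalogExecutionError extends CatalogError {}

// Narrow first, then fall back to the hierarchy root.
function describeCatalogError(err: unknown): string {
  if (err instanceof ContractViolationError) return `contract violation in ${err.specId}`
  if (err instanceof CatalogError) return `catalog error in ${err.specId}`
  return 'unknown error'
}
```

Ordering matters: the subclass check must come before the `CatalogError` check, or every error collapses into the broad branch.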

## Execution Scope and Transaction Boundaries

sql-contract is responsible for **query definition and result mapping**. Transaction control (`BEGIN` / `COMMIT` / `ROLLBACK`) and connection lifecycle management are outside its scope; they remain the caller's execution concern.

### What sql-contract manages

- SQL loading and transformation (rewriters, binders)
- Parameter binding and placeholder conversion
- Result row mapping and validation
- Observability events for query execution

### What the caller manages

- Connection pooling and lifecycle (open, close, release)
- Transaction boundaries (`BEGIN` / `COMMIT` / `ROLLBACK`)
- Error recovery and retry policies
- Connection scoping (ensuring related queries share one connection)

### QueryExecutor and connection scoping

The `QueryExecutor` type assumes it runs within a **single connection scope**. When using a connection pool, each call to the executor may be dispatched to a different connection, which makes multi-statement transactions unsafe.

To execute transactional workflows, the caller should obtain a dedicated connection and build the executor from it:

```ts
// Acquire a dedicated connection from the pool
const client = await pool.connect();
try {
  await client.query('BEGIN');

  // Build an executor scoped to this connection
  const executor = async (sql: string, params: readonly unknown[]) => {
    const result = await client.query(sql, params as unknown[]);
    return result.rows;
  };

  const reader = createReader(executor);
  const user = await reader.one('SELECT ...', [userId]);
  // ... additional queries on the same connection ...

  await client.query('COMMIT');
} catch (e) {
  try {
    await client.query('ROLLBACK');
  } catch {
    // ignore secondary rollback failure
  }
  throw e;
} finally {
  client.release();
}
```

This separation keeps sql-contract focused on the mapping layer while leaving execution policy decisions, such as isolation level, retry logic, and savepoints, in the application layer where they belong.

## DBMS Differences

sql-contract does not normalize SQL dialects or placeholder styles. Use the syntax required by your driver:

```ts
// PostgreSQL ($1, $2, ...)
await executor('SELECT * FROM customers WHERE id = $1', [42])

// Named parameters (:id)
await executor('SELECT * FROM customers WHERE id = :id', { id: 42 })
```
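If a driver accepts only positional placeholders, the caller can convert named parameters before handing SQL to the executor. A minimal sketch producing PostgreSQL-style `$n` placeholders; the regex is deliberately naive (it is not quote-aware and would also match `::type` casts), so treat it as an illustration, not a production converter.

```ts
// Convert ':name' placeholders into '$1'-style positional placeholders,
// collecting parameter values in order of first appearance.
function toPositional(
  sql: string,
  named: Record<string, unknown>,
): { sql: string; params: unknown[] } {
  const params: unknown[] = []
  const out = sql.replace(/:([a-zA-Z_][a-zA-Z0-9_]*)/g, (_, name: string) => {
    params.push(named[name])
    return `$${params.length}`
  })
  return { sql: out, params }
}

const converted = toPositional(
  'SELECT * FROM customers WHERE id = :id AND status = :status',
  { id: 42, status: 'active' },
)
// converted.sql → 'SELECT * FROM customers WHERE id = $1 AND status = $2'
```

A real implementation would also need to skip string literals, comments, and PostgreSQL `::` casts.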

## License

MIT