@cfast/db 0.2.0 → 0.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +15 -15
- package/dist/index.d.ts +434 -26
- package/dist/index.js +351 -46
- package/llms.txt +342 -12
- package/package.json +7 -5
package/llms.txt
CHANGED
@@ -10,8 +10,9 @@ Use `@cfast/db` whenever you need to read or write a D1 database in a cfast app.
 
 - **Operations are lazy.** Every `db.query/insert/update/delete` call returns an `Operation<TResult>` with `.permissions` (inspectable immediately) and `.run(params)` (executes with permission checks). Nothing touches D1 until you call `.run()`.
 - **Permission checks are structural.** `.run()` always checks the user's grants before executing SQL. Row-level WHERE clauses from grants are injected automatically.
-- **
--
+- **Cross-table grants run prerequisite lookups.** When a grant declares a `with` map (see `@cfast/permissions`), `@cfast/db` resolves those lookups against an unsafe-mode handle **before** running the main query, caches the results for the lifetime of the per-request `Db`, and threads them into the `where` clause as its third argument. A single lookup runs at most once per request, even across many queries.
+- **One Db per request.** `createDb()` captures the user at creation time. Never share a `Db` across requests. The per-request grant lookup cache defaults to a cache owned by the `Db` instance, so creating a fresh `Db` each request also gives every request a fresh cache. If you must reuse a single `Db` across logical requests (typically tests or long-lived workers), wrap each request in `runWithLookupCache()` to get a fresh ALS-scoped cache per logical request.
+- **`db.unsafe()` is the only escape hatch.** Returns a `Db` that skips all permission checks. Greppable via `git grep '.unsafe()'`. The unsafe sibling shares the per-request lookup cache, so lookups dispatched through it (the `LookupDb` passed to grant `with` functions) never duplicate work.
 
 ## API Reference
 
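The lazy-operation contract above can be sketched in a few lines. This is a minimal illustration of the shape, not `@cfast/db`'s actual implementation; `makeOperation` and the `"cards"` table name are hypothetical:

```typescript
// Sketch of the lazy Operation shape: `permissions` is computed eagerly so
// callers can inspect it, while no work happens until `.run()` is called.
type Operation<TResult> = {
  permissions: { action: string; table: string };
  run: () => Promise<TResult>;
};

function makeOperation<TResult>(
  action: string,
  table: string,
  execute: () => Promise<TResult>,
): Operation<TResult> {
  return {
    permissions: { action, table }, // inspectable immediately
    run: () => execute(), // nothing touches the database until here
  };
}

// Usage: building the operation performs no I/O.
let executed = false;
const op = makeOperation("read", "cards", async () => {
  executed = true;
  return [{ id: "c-1" }];
});
// op.permissions is available right away; `executed` is still false
// until op.run() is awaited.
```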
@@ -46,14 +47,45 @@ type Operation<TResult> = {
 ### Reads
 
 ```typescript
-db.query(table).findMany(options?): Operation<
-db.query(table).findFirst(options?): Operation<
-db.query(table).paginate(params, options?): Operation<CursorPage | OffsetPage
+db.query(table).findMany(options?): Operation<Row[]>
+db.query(table).findFirst(options?): Operation<Row | undefined>
+db.query(table).paginate(params, options?): Operation<CursorPage<Row> | OffsetPage<Row>>
 ```
 
+`Row` is inferred from the table via `InferRow<typeof table>` -- callers get
+IntelliSense on `(row) => row.title` without any cast.
+
 FindManyOptions: `{ columns?, where?, orderBy?, limit?, offset?, with?, cache? }`
 FindFirstOptions: same without `limit`/`offset`.
 
+#### Relations escape hatch (#158)
+
+When you pass `with: { relation: true }`, Drizzle's relational query builder
+embeds the joined rows into the result. `@cfast/db` cannot statically infer
+that shape from the `Record<string, unknown>` schema we accept on `createDb`,
+so the default row type does not include the relation. Override the row type
+via the `findMany`/`findFirst`/`paginate` generic to claim the shape you know
+the query will produce, instead of `as any`-casting the result downstream:
+
+```typescript
+type RecipeWithIngredients = Recipe & { ingredients: Ingredient[] };
+
+const recipes = await db
+  .query(recipesTable)
+  .findMany<RecipeWithIngredients>({ with: { ingredients: true } })
+  .run();
+// recipes is RecipeWithIngredients[], no cast needed.
+
+const recipe = await db
+  .query(recipesTable)
+  .findFirst<RecipeWithIngredients>({
+    where: eq(recipesTable.id, id),
+    with: { ingredients: true },
+  })
+  .run();
+// recipe is RecipeWithIngredients | undefined.
+```
+
 ### Writes
 
 ```typescript
@@ -84,6 +116,20 @@ await db.delete(cards).where(eq(cards.id, id)).run();
 
 ### compose(operations, executor): Operation<TResult>
 
+> ⚠️ **Footgun warning (#182):** the `executor` callback receives one `run`
+> function per entry in `operations`, **in array order**. TypeScript cannot
+> enforce that the parameter names line up, so a callback like
+> `(runUpdate, runVersion) => ...` that gets reordered or shadowed by an
+> outer-scope variable will silently invoke the wrong sub-operation. Prefer
+> `composeSequentialCallback(db, async tx => ...)` for any workflow with data
+> dependencies — it binds by name via the normal db builders. Use `compose()`
+> only when you genuinely need to interleave non-db logic between sub-ops.
+>
+> As a runtime safety net, `compose()` throws at construction time when the
+> executor's parameter count doesn't match the operations array length (so
+> `compose([a, b], (runA) => ...)` fails loudly instead of silently dropping
+> `runB`). Use `(...runs) => ...` or `() => ...` to opt out of the check.
+
 ```typescript
 import { compose } from "@cfast/db";
 
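The construction-time arity check described in the warning above can be sketched with `Function.length`, which counts declared parameters and ignores rest parameters. This is an illustration of the mechanism under stated assumptions, not the package's actual code; `checkComposeArity` and `AnyOp` are hypothetical names:

```typescript
// Sketch of a construction-time arity check like the one the warning
// describes. `Function.length` ignores rest parameters, so `(...runs) => ...`
// and `() => ...` both report 0 and skip the check (the opt-out path).
type AnyOp = { run: (...args: unknown[]) => Promise<unknown> };

function checkComposeArity(
  ops: AnyOp[],
  executor: (...runs: unknown[]) => unknown,
): void {
  if (executor.length !== 0 && executor.length !== ops.length) {
    throw new Error(
      `compose(): executor declares ${executor.length} run parameter(s) ` +
        `but received ${ops.length} operations`,
    );
  }
}
```

With this check, `checkComposeArity([a, b], (runA) => ...)` fails loudly at construction time, while `(...runs) => ...` opts out.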
@@ -153,13 +199,12 @@ every operation was produced by `db.insert/update/delete`: the batch is sent to
 D1's native `batch()` API, which executes the statements as a single transaction
 and rolls everything back if any statement fails.
 
-Use
-
-line items together:
+Use `db.batch` for a **static, pre-known list** of writes — the shape of the
+list doesn't depend on anything read from the database:
 
 ```typescript
-//
-//
+// Static batch: the list of ops is known upfront. Permissions for every
+// sub-op are checked before the first statement runs.
 await db.batch([
   db.update(products).set({ stock: sql`stock - 1` }).where(eq(products.id, id1)),
   db.update(products).set({ stock: sql`stock - 1` }).where(eq(products.id, id2)),
@@ -167,8 +212,10 @@ await db.batch([
 ]).run();
 ```
 
-
-
+**Reads happen before writes begin**, so `db.batch` is NOT safe for
+read-modify-write: a read issued inside a batch cannot influence a later
+statement in the same batch. For "read row, decide, write" logic, use
+`db.transaction(async tx => ...)` below.
 
 Operations produced by `compose()`/`composeSequential()` executors don't carry
 the batchable hook; mixing them into `db.batch([...])` causes a fallback to
@@ -176,6 +223,131 @@ sequential execution (and loses the atomicity guarantee). For pure compose
 workflows that need atomicity, build the underlying ops with `db.insert/update/
 delete` directly and pass them straight to `db.batch([...])`.
 
+### db.transaction(async tx => ...): Promise<T>
+
+Runs a callback inside a transaction. Writes (`tx.insert`, `tx.update`,
+`tx.delete`) are **recorded** as the callback runs and flushed together as a
+single atomic `db.batch([...])` when the callback returns successfully. If the
+callback throws, the pending writes are discarded and the error is re-thrown —
+nothing reaches D1.
+
+Use `db.transaction` whenever the set of writes depends on logic inside the
+callback (read-modify-write, conditional inserts, state machines):
+
+```typescript
+import { and, eq, gte, sql } from "drizzle-orm";
+
+// Oversell-safe checkout: atomic + guarded against concurrent decrements.
+const order = await db.transaction(async (tx) => {
+  // Reads execute eagerly against the underlying db. They see whatever is
+  // committed right now — D1 does NOT provide snapshot isolation across
+  // async code, so another request can modify the row between read and
+  // write. The WHERE guard on the update is what keeps us concurrency-safe.
+  const product = await tx.query(products).findFirst({
+    where: eq(products.id, pid),
+  }).run();
+  if (!product || product.stock < qty) {
+    throw new Error("out of stock"); // rolls back, nothing is written
+  }
+
+  // Guarded decrement: relative SQL + WHERE stock >= qty. The guard is
+  // re-evaluated by D1 at commit time, so two concurrent transactions
+  // cannot BOTH decrement past zero. Either one succeeds and the other
+  // is a no-op (0 rows matched), or the application-level check above
+  // rejects the second one first.
+  await tx.update(products)
+    .set({ stock: sql`stock - ${qty}` })
+    .where(and(eq(products.id, pid), gte(products.stock, qty)))
+    .run();
+
+  // Generate the order id client-side so we don't need `.returning()`
+  // inside the transaction (see "Returning inside a transaction" below).
+  const orderId = crypto.randomUUID();
+  await tx.insert(orders).values({ id: orderId, productId: pid, qty }).run();
+
+  // Whatever the callback returns becomes the transaction's return value.
+  return { orderId, productId: pid, qty };
+});
+```
+
+**`tx` is a `Pick<Db, "query" | "insert" | "update" | "delete">`** plus a
+`transaction` method for nesting. It intentionally does NOT expose `unsafe`,
+`batch`, or `cache`:
+
+- `unsafe()` would bypass permission checks mid-tx. Use
+  `db.unsafe().transaction(...)` on the outer handle if you need system-level
+  writes in a transaction.
+- `batch()` inside a transaction is redundant — everything in the tx already
+  commits as one batch.
+- `cache` invalidation is driven by the commit at the end of the transaction,
+  not per-sub-op.
+
+**Permissions:** every recorded write has its permissions checked **upfront**
+at flush time via the underlying `db.batch()`. If any op lacks a grant, the
+entire transaction fails before any SQL is issued.
+
+**Nested transactions** are flattened into the parent's pending queue, so
+`tx.transaction(async inner => ...)` still commits in the same single batch
+as the outer. Any error thrown in the inner callback aborts the entire
+outer transaction. This means helpers that always use `tx.transaction(...)`
+to "own" a transaction scope compose naturally — they run as-is when called
+inside an outer tx and start their own when called standalone.
+
+#### Returning inside a transaction
+
+`tx.insert(...).returning().run()` inside a transaction callback resolves to
+`undefined`. The row cannot be surfaced to the caller because the batch is
+only flushed **after** the callback returns — awaiting a returning promise
+inside the callback would deadlock the flush. Work around this in one of
+three ways:
+
+1. **Generate ids client-side** (`crypto.randomUUID()`) and pass them into the
+   insert. The callback already knows the id without needing `.returning()`.
+2. **Query the row after the transaction commits** with a regular
+   `db.query(...)` outside the `db.transaction(...)` call.
+3. **Use `db.batch([...])` instead** if the read-after-write you want is
+   actually just "grab the inserted row for logging" — batch results include
+   every op's output in the same array.
+
+#### Reads inside a transaction
+
+Reads (`tx.query(...).findFirst().run()` etc.) execute eagerly against the
+underlying db at the moment they're called. They see whatever is committed
+right now and are **not** isolated from concurrent transactions. Do not rely
+on a read's value alone to gate a write — combine the read with a relative
+SQL update and a WHERE guard so the guard is re-evaluated at commit time.
+
+#### Pattern: cheap single-row oversell guard without a transaction
+
+For a single-row counter-decrement you don't even need a transaction. Use
+relative SQL with a WHERE guard and inspect the affected-rows count — this
+is ~one round trip instead of two and needs no pending-queue bookkeeping:
+
+```typescript
+const result = await db.unsafe().d1.prepare(
+  "UPDATE products SET stock = stock - ? WHERE id = ? AND stock >= ?",
+).bind(qty, pid, qty).run();
+
+if (result.meta.changes === 0) {
+  throw new Error("out of stock");
+}
+```
+
+Use `db.transaction` when you need **multiple** writes to commit atomically,
+or when you need read-modify-write across more than one row.
+
+### batch vs. transaction at a glance
+
+| Use case | `db.batch([...])` | `db.transaction(async tx => ...)` |
+|---|---|---|
+| Static list of writes | Yes (preferred) | Works but heavier |
+| Shape of writes depends on logic | No — list must be static | Yes |
+| Read a row, decide, then write | **No** — not concurrency-safe | Yes (combine with WHERE guard) |
+| Conditional inserts / state machines | No | Yes |
+| Returning the inserted row | Yes (batch result array) | Resolves to `undefined` inside the callback — generate ids client-side |
+| Atomic (all-or-nothing) | Yes | Yes |
+| Concurrency safety | Guard writes with WHERE clauses | Guard writes with WHERE clauses (reads are NOT isolated) |
+
 ### db.unsafe(): Db
 
 Returns a new Db that skips permission checks. Use for cron jobs, migrations, system tasks.
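The WHERE-guard reasoning from the transaction section above can be simulated without D1 at all. The following is a toy in-memory stand-in for the guarded UPDATE, written to show why `stock >= qty` in the WHERE clause prevents two concurrent decrements from both going past zero; nothing here is `@cfast/db` API and all names are illustrative:

```typescript
// A single in-memory "row" standing in for the products table in D1.
const row = { id: "p-1", stock: 1 };

// Mirrors `UPDATE products SET stock = stock - ? WHERE id = ? AND stock >= ?`:
// returns the number of rows changed, like D1's `meta.changes`. The guard is
// evaluated against the row's state at update time, so a second caller that
// arrives after the stock hit zero simply matches no rows.
function guardedDecrement(qty: number): number {
  if (row.stock >= qty) {
    row.stock -= qty;
    return 1; // one row matched and was updated
  }
  return 0; // guard failed: no-op instead of overselling
}

// Two "concurrent" checkouts racing for the last unit: only one succeeds,
// the other sees 0 changed rows and should surface "out of stock".
const first = guardedDecrement(1);
const second = guardedDecrement(1);
```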
@@ -213,6 +385,35 @@ cache: {
 Per-query: `db.query(t).findMany({ cache: false })` or `{ cache: { ttl: "5m", tags: ["posts"] } }`.
 Manual invalidation: `await db.cache.invalidate({ tags: ["posts"], tables: ["posts"] })`.
 
+### defineSeed
+
+`defineSeed({ entries })` is the canonical way to seed a local D1 database.
+Accepts an ordered list of `{ table, rows }` entries — put parent tables
+before child tables to respect FK ordering. `seed.run(db)` hops through
+`db.unsafe()` internally so seeds never need their own grants. Empty `rows`
+arrays are skipped, so placeholder entries are safe.
+
+```typescript
+import { createDb, defineSeed } from "@cfast/db";
+import * as schema from "~/db/schema";
+
+const seed = defineSeed({
+  entries: [
+    { table: schema.users, rows: [{ id: "u-1", email: "ada@example.com", name: "Ada" }] },
+    { table: schema.posts, rows: [{ id: "p-1", title: "Hello", authorId: "u-1" }] },
+  ],
+});
+
+const db = createDb({ d1, schema, grants: [], user: null });
+await seed.run(db);
+```
+
+Multi-row entries are flushed via `db.batch([...])` (atomic per table);
+single-row entries skip the batch path so Drizzle's non-empty tuple
+invariant holds. Use this from `scripts/seed.ts` and run it via
+`pnpm db:seed:local` (the scaffolded `create-cfast` package ships this
+script and command out of the box).
+
 ## Usage Examples
 
 ### Standard loader pattern
@@ -254,6 +455,83 @@ export async function action({ context, request }) {
 }
 ```
 
+### Cross-table grants ("show recipes from my friends")
+
+When a row-level filter needs data from another table, declare a `with` map on the grant. `@cfast/db` resolves every prerequisite once per request and threads the result into the `where` clause:
+
+```typescript
+// permissions.ts
+export const permissions = definePermissions<AppUser, typeof schema>()({
+  roles: ["user"] as const,
+  grants: (g) => ({
+    user: [
+      g("read", recipes, {
+        with: {
+          friendIds: async (user, db) => {
+            const rows = await db
+              .query(friendGrants)
+              .findMany({ where: eq(friendGrants.grantee, user.id) })
+              .run();
+            return (rows as { target: string }[]).map((r) => r.target);
+          },
+        },
+        where: (recipe, user, { friendIds }) =>
+          or(
+            eq(recipe.visibility, "public"),
+            eq(recipe.authorId, user.id),
+            inArray(recipe.authorId, friendIds as string[]),
+          ),
+      }),
+    ],
+  }),
+});
+
+// loader.ts -- one createDb() per request, one friend-grant fetch per request
+export async function loader({ context, request }) {
+  const { user, grants } = await auth.requireUser(request);
+  const db = createDb({ d1: context.env.DB, schema, grants, user });
+
+  // Both reads share the cached friendIds lookup -- it runs once total.
+  const myRecipes = await db.query(recipes).findMany({ limit: 10 }).run();
+  const popular = await db.query(recipes).paginate(params).run();
+
+  return { myRecipes, popular };
+}
+```
+
+### Reusing a Db across logical requests (tests, long-lived workers)
+
+`createDb()` always works, and you should keep the "one Db per request" default. If
+you need to reuse a single `Db` across multiple logical requests (commonly in
+tests that insert new grants mid-run and expect subsequent queries to see them),
+wrap each logical request in `runWithLookupCache()`:
+
+```typescript
+import { createDb, runWithLookupCache } from "@cfast/db";
+
+// Shared Db reused across the whole test file.
+const db = createDb({ d1, schema, grants, user, cache: false });
+
+test("new grants become visible to subsequent queries", async () => {
+  await runWithLookupCache(async () => {
+    await db.insert(friendGrants).values({ grantee: "u1", target: "u2" }).run();
+  });
+
+  await runWithLookupCache(async () => {
+    // This query runs in a *new* scope, so it starts with a fresh cache
+    // and re-runs the `with` lookup, seeing the grant row inserted above.
+    const visible = await db.query(recipes).findMany().run();
+    expect(visible).toContainEqual(expect.objectContaining({ authorId: "u2" }));
+  });
+});
+```
+
+`runWithLookupCache(fn, cache?)` establishes an `AsyncLocalStorage` scope;
+every `@cfast/db` operation inside `fn` (and any promises it starts) will
+prefer the scoped cache over the `Db`-owned fallback. Pass an explicit
+`LookupCache` instance as the second argument if you want to share a cache
+across multiple sibling `runWithLookupCache` calls.
+
 ## Integration
 
 - **@cfast/permissions** -- `grants` come from `resolveGrants(permissions, user.roles)`. Permission WHERE clauses are defined via `grant()` in your permissions config.
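The scoped-cache behaviour described above can be sketched with Node's `AsyncLocalStorage`: operations prefer the ALS-scoped cache when one is active and fall back to the instance-owned cache otherwise. This is a mechanism illustration under stated assumptions, not the package's source; `activeCache`, `resolveLookup`, and the `"friendIds"` key are hypothetical:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// A lookup cache is just key -> resolved prerequisite value.
type LookupCache = Map<string, unknown>;

const als = new AsyncLocalStorage<LookupCache>();
const dbOwnedCache: LookupCache = new Map(); // fallback; lives as long as the Db

// Operations consult the ALS scope first, then the Db-owned fallback.
function activeCache(): LookupCache {
  return als.getStore() ?? dbOwnedCache;
}

// Each call establishes a fresh (or explicitly shared) cache scope.
function runWithLookupCache<T>(
  fn: () => Promise<T>,
  cache: LookupCache = new Map(),
): Promise<T> {
  return als.run(cache, fn);
}

// A lookup runs at most once per active cache.
let lookups = 0;
async function resolveLookup(key: string): Promise<unknown> {
  const cache = activeCache();
  if (!cache.has(key)) {
    lookups += 1; // stand-in for the real prerequisite query
    cache.set(key, ["u-2"]);
  }
  return cache.get(key);
}
```

Two sibling `runWithLookupCache` scopes each get a fresh cache, so the lookup re-runs once per scope; repeated lookups inside one scope hit the cache.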
@@ -280,6 +558,58 @@ export const folders = sqliteTable("folders", {
 
 Use `AnyPgColumn` / `AnyMySqlColumn` for other dialects. The same pattern applies to any self-reference (comment threads, org charts, category trees).
 
+## Testing
+
+### `node:sqlite` with vitest 4 + Node 22+
+
+Vitest 4 auto-externalises modules from Node's `builtinModules` list, but
+`node:sqlite` is still flagged as experimental and is **not** included in that
+list. The first time a test imports it (directly, or transitively via a local
+in-memory D1 stand-in), Vite tries to bundle it and crashes with:
+
+```
+Cannot bundle Node.js built-in 'node:sqlite'
+```
+
+Explicitly externalise the module in your `vitest.config.ts`:
+
+```ts
+// vitest.config.ts
+import { defineConfig } from "vitest/config";
+
+export default defineConfig({
+  test: {
+    environment: "node",
+    // Prevent vitest from trying to bundle experimental Node builtins.
+    server: {
+      deps: {
+        external: [/^node:sqlite$/],
+      },
+    },
+  },
+});
+```
+
+If you need more than one experimental builtin (e.g. `node:test` helpers),
+add each as a separate regex in the `external` array.
+
+### Long cold-import times
+
+When a test suite uses `@cfast/admin`, vitest's first import of
+`~/admin.server` can take several seconds even with the server-only entry
+(`@cfast/admin/server`), because drizzle-orm's SQLite core is large. If the
+default 5s test timeout trips during initial discovery, bump it explicitly
+for admin-using suites:
+
+```ts
+// vitest.config.ts
+export default defineConfig({
+  test: {
+    testTimeout: 30000,
+  },
+});
+```
+
 ## Common Mistakes
 
 - **Forgetting `.run()`** -- Operations are lazy. `db.query(t).findMany()` returns an Operation, not results. You must call `.run()`.
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@cfast/db",
-  "version": "0.
+  "version": "0.4.0",
   "description": "Permission-aware Drizzle queries for Cloudflare D1",
   "keywords": [
     "cfast",
@@ -34,12 +34,13 @@
     "access": "public"
   },
   "peerDependencies": {
+    "@cfast/permissions": ">=0.3.0 <0.5.0",
     "drizzle-orm": ">=0.35"
   },
-  "dependencies": {
-    "@cfast/permissions": "0.2.0"
-  },
   "peerDependenciesMeta": {
+    "@cfast/permissions": {
+      "optional": false
+    },
     "@cloudflare/workers-types": {
       "optional": true
     }
@@ -49,7 +50,8 @@
     "drizzle-orm": "^0.45.1",
     "tsup": "^8",
     "typescript": "^5.7",
-    "vitest": "^4.1.0"
+    "vitest": "^4.1.0",
+    "@cfast/permissions": "0.4.0"
   },
   "scripts": {
     "build": "tsup src/index.ts --format esm --dts",