dyno-table 2.2.1 → 2.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (102)
  1. package/README.md +187 -1865
  2. package/dist/builders.cjs +55 -0
  3. package/dist/builders.d.cts +4 -0
  4. package/dist/builders.d.ts +4 -0
  5. package/dist/builders.js +2 -0
  6. package/dist/chunk-2EWNZOUK.js +618 -0
  7. package/dist/chunk-2WIBY7PZ.js +46 -0
  8. package/dist/chunk-7UJJ7JXM.cjs +63 -0
  9. package/dist/chunk-DTFJJASK.js +3200 -0
  10. package/dist/chunk-EODPMYPE.js +558 -0
  11. package/dist/chunk-KA3VPIPS.cjs +560 -0
  12. package/dist/chunk-NTA6GDPP.cjs +622 -0
  13. package/dist/chunk-PB7BBCZO.cjs +32 -0
  14. package/dist/chunk-QVRMYGC4.js +29 -0
  15. package/dist/chunk-XYL43FDX.cjs +3217 -0
  16. package/dist/conditions.cjs +67 -62
  17. package/dist/conditions.js +1 -48
  18. package/dist/entity.cjs +14 -625
  19. package/dist/entity.d.cts +2 -10
  20. package/dist/entity.d.ts +2 -10
  21. package/dist/entity.js +2 -626
  22. package/dist/index-2cbm07Bi.d.ts +2797 -0
  23. package/dist/index-DlN8G9hd.d.cts +2797 -0
  24. package/dist/index.cjs +111 -4460
  25. package/dist/index.d.cts +2 -10
  26. package/dist/index.d.ts +2 -10
  27. package/dist/index.js +5 -4442
  28. package/dist/standard-schema.cjs +0 -2
  29. package/dist/standard-schema.js +0 -2
  30. package/dist/table.cjs +7 -3796
  31. package/dist/table.d.cts +163 -12
  32. package/dist/table.d.ts +163 -12
  33. package/dist/table.js +3 -3799
  34. package/dist/types.cjs +0 -2
  35. package/dist/types.js +0 -2
  36. package/dist/utils.cjs +10 -30
  37. package/dist/utils.js +1 -31
  38. package/package.json +6 -66
  39. package/dist/batch-builder-BiQDIZ7p.d.cts +0 -398
  40. package/dist/batch-builder-CNsLS6sR.d.ts +0 -398
  41. package/dist/builder-types-BTVhQSHI.d.cts +0 -169
  42. package/dist/builder-types-CzuLR4Th.d.ts +0 -169
  43. package/dist/builders/condition-check-builder.cjs +0 -422
  44. package/dist/builders/condition-check-builder.cjs.map +0 -1
  45. package/dist/builders/condition-check-builder.d.cts +0 -153
  46. package/dist/builders/condition-check-builder.d.ts +0 -153
  47. package/dist/builders/condition-check-builder.js +0 -420
  48. package/dist/builders/condition-check-builder.js.map +0 -1
  49. package/dist/builders/delete-builder.cjs +0 -484
  50. package/dist/builders/delete-builder.cjs.map +0 -1
  51. package/dist/builders/delete-builder.d.cts +0 -211
  52. package/dist/builders/delete-builder.d.ts +0 -211
  53. package/dist/builders/delete-builder.js +0 -482
  54. package/dist/builders/delete-builder.js.map +0 -1
  55. package/dist/builders/paginator.cjs +0 -193
  56. package/dist/builders/paginator.cjs.map +0 -1
  57. package/dist/builders/paginator.d.cts +0 -155
  58. package/dist/builders/paginator.d.ts +0 -155
  59. package/dist/builders/paginator.js +0 -191
  60. package/dist/builders/paginator.js.map +0 -1
  61. package/dist/builders/put-builder.cjs +0 -554
  62. package/dist/builders/put-builder.cjs.map +0 -1
  63. package/dist/builders/put-builder.d.cts +0 -319
  64. package/dist/builders/put-builder.d.ts +0 -319
  65. package/dist/builders/put-builder.js +0 -552
  66. package/dist/builders/put-builder.js.map +0 -1
  67. package/dist/builders/query-builder.cjs +0 -757
  68. package/dist/builders/query-builder.cjs.map +0 -1
  69. package/dist/builders/query-builder.d.cts +0 -6
  70. package/dist/builders/query-builder.d.ts +0 -6
  71. package/dist/builders/query-builder.js +0 -755
  72. package/dist/builders/query-builder.js.map +0 -1
  73. package/dist/builders/transaction-builder.cjs +0 -906
  74. package/dist/builders/transaction-builder.cjs.map +0 -1
  75. package/dist/builders/transaction-builder.d.cts +0 -464
  76. package/dist/builders/transaction-builder.d.ts +0 -464
  77. package/dist/builders/transaction-builder.js +0 -904
  78. package/dist/builders/transaction-builder.js.map +0 -1
  79. package/dist/builders/update-builder.cjs +0 -668
  80. package/dist/builders/update-builder.cjs.map +0 -1
  81. package/dist/builders/update-builder.d.cts +0 -374
  82. package/dist/builders/update-builder.d.ts +0 -374
  83. package/dist/builders/update-builder.js +0 -666
  84. package/dist/builders/update-builder.js.map +0 -1
  85. package/dist/conditions.cjs.map +0 -1
  86. package/dist/conditions.js.map +0 -1
  87. package/dist/entity.cjs.map +0 -1
  88. package/dist/entity.js.map +0 -1
  89. package/dist/index.cjs.map +0 -1
  90. package/dist/index.js.map +0 -1
  91. package/dist/query-builder-D3URwK9k.d.cts +0 -477
  92. package/dist/query-builder-cfEkU0_w.d.ts +0 -477
  93. package/dist/standard-schema.cjs.map +0 -1
  94. package/dist/standard-schema.js.map +0 -1
  95. package/dist/table-ClST8nkR.d.cts +0 -276
  96. package/dist/table-vE3cGoDy.d.ts +0 -276
  97. package/dist/table.cjs.map +0 -1
  98. package/dist/table.js.map +0 -1
  99. package/dist/types.cjs.map +0 -1
  100. package/dist/types.js.map +0 -1
  101. package/dist/utils.cjs.map +0 -1
  102. package/dist/utils.js.map +0 -1
package/README.md CHANGED
@@ -1,1959 +1,281 @@
- <div align="center">
-
- # 🦖 dyno-table
-
- ### **Tame Your DynamoDB Data with Type-Safe Precision**
-
- [![npm version](https://img.shields.io/npm/v/dyno-table.svg?style=for-the-badge)](https://www.npmjs.com/package/dyno-table)
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg?style=for-the-badge)](https://opensource.org/licenses/MIT)
- [![TypeScript](https://img.shields.io/badge/TypeScript-4.0%2B-blue?style=for-the-badge&logo=typescript)](https://www.typescriptlang.org/)
- [![AWS DynamoDB](https://img.shields.io/badge/AWS-DynamoDB-orange?style=for-the-badge&logo=amazon-aws)](https://aws.amazon.com/dynamodb/)
-
- </div>
-
- <p align="center"><strong>A powerful, type-safe abstraction layer for DynamoDB single-table designs</strong><br/>
- <em>Write cleaner, safer, and more maintainable DynamoDB code</em></p>
-
- <img src="docs/images/geoff-the-dyno.png" width="400" height="250" alt="Geoff the Dyno" style="float: right; margin-left: 20px; margin-bottom: 20px;">
-
- ## 🔥 Why Developers Choose dyno-table
-
- ```ts
- // Type-safe dinosaur tracking operations made simple
- await dinoTable
- .update<Dinosaur>({
- pk: "SPECIES#trex",
- sk: "PROFILE#001",
- })
- .set("diet", "Carnivore") // Update dietary classification
- .add("sightings", 1) // Increment sighting counter
- .condition((op) => op.eq("status", "ACTIVE")) // Only if dinosaur is active
- .execute();
- ```
-
- ## 🌟 Why dyno-table Stands Out From The Pack
-
- <table>
- <tr>
- <td width="50%">
- <h3>🦕 Dinosaur-sized data made manageable</h3>
- <p>Clean abstraction layer that simplifies complex DynamoDB patterns and makes single-table design approachable</p>
- </td>
- <td width="50%">
- <h3>🛡️ Extinction-proof type safety</h3>
- <p>Full TypeScript support with strict type checking that catches errors at compile time, not runtime</p>
- </td>
- </tr>
- <tr>
- <td width="50%">
- <h3>⚡ Velociraptor-fast API</h3>
- <p>Intuitive chainable builder pattern for complex operations that feels natural and reduces boilerplate</p>
- </td>
- <td width="50%">
- <h3>🎯 Semantic data access patterns</h3>
- <p>Encourages meaningful, descriptive method names like <code>getUserByEmail()</code> instead of cryptic <code>gsi1</code> references</p>
- </td>
- </tr>
- <tr>
- <td width="50%">
- <h3>📈 Jurassic-scale performance</h3>
- <p>Automatic batch chunking and pagination handling that scales with your data without extra code</p>
- </td>
- <td width="50%">
- <h3>🧩 Flexible schema validation</h3>
- <p>Works with your favorite validation libraries including Zod, ArkType, and Valibot</p>
- </td>
- </tr>
- </table>
+ # dyno-table

- ## 📑 Table of Contents
+ > A powerful, type-safe DynamoDB library for TypeScript that simplifies working with DynamoDB through intuitive APIs and comprehensive type safety.

- - [📦 Installation](#-installation)
- - [🎯 DynamoDB Best Practices](#-dynamodb-best-practices)
- - [Semantic Data Access Patterns](#semantic-data-access-patterns)
- - [The Problem with Generic Index Names](#the-problem-with-generic-index-names)
- - [The Solution: Meaningful Method Names](#the-solution-meaningful-method-names)
- - [🚀 Quick Start](#-quick-start)
- - [1. Configure Your Jurassic Table](#1-configure-your-jurassic-table)
- - [2. Perform Type-Safe Dinosaur Operations](#2-perform-type-safe-dinosaur-operations)
- - [🏗️ Entity Pattern](#-entity-pattern-with-standard-schema-validators)
- - [Defining Entities](#defining-entities)
- - [Entity Features](#entity-features)
- - [1. Schema Validation](#1-schema-validation)
- - [2. CRUD Operations](#2-crud-operations)
- - [3. Custom Queries](#3-custom-queries)
- - [4. Defining GSI Access Patterns](#4-defining-gsi-access-patterns)
- - [5. Lifecycle Hooks](#5-lifecycle-hooks)
- - [Complete Entity Example](#complete-entity-example)
- - [🧩 Advanced Features](#-advanced-features)
- - [Transactional Operations](#transactional-operations)
- - [Batch Processing](#batch-processing)
- - [Pagination Made Simple](#pagination-made-simple)
- - [🛡️ Type-Safe Query Building](#️-type-safe-query-building)
- - [Comparison Operators](#comparison-operators)
- - [Logical Operators](#logical-operators)
- - [Query Operations](#query-operations)
- - [Put Operations](#put-operations)
- - [Update Operations](#update-operations)
- - [Condition Operators](#condition-operators)
- - [Multiple Operations](#multiple-operations)
- - [Force Rebuilding Read-Only Indexes](#force-rebuilding-read-only-indexes)
- - [🔄 Type Safety Features](#-type-safety-features)
- - [Nested Object Support](#nested-object-support)
- - [Type-Safe Conditions](#type-safe-conditions)
- - [🔄 Batch Operations](#-batch-operations)
- - [Entity-Based Batch Operations](#-entity-based-batch-operations)
- - [Table-Direct Batch Operations](#-table-direct-batch-operations)
- - [🔒 Transaction Operations](#-transaction-operations)
- - [Transaction Builder](#transaction-builder)
- - [Transaction Options](#transaction-options)
- - [🚨 Error Handling](#-error-handling)
- - [📚 API Reference](#-api-reference)
- - [Condition Operators](#condition-operators-1)
- - [Comparison Operators](#comparison-operators-1)
- - [Attribute Operators](#attribute-operators)
- - [Logical Operators](#logical-operators-1)
- - [Key Condition Operators](#key-condition-operators)
- - [🔮 Future Roadmap](#-future-roadmap)
- - [🤝 Contributing](#-contributing)
- - [🦔 Running Examples](#-running-examples)
+ [![npm version](https://img.shields.io/npm/v/dyno-table.svg)](https://www.npmjs.com/package/dyno-table)
+ [![npm downloads](https://img.shields.io/npm/dm/dyno-table.svg)](https://www.npmjs.com/package/dyno-table)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-4.0%2B-blue?logo=typescript)](https://www.typescriptlang.org/)

- ## 📦 Installation
+ ## Why dyno-table?

- <div align="center">
-
- ### Get Started in Seconds
-
- </div>
-
- ```bash
- # Install the core library
- npm install dyno-table
-
- # Install required AWS SDK v3 peer dependencies
- npm install @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb
- ```
+ - **Type Safety First** - Full TypeScript support with compile-time error checking
+ - **Schema Validation** - Built-in support for Zod, ArkType, Valibot, and other validation libraries
+ - **Semantic Queries** - Write meaningful method names like `getDinosaurBySpecies()` instead of cryptic `gsi1` references
+ - **Single-Table Design** - Optimized for modern DynamoDB best practices
+ - **Repository Pattern** - Clean, maintainable code architecture

- <details>
- <summary><b>📋 Other Package Managers</b></summary>
+ ## Quick Start

  ```bash
- # Using Yarn
- yarn add dyno-table @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb
-
- # Using PNPM
- pnpm add dyno-table @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb
+ npm install dyno-table @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb
  ```

- </details>
-
- ## 🎯 DynamoDB Best Practices
-
- <div align="center">
-
- ### **Design Your Data Access Patterns First, Name Them Meaningfully**
-
- </div>
-
- dyno-table follows DynamoDB best practices by encouraging developers to **define their data access patterns upfront** and assign them **meaningful, descriptive names**. This approach ensures that when writing business logic, developers call semantically clear methods instead of cryptic index references.
-
- ### Semantic Data Access Patterns
-
- The core principle is simple: **your code should read like business logic, not database implementation details**.
-
- <table>
- <tr>
- <th>❌ Cryptic Implementation</th>
- <th>✅ Semantic Business Logic</th>
- </tr>
- <tr>
- <td>
-
- ```ts
- // Hard to understand what this does - using raw AWS Document Client
- import { DynamoDBDocument } from "@aws-sdk/lib-dynamodb";
- import { QueryCommand } from "@aws-sdk/lib-dynamodb";
-
- const docClient = DynamoDBDocument.from(new DynamoDBClient({}));
-
- const users = await docClient.send(
- new QueryCommand({
- TableName: "MyTable",
- IndexName: "gsi1",
- KeyConditionExpression: "#pk = :pk",
- ExpressionAttributeNames: { "#pk": "pk" },
- ExpressionAttributeValues: { ":pk": "STATUS#active" },
- }),
- );
-
- const orders = await docClient.send(
- new QueryCommand({
- TableName: "MyTable",
- IndexName: "gsi2",
- KeyConditionExpression: "#pk = :pk",
- ExpressionAttributeNames: { "#pk": "pk" },
- ExpressionAttributeValues: { ":pk": "CUSTOMER#123" },
- }),
- );
-
- const products = await docClient.send(
- new QueryCommand({
- TableName: "MyTable",
- IndexName: "gsi3",
- KeyConditionExpression: "#pk = :pk",
- ExpressionAttributeNames: { "#pk": "pk" },
- ExpressionAttributeValues: { ":pk": "CATEGORY#electronics" },
- }),
- );
- ```
-
- </td>
- <td>
-
- ```ts
- // Clear business intent
- const activeUsers = await userRepo.query.getActiveUsers().execute();
-
- const customerOrders = await orderRepo.query
- .getOrdersByCustomer({ customerId: "123" })
- .execute();
-
- const electronics = await productRepo.query
- .getProductsByCategory({ category: "electronics" })
- .execute();
- ```
-
- </td>
- </tr>
- </table>
-
- ### The Problem with Generic Index Names
-
- When you use generic names like `gsi1`, `gsi2`, `gsi3`, you create several problems:
-
- - **Cognitive Load**: Developers must remember what each index does
- - **Poor Documentation**: Code doesn't self-document its purpose
- - **Error-Prone**: Easy to use the wrong index for a query
- - **Team Friction**: New team members struggle to understand data access patterns
- - **Maintenance Issues**: Refactoring becomes risky and unclear
-
- ### The Solution: Meaningful Method Names
-
- dyno-table encourages you to define your access patterns with descriptive names that reflect their business purpose:
-
- ```ts
- // Define your access patterns with meaningful names
- const UserEntity = defineEntity({
- name: "User",
- schema: userSchema,
- primaryKey,
- queries: {
- // ✅ Clear business purpose
- getActiveUsers: createQuery
- .input(z.object({}))
- .query(({ entity }) =>
- entity.query({ pk: "STATUS#active" }).useIndex("gsi1"),
- ),
-
- getUsersByEmail: createQuery
- .input(z.object({ email: z.string() }))
- .query(({ input, entity }) =>
- entity.query({ pk: `EMAIL#${input.email}` }).useIndex("gsi1"),
- ),
-
- getUsersByDepartment: createQuery
- .input(z.object({ department: z.string() }))
- .query(({ input, entity }) =>
- entity.query({ pk: `DEPT#${input.department}` }).useIndex("gsi2"),
- ),
- },
- });
-
- // Usage in business logic is now self-documenting
- const activeUsers = await userRepo.query.getActiveUsers().execute();
- const engineeringTeam = await userRepo.query
- .getUsersByDepartment({ department: "engineering" })
- .execute();
- const user = await userRepo.query
- .getUsersByEmail({ email: "john@company.com" })
- .execute();
- ```
-
- **This pattern promotes:**
-
- - ✅ **Better code readability and maintainability**
- - ✅ **Self-documenting API design**
- - ✅ **Easier onboarding for new team members**
- - ✅ **Reduced cognitive load when understanding data access patterns**
- - ✅ **Clear separation between business logic and database implementation**
-
- > **🏗️ Important Note**: Keep your actual DynamoDB table GSI names generic (`gsi1`, `gsi2`, etc.) for flexibility across different entities. The meaningful, descriptive names should live at the entity/repository level, not at the table level. This allows multiple entities to share the same GSIs while maintaining semantic clarity in your business logic.
-
- ## 🚀 Quick Start
-
- <div align="center">
-
- ### From Zero to DynamoDB Hero in Minutes
-
- </div>
-
- ### 1. Configure Your Jurassic Table
-
- > **Note:** dyno-table does not create or manage the actual DynamoDB table for you. We recommend using infrastructure as code tools like Terraform, OpenTofu, SST, or AWS CDK to provision and manage your DynamoDB tables.
-
- ```ts
- import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
- import { DynamoDBDocument } from "@aws-sdk/lib-dynamodb";
- import { Table } from "dyno-table/table";
-
- // Configure AWS SDK clients
- const client = new DynamoDBClient({ region: "us-west-2" });
- const docClient = DynamoDBDocument.from(client);
-
- // Initialise table
- const dinoTable = new Table({
- client: docClient,
- tableName: "JurassicPark",
- indexes: {
- partitionKey: "pk",
- sortKey: "sk",
- gsis: {
- gsi1: {
- partitionKey: "gsi1pk",
- sortKey: "gsi1sk",
- },
- },
- },
- });
- ```
-
- ### 2. Perform Type-Safe Operations directly on the table instance
-
- > **💡 Pro Tip**: While you can use the table directly, we recommend using the [Entity Pattern](#-entity-pattern-with-standard-schema-validators) with meaningful, descriptive method names like `getUserByEmail()` instead of generic index references. This follows DynamoDB best practices and makes your code self-documenting.
-
- <table>
- <tr>
- <td>
-
- #### 🦖 Creating a new dinosaur specimen
-
- ```ts
- // Add a new T-Rex with complete type safety
- const rex = await dinoTable
- .create<Dinosaur>({
- pk: "SPECIES#trex",
- sk: "PROFILE#trex",
- speciesId: "trex",
- name: "Tyrannosaurus Rex",
- diet: "carnivore",
- length: 12.3,
- discoveryYear: 1902,
- })
- .execute();
- ```
-
- </td>
- <td>
-
- #### 🔍 Query with powerful conditions
-
- ```ts
- // Find large carnivorous dinosaurs
- const largeDinos = await dinoTable
- .query<Dinosaur>({
- pk: "SPECIES#trex",
- sk: (op) => op.beginsWith("PROFILE#"),
- })
- .filter((op) => op.and(op.gte("length", 10), op.eq("diet", "carnivore")))
- .limit(10)
- .execute();
- ```
-
- </td>
- </tr>
- <tr>
- <td>
-
- #### 🔄 Update with type-safe operations
-
- ```ts
- // Update a dinosaur's classification
- await dinoTable
- .update<Dinosaur>({
- pk: "SPECIES#trex",
- sk: "PROFILE#trex",
- })
- .set("diet", "omnivore")
- .add("discoveryYear", 1)
- .remove("outdatedField")
- .condition((op) => op.attributeExists("discoverySite"))
- .execute();
- ```
-
- </td>
- <td>
-
- #### 🔒 Transactional operations
-
- ```ts
- // Perform multiple operations atomically
- await dinoTable.transaction((tx) => {
- // Move dinosaur to new enclosure
- dinoTable.delete({ pk: "ENCLOSURE#A", sk: "DINO#1" }).withTransaction(tx);
-
- dinoTable
- .create({ pk: "ENCLOSURE#B", sk: "DINO#1", status: "ACTIVE" })
- .withTransaction(tx);
- });
- ```
-
- </td>
- </tr>
- </table>
-
- <div align="center">
- <h3>💡 See the difference with dyno-table</h3>
- </div>
-
- <table>
- <tr>
- <th>❌ Without dyno-table</th>
- <th>✅ With dyno-table (Entity Pattern)</th>
- </tr>
- <tr>
- <td>
-
- ```ts
- // Verbose, error-prone, no type safety
- await docClient.send(
- new QueryCommand({
- TableName: "JurassicPark",
- IndexName: "gsi1", // What does gsi1 do?
- KeyConditionExpression: "#pk = :pk",
- FilterExpression: "contains(#features, :feathers)",
- ExpressionAttributeNames: {
- "#pk": "pk",
- "#features": "features",
- },
- ExpressionAttributeValues: {
- ":pk": "SPECIES#trex",
- ":feathers": "feathers",
- },
- }),
- );
- ```
-
- </td>
- <td>
-
- ```ts
- // Self-documenting, type-safe, semantic
- const featheredTRexes = await dinosaurRepo.query
- .getFeatheredDinosaursBySpecies({
- species: "trex",
- })
- .execute();
-
- // Or using table directly (still better than raw SDK)
- await dinoTable
- .query<Dinosaur>({
- pk: "SPECIES#trex",
- })
- .filter((op) => op.contains("features", "feathers"))
- .execute();
- ```
-
- </td>
- </tr>
- </table>
-
- **Key improvements:**
-
- - 🛡️ **Type Safety**: Compile-time error checking prevents runtime failures
- - 📖 **Self-Documenting**: Code clearly expresses business intent
- - 🧠 **Reduced Complexity**: No manual expression building or attribute mapping
-
- ## 🏗️ Entity Pattern with Standard Schema validators
-
- <div align="center">
-
- ### The Most Type-Safe Way to Model Your DynamoDB Data
-
- </div>
-
- <table>
- <tr>
- <td width="70%">
- <p>The entity pattern provides a structured, type-safe way to work with DynamoDB items. It combines schema validation, key management, and repository operations into a cohesive abstraction.</p>
-
- <p>✨ This library supports all <a href="https://github.com/standard-schema/standard-schema#what-schema-libraries-implement-the-spec">Standard Schema</a> validation libraries, including <strong>zod</strong>, <strong>arktype</strong>, and <strong>valibot</strong>, allowing you to choose your preferred validation tool!</p>
-
- <p>You can find a full example implementation here of <a href="https://github.com/Kysumi/dyno-table/blob/main/examples/entity-example/src/dinosaur-entity.ts">Entities</a></p>
- </td>
- <td width="30%">
-
- #### Entity Pattern Benefits
-
- - 🛡️ **Type-safe operations**
- - 🧪 **Schema validation**
- - 🔑 **Automatic key generation**
- - 📦 **Repository pattern**
- - 🔍 **Custom query builders**
-
- </td>
- </tr>
- </table>
-
- ### Defining Entities
-
- Entities are defined using the `defineEntity` function, which takes a configuration object that includes a schema, primary key definition, and optional indexes and queries.
-
  ```ts
  import { z } from "zod";
- import { defineEntity, createIndex } from "dyno-table/entity";
-
- // Define your schema using Zod
- const dinosaurSchema = z.object({
- id: z.string(),
- species: z.string(),
- name: z.string(),
- diet: z.enum(["carnivore", "herbivore", "omnivore"]),
- dangerLevel: z.number().int().min(1).max(10),
- height: z.number().positive(),
- weight: z.number().positive(),
- status: z.enum(["active", "inactive", "sick", "deceased"]),
- createdAt: z.string().optional(),
- updatedAt: z.string().optional(),
- });
-
- // Infer the type from the schema
- type Dinosaur = z.infer<typeof dinosaurSchema>;
-
- // Define key templates for Dinosaur entity
- const dinosaurPK = partitionKey`ENTITY#DINOSAUR#DIET#${"diet"}`;
- const dinosaurSK = sortKey`ID#${"id"}#SPECIES#${"species"}`;
+ import { defineEntity, createIndex, createQueries } from "dyno-table/entity";

- // Create a primary index for Dinosaur entity
- const primaryKey = createIndex()
- .input(z.object({ id: z.string(), diet: z.string(), species: z.string() }))
- .partitionKey(({ diet }) => dinosaurPK({ diet }))
- .sortKey(({ id, species }) => dinosaurSK({ species, id }));
+ const createQuery = createQueries<typeof dinosaurSchema._type>();

- // Define the entity
- const DinosaurEntity = defineEntity({
- name: "Dinosaur",
- schema: dinosaurSchema,
- primaryKey,
- });
-
- // Create a repository
- const dinosaurRepo = DinosaurEntity.createRepository(table);
- ```
-
- ### Entity Features
-
- #### 1. Schema Validation
-
- Entities use Zod schemas to validate data before operations:
-
- ```ts
- // Define a schema with Zod
+ // 🦕 Define your dinosaur schema
  const dinosaurSchema = z.object({
  id: z.string(),
  species: z.string(),
- name: z.string(),
- diet: z.enum(["carnivore", "herbivore", "omnivore"]),
- dangerLevel: z.number().int().min(1).max(10),
- height: z.number().positive(),
- weight: z.number().positive(),
- status: z.enum(["active", "inactive", "sick", "deceased"]),
- tags: z.array(z.string()).optional(),
+ period: z.enum(["triassic", "jurassic", "cretaceous"]),
+ diet: z.enum(["herbivore", "carnivore", "omnivore"]),
+ discoveryYear: z.number(),
+ weight: z.number(),
  });

- // Create an entity with the schema
+ // Create your entity with indexes for efficient queries
  const DinosaurEntity = defineEntity({
  name: "Dinosaur",
  schema: dinosaurSchema,
  primaryKey: createIndex()
- .input(z.object({ id: z.string(), diet: z.string(), species: z.string() }))
- .partitionKey(({ diet }) => dinosaurPK({ diet }))
- // could also be .withoutSortKey() if your table doesn't use sort keys
- .sortKey(({ id, species }) => dinosaurSK({ species, id })),
- });
- ```
-
- #### 2. CRUD Operations
-
- Entities provide type-safe CRUD operations:
-
- ```ts
- // Create a new dinosaur
- await dinosaurRepo
- .create({
- id: "dino-001",
- species: "Tyrannosaurus Rex",
- name: "Rexy",
- diet: "carnivore",
- dangerLevel: 10,
- height: 5.2,
- weight: 7000,
- status: "active",
- })
- .execute();
-
- // Get a dinosaur
- const dino = await dinosaurRepo
- .get({
- id: "dino-001",
- diet: "carnivore",
- species: "Tyrannosaurus Rex",
- })
- .execute();
-
- // Update a dinosaur
- await dinosaurRepo
- .update(
- { id: "dino-001", diet: "carnivore", species: "Tyrannosaurus Rex" },
- { weight: 7200, status: "sick" },
- )
- .execute();
-
- // Delete a dinosaur
- await dinosaurRepo
- .delete({
- id: "dino-001",
- diet: "carnivore",
- species: "Tyrannosaurus Rex",
- })
- .execute();
- ```
-
- #### 3. Custom Queries
-
- Define custom queries with **meaningful, descriptive names** that reflect their business purpose. This follows DynamoDB best practices by making your data access patterns self-documenting:
-
- ```ts
- import { createQueries } from "dyno-table/entity";
-
- const createQuery = createQueries<Dinosaur>();
-
- const DinosaurEntity = defineEntity({
- name: "Dinosaur",
- schema: dinosaurSchema,
- primaryKey,
+ .input(z.object({ id: z.string() }))
+ .partitionKey(({ id }) => `DINO#${id}`)
+ .sortKey(() => "PROFILE"),
+ indexes: {
+ byDiet: createIndex()
+ .input(dinosaurSchema)
+ .partitionKey(({ diet }) => `DIET#${diet}`)
+ .sortKey(({ species }) => species),
+ },
  queries: {
- // ✅ Semantic method names that describe business intent
  getDinosaursByDiet: createQuery
- .input(
- z.object({
- diet: z.enum(["carnivore", "herbivore", "omnivore"]),
- }),
- )
- .query(({ input, entity }) => {
- return entity.query({
- pk: dinosaurPK({ diet: input.diet }),
- });
- }),
-
- findDinosaursBySpecies: createQuery
- .input(
- z.object({
- species: z.string(),
- }),
- )
- .query(({ input, entity }) => {
- return entity.scan().filter((op) => op.eq("species", input.species));
- }),
-
- getActiveCarnivores: createQuery.input(z.object({})).query(({ entity }) => {
- return entity
- .query({
- pk: dinosaurPK({ diet: "carnivore" }),
- })
- .filter((op) => op.eq("status", "active"));
- }),
-
- getDangerousDinosaursInEnclosure: createQuery
- .input(
- z.object({
- enclosureId: z.string(),
- minDangerLevel: z.number().min(1).max(10),
- }),
- )
- .query(({ input, entity }) => {
- return entity
- .scan()
- .filter((op) =>
- op.and(
- op.contains("enclosureId", input.enclosureId),
- op.gte("dangerLevel", input.minDangerLevel),
- ),
- );
- }),
+ .input(z.object({ diet: z.enum(["herbivore", "carnivore", "omnivore"]) }))
+ .query(({ input, entity }) =>
+ entity.query({ pk: `DIET#${input.diet}` }).useIndex("byDiet")
+ ),
  },
  });

- // Usage in business logic is now self-documenting
- const carnivores = await dinosaurRepo.query
+ // Start using it!
+ const dinoRepo = DinosaurEntity.createRepository(table);
+
+ // Create a T-Rex
+ const tRex = await dinoRepo.create({
+ id: "t-rex-1",
+ species: "Tyrannosaurus Rex",
+ period: "cretaceous",
+ diet: "carnivore",
+ discoveryYear: 1905,
+ weight: 8000,
+ }).execute();
+
+ // Find all carnivores (efficient query using index!)
+ const carnivores = await dinoRepo.query
  .getDinosaursByDiet({ diet: "carnivore" })
  .execute();
- const trexes = await dinosaurRepo.query
- .findDinosaursBySpecies({ species: "Tyrannosaurus Rex" })
- .execute();
- const activeCarnivores = await dinosaurRepo.query
- .getActiveCarnivores()
- .execute();
- const dangerousDinos = await dinosaurRepo.query
- .getDangerousDinosaursInEnclosure({
- enclosureId: "PADDOCK-A",
- minDangerLevel: 8,
- })
- .execute();
- ```
-
- **Filter Chaining in Entity Queries**
-
- When defining custom queries, you can chain multiple filters together. These filters are automatically combined using AND logic. Additionally, filters applied in the query definition and filters applied at execution time are both respected:
-
- ```ts
- const DinosaurEntity = defineEntity({
- name: "Dinosaur",
- schema: dinosaurSchema,
- primaryKey,
- queries: {
- // Multiple filters are combined with AND logic
- getHealthyActiveDinosaurs: createQuery
- .input(z.object({}))
- .query(({ entity }) => {
- return entity
- .scan()
- .filter((op) => op.eq("status", "active"))
- .filter((op) => op.gt("health", 80))
- .filter((op) => op.attributeExists("lastFed"));
- }),
-
- // Complex filter chaining with conditional logic
- getDinosaursForVetCheck: createQuery
- .input(
- z.object({
- minHealth: z.number().optional(),
- requiredTag: z.string().optional(),
- }),
- )
- .query(({ input, entity }) => {
- const builder = entity.scan();
-
- // Always filter for dinosaurs that need vet attention
- builder.filter((op) => op.lt("health", 90));
-
- // Conditionally apply additional filters
- if (input.minHealth) {
- builder.filter((op) => op.gt("health", input.minHealth));
- }
-
- if (input.requiredTag) {
- builder.filter((op) => op.contains("tags", input.requiredTag));
- }
-
- return builder;
- }),
-
- // Pre-applied filters combined with execution-time filters
- getActiveDinosaursByDiet: createQuery
- .input(
- z.object({
- diet: z.enum(["carnivore", "herbivore", "omnivore"]),
- }),
- )
- .query(({ input, entity }) => {
- // Apply a filter in the query definition
- return entity
- .scan()
- .filter((op) => op.eq("diet", input.diet))
- .filter((op) => op.eq("status", "active"));
- }),
- },
- });
-
- // Usage with additional execution-time filters
- // Both the pre-applied filters (diet = "carnivore", status = "active")
- // and the execution-time filter (health > 50) will be applied
- const healthyActiveCarnivores = await dinosaurRepo.query
- .getActiveDinosaursByDiet({ diet: "carnivore" })
- .filter((op) => op.gt("health", 50))
784
- .execute();
785
80
  ```
 
- **Benefits of semantic naming:**
+ **That's it!** You now have a fully type-safe, validated database with semantic queries.
 
- - 🎯 **Clear Intent**: Method names immediately convey what data you're accessing
- - 📖 **Self-Documenting**: No need to look up what `gsi1` or `gsi2` does
- - 🧠 **Reduced Cognitive Load**: Developers can focus on business logic, not database details
- - 👥 **Team Collaboration**: New team members understand the codebase faster
- - 🔍 **Better IDE Support**: Autocomplete shows meaningful method names
+ ---
 
- #### 4. Defining GSI Access Patterns
+ ## Feature Overview
 
- Define GSI access patterns with **meaningful names** that reflect their business purpose. This is crucial for maintaining readable, self-documenting code:
+ ### Entity Pattern (Recommended)
+ *Schema-validated, semantic queries with business logic*
 
  ```ts
- import { createIndex } from "dyno-table/entity";
-
- // Define GSI templates with descriptive names that reflect their purpose
- const speciesPK = partitionKey`SPECIES#${"species"}`;
- const speciesSK = sortKey`DINOSAUR#${"id"}`;
-
- const enclosurePK = partitionKey`ENCLOSURE#${"enclosureId"}`;
- const enclosureSK = sortKey`DANGER#${"dangerLevel"}#ID#${"id"}`;
+ // Get specific dinosaur
+ const tRex = await dinoRepo.get({ id: "t-rex-1" });
 
- // Create indexes with meaningful names
- const speciesIndex = createIndex()
- .input(dinosaurSchema)
- .partitionKey(({ species }) => speciesPK({ species }))
- .sortKey(({ id }) => speciesSK({ id }));
-
- const enclosureIndex = createIndex()
- .input(dinosaurSchema)
- .partitionKey(({ enclosureId }) => enclosurePK({ enclosureId }))
- .sortKey(({ dangerLevel, id }) => enclosureSK({ dangerLevel, id }));
-
- const DinosaurEntity = defineEntity({
- name: "Dinosaur",
- schema: dinosaurSchema,
- primaryKey,
- indexes: {
- // ✅ Map to generic GSI names for table flexibility
- gsi1: speciesIndex,
- gsi2: enclosureIndex,
- },
- queries: {
- // ✅ Semantic method names that describe business intent
- getDinosaursBySpecies: createQuery
- .input(
- z.object({
- species: z.string(),
- }),
- )
- .query(({ input, entity }) => {
- return entity
- .query({
- pk: speciesPK({ species: input.species }),
- })
- .useIndex("gsi1"); // Generic GSI name for table flexibility
- }),
-
- getDinosaursByEnclosure: createQuery
- .input(
- z.object({
- enclosureId: z.string(),
- }),
- )
- .query(({ input, entity }) => {
- return entity
- .query({
- pk: enclosurePK({ enclosureId: input.enclosureId }),
- })
- .useIndex("gsi2");
- }),
-
- getMostDangerousInEnclosure: createQuery
- .input(
- z.object({
- enclosureId: z.string(),
- minDangerLevel: z.number().min(1).max(10),
- }),
- )
- .query(({ input, entity }) => {
- return entity
- .query({
- pk: enclosurePK({ enclosureId: input.enclosureId }),
- sk: (op) => op.gte(`DANGER#${input.minDangerLevel}`),
- })
- .useIndex("gsi2")
- .sortDescending(); // Get most dangerous first
- }),
- },
- });
-
- // Usage is now self-documenting
- const trexes = await dinosaurRepo.query
- .getDinosaursBySpecies({ species: "Tyrannosaurus Rex" })
- .execute();
- const paddockADinos = await dinosaurRepo.query
- .getDinosaursByEnclosure({ enclosureId: "PADDOCK-A" })
- .execute();
- const dangerousDinos = await dinosaurRepo.query
- .getMostDangerousInEnclosure({
- enclosureId: "PADDOCK-A",
- minDangerLevel: 8,
- })
+ // Semantic queries
+ const cretaceousDinos = await dinoRepo.query
+ .getDinosaursByPeriod({ period: "cretaceous" })
  .execute();
  ```
+ **[Complete Entity Guide →](docs/entities.md)**
 
- **Key principles for access pattern naming:**
-
- - 🎯 **Generic GSI Names**: Keep table-level GSI names generic (`gsi1`, `gsi2`) for flexibility across entities
- - 🔍 **Business-Focused**: Method names should reflect what the query achieves, not how it works
- - 📚 **Self-Documenting**: Anyone reading the code should understand the purpose immediately
- - 🏗️ **Entity-Level Semantics**: The meaningful names live at the entity/repository level, not the table level
-
- ### Complete Entity Example
-
- Here's a complete example of using Zod schemas directly:
-
- ```ts
- import { z } from "zod";
- import { defineEntity, createQueries, createIndex } from "dyno-table/entity";
- import { Table } from "dyno-table/table";
- import { sortKey } from "dyno-table/utils/sort-key-template";
- import { partitionKey } from "dyno-table/utils/partition-key-template";
-
- // Define the schema with Zod
- const dinosaurSchema = z.object({
- id: z.string(),
- species: z.string(),
- name: z.string(),
- enclosureId: z.string(),
- diet: z.enum(["carnivore", "herbivore", "omnivore"]),
- dangerLevel: z.number().int().min(1).max(10),
- height: z.number().positive(),
- weight: z.number().positive(),
- status: z.enum(["active", "inactive", "sick", "deceased"]),
- trackingChipId: z.string().optional(),
- lastFed: z.string().optional(),
- createdAt: z.string().optional(),
- updatedAt: z.string().optional(),
- });
-
- // Infer the type from the schema
- type Dinosaur = z.infer<typeof dinosaurSchema>;
-
- // Define key templates
- const dinosaurPK = partitionKey`DINOSAUR#${"id"}`;
- const dinosaurSK = sortKey`STATUS#${"status"}`;
-
- const gsi1PK = partitionKey`SPECIES#${"species"}`;
- const gsi1SK = sortKey`DINOSAUR#${"id"}`;
-
- const gsi2PK = partitionKey`ENCLOSURE#${"enclosureId"}`;
- const gsi2SK = sortKey`DINOSAUR#${"id"}`;
-
- // Create a primary index
- const primaryKey = createIndex()
- .input(dinosaurSchema)
- .partitionKey(({ id }) => dinosaurPK({ id }))
- .sortKey(({ status }) => dinosaurSK({ status }));
-
- // Create a GSI for querying by species
- const speciesIndex = createIndex()
- .input(dinosaurSchema)
- .partitionKey(({ species }) => gsi1PK({ species }))
- .sortKey(({ id }) => gsi1SK({ id }));
-
- // Create a GSI for querying by enclosure
- const enclosureIndex = createIndex()
- .input(dinosaurSchema)
- .partitionKey(({ enclosureId }) => gsi2PK({ enclosureId }))
- .sortKey(({ id }) => gsi2SK({ id }));
-
- // Example of a read-only index for audit trail data
- // This index will never be updated during entity update operations
- const auditIndex = createIndex()
- .input(dinosaurSchema)
- .partitionKey(({ createdAt }) => `CREATED#${createdAt}`)
- .sortKey(({ id }) => `DINOSAUR#${id}`)
- .readOnly(); // Mark this index as read-only
-
- // Create query builders
- const createQuery = createQueries<Dinosaur>();
-
- // Define the entity
- const DinosaurEntity = defineEntity({
- name: "Dinosaur",
- schema: dinosaurSchema,
- primaryKey,
- indexes: {
- // These keys must match the GSI names defined on your table instance
- gsi1: speciesIndex,
- gsi2: enclosureIndex,
- // Example of a read-only index for audit trail data
- gsi3: auditIndex, // This index will never be updated during entity update operations
- // unless explicitly forced with .forceIndexRebuild('gsi3')
- },
- queries: {
- // ✅ Semantic method names that describe business intent
- getDinosaursBySpecies: createQuery
- .input(
- z.object({
- species: z.string(),
- }),
- )
- .query(({ input, entity }) => {
- return entity
- .query({
- pk: gsi1PK({ species: input.species }),
- })
- .useIndex("gsi1");
- }),
-
- getDinosaursByEnclosure: createQuery
- .input(
- z.object({
- enclosureId: z.string(),
- }),
- )
- .query(({ input, entity }) => {
- return entity
- .query({
- pk: gsi2PK({ enclosureId: input.enclosureId }),
- })
- .useIndex("gsi2");
- }),
-
- getDangerousDinosaursInEnclosure: createQuery
- .input(
- z.object({
- enclosureId: z.string(),
- minDangerLevel: z.number().int().min(1).max(10),
- }),
- )
- .query(({ input, entity }) => {
- return entity
- .query({
- pk: gsi2PK({ enclosureId: input.enclosureId }),
- })
- .useIndex("gsi2")
- .filter((op) => op.gte("dangerLevel", input.minDangerLevel));
- }),
- },
- });
-
- // Create a repository
- const dinosaurRepo = DinosaurEntity.createRepository(table);
-
- // Use the repository
- async function main() {
- // Create a dinosaur
- await dinosaurRepo
- .create({
- id: "dino-001",
- species: "Tyrannosaurus Rex",
- name: "Rexy",
- enclosureId: "enc-001",
- diet: "carnivore",
- dangerLevel: 10,
- height: 5.2,
- weight: 7000,
- status: "active",
- trackingChipId: "TRX-001",
- })
- .execute();
-
- // Query dinosaurs by species using semantic method names
- const trexes = await dinosaurRepo.query
- .getDinosaursBySpecies({
- species: "Tyrannosaurus Rex",
- })
- .execute();
-
- // Query dangerous dinosaurs in an enclosure
- const dangerousDinos = await dinosaurRepo.query
- .getDangerousDinosaursInEnclosure({
- enclosureId: "enc-001",
- minDangerLevel: 8,
- })
- .execute();
- }
- ```
-
- ## 🧩 Advanced Features
-
- ### Transactional Operations
-
- **Safe dinosaur transfer between enclosures**
-
- ```ts
- // Start a transaction session for transferring a T-Rex to a new enclosure
- // Critical for safety: All operations must succeed or none will be applied
- await dinoTable.transaction(async (tx) => {
- // All operations are executed as a single transaction (up to 100 operations)
- // This ensures the dinosaur transfer is atomic - preventing half-completed transfers
-
- // STEP 1: Check if destination enclosure is ready and compatible with the dinosaur
- // We must verify the enclosure is prepared and suitable for a carnivore
- await dinoTable
- .conditionCheck({
- pk: "ENCLOSURE#B", // Target enclosure B
- sk: "STATUS", // Check the enclosure status record
- })
- .condition((op) =>
- op.and(
- op.eq("status", "READY"), // Enclosure must be in READY state
- op.eq("diet", "Carnivore"), // Must support carnivorous dinosaurs
- ),
- )
- .withTransaction(tx);
-
- // STEP 2: Remove dinosaur from current enclosure
- // Only proceed if the dinosaur is healthy enough for transfer
- await dinoTable
- .delete<Dinosaur>({
- pk: "ENCLOSURE#A", // Source enclosure A
- sk: "DINO#001", // T-Rex with ID 001
- })
- .condition((op) =>
- op.and(
- op.eq("status", "HEALTHY"), // Dinosaur must be in HEALTHY state
- op.gte("health", 80), // Health must be at least 80%
- ),
- )
- .withTransaction(tx);
-
- // STEP 3: Add dinosaur to new enclosure
- // Create a fresh record in the destination enclosure
- await dinoTable
- .create<Dinosaur>({
- pk: "ENCLOSURE#B", // Destination enclosure B
- sk: "DINO#001", // Same dinosaur ID for tracking
- name: "Rex", // Dinosaur name
- species: "Tyrannosaurus", // Species classification
- diet: "Carnivore", // Dietary requirements
- status: "HEALTHY", // Current health status
- health: 100, // Reset health to 100% after transfer
- enclosureId: "B", // Update enclosure reference
- lastFed: new Date().toISOString(), // Reset feeding clock
- })
- .withTransaction(tx);
-
- // STEP 4: Update enclosure occupancy tracking
- // Keep accurate count of dinosaurs in each enclosure
- await dinoTable
- .update<Dinosaur>({
- pk: "ENCLOSURE#B", // Target enclosure B
- sk: "OCCUPANCY", // Occupancy tracking record
- })
- .add("currentOccupants", 1) // Increment occupant count
- .set("lastUpdated", new Date().toISOString()) // Update timestamp
- .withTransaction(tx);
- });
-
- // Transaction for dinosaur feeding and health monitoring
- // Ensures feeding status and schedule are updated atomically
- await dinoTable.transaction(
- async (tx) => {
- // STEP 1: Update Stegosaurus health and feeding status
- // Record that the dinosaur has been fed and update its health metrics
- await dinoTable
- .update<Dinosaur>({
- pk: "ENCLOSURE#D", // Herbivore enclosure D
- sk: "DINO#003", // Stegosaurus with ID 003
- })
- .set({
- status: "HEALTHY", // Update health status
- lastFed: new Date().toISOString(), // Record feeding time
- health: 100, // Reset health to 100%
- })
- .deleteElementsFromSet("tags", ["needs_feeding"]) // Remove feeding alert tag
- .withTransaction(tx);
-
- // STEP 2: Update enclosure feeding schedule
- // Schedule next feeding time for tomorrow
- await dinoTable
- .update<Dinosaur>({
- pk: "ENCLOSURE#D", // Same herbivore enclosure
- sk: "SCHEDULE", // Feeding schedule record
- })
- .set(
- "nextFeedingTime",
- new Date(Date.now() + 24 * 60 * 60 * 1000).toISOString(),
- ) // 24 hours from now
- .withTransaction(tx);
- },
- {
- // Transaction options for tracking and idempotency
- clientRequestToken: "feeding-session-001", // Prevents duplicate feeding operations
- returnConsumedCapacity: "TOTAL", // Track capacity usage for park operations
- },
- );
- ```
-
- ### Pagination Made Simple
-
- **Efficient dinosaur record browsing for park management**
+ ### Direct Table Operations
+ *Low-level control for advanced use cases*
 
  ```ts
- // SCENARIO 1: Herbivore health monitoring with pagination
- // Create a paginator for viewing healthy herbivores in manageable chunks
- // Perfect for veterinary staff doing routine health checks
- const healthyHerbivores = dinoTable
- .query<Dinosaur>({
- pk: "DIET#herbivore", // Target all herbivorous dinosaurs
- sk: (op) => op.beginsWith("STATUS#HEALTHY"), // Only those with HEALTHY status
- })
- .filter((op) =>
- op.and(
- op.gte("health", 90), // Only those with excellent health (90%+)
- op.attributeExists("lastFed"), // Must have feeding records
- ),
- )
- .paginate(5); // Process in small batches of 5 dinosaurs
-
- // Iterate through all pages of results - useful for processing large datasets
- // without loading everything into memory at once
- console.log("🦕 Beginning herbivore health inspection rounds...");
- while (healthyHerbivores.hasNextPage()) {
- // Get the next page of dinosaurs
- const page = await healthyHerbivores.getNextPage();
- console.log(
- `Checking herbivores page ${page.page}, found ${page.items.length} dinosaurs`,
- );
-
- // Process each dinosaur in the current page
- page.items.forEach((dino) => {
- console.log(
- `${dino.name}: Health ${dino.health}%, Last fed: ${dino.lastFed}`,
- );
- // In a real app, you might update health records or schedule next checkup
- });
- }
-
- // SCENARIO 2: Preparing carnivore feeding schedule
- // Get all carnivores at once for daily feeding planning
- // This approach loads all matching items into memory
- const carnivoreSchedule = await dinoTable
- .query<Dinosaur>({
- pk: "DIET#carnivore", // Target all carnivorous dinosaurs
- sk: (op) => op.beginsWith("ENCLOSURE#"), // Organized by enclosure
- })
- .filter((op) => op.attributeExists("lastFed")) // Only those with feeding records
- .paginate(10) // Process in pages of 10
- .getAllPages(); // But collect all results at once
-
- console.log(`Scheduling feeding for ${carnivoreSchedule.length} carnivores`);
- // Now we can sort and organize feeding times based on species, size, etc.
-
- // SCENARIO 3: Visitor information kiosk with limited display
- // Create a paginated view for the public-facing dinosaur information kiosk
- const visitorKiosk = dinoTable
- .query<Dinosaur>({
- pk: "VISITOR_VIEW", // Special partition for visitor-facing data
- sk: (op) => op.beginsWith("SPECIES#"), // Organized by species
- })
- .filter((op) => op.eq("status", "ON_DISPLAY")) // Only show dinosaurs currently on display
- .limit(12) // Show maximum 12 dinosaurs total
- .paginate(4); // Display 4 at a time for easy viewing
-
- // Get first page for initial kiosk display
- const firstPage = await visitorKiosk.getNextPage();
- console.log(`🦖 Now showing: ${firstPage.items.map((d) => d.name).join(", ")}`);
- // Visitors can press "Next" to see more dinosaurs in the collection
- ```
-
- ## 🛡️ Type-Safe Query Building
-
- Dyno-table provides comprehensive query methods that match DynamoDB's capabilities while maintaining type safety:
-
- ### Comparison Operators
-
- | Operation | Method Example | Generated Expression |
- | ------------------------- | ------------------------------------------------------------ | --------------------------------- |
- | **Equals** | `.filter(op => op.eq("status", "ACTIVE"))` | `status = :v1` |
- | **Not Equals** | `.filter(op => op.ne("status", "DELETED"))` | `status <> :v1` |
- | **Less Than** | `.filter(op => op.lt("age", 18))` | `age < :v1` |
- | **Less Than or Equal** | `.filter(op => op.lte("score", 100))` | `score <= :v1` |
- | **Greater Than** | `.filter(op => op.gt("price", 50))` | `price > :v1` |
- | **Greater Than or Equal** | `.filter(op => op.gte("rating", 4))` | `rating >= :v1` |
- | **Between** | `.filter(op => op.between("age", 18, 65))` | `age BETWEEN :v1 AND :v2` |
- | **In Array** | `.filter(op => op.inArray("status", ["ACTIVE", "PENDING"]))` | `status IN (:v1, :v2)` |
- | **Begins With** | `.filter(op => op.beginsWith("email", "@example.com"))` | `begins_with(email, :v1)` |
- | **Contains** | `.filter(op => op.contains("tags", "important"))` | `contains(tags, :v1)` |
- | **Attribute Exists** | `.filter(op => op.attributeExists("email"))` | `attribute_exists(email)` |
- | **Attribute Not Exists** | `.filter(op => op.attributeNotExists("deletedAt"))` | `attribute_not_exists(deletedAt)` |
- | **Nested Attributes** | `.filter(op => op.eq("address.city", "London"))` | `address.city = :v1` |
-
- ### Filter Chaining
-
- Filters can be chained together using multiple `.filter()` calls. When multiple filters are applied, they are automatically combined using AND logic:
-
- ```ts
- // Chaining multiple filters - these are combined with AND
- const result = await table
- .query({ pk: "USER#123" })
- .filter((op) => op.eq("status", "ACTIVE"))
- .filter((op) => op.gt("age", 18))
- .filter((op) => op.contains("tags", "premium"))
+ // Direct DynamoDB access with query
+ const carnivoresInCretaceous = await table
+ .query({ pk: "PERIOD#cretaceous" })
+ .filter(op => op.eq("diet", "carnivore"))
  .execute();
-
- // This is equivalent to:
- const result = await table
- .query({ pk: "USER#123" })
- .filter((op) =>
- op.and(
- op.eq("status", "ACTIVE"),
- op.gt("age", 18),
- op.contains("tags", "premium"),
- ),
- )
- .execute();
- ```
-
- Both approaches produce the same DynamoDB expression: `status = :v1 AND age > :v2 AND contains(tags, :v3)`
-
- Filter chaining provides a more readable way to build complex conditions, especially when filters are applied conditionally:
-
- ```ts
- const builder = table.query({ pk: "USER#123" });
-
- // Conditionally apply filters
- if (statusFilter) {
- builder.filter((op) => op.eq("status", statusFilter));
- }
-
- if (minAge) {
- builder.filter((op) => op.gt("age", minAge));
- }
-
- if (requiredTag) {
- builder.filter((op) => op.contains("tags", requiredTag));
- }
-
- const result = await builder.execute();
  ```
+ **[Table Operations Guide →](docs/table-query-builder.md)**
 
- ### Logical Operators
-
- | Operation | Method Example | Generated Expression |
- | --------- | --------------------------------------------------------------------------------- | ------------------------------ |
- | **AND** | `.filter(op => op.and(op.eq("status", "ACTIVE"), op.gt("age", 18)))` | `status = :v1 AND age > :v2` |
- | **OR** | `.filter(op => op.or(op.eq("status", "PENDING"), op.eq("status", "PROCESSING")))` | `status = :v1 OR status = :v2` |
- | **NOT** | `.filter(op => op.not(op.eq("status", "DELETED")))` | `NOT status = :v1` |
-
- ### Query Operations
-
- | Operation | Method Example | Generated Expression |
- | ------------------------ | ------------------------------------------------------------------------------------ | ------------------------------------- |
- | **Partition Key Equals** | `.query({ pk: "USER#123" })` | `pk = :pk` |
- | **Sort Key Begins With** | `.query({ pk: "USER#123", sk: op => op.beginsWith("ORDER#2023") })` | `pk = :pk AND begins_with(sk, :v1)` |
- | **Sort Key Between** | `.query({ pk: "USER#123", sk: op => op.between("ORDER#2023-01", "ORDER#2023-12") })` | `pk = :pk AND sk BETWEEN :v1 AND :v2` |
-
- Additional query options:
+ ### Advanced Querying & Filtering
+ *Complex business logic with AND/OR operations*
 
  ```ts
- // Sort order
- const ascending = await table
- .query({ pk: "USER#123" })
- .sortAscending()
- .execute();
-
- const descending = await table
- .query({ pk: "USER#123" })
- .sortDescending()
- .execute();
-
- // Projection (select specific attributes)
- const partial = await table
- .query({ pk: "USER#123" })
- .select(["name", "email"])
+ // Find large herbivores from Jurassic period using query + filter
+ const conditions = await dinoRepo.query
+ .getDinosaursByDiet({ diet: "herbivore" })
+ .filter(op => op.and(
+ op.eq("period", "jurassic"),
+ op.gt("weight", 3000)
+ ))
  .execute();
-
- // Limit results
- const limited = await table.query({ pk: "USER#123" }).limit(10).execute();
  ```
+ **[Advanced Queries Guide →](docs/query-builder.md)**
 
- ### Put Operations
-
- | Operation | Method Example | Description |
- | ------------------- | ------------------------------------------------------------------- | ---------------------------------------------------------------------- |
- | **Create New Item** | `.create<Dinosaur>({ pk: "SPECIES#trex", sk: "PROFILE#001", ... })` | Creates a new item with a condition to ensure it doesn't already exist |
- | **Put Item** | `.put<Dinosaur>({ pk: "SPECIES#trex", sk: "PROFILE#001", ... })` | Creates or replaces an item |
- | **With Condition** | `.put(item).condition(op => op.attributeNotExists("pk"))` | Adds a condition that must be satisfied |
-
- #### Return Values
-
- Control what data is returned from put operations:
-
- | Option | Description | Example |
- | -------------- | ------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------- |
- | **NONE** | Default. No return value. | `.put(item).returnValues("NONE").execute()` |
- | **ALL_OLD** | Returns the item's previous state if it existed. (Does not consume any RCU and returns strongly consistent values) | `.put(item).returnValues("ALL_OLD").execute()` |
- | **CONSISTENT** | Performs a consistent GET operation after the put to retrieve the item's new state. (Does consume RCU) | `.put(item).returnValues("CONSISTENT").execute()` |
+ ### Batch Operations
+ *Efficient bulk operations*
 
  ```ts
- // Create with no return value (default)
- await table
- .put<Dinosaur>({
- pk: "SPECIES#trex",
- sk: "PROFILE#001",
- name: "Tyrannosaurus Rex",
- diet: "carnivore",
- })
- .execute();
-
- // Create and return the newly created item
- const newDino = await table
- .put<Dinosaur>({
- pk: "SPECIES#trex",
- sk: "PROFILE#002",
- name: "Tyrannosaurus Rex",
- diet: "carnivore",
- })
- .returnValues("CONSISTENT")
- .execute();
-
- // Replace an existing item and get its previous values
- const oldDino = await table
- .put<Dinosaur>({
- pk: "SPECIES#trex",
- sk: "PROFILE#001",
- name: "Tyrannosaurus Rex",
- diet: "omnivore", // Updated diet
- discoveryYear: 1905,
- })
- .returnValues("ALL_OLD")
- .execute();
- ```
-
- ### Update Operations
-
- | Operation | Method Example | Generated Expression |
- | -------------------- | ----------------------------------------------------- | -------------------- |
- | **Set Attributes** | `.update(key).set("name", "New Name")` | `SET #name = :v1` |
- | **Add to Number** | `.update(key).add("score", 10)` | `ADD #score :v1` |
- | **Remove Attribute** | `.update(key).remove("temporary")` | `REMOVE #temporary` |
- | **Delete From Set** | `.update(key).deleteElementsFromSet("tags", ["old"])` | `DELETE #tags :v1` |
-
- #### Condition Operators
-
- The library supports a comprehensive set of type-safe condition operators:
-
- | Category | Operators | Example |
- | -------------- | ---------------------------------------------- | ----------------------------------------------------------------------- |
- | **Comparison** | `eq`, `ne`, `lt`, `lte`, `gt`, `gte` | `.condition(op => op.gt("age", 18))` |
- | **String/Set** | `between`, `beginsWith`, `contains`, `inArray` | `.condition(op => op.inArray("status", ["active", "pending"]))` |
- | **Existence** | `attributeExists`, `attributeNotExists` | `.condition(op => op.attributeExists("email"))` |
- | **Logical** | `and`, `or`, `not` | `.condition(op => op.and(op.eq("status", "active"), op.gt("age", 18)))` |
-
- All operators are type-safe and will provide proper TypeScript inference for nested attributes.
+ // Get multiple dinosaurs at once
+ const dinos = await dinoRepo.batchGet([
+ { id: "t-rex-1" },
+ { id: "triceratops-1" },
+ { id: "stegosaurus-1" }
+ ]).execute();
 
- #### Multiple Operations
+ // Bulk create carnivores
+ const batch = table.batchBuilder();
 
- Operations can be combined in a single update:
+ carnivores.forEach(dino =>
+ dinoRepo.create(dino).withBatch(batch)
+ );
 
- ```ts
- const result = await table
- .update({ pk: "USER#123", sk: "PROFILE" })
- .set("name", "Updated Name")
- .add("loginCount", 1)
- .remove("temporaryFlag")
- .condition((op) => op.attributeExists("email"))
- .execute();
+ await batch.execute();
  ```
+ **[Batch Operations Guide →](docs/batch-operations.md)**
1451
150
 
1452
- #### Force Rebuilding Read-Only Indexes
1453
-
1454
- When working with entities, some indexes may be marked as read-only to prevent any updates. However, you can force these indexes to be rebuilt during updates using the `forceIndexRebuild()` method:
151
+ ### Transactions
152
+ *ACID transactions for data consistency*
1455
153
 
1456
154
  ```ts
1457
- // Force rebuild a single read-only index
1458
- await dinoRepo
1459
- .update(
1460
- { id: "TREX-001" },
1461
- {
1462
- name: "Updated T-Rex",
1463
- excavationSiteId: "new-site-001",
1464
- },
1465
- )
1466
- .forceIndexRebuild("excavation-site-index")
1467
- .execute();
1468
-
1469
- // Force rebuild multiple read-only indexes
1470
- await dinoRepo
1471
- .update(
1472
- { id: "TREX-001" },
1473
- {
1474
- name: "Updated T-Rex",
1475
- excavationSiteId: "new-site-001",
1476
- species: "Tyrannosaurus Rex",
1477
- diet: "carnivore",
1478
- },
1479
- )
1480
- .forceIndexRebuild(["excavation-site-index", "species-diet-index"])
1481
- .execute();
1482
-
1483
- // Chain with other update operations
1484
- await dinoRepo
1485
- .update(
1486
- { id: "TREX-001" },
1487
- {
1488
- excavationSiteId: "new-site-002",
1489
- },
1490
- )
1491
- .forceIndexRebuild("excavation-site-index")
1492
- .set("lastUpdated", new Date().toISOString())
1493
- .condition((op) => op.eq("status", "INACTIVE"))
1494
- .returnValues("ALL_NEW")
1495
- .execute();
155
+ // Atomic dinosaur discovery
156
+ await table.transaction(tx => [
157
+ dinoRepo.create(newDinosaur).withTransaction(tx),
158
+ researchRepo.update(
159
+ { id: "paleontologist-1" },
160
+ { discoveriesCount: val => val.add(1) }
161
+ ).withTransaction(tx),
162
+ ]);
1496
163
  ```
164
+ **[Transactions Guide →](docs/transactions.md)**

- **When to use `forceIndexRebuild()`:**
-
- - 🔄 You need to update a read-only index with new data
- - 🛠️ You're performing maintenance operations that require index consistency
- - 📊 You have all required attributes available for the index and want to force an update
- - ⚡ You want to override the read-only protection for specific update operations
-
- **Important Notes:**
-
- - This method only works with entity repositories, not direct table operations, as it requires knowledge of the entity's index definitions
- - The index name must be a valid index defined in your entity configuration, otherwise an error will be thrown
- - You must provide all required attributes for the index template variables, otherwise the update will fail with an error
-
- ## 🔄 Type Safety Features
-
- The library provides comprehensive type safety for all operations:
-
- ### Nested Object Support
+ ### Pagination & Memory Management
+ *Handle large datasets efficiently*

  ```ts
- interface Dinosaur {
-   pk: string;
-   sk: string;
-   name: string;
-   species: string;
-   stats: {
-     health: number;
-     weight: number;
-     length: number;
-     age: number;
-   };
-   habitat: {
-     enclosure: {
-       id: string;
-       section: string;
-       climate: string;
-     };
-     requirements: {
-       temperature: number;
-       humidity: number;
-     };
-   };
-   care: {
-     feeding: {
-       schedule: string;
-       diet: string;
-       lastFed: string;
-     };
-     medical: {
-       lastCheckup: string;
-       vaccinations: string[];
-     };
-   };
- }
-
- // TypeScript ensures type safety for all nested dinosaur attributes
- await table
-   .update<Dinosaur>({ pk: "ENCLOSURE#F", sk: "DINO#007" })
-   .set("stats.health", 95) // ✓ Valid
-   .set("habitat.enclosure.climate", "Tropical") // ✓ Valid
-   .set("care.feeding.lastFed", new Date().toISOString()) // ✓ Valid
-   .set("stats.invalid", true) // ❌ TypeScript Error: property doesn't exist
+ // Stream large datasets (memory efficient)
+ const allCarnivores = await dinoRepo.query
+   .getDinosaursByDiet({ diet: "carnivore" })
    .execute();
- ```
-
- ### Type-Safe Conditions
-
- ```ts
- interface DinosaurMonitoring {
-   species: string;
-   health: number;
-   lastFed: string;
-   temperature: number;
-   behavior: string[];
-   alertLevel: "LOW" | "MEDIUM" | "HIGH";
+ for await (const dino of allCarnivores) {
+   await processDiscovery(dino); // Process one at a time
  }

- await table
-   .query<DinosaurMonitoring>({
-     pk: "MONITORING",
-     sk: (op) => op.beginsWith("ENCLOSURE#"),
-   })
-   .filter((op) =>
-     op.and(
-       op.lt("health", "90"), // ❌ TypeScript Error: health expects number
-       op.gt("temperature", 38), // ✓ Valid
-       op.contains("behavior", "aggressive"), // ✓ Valid
-       op.inArray("alertLevel", ["LOW", "MEDIUM", "HIGH"]), // ✓ Valid: matches union type
-       op.inArray("alertLevel", ["UNKNOWN", "INVALID"]), // ❌ TypeScript Error: invalid alert levels
-       op.eq("alertLevel", "UNKNOWN"), // ❌ TypeScript Error: invalid alert level
-     ),
-   )
-   .execute();
- ```
-
- ## 🔄 Batch Operations
-
- Efficiently handle multiple items in a single request with automatic chunking and type safety.
-
- ### 🏗️ Entity-Based Batch Operations
-
- **Type-safe batch operations with automatic entity type inference**
-
- ```ts
- // Create a typed batch builder
- const batch = table.batchBuilder<{
-   Dinosaur: DinosaurEntity;
-   Fossil: FossilEntity;
- }>();
-
- // Add operations - entity type is automatically inferred
- dinosaurRepo.create(newDinosaur).withBatch(batch);
- dinosaurRepo
-   .get({ id: "dino-123", diet: "carnivore", species: "Tyrannosaurus Rex" })
-   .withBatch(batch);
- fossilRepo.create(newFossil).withBatch(batch);
-
- // Execute and get typed results
- const result = await batch.execute();
- const dinosaurs: DinosaurEntity[] = result.reads.itemsByType.Dinosaur;
- const fossils: FossilEntity[] = result.reads.itemsByType.Fossil;
- ```
-
- ### 📋 Table-Direct Batch Operations
-
- **Direct table access for maximum control**
-
- ```ts
- // Batch get - retrieve multiple items
- const keys = [
-   { pk: "DIET#carnivore", sk: "SPECIES#Tyrannosaurus Rex#ID#dino-123" },
-   { pk: "FOSSIL#456", sk: "DISCOVERY#2024" },
- ];
-
- const { items, unprocessedKeys } = await table.batchGet<DynamoItem>(keys);
-
- // Batch write - mix of operations
- const operations = [
-   {
-     type: "put" as const,
-     item: {
-       pk: "DIET#herbivore",
-       sk: "SPECIES#Triceratops#ID#dino-789",
-       name: "Spike",
-       dangerLevel: 3,
-     },
-   },
-   { type: "delete" as const, key: { pk: "FOSSIL#OLD", sk: "DISCOVERY#1990" } },
- ];
-
- const { unprocessedItems } = await table.batchWrite(operations);
-
- // Handle unprocessed items (retry if needed)
- if (unprocessedItems.length > 0) {
-   await table.batchWrite(unprocessedItems);
+ // Paginated results
+ const paginator = dinoRepo.query
+   .getDinosaursByDiet({ diet: "herbivore" })
+   .paginate(50);
+ while (paginator.hasNextPage()) {
+   const page = await paginator.getNextPage();
+   console.log(`Processing ${page.items.length} herbivores...`);
  }
  ```
+ **[Pagination Guide →](docs/pagination.md)**
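Under the hood, DynamoDB pagination is a cursor loop: each `Query` response may carry a `LastEvaluatedKey`, which is passed back as `ExclusiveStartKey` on the next call until it is absent. A self-contained sketch of that loop — `fetchPage` is a stand-in for a real query call, not a dyno-table API:

```ts
// One page of results plus the optional continuation cursor
interface Page<T> {
  items: T[];
  lastEvaluatedKey?: string;
}

// Keep fetching until the service stops returning a cursor
async function collectAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const results: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    results.push(...page.items);
    cursor = page.lastEvaluatedKey;
  } while (cursor !== undefined);
  return results;
}

// Demo with two simulated pages
const pages: Page<string>[] = [
  { items: ["t-rex", "raptor"], lastEvaluatedKey: "cursor-1" },
  { items: ["spino"] },
];
let call = 0;
const all = await collectAll(async () => pages[call++]);
console.log(all); // [ 't-rex', 'raptor', 'spino' ]
```

Collecting everything into one array trades memory for convenience; the streaming and `paginate(pageSize)` styles shown above keep only one page in memory at a time.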

- ## 🔒 Transaction Operations
-
- Perform multiple operations atomically with transaction support:
-
- ### Transaction Builder
+ ### Schema Validation
+ *Works with any Standard Schema library*

  ```ts
- const result = await table.transaction(async (tx) => {
-   // Building the expression manually
-   tx.put(
-     "TableName",
-     { pk: "123", sk: "123" },
-     and(op.attributeNotExists("pk"), op.attributeExists("sk")),
-   );
-
-   // Using table to build the operation
-   table
-     .put({ pk: "123", sk: "123" })
-     .condition((op) => {
-       return op.and(op.attributeNotExists("pk"), op.attributeExists("sk"));
-     })
-     .withTransaction(tx);
-
-   // Building raw condition check
-   tx.conditionCheck(
-     "TestTable",
-     { pk: "transaction#test", sk: "condition#item" },
-     eq("status", "active"),
-   );
-
-   // Using table to build the condition check
-   table
-     .conditionCheck({
-       pk: "transaction#test",
-       sk: "conditional#item",
-     })
-     .condition((op) => op.eq("status", "active"));
+ // Zod (included)
+ const dinoSchema = z.object({
+   species: z.string().min(3),
+   weight: z.number().positive(),
  });
- ```
-
- ### Transaction Options
-
- ```ts
- const result = await table.transaction(
-   async (tx) => {
-     // ... transaction operations
-   },
-   {
-     // Optional transaction settings
-     idempotencyToken: "unique-token",
-     returnValuesOnConditionCheckFailure: true,
-   },
- );
- ```
-
- ## 🚨 Error Handling
-
- **TODO:**
- Provide a clearer set of error classes and additional information to allow for an easier debugging experience.
-
- ## 📚 API Reference
-
- ### Condition Operators
-
- All condition operators are type-safe and will validate against your item type. For detailed information about DynamoDB conditions and expressions, see the [AWS DynamoDB Developer Guide](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html).

- #### Comparison Operators
-
- - `eq(attr, value)` - Equals (=)
- - `ne(attr, value)` - Not equals (≠)
- - `lt(attr, value)` - Less than (<)
- - `lte(attr, value)` - Less than or equal to (≤)
- - `gt(attr, value)` - Greater than (>)
- - `gte(attr, value)` - Greater than or equal to (≥)
- - `between(attr, lower, upper)` - Between two values (inclusive)
- - `inArray(attr, values)` - Checks if value is in a list of values (IN operator, max 100 values)
- - `beginsWith(attr, value)` - Checks if string begins with value
- - `contains(attr, value)` - Checks if string/set contains value
-
- ```ts
- // Example: Health and feeding monitoring
- await dinoTable
-   .query<Dinosaur>({
-     pk: "ENCLOSURE#G",
-   })
-   .filter((op) =>
-     op.and(
-       op.lt("stats.health", 85), // Health below 85%
-       op.lt(
-         "care.feeding.lastFed",
-         new Date(Date.now() - 12 * 60 * 60 * 1000).toISOString(),
-       ), // Not fed in 12 hours
-       op.between("stats.weight", 1000, 5000), // Medium-sized dinosaurs
-     ),
-   )
-   .execute();
-
- // Example: Filter dinosaurs by multiple status values using inArray
- await dinoTable
-   .query<Dinosaur>({
-     pk: "SPECIES#trex",
-   })
-   .filter((op) =>
-     op.and(
-       op.inArray("status", ["ACTIVE", "FEEDING", "RESTING"]), // Multiple valid statuses
-       op.inArray("diet", ["carnivore", "omnivore"]), // Meat-eating dinosaurs
-       op.gt("dangerLevel", 5), // High danger level
-     ),
-   )
-   .execute();
- ```
-
- #### Attribute Operators
-
- - `attributeExists(attr)` - Checks if attribute exists
- - `attributeNotExists(attr)` - Checks if attribute does not exist
-
- ```ts
- // Example: Validate required attributes for dinosaur transfer
- await dinoTable
-   .update<Dinosaur>({
-     pk: "ENCLOSURE#H",
-     sk: "DINO#008",
-   })
-   .set("habitat.enclosure.id", "ENCLOSURE#J")
-   .condition((op) =>
-     op.and(
-       // Ensure all required health data is present
-       op.attributeExists("stats.health"),
-       op.attributeExists("care.medical.lastCheckup"),
-       // Ensure not already in transfer
-       op.attributeNotExists("transfer.inProgress"),
-       // Verify required monitoring tags
-       op.attributeExists("care.medical.vaccinations"),
-     ),
-   )
-   .execute();
- ```
-
- #### Logical Operators
-
- - `and(...conditions)` - Combines conditions with AND
- - `or(...conditions)` - Combines conditions with OR
- - `not(condition)` - Negates a condition
+ // ArkType
+ const dinoSchema = type({
+   species: "string>2",
+   weight: "number>0",
+ });

- ```ts
- // Example: Complex safety monitoring conditions
- await dinoTable
-   .query<Dinosaur>({
-     pk: "MONITORING#ALERTS",
-   })
-   .filter((op) =>
-     op.or(
-       // Alert: Aggressive carnivores with low health
-       op.and(
-         op.eq("care.feeding.diet", "Carnivore"),
-         op.lt("stats.health", 70),
-         op.contains("behavior", "aggressive"),
-       ),
-       // Alert: Any dinosaur not fed recently and showing stress
-       op.and(
-         op.lt(
-           "care.feeding.lastFed",
-           new Date(Date.now() - 8 * 60 * 60 * 1000).toISOString(),
-         ),
-         op.contains("behavior", "stressed"),
-       ),
-       // Alert: Critical status dinosaurs requiring immediate attention
-       op.and(
-         op.inArray("status", ["SICK", "INJURED", "QUARANTINE"]), // Critical statuses
-         op.inArray("priority", ["HIGH", "URGENT"]), // High priority levels
-       ),
-       // Alert: Enclosure climate issues
-       op.and(
-         op.not(op.eq("habitat.enclosure.climate", "Optimal")),
-         op.or(
-           op.gt("habitat.requirements.temperature", 40),
-           op.lt("habitat.requirements.humidity", 50),
-         ),
-       ),
-     ),
-   )
-   .execute();
+ // Valibot
+ const dinoSchema = v.object({
+   species: v.pipe(v.string(), v.minLength(3)),
+   weight: v.pipe(v.number(), v.minValue(1)),
+ });
  ```
+ **[Schema Validation Guide →](docs/schema-validation.md)**
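"Standard Schema" is a shared interface rather than a library: a compliant schema exposes a `~standard` property whose `validate` function returns either `{ value }` on success or `{ issues }` on failure, which is what lets Zod, ArkType, and Valibot all plug in interchangeably. A minimal hand-rolled schema implementing that contract — a sketch of the interface being consumed, not dyno-table's own code:

```ts
// Result shape used by the Standard Schema contract: exactly one of
// `value` (success) or `issues` (failure) is present.
interface StandardResult<T> {
  value?: T;
  issues?: { message: string }[];
}

// A hand-rolled schema exposing the `~standard` interface directly
const dinoSchema = {
  "~standard": {
    version: 1,
    vendor: "hand-rolled",
    validate(input: unknown): StandardResult<{ species: string; weight: number }> {
      const candidate = input as { species?: unknown; weight?: unknown };
      const issues: { message: string }[] = [];
      if (typeof candidate.species !== "string" || candidate.species.length < 3) {
        issues.push({ message: "species must be a string of at least 3 characters" });
      }
      if (typeof candidate.weight !== "number" || candidate.weight <= 0) {
        issues.push({ message: "weight must be a positive number" });
      }
      return issues.length > 0
        ? { issues }
        : { value: input as { species: string; weight: number } };
    },
  },
};

console.log(dinoSchema["~standard"].validate({ species: "ab", weight: -1 }).issues?.length); // 2
```

Because the consumer only ever calls `schema["~standard"].validate`, swapping validation libraries never requires touching entity definitions.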

- ### Key Condition Operators
-
- Special operators for sort key conditions in queries. See [AWS DynamoDB Key Condition Expressions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html#Query.KeyConditionExpressions) for more details.
+ ### Performance Optimization
+ *Built for scale*

  ```ts
- // Example: Query recent health checks by enclosure
- const recentHealthChecks = await dinoTable
-   .query<Dinosaur>({
-     pk: "ENCLOSURE#K",
-     sk: (op) =>
-       op.beginsWith(`HEALTH#${new Date().toISOString().slice(0, 10)}`), // Today's checks
-   })
-   .execute();
-
- // Example: Query dinosaurs by weight range in specific enclosure
- const largeHerbivores = await dinoTable
-   .query<Dinosaur>({
-     pk: "DIET#herbivore",
-     sk: (op) =>
-       op.between(
-         `WEIGHT#${5000}`, // 5 tons minimum
-         `WEIGHT#${15000}`, // 15 tons maximum
-       ),
-   })
-   .execute();
-
- // Example: Find all dinosaurs in quarantine by date range
- const quarantinedDinos = await dinoTable
-   .query<Dinosaur>({
-     pk: "STATUS#quarantine",
-     sk: (op) =>
-       op.between(
-         `DATE#${new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString().slice(0, 10)}`, // Last 7 days
-         `DATE#${new Date().toISOString().slice(0, 10)}`, // Today
-       ),
+ // Use indexes for fast lookups
+ const jurassicCarnivores = await dinoRepo.query
+   .getDinosaursByPeriodAndDiet({
+     period: "jurassic",
+     diet: "carnivore"
    })
+   .useIndex("period-diet-index")
    .execute();
- ```
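One caveat with range conditions on string sort keys like the weight example above: DynamoDB compares strings lexicographically, so unpadded numbers sort incorrectly (`WEIGHT#15000` sorts before `WEIGHT#5000`). Zero-padding numeric components to a fixed width restores numeric ordering; `weightKey` below is a hypothetical helper, not part of dyno-table:

```ts
// Build a sort key whose numeric component is zero-padded to a fixed width,
// so lexicographic (string) order matches numeric order.
function weightKey(kg: number): string {
  return `WEIGHT#${String(kg).padStart(6, "0")}`;
}

// Unpadded keys sort badly: 15000 kg ends up before 5000 kg
const unpadded = ["WEIGHT#5000", "WEIGHT#15000"].sort();
console.log(unpadded); // [ 'WEIGHT#15000', 'WEIGHT#5000' ] — numeric order violated

// Padded keys restore numeric ordering
const padded = [weightKey(15000), weightKey(5000)].sort();
console.log(padded); // [ 'WEIGHT#005000', 'WEIGHT#015000' ]
```

Pick a width that comfortably exceeds the largest value you expect, since widening it later requires rewriting existing keys.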
-
- ## 🔮 Future Roadmap
-
- - [ ] Enhanced query plan visualization
- - [ ] Migration tooling
- - [ ] Local secondary index support
- - [ ] Multi-table transaction support

- ## 🤝 Contributing
-
- ```bash
- # Set up development environment
- pnpm install
-
- # Run tests (requires local DynamoDB)
- pnpm run ddb:start
- pnpm test
-
- # Build the project
- pnpm build
+ // Efficient filtering with batchGet for known species
+ const largeDinos = await dinoRepo.batchGet([
+   { id: "t-rex-1" },
+   { id: "triceratops-1" },
+   { id: "brontosaurus-1" }
+ ]).execute();
  ```
+ **[Performance Guide →](docs/performance.md)**
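Batch reads and writes can come back with unprocessed items when a table is throttled, and AWS recommends retrying them with exponential backoff. A self-contained sketch of that retry loop — `writeBatch` is a stand-in for a real batch call such as `table.batchWrite`, and the delays are illustrative:

```ts
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Retry unprocessed items with exponential backoff; returns whatever is
// still unprocessed after exhausting all attempts.
async function writeWithRetry<T>(
  writeBatch: (ops: T[]) => Promise<{ unprocessedItems: T[] }>,
  operations: T[],
  maxAttempts = 5,
  baseDelayMs = 100,
): Promise<T[]> {
  let pending = operations;
  for (let attempt = 0; attempt < maxAttempts && pending.length > 0; attempt++) {
    if (attempt > 0) await sleep(baseDelayMs * 2 ** (attempt - 1)); // 100ms, 200ms, 400ms, ...
    const { unprocessedItems } = await writeBatch(pending);
    pending = unprocessedItems;
  }
  return pending;
}

// Demo: the first call reports one unprocessed item, the retry clears it
let calls = 0;
const leftovers = await writeWithRetry(
  async (ops: string[]) => {
    calls++;
    return { unprocessedItems: calls === 1 ? ops.slice(0, 1) : [] };
  },
  ["put-1", "put-2", "put-3"],
  3,
  1, // tiny delay for the demo
);
console.log(calls, leftovers); // 2 []
```

Surfacing the final leftovers to the caller (rather than throwing away the result) lets the application decide whether to alert, dead-letter, or retry later.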

- ## 📦 Release Process
+ ---

- This project uses [semantic-release](https://github.com/semantic-release/semantic-release) for automated versioning and package publishing. The configuration is maintained in the `.releaserc.json` file. Releases are automatically triggered by commits to specific branches:
+ ## Documentation

- - **Main Channel**: Stable releases from the `main` branch
- - **Alpha Channel**: Pre-releases from the `alpha` branch
+ ### Getting Started
+ - **[Quick Start Tutorial →](docs/quick-start.md)** - Get up and running quickly
+ - **[Installation Guide →](docs/installation.md)** - Setup and configuration
+ - **[Your First Entity →](docs/first-entity.md)** - Create your first entity

- ### Commit Message Format
+ ### Core Concepts
+ - **[Entity vs Table →](docs/entity-vs-table.md)** - Choose your approach
+ - **[Single Table Design →](docs/single-table.md)** - DynamoDB best practices
+ - **[Key Design Patterns →](docs/key-patterns.md)** - Partition and sort keys

- We follow the [Conventional Commits](https://www.conventionalcommits.org/) specification for commit messages, which determines the release type:
+ ### Features
+ - **[Query Building →](docs/query-builder.md)** - Complex queries and filtering
+ - **[Schema Validation →](docs/schema-validation.md)** - Type safety and validation
+ - **[Transactions →](docs/transactions.md)** - ACID operations
+ - **[Batch Operations →](docs/batch-operations.md)** - Bulk operations
+ - **[Pagination →](docs/pagination.md)** - Handle large datasets
+ - **[Type Safety →](docs/type-safety.md)** - TypeScript integration

- - `fix: ...` - Patch release (bug fixes)
- - `feat: ...` - Minor release (new features)
- - `feat!: ...` or `fix!: ...` or any commit with `BREAKING CHANGE:` in the footer - Major release
+ ### Advanced Topics
+ - **[Performance →](docs/performance.md)** - Optimization strategies
+ - **[Error Handling →](docs/error-handling.md)** - Robust error management
+ - **[Migration →](docs/migration.md)** - Evolving your schema

- ### Release Workflow
+ ### Examples
+ - **[E-commerce Store →](examples/ecommerce)** - Product catalog and orders
+ - **[User Management →](examples/users)** - Authentication and profiles
+ - **[Content Management →](examples/cms)** - Blog posts and comments
+ - **[Analytics →](examples/analytics)** - Event tracking and reporting

- 1. For regular features and fixes:
-    - Create a PR against the `main` branch
-    - Once merged, a new release will be automatically published
+ ---

- 2. For experimental features:
-    - Create a PR against the `alpha` branch
-    - Once merged, a new alpha release will be published with an alpha tag
+ ## Links

- ### Installing Specific Channels
-
- ```bash
- # Install the latest stable version
- npm install dyno-table
-
- # Install the latest alpha version
- npm install dyno-table@alpha
- ```
+ - **[Documentation](docs/)** - Complete guides and references
+ - **[Issues](https://github.com/Kysumi/dyno-table/issues)** - Report bugs or request features
+ - **[Discussions](https://github.com/Kysumi/dyno-table/discussions)** - Ask questions and share ideas
+ - **[NPM](https://www.npmjs.com/package/dyno-table)** - Package information

- ## 🦔 Running Examples
+ ---

- There are a few pre-configured example scripts in the `examples` directory.
-
- First you'll need to install the dependencies:
-
- ```bash
- pnpm install
- ```
-
- Then set up the test table in local DynamoDB by running the following commands:
-
- ```bash
- pnpm run ddb:start
- pnpm run local:setup
- ```
-
- To run the examples, use the following command:
-
- ```bash
- npx tsx examples/[EXAMPLE_NAME].ts
- ```
-
- To view the test table GUI in action: [DynamoDB Admin](http://localhost:8001/)
+ <div align="center">
+   <em>Built by developers who believe working with DynamoDB should be intuitive and type-safe</em>
+ </div>