s3db.js 3.3.2 → 4.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,907 +2,1411 @@
2
2
 
3
3
  [![license: unlicense](https://img.shields.io/badge/license-Unlicense-blue.svg)](http://unlicense.org/) [![npm version](https://img.shields.io/npm/v/s3db.js.svg?style=flat)](https://www.npmjs.com/package/s3db.js) [![Maintainability](https://api.codeclimate.com/v1/badges/26e3dc46c42367d44f18/maintainability)](https://codeclimate.com/github/forattini-dev/s3db.js/maintainability) [![Coverage Status](https://coveralls.io/repos/github/forattini-dev/s3db.js/badge.svg?branch=main)](https://coveralls.io/github/forattini-dev/s3db.js?branch=main)
4
4
 
5
- Another way to create a cheap document-based database with an easy ORM to handle your dataset!
5
+ **A document-based database built on AWS S3 with a powerful ORM-like interface**
6
6
 
7
- <table width="100%">
8
- <tr>
9
- <td>
7
+ Transform AWS S3 into a fully functional document database with automatic validation, encryption, caching, and streaming capabilities.
10
8
 
11
- 1. <a href="#motivation">Motivation</a>
12
- 1. <a href="#usage">Usage</a>
13
- 1. <a href="#install">Install</a>
14
- 1. <a href="#quick-setup">Quick Setup</a>
15
- 1. <a href="#insights">Insights</a>
16
- 1. <a href="#database">Database</a>
17
- 1. <a href="#create-a-resource">Create a resource</a>
18
- 1. <a href="#resource-methods">Resource methods</a>
19
- 1. <a href="#insert-one">Insert one</a>
20
- 1. <a href="#get-one">Get one</a>
21
- 1. <a href="#update-one">Update one</a>
22
- 1. <a href="#delete-one">Delete one</a>
23
- 1. <a href="#count">Count</a>
24
- 1. <a href="#insert-many">Insert many</a>
25
- 1. <a href="#get-many">Get many</a>
26
- 1. <a href="#get-all">Get all</a>
27
- 1. <a href="#delete-many">Delete many</a>
28
- 1. <a href="#delete-all">Delete all</a>
29
- 1. <a href="#list-ids">List ids</a>
30
- 1. <a href="#resource-streams">Resource streams</a>
31
- 1. <a href="#readable-stream">Readable stream</a>
32
- 1. <a href="#writable-stream">Writable stream</a>
33
- 1. <a href="#s3-client">S3 Client</a>
34
- 1. <a href="#events">Events</a>
35
- 1. <a href="#plugins">Plugins</a>
36
- 1. <a href="#cost-simulation">Cost Simulation</a>
37
- 1. <a href="#big-example">Big Example</a>
38
- 1. <a href="#small-example">Small example</a>
39
- 1. <a href="#roadmap">Roadmap</a>
9
+ ## 🚀 Quick Start
40
10
 
41
- </td>
42
- </tr>
43
- </table>
11
+ ```bash
12
+ npm i s3db.js
13
+ ```
14
+
15
+ ```javascript
16
+ import { S3db } from "s3db.js";
17
+
18
+ // Connect to your S3 database
19
+ const s3db = new S3db({
20
+ uri: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp"
21
+ });
22
+
23
+ await s3db.connect();
44
24
 
45
- ---
25
+ // Create a resource (collection)
26
+ const users = await s3db.createResource({
27
+ name: "users",
28
+ attributes: {
29
+ name: "string|min:2|max:100",
30
+ email: "email|unique",
31
+ age: "number|integer|positive",
32
+ isActive: "boolean",
33
+ createdAt: "date"
34
+ }
35
+ });
36
+
37
+ // Insert data
38
+ const user = await users.insert({
39
+ name: "John Doe",
40
+ email: "john@example.com",
41
+ age: 30,
42
+ isActive: true,
43
+ createdAt: new Date()
44
+ });
46
45
 
47
- ## Motivation
46
+ // Query data
47
+ const foundUser = await users.get(user.id);
48
+ console.log(foundUser.name); // "John Doe"
49
+ ```
48
50
 
49
- First of all:
51
+ ## 📋 Table of Contents
50
52
 
51
- 1. Nothing is for free, but it can be cheaper.
52
- 2. I'm not responsible for your AWS Costs strategy, use `s3db.js` at your own risk.
53
- 3. Please, do not use in production!
53
+ - [🎯 What is s3db.js?](#-what-is-s3dbjs)
54
+ - [💡 How it Works](#-how-it-works)
55
+ - [⚡ Installation & Setup](#-installation--setup)
56
+ - [🔧 Configuration](#-configuration)
57
+ - [📚 Core Concepts](#-core-concepts)
58
+ - [🛠️ API Reference](#️-api-reference)
59
+ - [📊 Examples](#-examples)
60
+ - [🔄 Streaming](#-streaming)
61
+ - [🔐 Security & Encryption](#-security--encryption)
62
+ - [💰 Cost Analysis](#-cost-analysis)
63
+ - [🎛️ Advanced Features](#️-advanced-features)
64
+ - [🚨 Limitations & Best Practices](#-limitations--best-practices)
65
+ - [🧪 Testing](#-testing)
66
+ - [📅 Version Compatibility](#-version-compatibility)
54
67
 
55
- **Let's go!**
68
+ ## 🎯 What is s3db.js?
56
69
 
57
- You might know AWS's S3 product for its high availability and its cheap pricing rules. I'll show you another clever and funny way to use S3.
70
+ `s3db.js` is a document database that leverages AWS S3's metadata capabilities to store structured data. Instead of storing data in file bodies, it uses S3's metadata fields (up to 2KB) to store document data, making it extremely cost-effective for document storage.
58
71
 
59
- AWS allows you to define `Metadata` for every single file you upload into your bucket. This attribute must be defined within a **2kb** limit using `UTF-8` encoding. As this encoding [may vary the byte width for each symbol](https://en.wikipedia.org/wiki/UTF-8) you may use [500 to 2000] chars of metadata storage. Follow the docs at [AWS S3 User Guide: Using metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html#object-metadata).
72
+ ### Key Features
60
73
 
61
- There is another management subset of data called `tags` that is used globally as [key, value] params. You can assign 10 tags with the conditions of: the key must be at most 128 unicode chars lengthy and the value up to 256 chars. With those key-values we can use more `2.5kb` of data, unicode will allow you to use up to 2500 more chars. Follow the official docs at [AWS User Guide: Object Tagging](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html).
74
+ - **🔄 ORM-like Interface**: Familiar database operations (insert, get, update, delete)
75
+ - **✅ Automatic Validation**: Built-in schema validation using fastest-validator
76
+ - **🔐 Encryption**: Optional field-level encryption for sensitive data
77
+ - **⚡ Streaming**: Handle large datasets with readable/writable streams
78
+ - **💾 Caching**: Reduce API calls with intelligent caching
79
+ - **📊 Cost Tracking**: Monitor AWS costs with built-in plugins
80
+ - **🛡️ Type Safety**: Full TypeScript support
81
+ - **🔧 Robust Serialization**: Advanced handling of arrays and objects with edge cases
82
+ - **📝 Comprehensive Testing**: Complete test suite with journey-based scenarios
83
+ - **🕒 Automatic Timestamps**: Optional createdAt/updatedAt fields
84
+ - **📦 Partitions**: Organize data by fields for efficient queries
85
+ - **🎣 Hooks**: Custom logic before/after operations
86
+ - **🔌 Plugins**: Extensible architecture
62
87
 
63
- With all this set you may store objects that should be able to store up to `4.5kb` of free space **per object**.
88
+ ## 💡 How it Works
64
89
 
65
- Check the <a href="#cost-simulation">cost simulation</a> section below for a deep cost dive!
90
+ ### The Magic Behind s3db.js
66
91
 
67
- Lets give it a try! :)
92
+ AWS S3 allows you to store metadata with each object:
93
+ - **Metadata**: Up to 2KB of UTF-8 encoded data
68
94
 
69
- ---
95
+ `s3db.js` cleverly uses these fields to store document data instead of file contents, making each S3 object act as a database record.
70
96
 
71
- ## Usage
97
+ ### Data Storage Strategy
72
98
 
73
- You may check the snippets below or go straight to the <a href="#examples">Examples</a> section!
99
+ ```javascript
100
+ // Your document
101
+ {
102
+ id: "user-123",
103
+ name: "John Doe",
104
+ email: "john@example.com",
105
+ age: 30
106
+ }
107
+
108
+ // Stored in S3 as:
109
+ // Key: users/user-123
110
+ // Metadata: { "name": "John Doe", "email": "john@example.com", "age": "30", "id": "user-123" }
111
+ ```
112
+
113
+ ## ⚡ Installation & Setup
74
114
 
75
115
  ### Install
76
116
 
77
117
  ```bash
78
118
  npm i s3db.js
79
-
80
119
  # or
81
-
120
+ pnpm add s3db.js
121
+ # or
82
122
  yarn add s3db.js
83
123
  ```
84
124
 
85
- ### Quick setup
86
-
87
- Our S3db client uses connection string params.
125
+ ### Basic Setup
88
126
 
89
127
  ```javascript
90
128
  import { S3db } from "s3db.js";
91
129
 
92
- const {
93
- AWS_BUCKET,
94
- AWS_ACCESS_KEY_ID,
95
- AWS_SECRET_ACCESS_KEY,
96
- } = process.env
97
-
98
130
  const s3db = new S3db({
99
- uri: `s3://${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}@${AWS_BUCKET}/databases/mydatabase`
131
+ uri: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp"
100
132
  });
101
133
 
102
- s3db
103
- .connect()
104
- .then(() => console.log('connected!'));
134
+ await s3db.connect();
135
+ console.log("Connected to S3 database!");
105
136
  ```
106
137
 
107
- If you do use `dotenv` package:
138
+ ### Environment Variables Setup
108
139
 
109
140
  ```javascript
110
141
  import * as dotenv from "dotenv";
111
142
  dotenv.config();
112
143
 
113
144
  import { S3db } from "s3db.js";
114
- ```
115
145
 
116
- ### Insights
146
+ const s3db = new S3db({
147
+ uri: `s3://${process.env.AWS_ACCESS_KEY_ID}:${process.env.AWS_SECRET_ACCESS_KEY}@${process.env.AWS_BUCKET}/databases/${process.env.DATABASE_NAME}`
148
+ });
149
+ ```
117
150
 
118
- This implementation of ORM simulates a document repository. Due to the fact that `s3db.js` uses `aws-sdk`'s S3 api, all requests are GET/PUT as `key=value` resources. So the best case scenario is to access it like a document implementation.
151
+ ## 🔧 Configuration
119
152
 
120
- For better use of the <a href="#cache">cache</a> and listing, the best ID format is to use sequential ids with leading zeros (e.g. 00001, 00002, 00003) due to S3's internal key sorting. But you will need to manage this incremental ID on your own.
153
+ ### Connection Options
121
154
 
122
- ### Database
155
+ | Option | Type | Default | Description |
156
+ |--------|------|---------|-------------|
157
+ | `uri` | `string` | **required** | S3 connection string |
158
+ | `parallelism` | `number` | `10` | Concurrent operations |
159
+ | `passphrase` | `string` | `"secret"` | Encryption key |
160
+ | `cache` | `boolean` | `false` | Enable caching |
161
+ | `ttl` | `number` | `86400` | Cache TTL in seconds |
162
+ | `plugins` | `array` | `[]` | Custom plugins |
123
163
 
124
- Your `s3db.js` client can be initiated with options:
164
+ ### 🔐 Authentication & Connectivity
125
165
 
126
- | option | optional | description | type | default |
127
- | :---------: | :------: | :-------------------------------------------------: | :-------: | :---------: |
128
- | cache | true | Persist searched data to reduce repeated requests | `boolean` | `undefined` |
129
- | parallelism | true | Number of simultaneous tasks | `number` | 10 |
130
- | passphrase | true | Your encryption secret | `string` | `undefined` |
131
- | ttl | true | (Coming soon) TTL to your cache duration in seconds | `number` | 86400 |
132
- | uri | false | A url as your S3 connection string | `string` | `undefined` |
166
+ `s3db.js` supports multiple authentication methods and can connect to various S3-compatible services:
133
167
 
134
- Config example:
168
+ #### Connection String Format
135
169
 
136
- ```javascript
137
- const {
138
- AWS_BUCKET = "my-bucket",
139
- AWS_ACCESS_KEY_ID = "secret",
140
- AWS_SECRET_ACCESS_KEY = "secret",
141
- AWS_BUCKET_PREFIX = "databases/test-" + Date.now(),
142
- } = process.env;
170
+ ```
171
+ s3://[ACCESS_KEY:SECRET_KEY@]BUCKET_NAME[/PREFIX]
172
+ ```
143
173
 
144
- const uri = `s3://${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}@${AWS_BUCKET}/${AWS_BUCKET_PREFIX}`;
174
+ #### 1. AWS S3 with Access Keys
145
175
 
146
- const options = {
147
- uri,
148
- parallelism: 25,
149
- passphrase: fs.readFileSync("./cert.pem"),
150
- };
176
+ ```javascript
177
+ const s3db = new S3db({
178
+ uri: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp"
179
+ });
151
180
  ```
152
181
 
153
- #### s3db.connect()
182
+ #### 2. AWS S3 with IAM Roles (EC2/EKS)
154
183
 
155
- This method must always be invoked before any operation takes place. This will interact with AWS' S3 api and check the items below:
184
+ ```javascript
185
+ // No credentials needed - uses IAM role permissions
186
+ const s3db = new S3db({
187
+ uri: "s3://BUCKET_NAME/databases/myapp"
188
+ });
189
+ ```
156
190
 
157
- 1. With current credentials:
158
- - Check if client has access to the S3 bucket.
159
- - Check if client has access to bucket life-cycle policies.
160
- 1. With defined database:
161
- - Check if there is already a database in this connection string.
162
- - If any database is found, downloads its metadata and loads each `Resource` definition.
163
- - Else, it will generate an empty <a href="#metadata-file">`metadata`</a> file into this prefix and mark that this is a new database from scratch.
191
+ #### 3. MinIO or S3-Compatible Services
164
192
 
165
- #### Metadata file
193
+ ```javascript
194
+ const s3db = new S3db({
195
+ uri: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp",
196
+ endpoint: "http://localhost:9000" // MinIO default endpoint
197
+ });
198
+ ```
166
199
 
167
- `s3db.js` will generate a file `/s3db.json` at the pre-defined prefix with this structure:
200
+ #### 4. Environment-Based Configuration
168
201
 
169
202
  ```javascript
170
- {
171
- // file version
172
- "version": "1",
173
-
174
- // previously defined resources
175
- "resources": {
176
- // definition example
177
- "leads": {
178
- "name": "leads",
179
-
180
- // resource options
181
- "options": {},
182
-
183
- // resource defined schema
184
- "schema": {
185
- "name": "string",
186
- "token": "secret"
187
- },
188
-
189
- // rules to simplify metadata usage
190
- "mapper": {
191
- "name": "0",
192
- "token": "1"
193
- },
194
- }
195
- }
196
- }
203
+ const s3db = new S3db({
204
+ uri: `s3://${process.env.AWS_ACCESS_KEY_ID}:${process.env.AWS_SECRET_ACCESS_KEY}@${process.env.AWS_BUCKET}/databases/${process.env.DATABASE_NAME}`,
205
+ endpoint: process.env.S3_ENDPOINT
206
+ });
197
207
  ```
198
208
 
199
- ### Create a resource
209
+ #### Security Best Practices
210
+
211
+ - **IAM Roles**: Use IAM roles instead of access keys when possible (EC2, EKS, Lambda)
212
+ - **Environment Variables**: Store credentials in environment variables, not in code
213
+ - **Bucket Permissions**: Ensure your IAM role/user has the necessary S3 permissions:
214
+ - `s3:GetObject`, `s3:PutObject`, `s3:DeleteObject`, `s3:ListBucket`, `s3:GetBucketLocation`
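
For illustration only (not from the package docs), a least-privilege grant covering the actions above could look roughly like the following, written as a plain object so it can be fed to infrastructure tooling; `YOUR_BUCKET` and the `databases/myapp` prefix are placeholders:

```javascript
// Hypothetical minimal policy for an s3db.js database prefix
const s3dbPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      Resource: "arn:aws:s3:::YOUR_BUCKET/databases/myapp/*"
    },
    {
      Effect: "Allow",
      Action: ["s3:ListBucket", "s3:GetBucketLocation"],
      Resource: "arn:aws:s3:::YOUR_BUCKET"
    }
  ]
};
```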
200
215
 
201
- Resources are definitions of data collections.
216
+ ### Advanced Configuration
202
217
 
203
218
  ```javascript
204
- // resource
205
- const attributes = {
206
- utm: {
207
- source: "string|optional",
208
- medium: "string|optional",
209
- campaign: "string|optional",
210
- term: "string|optional",
211
- },
212
- lead: {
213
- fullName: "string",
214
- mobileNumber: "string",
215
- personalEmail: "email",
216
- },
217
- };
219
+ import fs from "fs";
+ import { CostsPlugin } from "s3db.js";
218
220
 
219
- const resource = await s3db.createResource({
220
- name: "leads",
221
- attributes,
221
+ const s3db = new S3db({
222
+ uri: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp",
223
+ parallelism: 25, // Handle 25 concurrent operations
224
+ passphrase: fs.readFileSync("./cert.pem"), // Custom encryption key
225
+ cache: true, // Enable caching
226
+ ttl: 3600, // 1 hour cache TTL
227
+ plugins: [CostsPlugin] // Enable cost tracking
222
228
  });
223
229
  ```
224
230
 
225
- Resources' names **cannot** prefix each other, like: `leads` and `leads-copy`! S3's api lists keys using prefix notation, so every time you list `leads`, all keys of `leads-copy` will appear as well.
231
+ ## 📚 Core Concepts
226
232
 
227
- ##### Attributes
233
+ ### 1. Database
228
234
 
229
- `s3db.js` use the [fastest-validator](https://www.npmjs.com/package/fastest-validator) package to define and validate your resource. Some few examples:
235
+ A database is a logical container for your resources, stored in a specific S3 prefix.
230
236
 
231
237
  ```javascript
232
- const attributes = {
233
- // few simple examples
234
- name: "string|min:4|max:64|trim",
235
- email: "email|nullable",
236
- mobile: "string|optional",
237
- count: "number|integer|positive",
238
- currency: "currency|symbol:R$",
239
- createdAt: "date",
240
- website: "url",
241
- id: "uuid",
242
- ids: "array|items:uuid|unique",
243
-
244
- // s3db defines a custom type "secret" that is encrypted
245
- token: "secret",
246
-
247
- // nested data works as well
248
- geo: {
249
- lat: "number",
250
- long: "number",
251
- city: "string",
252
- },
253
-
254
- // may have multiple definitions.
255
- address_number: ["string", "number"],
256
- };
238
+ // This creates/connects to a database at:
239
+ // s3://bucket/databases/myapp/
240
+ const s3db = new S3db({
241
+ uri: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp"
242
+ });
257
243
  ```
258
244
 
259
- ##### Reference:
245
+ ### 2. Resources (Collections)
260
246
 
261
- You may just use the reference:
247
+ Resources are like tables in traditional databases - they define the structure of your documents.
262
248
 
263
249
  ```javascript
264
- const Leads = s3db.resource("leads");
250
+ const users = await s3db.createResource({
251
+ name: "users",
252
+ attributes: {
253
+ name: "string|min:2|max:100",
254
+ email: "email|unique",
255
+ age: "number|integer|positive",
256
+ profile: {
257
+ bio: "string|optional",
258
+ avatar: "url|optional"
259
+ },
260
+ tags: "array|items:string",
261
+ metadata: "object|optional"
262
+ }
263
+ });
265
264
  ```
266
265
 
267
- ##### Limitations:
268
-
269
- As we need to store the resource definition within a JSON file, to keep your definitions intact the best way is to use the [string-based shorthand definitions](https://github.com/icebob/fastest-validator#shorthand-definitions) in your resource definition.
266
+ #### Automatic Timestamps
270
267
 
271
- By design, the resource definition **will strip all functions** in attributes to avoid `eval()` calls.
268
+ If you enable the `timestamps` option, `s3db.js` will automatically add `createdAt` and `updatedAt` fields to your resource, and keep them updated on insert and update operations.
272
269
 
273
- The `fastest-validator` starts with the params below:
270
+ ```js
271
+ const users = await s3db.createResource({
272
+ name: "users",
273
+ attributes: { name: "string", email: "email" },
274
+ options: { timestamps: true }
275
+ });
274
276
 
275
- ```javascript
276
- // fastest-validator params
277
- {
278
- useNewCustomCheckerFunction: true,
279
- defaults: {
280
- object: {
281
- strict: "remove",
282
- },
283
- },
284
- }
277
+ const user = await users.insert({ name: "John", email: "john@example.com" });
278
+ console.log(user.createdAt); // e.g. "2024-06-27T12:34:56.789Z"
279
+ console.log(user.updatedAt); // same as createdAt on insert
285
280
  ```
286
281
 
287
- ---
282
+ #### Resource Behaviors
288
283
 
289
- ## Resources methods
284
+ `s3db.js` provides a powerful behavior system to handle how your data is managed when it approaches or exceeds S3's 2KB metadata limit. Each behavior implements different strategies for handling large documents.
290
285
 
291
- Consider `resource` as:
286
+ ##### Available Behaviors
292
287
 
293
- ```javascript
294
- const resource = s3db.resource("leads");
295
- ```
288
+ | Behavior | Description | Use Case |
289
+ |----------|-------------|----------|
290
+ | `user-management` | **Default** - Emits warnings but allows operations | Development and testing |
291
+ | `enforce-limits` | Throws errors when limit is exceeded | Strict data size control |
292
+ | `data-truncate` | Truncates data to fit within limits | Preserve structure, lose data |
293
+ | `body-overflow` | Stores excess data in S3 object body | Preserve all data |
296
294
 
297
- ### Insert one
295
+ ##### Behavior Configuration
298
296
 
299
297
  ```javascript
300
- // data
301
- const insertedData = await resource.insert({
302
- id: "mypersonal@email.com", // if not defined a id will be generated!
303
- utm: {
304
- source: "abc",
305
- },
306
- lead: {
307
- fullName: "My Complex Name",
308
- personalEmail: "mypersonal@email.com",
309
- mobileNumber: "+5511234567890",
298
+ const users = await s3db.createResource({
299
+ name: "users",
300
+ attributes: {
301
+ name: "string|min:2|max:100",
302
+ email: "email|unique",
303
+ bio: "string|optional",
304
+ preferences: "object|optional"
310
305
  },
311
- invalidAttr: "this attribute will disappear",
306
+ options: {
307
+ behavior: "body-overflow", // Choose behavior strategy
308
+ timestamps: true, // Enable automatic timestamps
309
+ partitions: { // Define data partitions
310
+ byRegion: {
311
+ fields: { region: "string" }
312
+ }
313
+ },
314
+ hooks: { // Custom operation hooks
315
+ preInsert: [async (data) => {
316
+ // Custom validation logic
317
+ return data;
318
+ }],
319
+ afterInsert: [async (data) => {
320
+ console.log("User created:", data.id);
321
+ }]
322
+ }
323
+ }
312
324
  });
313
-
314
- // {
315
- // id: "mypersonal@email.com",
316
- // utm: {
317
- // source: "abc",
318
- // },
319
- // lead: {
320
- // fullName: "My Complex Name",
321
- // personalEmail: "mypersonal@email.com",
322
- // mobileNumber: "+5511234567890",
323
- // },
324
- // invalidAttr: "this attribute will disappear",
325
- // }
326
325
  ```
327
326
 
328
- If not defined an id attribute, `s3db.js` will use [`nanoid`](https://github.com/ai/nanoid) to generate a random unique id!
327
+ ##### 1. User Management Behavior (Default)
329
328
 
330
- ### Get one
329
+ The default behavior that gives you full control over data size management:
331
330
 
332
331
  ```javascript
333
- const obj = await resource.get("mypersonal@email.com");
332
+ const users = await s3db.createResource({
333
+ name: "users",
334
+ attributes: { name: "string", email: "email" },
335
+ options: { behavior: "user-management" }
336
+ });
334
337
 
335
- // {
336
- // id: "mypersonal@email.com",
337
- // utm: {
338
- // source: "abc",
339
- // },
340
- // lead: {
341
- // fullName: "My Complex Name",
342
- // personalEmail: "mypersonal@email.com",
343
- // mobileNumber: "+5511234567890",
344
- // },
345
- // }
338
+ // Listen for limit warnings
339
+ users.on("exceedsLimit", (info) => {
340
+ console.log(`Document ${info.operation} exceeds 2KB limit:`, {
341
+ totalSize: info.totalSize,
342
+ limit: info.limit,
343
+ excess: info.excess
344
+ });
345
+ });
346
+
347
+ // Operations continue normally even if limit is exceeded
348
+ const user = await users.insert({
349
+ name: "John Doe",
350
+ email: "john@example.com",
351
+ largeBio: "Very long bio...".repeat(100) // Will trigger warning but succeed
352
+ });
346
353
  ```
347
354
 
348
- ### Update one
355
+ ##### 2. Enforce Limits Behavior
356
+
357
+ Strict behavior that prevents operations when data exceeds the limit:
349
358
 
350
359
  ```javascript
351
- const obj = await resource.update("mypersonal@email.com", {
352
- lead: {
353
- fullName: "My New Name",
354
- mobileNumber: "+5511999999999",
355
- },
360
+ const users = await s3db.createResource({
361
+ name: "users",
362
+ attributes: { name: "string", email: "email" },
363
+ options: { behavior: "enforce-limits" }
356
364
  });
357
365
 
358
- // {
359
- // id: "mypersonal@email.com",
360
- // utm: {
361
- // source: "abc",
362
- // },
363
- // lead: {
364
- // fullName: "My New Name",
365
- // personalEmail: "mypersonal@email.com",
366
- // mobileNumber: "+5511999999999",
367
- // },
368
- // }
366
+ try {
367
+ const user = await users.insert({
368
+ name: "John Doe",
369
+ email: "john@example.com",
370
+ largeBio: "Very long bio...".repeat(100)
371
+ });
372
+ } catch (error) {
373
+ console.error("Operation failed:", error.message);
374
+ // Error: S3 metadata size exceeds 2KB limit. Current size: 2500 bytes, limit: 2048 bytes
375
+ }
369
376
  ```
370
377
 
371
- ### Delete one
378
+ ##### 3. Data Truncate Behavior
379
+
380
+ Intelligently truncates data to fit within limits while preserving structure:
372
381
 
373
382
  ```javascript
374
- await resource.delete(id);
383
+ const users = await s3db.createResource({
384
+ name: "users",
385
+ attributes: { name: "string", email: "email", bio: "string" },
386
+ options: { behavior: "data-truncate" }
387
+ });
388
+
389
+ const user = await users.insert({
390
+ name: "John Doe",
391
+ email: "john@example.com",
392
+ bio: "This is a very long biography that will be truncated to fit within the 2KB metadata limit..."
393
+ });
394
+
395
+ console.log(user.bio); // "This is a very long biography that will be truncated to fit within the 2KB metadata limit..."
396
+ // Note: The bio will be truncated with "..." suffix if it exceeds available space
375
397
  ```
376
398
 
377
- ### Count
399
+ ##### 4. Body Overflow Behavior
400
+
401
+ Stores excess data in the S3 object body, preserving all information:
378
402
 
379
403
  ```javascript
380
- await resource.count();
404
+ const users = await s3db.createResource({
405
+ name: "users",
406
+ attributes: { name: "string", email: "email", bio: "string" },
407
+ options: { behavior: "body-overflow" }
408
+ });
381
409
 
382
- // 101
410
+ const user = await users.insert({
411
+ name: "John Doe",
412
+ email: "john@example.com",
413
+ bio: "This is a very long biography that will be stored in the S3 object body..."
414
+ });
415
+
416
+ // All data is preserved and automatically merged when retrieved
417
+ console.log(user.bio); // Full biography preserved
383
418
  ```
384
419
 
385
- ### Insert many
420
+ **How Body Overflow Works:**
421
+ - Small attributes stay in metadata for fast access
422
+ - Large attributes are moved to S3 object body
423
+ - Data is automatically merged when retrieved
424
+ - Maintains full data integrity
386
425
 
387
- You may bulk insert data with a friendly method that receives a list of objects.
426
+ ##### Complete Resource Configuration Reference
388
427
 
389
428
  ```javascript
390
- const objects = new Array(100).fill(0).map((v, k) => ({
391
- id: `bulk-${k}@mymail.com`,
392
- lead: {
393
- fullName: "My Test Name",
394
- personalEmail: `bulk-${k}@mymail.com`,
395
- mobileNumber: "+55 11 1234567890",
429
+ const resource = await s3db.createResource({
430
+ // Required: Resource name (unique within database)
431
+ name: "users",
432
+
433
+ // Required: Schema definition
434
+ attributes: {
435
+ // Basic types
436
+ name: "string|min:2|max:100",
437
+ email: "email|unique",
438
+ age: "number|integer|positive",
439
+ isActive: "boolean",
440
+
441
+ // Advanced types
442
+ website: "url",
443
+ uuid: "uuid",
444
+ createdAt: "date",
445
+ price: "currency|symbol:$",
446
+
447
+ // Encrypted fields
448
+ password: "secret",
449
+ apiKey: "secret",
450
+
451
+ // Nested objects
452
+ address: {
453
+ street: "string",
454
+ city: "string",
455
+ country: "string",
456
+ zipCode: "string|optional"
457
+ },
458
+
459
+ // Arrays
460
+ tags: "array|items:string|unique",
461
+ scores: "array|items:number|min:1",
462
+
463
+ // Multiple types
464
+ id: ["string", "number"],
465
+
466
+ // Complex nested structures
467
+ metadata: {
468
+ settings: "object|optional",
469
+ preferences: "object|optional"
470
+ }
396
471
  },
397
- }));
398
-
399
- await resource.insertMany(objects);
472
+
473
+ // Optional: Resource configuration
474
+ options: {
475
+ // Behavior strategy for handling 2KB metadata limits
476
+ behavior: "user-management", // "user-management" | "enforce-limits" | "data-truncate" | "body-overflow"
477
+
478
+ // Enable automatic timestamps
479
+ timestamps: true, // Adds createdAt and updatedAt fields
480
+
481
+ // Define data partitions for efficient querying
482
+ partitions: {
483
+ byRegion: {
484
+ fields: { region: "string" }
485
+ },
486
+ byAgeGroup: {
487
+ fields: { ageGroup: "string" }
488
+ },
489
+ byDate: {
490
+ fields: { createdAt: "date|maxlength:10" }
491
+ }
492
+ },
493
+
494
+ // Custom operation hooks
495
+ hooks: {
496
+ // Pre-operation hooks (can modify data)
497
+ preInsert: [
498
+ async (data) => {
499
+ // Validate or transform data before insert
500
+ if (!data.email.includes("@")) {
501
+ throw new Error("Invalid email format");
502
+ }
503
+ return data;
504
+ }
505
+ ],
506
+ preUpdate: [
507
+ async (id, data) => {
508
+ // Validate or transform data before update
509
+ return data;
510
+ }
511
+ ],
512
+ preDelete: [
513
+ async (id) => {
514
+ // Validate before deletion
515
+ return true; // Return false to abort
516
+ }
517
+ ],
518
+
519
+ // Post-operation hooks (cannot modify data)
520
+ afterInsert: [
521
+ async (data) => {
522
+ console.log("User created:", data.id);
523
+ }
524
+ ],
525
+ afterUpdate: [
526
+ async (id, data) => {
527
+ console.log("User updated:", id);
528
+ }
529
+ ],
530
+ afterDelete: [
531
+ async (id) => {
532
+ console.log("User deleted:", id);
533
+ }
534
+ ]
535
+ }
536
+ }
537
+ });
400
538
  ```
401
539
 
402
- Keep in mind that we need to send a request for each object to be created. There is an option to change the number of simultaneous connections that your client will handle.
540
+ ### 3. Schema Validation
541
+
542
+ `s3db.js` uses [fastest-validator](https://github.com/icebob/fastest-validator) for schema validation with robust handling of edge cases:
403
543
 
404
544
  ```javascript
405
- const s3db = new S3db({
406
- parallelism: 100, // default = 10
407
- });
545
+ const attributes = {
546
+ // Basic types
547
+ name: "string|min:2|max:100|trim",
548
+ email: "email|nullable",
549
+ age: "number|integer|positive",
550
+ isActive: "boolean",
551
+
552
+ // Advanced types
553
+ website: "url",
554
+ uuid: "uuid",
555
+ createdAt: "date",
556
+ price: "currency|symbol:$",
557
+
558
+ // Custom s3db types
559
+ password: "secret", // Encrypted field
560
+
561
+ // Nested objects (supports empty objects and null values)
562
+ address: {
563
+ street: "string",
564
+ city: "string",
565
+ country: "string",
566
+ zipCode: "string|optional"
567
+ },
568
+
569
+ // Arrays (robust serialization with special character handling)
570
+ tags: "array|items:string|unique", // Handles empty arrays: []
571
+ scores: "array|items:number|min:1", // Handles null arrays
572
+ categories: "array|items:string", // Handles arrays with pipe characters: ['tag|special', 'normal']
573
+
574
+ // Multiple types
575
+ id: ["string", "number"],
576
+
577
+ // Complex nested structures
578
+ metadata: {
579
+ settings: "object|optional", // Can be empty: {}
580
+ preferences: "object|optional" // Can be null
581
+ }
582
+ };
408
583
  ```
409
584
 
410
- This method uses [`supercharge/promise-pool`](https://github.com/supercharge/promise-pool) to organize the parallel promises.
585
+ ### Enhanced Array and Object Handling
411
586
 
412
- ### Get many
587
+ s3db.js now provides robust serialization for complex data structures:
413
588
 
414
589
  ```javascript
415
- await resource.getMany(["id1", "id2", "id3 "]);
590
+ // ✅ Supported: Empty arrays and objects
591
+ const user = await users.insert({
592
+ name: "John Doe",
593
+ tags: [], // Empty array - properly serialized
594
+ metadata: {}, // Empty object - properly handled
595
+ preferences: null // Null object - correctly preserved
596
+ });
416
597
 
417
- // [
418
- // obj1,
419
- // obj2,
420
- // obj3,
421
- // ]
598
+ // ✅ Supported: Arrays with special characters
599
+ const product = await products.insert({
600
+ name: "Widget",
601
+ categories: ["electronics|gadgets", "home|office"], // Pipe characters escaped
602
+ tags: ["tag|with|pipes", "normal-tag"] // Multiple pipes handled
603
+ });
422
604
  ```
423
605
 
424
- ### Get all
606
+ ## 🛠️ API Reference
425
607
 
426
- ```javascript
427
- const data = await resource.getAll();
608
+ ### Database Operations
609
+
610
+ #### Connect to Database
428
611
 
429
- // [
430
- // obj1,
431
- // obj2,
432
- // ...
433
- // ]
612
+ ```javascript
613
+ await s3db.connect();
614
+ // Emits 'connected' event when ready
434
615
  ```
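
Since connection status is surfaced as events, you can also attach listeners before calling `connect()` — a small sketch using the `connected` and `error` events mentioned elsewhere in this README:

```javascript
s3db.on("connected", () => console.log("Database ready"));
s3db.on("error", (err) => console.error("Database error:", err));

await s3db.connect();
```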
435
616
 
436
- ### Delete many
617
+ #### Create Resource
437
618
 
438
619
  ```javascript
439
- await resource.deleteMany(["id1", "id2", "id3 "]);
620
+ const resource = await s3db.createResource({
621
+ name: "users",
622
+ attributes: {
623
+ name: "string",
624
+ email: "email"
625
+ }
626
+ });
440
627
  ```
441
628
 
442
- ### Delete all
629
+ #### Get Resource Reference
443
630
 
444
631
  ```javascript
445
- await resource.deleteAll();
632
+ const users = s3db.resource("users");
633
+ // or
634
+ const users = s3db.resources.users;
446
635
  ```
447
636
 
448
- ### List ids
637
+ ### Resource Operations
638
+
639
+ #### Insert Document
449
640
 
450
641
  ```javascript
451
- const ids = await resource.listIds();
642
+ // With custom ID
643
+ const user = await users.insert({
644
+ id: "user-123",
645
+ name: "John Doe",
646
+ email: "john@example.com"
647
+ });
452
648
 
453
- // [
454
- // 'id1',
455
- // 'id2',
456
- // 'id3',
457
- // ]
649
+ // Auto-generated ID
650
+ const user = await users.insert({
651
+ name: "Jane Doe",
652
+ email: "jane@example.com"
653
+ });
654
+ // ID will be auto-generated using nanoid
458
655
  ```
459
656
 
460
- ---
657
+ #### Get Document
461
658
 
462
- ## Resource streams
463
-
464
- As we need to request the metadata for each id to return its attributes, a better way to handle a huge amount of data might be using streams.
659
+ ```javascript
660
+ const user = await users.get("user-123");
661
+ console.log(user.name); // "John Doe"
662
+ ```
465
663
 
466
- ### Readable stream
664
+ #### Update Document
467
665
 
468
666
  ```javascript
469
- const readableStream = await resource.readable();
470
-
471
- readableStream.on("id", (id) => console.log("id =", id));
472
- readableStream.on("data", (lead) => console.log("lead.id =", lead.id));
473
- readableStream.on("end", console.log("end"));
667
+ const updatedUser = await users.update("user-123", {
668
+ name: "John Smith",
669
+ age: 31
670
+ });
671
+ // Only specified fields are updated
474
672
  ```
475
673
 
476
- ### Writable stream
674
+ #### Upsert Document
477
675
 
478
676
  ```javascript
479
- const writableStream = await resource.writable();
480
-
481
- writableStream.write({
482
- lead: {
483
- fullName: "My Test Name",
484
- personalEmail: `bulk-${k}@mymail.com`,
485
- mobileNumber: "+55 11 1234567890",
486
- },
677
+ // Insert if doesn't exist, update if exists
678
+ const user = await users.upsert("user-123", {
679
+ name: "John Doe",
680
+ email: "john@example.com",
681
+ age: 30
487
682
  });
488
683
  ```
489
684
 
490
- ---
685
+ #### Delete Document
491
686
 
492
- ## S3 Client
687
+ ```javascript
688
+ await users.delete("user-123");
689
+ ```
493
690
 
494
- `s3db.js` has a S3 proxied client named [`S3Client`](https://github.com/forattini-dev/s3db.js/blob/main/src/s3-client.class.ts). It brings a few handy and less verbose functions to deal with AWS S3's api.
691
+ #### Count Documents
495
692
 
496
693
  ```javascript
497
- import { S3Client } from "s3db.js";
498
-
499
- const client = new S3Client({ connectionString });
694
+ const count = await users.count();
695
+ console.log(`Total users: ${count}`);
500
696
  ```
501
697
 
502
- Each method has a **[:link:](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html) link** to the official `aws-sdk` docs.
698
+ ### Bulk Operations
503
699
 
504
- ##### getObject [:link:](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property)
700
+ #### Insert Many
505
701
 
506
702
  ```javascript
507
- const { Body, Metadata } = await client.getObject({
508
- key: `my-prefixed-file.csv`,
509
- });
703
+ const newUsers = [
704
+ { name: "User 1", email: "user1@example.com" },
705
+ { name: "User 2", email: "user2@example.com" },
706
+ { name: "User 3", email: "user3@example.com" }
707
+ ];
510
708
 
511
- // AWS.Response
709
+ await users.insertMany(newUsers);
512
710
  ```
513
711
 
514
- ##### putObject [:link:](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property)
712
+ #### Get Many
515
713
 
516
714
  ```javascript
517
- const response = await client.putObject({
518
- key: `my-prefixed-file.csv`,
519
- contentType: "text/csv",
520
- metadata: { a: "1", b: "2", c: "3" },
521
- body: "a;b;c\n1;2;3\n4;5;6",
522
- });
523
-
524
- // AWS.Response
715
+ const userList = await users.getMany(["user-1", "user-2", "user-3"]);
525
716
  ```
526
717
 
527
- ##### headObject [:link:](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#headObject-property)
718
+ #### Delete Many
528
719
 
529
720
  ```javascript
530
- const { Metadata } = await client.headObject({
531
- key: `my-prefixed-file.csv`,
532
- });
533
-
534
- // AWS.Response
721
+ await users.deleteMany(["user-1", "user-2", "user-3"]);
535
722
  ```
536
723
 
537
- ##### deleteObject [:link:](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#deleteObject-property)
724
+ #### Get All
538
725
 
539
726
  ```javascript
540
- const response = await client.deleteObject({
541
- key: `my-prefixed-file.csv`,
542
- });
543
-
544
- // AWS.Response
727
+ const allUsers = await users.getAll();
728
+ // Returns all documents in the resource
545
729
  ```
546
730
 
547
- ##### deleteObjects [:link:](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#deleteObjects-property)
731
+ #### List IDs
548
732
 
549
733
  ```javascript
550
- const response = await client.deleteObjects({
551
- keys: [`my-prefixed-file.csv`, `my-other-prefixed-file.csv`],
552
- });
553
-
554
- // AWS.Response
734
+ const userIds = await users.listIds();
735
+ // Returns array of all document IDs
555
736
  ```
556
737
 
557
- ##### listObjects [:link:](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property)
738
+ #### Delete All
558
739
 
559
740
  ```javascript
560
- const response = await client.listObjects({
561
- prefix: `my-subdir`,
562
- });
563
-
564
- // AWS.Response
741
+ await users.deleteAll();
742
+ // ⚠️ Destructive operation - removes all documents
565
743
  ```
566
744
 
567
- ##### count
745
+ ## 📊 Examples
568
746
 
569
- Custom made method to make it easier to count keys within a listObjects loop.
747
+ ### E-commerce Application
570
748
 
571
749
  ```javascript
572
- const count = await client.count({
573
- prefix: `my-subdir`,
750
+ // Create product resource with body-overflow behavior for long descriptions
751
+ const products = await s3db.createResource({
752
+ name: "products",
753
+ attributes: {
754
+ name: "string|min:2|max:200",
755
+ description: "string|optional",
756
+ price: "number|positive",
757
+ category: "string",
758
+ tags: "array|items:string",
759
+ inStock: "boolean",
760
+ images: "array|items:url",
761
+ metadata: "object|optional"
762
+ },
763
+ options: {
764
+ behavior: "body-overflow", // Handle long product descriptions
765
+ timestamps: true // Track creation and update times
766
+ }
574
767
  });
575
768
 
576
- // 10
577
- ```
769
+ // Create order resource with enforce-limits for strict data control
770
+ const orders = await s3db.createResource({
771
+ name: "orders",
772
+ attributes: {
773
+ customerId: "string",
774
+ products: "array|items:string",
775
+ total: "number|positive",
776
+ status: "string|enum:pending,paid,shipped,delivered",
777
+ shippingAddress: {
778
+ street: "string",
779
+ city: "string",
780
+ country: "string",
781
+ zipCode: "string"
782
+ },
783
+ createdAt: "date"
784
+ },
785
+ options: {
786
+ behavior: "enforce-limits", // Strict validation for order data
787
+ timestamps: true
788
+ }
789
+ });
578
790
 
579
- ##### getAllKeys
791
+ // Insert products (long descriptions will be handled by body-overflow)
792
+ const product = await products.insert({
793
+ name: "Wireless Headphones",
794
+ description: "High-quality wireless headphones with noise cancellation, 30-hour battery life, premium comfort design, and crystal-clear audio quality. Perfect for music lovers, professionals, and gamers alike. Features include Bluetooth 5.0, active noise cancellation, touch controls, and a premium carrying case.",
795
+ price: 99.99,
796
+ category: "electronics",
797
+ tags: ["wireless", "bluetooth", "audio", "noise-cancellation"],
798
+ inStock: true,
799
+ images: ["https://example.com/headphones.jpg"]
800
+ });
580
801
 
581
- Custom made method to make it easier to return all keys in a subpath within a listObjects loop.
802
+ // Create order (enforce-limits ensures data integrity)
803
+ const order = await orders.insert({
804
+ customerId: "customer-123",
805
+ products: [product.id],
806
+ total: 99.99,
807
+ status: "pending",
808
+ shippingAddress: {
809
+ street: "123 Main St",
810
+ city: "New York",
811
+ country: "USA",
812
+ zipCode: "10001"
813
+ },
814
+ createdAt: new Date()
815
+ });
816
+ ```
582
817
 
583
- All returned keys will have their full path replaced with the current "scope" path.
818
+ ### User Authentication System
584
819
 
585
820
  ```javascript
586
- const keys = await client.getAllKeys({
587
- prefix: `my-subdir`,
821
+ // Create users resource with encrypted password and strict validation
822
+ const users = await s3db.createResource({
823
+ name: "users",
824
+ attributes: {
825
+ username: "string|min:3|max:50|unique",
826
+ email: "email|unique",
827
+ password: "secret", // Encrypted field
828
+ role: "string|enum:user,admin,moderator",
829
+ isActive: "boolean",
830
+ lastLogin: "date|optional",
831
+ profile: {
832
+ firstName: "string",
833
+ lastName: "string",
834
+ avatar: "url|optional",
835
+ bio: "string|optional"
836
+ }
837
+ },
838
+ options: {
839
+ behavior: "enforce-limits", // Strict validation for user data
840
+ timestamps: true // Track account creation and updates
841
+ }
588
842
  });
589
843
 
590
- // [
591
- // key1,
592
- // key2,
593
- // ...
594
- // ]
844
+ // Create sessions resource with body-overflow for session data
845
+ const sessions = await s3db.createResource({
846
+ name: "sessions",
847
+ attributes: {
848
+ userId: "string",
849
+ token: "secret", // Encrypted session token
850
+ expiresAt: "date",
851
+ userAgent: "string|optional",
852
+ ipAddress: "string|optional",
853
+ sessionData: "object|optional" // Additional session metadata
854
+ },
855
+ options: {
856
+ behavior: "body-overflow", // Handle large session data
857
+ timestamps: true
858
+ }
859
+ });
860
+
861
+ // Register user (enforce-limits ensures data integrity)
862
+ const user = await users.insert({
863
+ username: "john_doe",
864
+ email: "john@example.com",
865
+ password: "secure_password_123",
866
+ role: "user",
867
+ isActive: true,
868
+ profile: {
869
+ firstName: "John",
870
+ lastName: "Doe"
871
+ }
872
+ });
873
+
874
+ // Create session (body-overflow preserves all session data)
875
+ const session = await sessions.insert({
876
+ userId: user.id,
877
+ token: "jwt_token_here",
878
+ expiresAt: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours
879
+ userAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
880
+ ipAddress: "192.168.1.1",
881
+ sessionData: {
882
+ preferences: { theme: "dark", language: "en" },
883
+ lastActivity: new Date(),
884
+ deviceInfo: { type: "desktop", os: "Windows" }
885
+ }
886
+ });
595
887
  ```
596
888
 
597
- ---
889
+ ## 🔄 Streaming
598
890
 
599
- ## Events
891
+ For large datasets, use streams to process data efficiently:
600
892
 
601
- The 3 main classes `S3db`, `Resource` and `S3Client` are extensions of Javascript's `EventEmitter`.
893
+ ### Readable Stream
602
894
 
603
- | S3Database | S3Client | S3Resource | S3Resource Readable Stream |
604
- | ---------- | ------------- | ---------- | -------------------------- |
605
- | error | error | error | error |
606
- | connected | request | insert | id |
607
- | | response | get | data |
608
- | | response | update | |
609
- | | getObject | delete | |
610
- | | putObject | count | |
611
- | | headObject | insertMany | |
612
- | | deleteObject | deleteAll | |
613
- | | deleteObjects | listIds | |
614
- | | listObjects | getMany | |
615
- | | count | getAll | |
616
- | | getAllKeys | | |
895
+ ```javascript
896
+ const readableStream = await users.readable();
617
897
 
618
- ### S3Database
898
+ readableStream.on("id", (id) => {
899
+ console.log("Processing user ID:", id);
900
+ });
619
901
 
620
- #### error
902
+ readableStream.on("data", (user) => {
903
+ console.log("User:", user.name);
904
+ // Process each user
905
+ });
621
906
 
622
- ```javascript
623
- s3db.on("error", (error) => console.error(error));
907
+ readableStream.on("end", () => {
908
+ console.log("Finished processing all users");
909
+ });
910
+
911
+ readableStream.on("error", (error) => {
912
+ console.error("Stream error:", error);
913
+ });
624
914
  ```
625
915
 
626
- #### connected
916
+ ### Writable Stream
627
917
 
628
918
  ```javascript
629
- s3db.on("connected", () => {});
630
- ```
919
+ const writableStream = await users.writable();
631
920
 
632
- ### S3Client
921
+ // Write data to stream
922
+ writableStream.write({
923
+ name: "User 1",
924
+ email: "user1@example.com"
925
+ });
633
926
 
634
- Using this reference for the events:
927
+ writableStream.write({
928
+ name: "User 2",
929
+ email: "user2@example.com"
930
+ });
635
931
 
636
- ```javascript
637
- const client = s3db.client;
932
+ // End stream
933
+ writableStream.end();
638
934
  ```
639
935
 
640
- #### error
936
+ ### Stream to CSV
641
937
 
642
938
  ```javascript
643
- client.on("error", (error) => console.error(error));
644
- ```
939
+ import fs from "fs";
940
+ import { createObjectCsvWriter } from "csv-writer";
941
+
942
+ const csvWriter = createObjectCsvWriter({
943
+ path: "users.csv",
944
+ header: [
945
+ { id: "id", title: "ID" },
946
+ { id: "name", title: "Name" },
947
+ { id: "email", title: "Email" }
948
+ ]
949
+ });
645
950
 
646
- #### request
951
+ const readableStream = await users.readable();
952
+ const records = [];
647
953
 
648
- Emitted when a request is generated to AWS.
954
+ readableStream.on("data", (user) => {
955
+ records.push(user);
956
+ });
649
957
 
650
- ```javascript
651
- client.on("request", (action, params) => {});
958
+ readableStream.on("end", async () => {
959
+ await csvWriter.writeRecords(records);
960
+ console.log("CSV file created successfully");
961
+ });
652
962
  ```
653
963
 
654
- #### response
964
+ ## 🔐 Security & Encryption
655
965
 
656
- Emitted when a response is received from AWS.
966
+ ### Field-Level Encryption
967
+
968
+ Use the `"secret"` type for sensitive data:
657
969
 
658
970
  ```javascript
659
- client.on("response", (action, params, response) => {});
660
- ```
971
+ const users = await s3db.createResource({
972
+ name: "users",
973
+ attributes: {
974
+ username: "string",
975
+ email: "email",
976
+ password: "secret", // Encrypted
977
+ apiKey: "secret", // Encrypted
978
+ creditCard: "secret" // Encrypted
979
+ }
980
+ });
661
981
 
662
- #### getObject
982
+ // Data is automatically encrypted/decrypted
983
+ const user = await users.insert({
984
+ username: "john_doe",
985
+ email: "john@example.com",
986
+ password: "my_secure_password", // Stored encrypted
987
+ apiKey: "sk_live_123456789", // Stored encrypted
988
+ creditCard: "4111111111111111" // Stored encrypted
989
+ });
663
990
 
664
- ```javascript
665
- client.on("getObject", (options, response) => {});
991
+ // Retrieved data is automatically decrypted
992
+ const retrieved = await users.get(user.id);
993
+ console.log(retrieved.password); // "my_secure_password" (decrypted)
666
994
  ```
667
995
 
668
- #### putObject
996
+ ### Custom Encryption Key
669
997
 
670
998
  ```javascript
671
- client.on("putObject", (options, response) => {});
999
+ import fs from "fs";
1000
+
1001
+ const s3db = new S3db({
1002
+ uri: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp",
1003
+ passphrase: fs.readFileSync("./private-key.pem") // Custom encryption key
1004
+ });
672
1005
  ```
673
1006
 
674
- #### headObject
1007
+ ## 💰 Cost Analysis
675
1008
 
676
- ```javascript
677
- client.on("headObject", (options, response) => {});
678
- ```
1009
+ ### Understanding S3 Costs
679
1010
 
680
- #### deleteObject
1011
+ - **PUT Requests**: ~$0.000005 per request ($0.005 per 1,000 requests)
1012
+ - **GET Requests**: ~$0.0000004 per request ($0.0004 per 1,000 requests)
1013
+ - **Data Transfer**: $0.09 per GB
1014
+ - **Storage**: $0.023 per GB (but s3db.js uses 0-byte files)
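
To turn these per-request prices into a quick estimate, a back-of-the-envelope helper (a sketch only; actual bills vary by region, storage class, and free tiers) might look like:

```javascript
// Rough request-cost estimator using the prices listed above
function estimateS3Cost({ puts = 0, gets = 0, gbTransferred = 0 }) {
  const PUT_PRICE = 0.000005;   // USD per PUT request
  const GET_PRICE = 0.0000004;  // USD per GET request
  const TRANSFER_PRICE = 0.09;  // USD per GB transferred out

  return puts * PUT_PRICE + gets * GET_PRICE + gbTransferred * TRANSFER_PRICE;
}

// 1,000,000 inserts + 1,000,000 reads ~= $5.40
console.log(estimateS3Cost({ puts: 1_000_000, gets: 1_000_000 }).toFixed(2));
```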
681
1015
 
682
- ```javascript
683
- client.on("deleteObject", (options, response) => {});
684
- ```
1016
+ ### Cost Examples
685
1017
 
686
- #### deleteObjects
1018
+ #### Small Application (1,000 users)
687
1019
 
688
1020
  ```javascript
689
- client.on("deleteObjects", (options, response) => {});
690
- ```
1021
+ // Setup cost (one-time)
1022
+ const setupCost = 0.005; // 1,000 PUT requests
691
1023
 
692
- #### listObjects
1024
+ // Monthly read cost
1025
+ const monthlyReadCost = 0.0004; // 1,000 GET requests
693
1026
 
694
- ```javascript
695
- client.on("listObjects", (options, response) => {});
1027
+ console.log(`Setup: $${setupCost}`);
1028
+ console.log(`Monthly reads: $${monthlyReadCost}`);
696
1029
  ```
697
1030
 
698
- #### count
1031
+ #### Large Application (1,000,000 users)
699
1032
 
700
1033
  ```javascript
701
- client.on("count", (options, response) => {});
702
- ```
1034
+ // Setup cost (one-time)
1035
+ const setupCost = 5.00; // 1,000,000 PUT requests
703
1036
 
704
- #### getAllKeys
1037
+ // Monthly read cost
1038
+ const monthlyReadCost = 0.40; // 1,000,000 GET requests
705
1039
 
706
- ```javascript
707
- client.on("getAllKeys", (options, response) => {});
1040
+ console.log(`Setup: $${setupCost}`);
1041
+ console.log(`Monthly reads: $${monthlyReadCost}`);
708
1042
  ```
709
1043
 
710
- ### S3Resource
711
-
712
- Using this reference for the events:
1044
+ ### Cost Tracking Plugin
713
1045
 
714
1046
  ```javascript
715
- const resource = s3db.resource("leads");
716
- ```
1047
+ import { CostsPlugin } from "s3db.js";
717
1048
 
718
- #### error
1049
+ const s3db = new S3db({
1050
+ uri: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp",
1051
+ plugins: [CostsPlugin]
1052
+ });
719
1053
 
720
- ```javascript
721
- resource.on("error", (err) => console.error(err));
1054
+ // After operations
1055
+ console.log("Total cost:", s3db.client.costs.total.toFixed(4), "USD");
1056
+ console.log("Requests made:", s3db.client.costs.requests.total);
722
1057
  ```
723
1058
 
724
- #### insert
1059
+ ## 🎛️ Advanced Features
725
1060
 
726
- ```javascript
727
- resource.on("insert", (data) => {});
728
- ```
1061
+ ### AutoEncrypt / AutoDecrypt
729
1062
 
730
- #### get
1063
+ Fields with the type `secret` are automatically encrypted and decrypted using the resource's passphrase. This ensures sensitive data is protected at rest.
731
1064
 
732
- ```javascript
733
- resource.on("get", (data) => {});
734
- ```
1065
+ ```js
1066
+ const users = await s3db.createResource({
1067
+ name: "users",
1068
+ attributes: {
1069
+ username: "string",
1070
+ password: "secret" // Will be encrypted
1071
+ }
1072
+ });
735
1073
 
736
- #### update
1074
+ const user = await users.insert({
1075
+ username: "john_doe",
1076
+ password: "my_secret_password"
1077
+ });
737
1078
 
738
- ```javascript
739
- resource.on("update", (attrs, data) => {});
1079
+ // The password is stored encrypted in S3, but automatically decrypted when retrieved
1080
+ const retrieved = await users.get(user.id);
1081
+ console.log(retrieved.password); // "my_secret_password"
740
1082
  ```
741
1083
 
742
- #### delete
1084
+ ### Resource Events
743
1085
 
744
- ```javascript
745
- resource.on("delete", (id) => {});
1086
+ All resources emit events for key operations. You can listen to these events for logging, analytics, or custom workflows.
1087
+
1088
+ ```js
1089
+ users.on("insert", (data) => console.log("User inserted:", data.id));
1090
+ users.on("get", (data) => console.log("User retrieved:", data.id));
1091
+ users.on("update", (attrs, data) => console.log("User updated:", data.id));
1092
+ users.on("delete", (id) => console.log("User deleted:", id));
746
1093
  ```
747
1094
 
748
- #### count
1095
+ ### Resource Schema Export/Import
749
1096
 
750
- ```javascript
751
- resource.on("count", (count) => {});
752
- ```
1097
+ You can export and import resource schemas for backup, migration, or versioning purposes.
753
1098
 
754
- #### insertMany
1099
+ ```js
1100
+ // Export schema
1101
+ const schemaData = users.schema.export();
755
1102
 
756
- ```javascript
757
- resource.on("insertMany", (count) => {});
1103
+ // Import schema
1104
+ const importedSchema = Schema.import(schemaData);
758
1105
  ```
759
1106
 
760
- #### getMany
1107
+ ## Partitions
761
1108
 
762
- ```javascript
763
- resource.on("getMany", (count) => {});
1109
+ `s3db.js` supports **partitions** to organize and query your data efficiently. Partitions allow you to group documents by one or more fields, making it easy to filter, archive, or manage large datasets.
1110
+
1111
+ ### Defining partitions
1112
+
1113
+ You can define partitions when creating a resource using the `options.partitions` property:
1114
+
1115
+ ```js
1116
+ const users = await s3db.createResource({
1117
+ name: "users",
1118
+ attributes: {
1119
+ name: "string",
1120
+ email: "email",
1121
+ region: "string",
1122
+ ageGroup: "string"
1123
+ },
1124
+ options: {
1125
+ partitions: {
1126
+ byRegion: {
1127
+ fields: { region: "string" }
1128
+ },
1129
+ byAgeGroup: {
1130
+ fields: { ageGroup: "string" }
1131
+ }
1132
+ }
1133
+ }
1134
+ });
764
1135
  ```
765
1136
 
766
- #### getAll
1137
+ ### Querying by partition
767
1138
 
768
- ```javascript
769
- resource.on("getAll", (count) => {});
1139
+ ```js
1140
+ // Find all users in the 'south' region
1141
+ const usersSouth = await users.query({ region: "south" });
1142
+
1143
+ // Find all users in the 'adult' age group
1144
+ const adults = await users.query({ ageGroup: "adult" });
770
1145
  ```
771
1146
 
772
- #### deleteAll
1147
+ ### Example: Time-based partition
773
1148
 
774
- ```javascript
775
- resource.on("deleteAll", (count) => {});
1149
+ ```js
1150
+ const logs = await s3db.createResource({
1151
+ name: "logs",
1152
+ attributes: {
1153
+ message: "string",
1154
+ level: "string",
1155
+ createdAt: "date"
1156
+ },
1157
+ options: {
1158
+ partitions: {
1159
+ byDate: {
1160
+ fields: { createdAt: "date|maxlength:10" }
1161
+ }
1162
+ }
1163
+ }
1164
+ });
1165
+
1166
+ // Query logs for a specific day
1167
+ const logsToday = await logs.query({ createdAt: "2024-06-27" });
776
1168
  ```
777
1169
 
778
- #### listIds
1170
+ ## Hooks
1171
+
1172
+ `s3db.js` provides a powerful hooks system to let you run custom logic before and after key operations on your resources. Hooks can be used for validation, transformation, logging, or any custom workflow.
1173
+
1174
+ ### Supported hooks
1175
+ - `preInsert` / `afterInsert`
1176
+ - `preUpdate` / `afterUpdate`
1177
+ - `preDelete` / `afterDelete`
1178
+
1179
+ ### Registering hooks
1180
+ You can register hooks when creating a resource or dynamically:
1181
+
1182
+ ```js
1183
+ const users = await s3db.createResource({
1184
+ name: "users",
1185
+ attributes: { name: "string", email: "email" },
1186
+ options: {
1187
+ hooks: {
1188
+ preInsert: [async (data) => {
1189
+ if (!data.email.includes("@")) throw new Error("Invalid email");
1190
+ return data;
1191
+ }],
1192
+ afterInsert: [async (data) => {
1193
+ console.log("User inserted:", data.id);
1194
+ }]
1195
+ }
1196
+ }
1197
+ });
779
1198
 
780
- ```javascript
781
- resource.on("listIds", (count) => {});
1199
+ // Or dynamically:
1200
+ users.addHook('preInsert', async (data) => {
1201
+ // Custom logic
1202
+ return data;
1203
+ });
782
1204
  ```
783
1205
 
784
- ---
1206
+ ### Hook execution order
1207
+ - Internal hooks run first, user hooks run last (in the order they were added).
1208
+ - Hooks can be async and can modify the data (for `pre*` hooks).
1209
+ - If a hook throws, the operation is aborted.
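
Presumably the error then surfaces as a rejected promise on the calling side. A short sketch, reusing the `users` resource and `addHook` from above:

```javascript
// A preInsert hook that rejects unwanted data
users.addHook("preInsert", async (data) => {
  if (!data.email.endsWith("@example.com")) {
    throw new Error("only example.com addresses are allowed");
  }
  return data;
});

try {
  await users.insert({ name: "Eve", email: "eve@other.org" });
} catch (err) {
  console.error("Insert aborted by hook:", err.message);
}
```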
785
1210
 
786
1211
  ## Plugins
787
1212
 
788
- Anatomy of a plugin:
1213
+ `s3db.js` supports plugins to extend or customize its behavior. Plugins can hook into lifecycle events, add new methods, or integrate with external systems.
789
1214
 
790
- ```javascript
1215
+ ### Example: Custom plugin
1216
+
1217
+ ```js
791
1218
  const MyPlugin = {
792
- setup(s3db: S3db) {},
793
- start() {},
1219
+ setup(s3db) {
1220
+ console.log("Plugin setup");
1221
+ },
1222
+ start() {
1223
+ console.log("Plugin started");
1224
+ },
1225
+ onUserCreated(user) {
1226
+ console.log("New user created:", user.id);
1227
+ }
794
1228
  };
1229
+
1230
+ const s3db = new S3db({
1231
+ uri: "s3://...",
1232
+ plugins: [MyPlugin]
1233
+ });
795
1234
  ```
796
1235
 
797
- We have an example of a _costs simulator plugin_ [here!](https://github.com/forattini-dev/s3db.js/blob/main/src/plugins/costs.plugin.js)
1236
+ ## ๐Ÿšจ Limitations & Best Practices
798
1237
 
799
- ---
1238
+ ### Limitations
800
1239
 
801
- ## Cost simulation
1240
+ 1. **Document Size**: Maximum ~2KB per document (metadata only) - **๐Ÿ’ก Use behaviors to handle larger documents**
1241
+ 2. **No Complex Queries**: No SQL-like WHERE clauses or joins
1242
+ 3. **No Indexes**: No automatic indexing for fast lookups
1243
+ 4. **Sequential IDs**: Best performance with sequential IDs (00001, 00002, etc.)
1244
+ 5. **No Transactions**: No ACID transactions across multiple operations
1245
+ 6. **S3 Pagination**: S3 lists objects in pages of at most 1,000 keys, and pages must be fetched sequentially, so listing large datasets can be slow
802
1246
 
803
- S3's pricing deep dive:
1247
+ **๐Ÿ’ก Overcoming the 2KB Limit**: Use resource behaviors to handle documents that exceed the 2KB metadata limit:
1248
+ - **`body-overflow`**: Stores excess data in S3 object body (preserves all data)
1249
+ - **`data-truncate`**: Intelligently truncates data to fit within limits
1250
+ - **`enforce-limits`**: Strict validation to prevent oversized documents
1251
+ - **`user-management`**: Default behavior with warnings and monitoring
804
1252
 
805
- - Data volume [1 GB x 0.023 USD]: it relates to the total volume of storage used and requests volume but, in this implementation, we just upload `0 bytes` files.
806
- - GET Requests [1,000 GET requests in a month x 0.0000004 USD per request = 0.0004 USD]: every read requests
807
- - PUT Requests [1,000 PUT requests for S3 Standard Storage x 0.000005 USD per request = 0.005 USD]: every write request
808
- - Data transfer [Internet: 1 GB x 0.09 USD per GB = 0.09 USD]:
1253
+ ### โœ… Recent Improvements
809
1254
 
810
- Check by yourself the pricing page details at https://aws.amazon.com/s3/pricing/ and https://calculator.aws/#/addService/S3.
1255
+ **๐Ÿ”ง Enhanced Data Serialization (v3.3.2+)**
811
1256
 
812
- ### Big example
1257
+ s3db.js now handles complex data structures robustly (see the sketch after this list):
813
1258
 
814
- Lets try to simulate a big project where you have a database with a few tables:
1259
+ - **Empty Arrays**: `[]` correctly serialized and preserved
1260
+ - **Null Arrays**: `null` values maintained without corruption
1261
+ - **Special Characters**: Arrays with pipe `|` characters properly escaped
1262
+ - **Empty Objects**: `{}` correctly mapped and stored
1263
+ - **Null Objects**: `null` object values preserved during serialization
1264
+ - **Nested Structures**: Complex nested objects with mixed empty/null values supported
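+
+ As a quick illustration, a document mixing empty arrays, null values, and pipe characters should round-trip intact. This is only a sketch: the `profiles` resource, its attribute names, and the `array`/`object` attribute types are assumptions for illustration.
+
+ ```javascript
+ // Hypothetical resource; attribute names and types are assumptions for this sketch
+ const profiles = await s3db.createResource({
+   name: "profiles",
+   attributes: {
+     name: "string",
+     tags: "array",      // assumed array attribute type
+     settings: "object"  // assumed object attribute type
+   }
+ });
+
+ const saved = await profiles.insert({
+   name: "Ana | QA",         // pipe character is escaped, not corrupted
+   tags: [],                 // empty array is preserved
+   settings: { theme: null } // null nested value is preserved
+ });
+
+ const loaded = await profiles.get(saved.id);
+ console.log(loaded.tags);     // []
+ console.log(loaded.settings); // { theme: null }
+ ```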
815
1265
 
816
- - pageviews: 100,000,000 lines of 100 bytes each
817
- - leads: 1,000,000 lines of 200 bytes each
1266
+ ### Best Practices
818
1267
 
819
- ```javascript
820
- const Fakerator = require("fakerator");
821
- const fake = Fakerator("pt-BR");
1268
+ #### 1. Design for Document Storage
822
1269
 
823
- const pageview = {
824
- ip: this.faker.internet.ip(),
825
- domain: this.faker.internet.url(),
826
- path: this.faker.internet.url(),
827
- query: `?q=${this.faker.lorem.word()}`,
1270
+ ```javascript
1271
+ // โœ… Good: Nested structure is fine
1272
+ const user = {
1273
+ id: "user-123",
1274
+ name: "John Doe",
1275
+ email: "john@example.com",
1276
+ profile: {
1277
+ bio: "Software developer",
1278
+ avatar: "https://example.com/avatar.jpg",
1279
+ preferences: {
1280
+ theme: "dark",
1281
+ notifications: true
1282
+ }
1283
+ }
828
1284
  };
829
1285
 
830
- const lead = {
831
- name: fake.names.name(),
832
- mobile: fake.phone.number(),
833
- email: fake.internet.email(),
834
- country: "Brazil",
835
- city: fake.address.city(),
836
- state: fake.address.countryCode(),
837
- address: fake.address.street(),
1286
+ // โŒ Avoid: Large arrays in documents
1287
+ const bloatedUser = {
1288
+ id: "user-123",
1289
+ name: "John Doe",
1290
+ // This could exceed metadata limits
1291
+ purchaseHistory: [
1292
+ { id: "order-1", date: "2023-01-01", total: 99.99 },
1293
+ { id: "order-2", date: "2023-01-15", total: 149.99 },
1294
+ // ... many more items
1295
+ ]
838
1296
  };
839
1297
  ```
840
1298
 
841
- If you write the whole database of:
1299
+ #### 2. Use Sequential IDs
842
1300
 
843
- - pageviews:
844
- - 100,000,000 PUT requests for S3 Standard Storage x 0.000005 USD per request = 500.00 USD (S3 Standard PUT requests cost)
845
- - leads:
846
- - 1,000,000 PUT requests for S3 Standard Storage x 0.000005 USD per request = 5.00 USD (S3 Standard PUT requests cost)
1301
+ ```javascript
1302
+ // โœ… Good: Sequential IDs for better performance
1303
+ const users = ["00001", "00002", "00003", "00004"];
847
1304
 
848
- It will cost 505.00 USD, once.
1305
+ // โš ๏ธ Acceptable: Random IDs (but ensure sufficient uniqueness)
1306
+ const users = ["abc123", "def456", "ghi789", "jkl012"];
849
1307
 
850
- If you want to read the whole database:
1308
+ // โŒ Avoid: Random IDs with low combinations (risk of collisions)
1309
+ const users = ["a1", "b2", "c3", "d4"]; // Only 26*10 = 260 combinations
1310
+ ```
851
1311
 
852
- - pageviews:
853
- - 100,000,000 GET requests in a month x 0.0000004 USD per request = 40.00 USD (S3 Standard GET requests cost)
854
- - (100,000,000 ร— 100 bytes)รท(1024ร—1000ร—1000) โ‰… 10 Gb
855
- Internet: 10 GB x 0.09 USD per GB = 0.90 USD
856
- - leads:
857
- - 1,000,000 GET requests in a month x 0.0000004 USD per request = 0.40 USD (S3 Standard GET requests cost)
858
- - (1,000,000 ร— 200 bytes)รท(1024ร—1000ร—1000) โ‰… 0.19 Gb
859
- Internet: 1 GB x 0.09 USD per GB = 0.09 USD
1312
+ #### 3. Optimize for Read Patterns
860
1313
 
861
- It will cost 41.39 USD, once.
1314
+ ```javascript
1315
+ // โœ… Good: Store frequently accessed data together
1316
+ const order = {
1317
+ id: "order-123",
1318
+ customerId: "customer-456",
1319
+ customerName: "John Doe", // Denormalized for quick access
1320
+ items: ["product-1", "product-2"],
1321
+ total: 99.99
1322
+ };
862
1323
 
863
- ### Small example
1324
+ // โŒ Avoid: Requiring multiple lookups
1325
+ const sparseOrder = {
1326
+ id: "order-123",
1327
+ customerId: "customer-456", // Requires separate lookup
1328
+ items: ["product-1", "product-2"]
1329
+ };
1330
+ ```
864
1331
 
865
- Lets save some JWT tokens using the [RFC:7519](https://www.rfc-editor.org/rfc/rfc7519.html).
1332
+ #### 4. Use Streaming for Large Datasets
866
1333
 
867
1334
  ```javascript
868
- await s3db.createResource({
869
- name: "tokens",
870
- attributes: {
871
- iss: 'url|max:256',
872
- sub: 'string',
873
- aud: 'string',
874
- exp: 'number',
875
- email: 'email',
876
- name: 'string',
877
- scope: 'string',
878
- email_verified: 'boolean',
879
- })
880
-
881
- function generateToken () {
882
- const token = createTokenLib(...)
883
-
884
- await resource.insert({
885
- id: token.jti || md5(token)
886
- ...token,
887
- })
888
-
889
- return token
890
- }
1335
+ // โœ… Good: Use streams for large operations
1336
+ const readableStream = await users.readable();
1337
+ readableStream.on("data", (user) => {
1338
+ // Process each user individually
1339
+ });
1340
+
1341
+ // โŒ Avoid: Loading all data at once
1342
+ const allUsers = await users.getAll(); // May timeout with large datasets
1343
+ ```
891
1344
 
892
- function validateToken (token) {
893
- const id = token.jti || md5(token)
1345
+ #### 5. Implement Proper Error Handling
894
1346
 
895
- if (!validateTokenSignature(token, ...)) {
896
- await resource.deleteById(id)
897
- throw new Error('invalid-token')
1347
+ ```javascript
1348
+ // Method 1: Try-catch with get()
1349
+ try {
1350
+ const user = await users.get("non-existent-id");
1351
+ } catch (error) {
1352
+ if (error.message.includes("does not exist")) {
1353
+ console.log("User not found");
1354
+ } else {
1355
+ console.error("Unexpected error:", error);
898
1356
  }
1357
+ }
899
1358
 
900
- return resource.getById(id)
1359
+ // Method 2: Check existence first (โš ๏ธ Additional request cost)
1360
+ const userId = "user-123";
1361
+ if (await users.exists(userId)) {
1362
+ const user = await users.get(userId);
1363
+ console.log("User found:", user.name);
1364
+ } else {
1365
+ console.log("User not found");
901
1366
  }
902
1367
  ```
903
1368
 
904
- ## Roadmap
1369
+ **โš ๏ธ Cost Warning**: Using `exists()` creates an additional S3 request. For high-volume operations, prefer the try-catch approach to minimize costs.
1370
+
1371
+ #### 6. Choose the Right Behavior Strategy
1372
+
1373
+ ```javascript
1374
+ // โœ… For development and testing - allows flexibility
1375
+ const devUsers = await s3db.createResource({
1376
+ name: "users",
1377
+ attributes: { name: "string", email: "email" },
1378
+ options: { behavior: "user-management" }
1379
+ });
1380
+
1381
+ // โœ… For production with strict data control
1382
+ const prodUsers = await s3db.createResource({
1383
+ name: "users",
1384
+ attributes: { name: "string", email: "email" },
1385
+ options: { behavior: "enforce-limits" }
1386
+ });
1387
+
1388
+ // โœ… For preserving all data with larger documents
1389
+ const blogPosts = await s3db.createResource({
1390
+ name: "posts",
1391
+ attributes: { title: "string", content: "string", author: "string" },
1392
+ options: { behavior: "body-overflow" }
1393
+ });
1394
+
1395
+ // โœ… For structured data where truncation is acceptable
1396
+ const productDescriptions = await s3db.createResource({
1397
+ name: "products",
1398
+ attributes: { name: "string", description: "string", price: "number" },
1399
+ options: { behavior: "data-truncate" }
1400
+ });
1401
+ ```
1402
+
1403
+ **Behavior Selection Guide:**
1404
+ - **`user-management`**: Development, testing, or when you want full control
1405
+ - **`enforce-limits`**: Production systems requiring strict data validation
1406
+ - **`body-overflow`**: When data integrity is critical and you need to preserve all information
1407
+ - **`data-truncate`**: When you can afford to lose some data but want to maintain structure
905
1408
 
906
- Tasks board can be found at [this link](https://github.com/orgs/forattini-dev/projects/5/views/1)!
1409
+ ### Performance Tips
907
1410
 
908
- Feel free to interact and PRs are welcome! :)
1411
+ 1. **Enable Caching**: Use `cache: true` for frequently accessed data
1412
+ 2. **Adjust Parallelism**: Increase `parallelism`