hydrousdb 3.0.2 → 3.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -18,45 +18,64 @@
 
  - [What is HydrousDB?](#what-is-hydrousdb)
  - [How It Works](#how-it-works)
- - [Quick Start (5 minutes)](#quick-start-5-minutes)
+ - [Quick Start](#quick-start)
+ - [Installation](#installation)
+ - [Module Formats — ESM & CommonJS](#module-formats--esm--commonjs)
  - [Records](#records)
   - [Create](#create-a-record)
   - [Read](#read-a-record)
-  - [Update](#update-a-record)
+  - [Update — patch vs set](#update-a-record)
   - [Delete](#delete-a-record)
   - [Query](#query-records)
-  - [Batch Operations](#batch-operations)
+  - [Count](#count-records)
+  - [Batch Create](#batch-create)
+  - [Batch Delete](#batch-delete)
   - [Version History](#version-history)
+  - [Write-Filter Sentinels](#write-filter-sentinels)
+  - [Custom Record IDs](#custom-record-ids)
  - [Authentication](#authentication)
-  - [Sign Up](#sign-up-users)
+  - [Sign Up](#sign-up)
   - [Log In / Log Out](#log-in--log-out)
   - [Session Management](#session-management)
-  - [Password Reset](#password-reset-flow)
+  - [Validate a Session](#validate-a-session)
+  - [Update Profile](#update-profile)
+  - [Change Password](#change-password)
+  - [Password Reset Flow](#password-reset-flow)
   - [Email Verification](#email-verification)
-  - [Admin Operations](#admin-operations)
+  - [Admin — List Users](#admin--list-users)
+  - [Admin — Lock / Unlock](#admin--lock--unlock)
+  - [Admin — Delete Users](#admin--delete-users)
  - [File Storage](#file-storage)
   - [Simple Upload](#simple-upload)
-  - [Large File Upload (with progress)](#large-file-upload-with-progress)
-  - [Download](#download-files)
+  - [Upload Raw JSON or Text](#upload-raw-json-or-text)
+  - [Large File Upload with Progress](#large-file-upload-with-progress)
+  - [Batch Upload](#batch-upload)
+  - [Download](#download)
+  - [Batch Download](#batch-download)
   - [List Files](#list-files)
   - [Scoped Storage](#scoped-storage)
-  - [Share & Visibility](#share--visibility)
-  - [File Operations](#file-operations)
+  - [File Metadata](#file-metadata)
+  - [Signed Share URLs](#signed-share-urls)
+  - [Visibility](#visibility)
+  - [Move, Copy, Delete](#move-copy-delete)
+  - [Storage Stats](#storage-stats)
  - [Analytics](#analytics)
-  - [Count](#count)
+  - [Count](#count-1)
   - [Distribution](#distribution)
   - [Sum](#sum)
   - [Time Series](#time-series)
+  - [Field Time Series](#field-time-series)
   - [Top N](#top-n)
   - [Field Stats](#field-stats)
   - [Multi-Metric Dashboard](#multi-metric-dashboard)
-  - [Filtered Records](#filtered-records-bigquery)
+  - [Filtered Records via BigQuery](#filtered-records-via-bigquery)
   - [Cross-Bucket Comparison](#cross-bucket-comparison)
-  - [Storage Stats](#storage-stats)
- - [TypeScript Support](#typescript-support)
+  - [Storage Stats](#storage-stats-1)
+ - [Raw Query](#raw-query)
+ - [TypeScript](#typescript)
 - [Error Handling](#error-handling)
 - [Security Best Practices](#security-best-practices)
-- [API Reference](#api-reference)
+- [Full API Reference](#full-api-reference)
 - [Contributing](#contributing)
 - [License](#license)
 
@@ -64,36 +83,33 @@
 
  ## What is HydrousDB?
 
- Traditional databases start choking when your JSON records get large. Postgres hits row-size limits. Firestore charges per field read. MongoDB Atlas buckles under millions of 500 KB+ documents. They were designed for structured rows and small payloads — not the kind of deeply nested, real-world JSON that modern applications actually produce.
-
- HydrousDB is built specifically for that problem. It stores every record as a compressed GCS blob, retrieves any record in a single network call (no index lookups — the storage path is computed directly from the record ID), and runs analytics at BigQuery scale without ETL. The bigger and messier your JSON, the more it outperforms traditional databases.
+ Traditional databases start choking when your JSON records get large. Postgres hits row-size limits. Firestore charges per field read. MongoDB buckles under millions of 500 KB+ documents. They were built for structured rows and small payloads — not the deeply nested, real-world JSON that modern apps actually produce.
 
- **Systems that benefit immediately:**
+ HydrousDB is built specifically for that problem. It stores every record as a compressed GCS blob, retrieves any record in a single network call (the storage path is computed directly from the record ID — no index lookups), and runs analytics at BigQuery scale without ETL. The bigger and messier your JSON, the more it outperforms traditional databases.
 
  | Domain | Example records | Why traditional DBs struggle |
  |---|---|---|
- | 🏥 **Hospital / EMR** | Full patient charts — vitals history, medication lists, clinical notes, imaging metadata | 850 KB+ per chart, millions of patients, strict audit trails |
- | 🎓 **School management** | Student portfolios — all grades, attendance, assessments, teacher notes across years | Deep nesting, bursty writes at term-end, long-term archival |
- | 🏭 **IoT / Industrial** | Sensor telemetry — time-stamped readings, device state, calibration metadata | Billions of records, append-heavy, rarely updated |
- | 🛒 **E-commerce** | Order records — line items, fulfilment events, return history, custom attributes | Highly variable shape, needs fast analytics across date ranges |
- | ⚖️ **Legal / compliance** | Case files — filings, correspondence, version history, linked documents | 1 MB+ records, immutable audit log, cross-case analytics |
- | 🎮 **Gaming** | Player save states — inventory, quest progress, achievement history, replay data | Large payloads, millions of concurrent users, burst writes |
- | 📡 **Logistics / tracking** | Shipment records — full event timeline, customs data, carrier metadata | Append-only events, heavy querying by date range and status |
+ | 🏥 **Hospital / EMR** | Full patient charts — vitals, medications, notes, imaging | 850 KB+ per chart, millions of patients, strict audit trails |
+ | 🎓 **School management** | Student portfolios — grades, assessments, teacher notes | Deep nesting, bursty writes at term-end, long-term archival |
+ | 🏭 **IoT / Industrial** | Sensor telemetry — readings, device state, calibration | Billions of records, append-heavy, rarely updated |
+ | 🛒 **E-commerce** | Orders — line items, fulfilment events, return history | Variable shape, fast analytics across date ranges |
+ | ⚖️ **Legal / compliance** | Case files — filings, correspondence, version history | 1 MB+ records, immutable audit log, cross-case analytics |
+ | 🎮 **Gaming** | Player save states — inventory, quest progress, replays | Large payloads, millions of users, burst writes |
 
  **What you get out of the box:**
 
  | Feature | What it does |
  |---|---|
- | **Records** | Schemaless JSON store. Billion-scale, gzip-compressed, date-encoded IDs for zero-lookup retrieval. Up to 1 MB per record. |
- | **Auth** | Full user authentication — signup, login, sessions, password reset, email verification, and admin controls. |
- | **Storage** | File uploads backed by Google Cloud Storage. Direct-to-GCS uploads, public/private visibility, signed share URLs. |
- | **Analytics** | BigQuery-powered aggregations — counts, distributions, time series, top-N, multi-metric dashboards, cross-bucket comparisons. Zero ETL. |
+ | **Records** | Schemaless JSON store. Billion-scale, gzip-compressed, date-encoded IDs for zero-lookup retrieval. |
+ | **Auth** | Full user system — signup, login, sessions, password reset, email verification, admin controls. |
+ | **Storage** | File uploads to GCS. Direct uploads, public/private visibility, signed share URLs. |
+ | **Analytics** | BigQuery-powered — counts, distributions, time series, top-N, cross-bucket. Zero ETL. |
 
  ---
 
  ## How It Works
 
- Every HydrousDB record ID encodes its creation date as a prefix (e.g. `260203-rec_01JA2XYZ`). This means the full storage path to any record can be computed in memory — no index lookup, no pointer chase. Just math.
+ Every record ID encodes its creation date as a prefix (e.g. `260203-rec_01JA2XYZ`). This means the full GCS storage path to any record can be computed in memory — no index lookup needed.
 
  ```
  260203-rec_01JA2XYZ
@@ -105,201 +121,258 @@ projects/pid/buckets/bk/records/26/02/03/rec_01JA.json.gz
  0 index reads ✓
  ```
 
- Records are gzip-compressed on write (typically 60–80% size reduction). A full 850 KB hospital patient chart compresses to ~255 KB on disk automatically, every time. Records age through storage tiers (Standard → Nearline → Coldline → Archive) as they get older, keeping historical data accessible without manual lifecycle management.
-
- This architecture means HydrousDB handles what breaks other databases:
- - **Huge records** — up to 1 MB per document, compressed
- - **Append-heavy workloads** — IoT telemetry, audit logs, event streams
- - **Date-range queries at scale** — the ID prefix enables efficient folder scans without a full table scan
- - **Long-term retention** — billions of records stay queryable via BigQuery without any migration
+ Records are gzip-compressed on write (60–80% size reduction). An 850 KB hospital chart becomes ~255 KB on disk, automatically, every time.
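The ID-to-path derivation described above is plain string slicing. A minimal sketch under that description (hypothetical helper — the real SDK computes this internally, and the function name is illustrative):

```typescript
// Derive the GCS object path from a date-prefixed record ID.
// Illustrative only — mirrors the layout shown in the diagram above.
function recordPath(projectId: string, bucket: string, recordId: string): string {
  const [datePrefix, rest] = recordId.split('-rec_');
  if (!datePrefix || datePrefix.length !== 6 || !rest) {
    throw new Error(`Malformed record ID: ${recordId}`);
  }
  const yy = datePrefix.slice(0, 2);
  const mm = datePrefix.slice(2, 4);
  const dd = datePrefix.slice(4, 6);
  // No index lookup: the path is pure string math over the ID.
  return `projects/${projectId}/buckets/${bucket}/records/${yy}/${mm}/${dd}/rec_${rest}.json.gz`;
}

console.log(recordPath('pid', 'bk', '260203-rec_01JA'));
// → projects/pid/buckets/bk/records/26/02/03/rec_01JA.json.gz
```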
 
  ---
 
- ## Quick Start (5 minutes)
-
- ### Step 1 — Create your account
+ ## Quick Start
 
- Go to **[https://hydrousdb.com](https://hydrousdb.com)** and sign up for a free account.
+ ### 1. Create your account
 
- ### Step 2 — Create your first bucket
+ Sign up at [https://hydrousdb.com](https://hydrousdb.com).
 
- 1. Log in to your dashboard at **[https://hydrousdb.com/dashboard](https://hydrousdb.com/dashboard)**.
- 2. Click **"New Bucket"**.
- 3. Give it a name — use lowercase letters, numbers, hyphens, or underscores (e.g. `my-first-bucket`).
- 4. Click **"Create"**.
+ ### 2. Get your API keys
 
- > 💡 **What is a bucket?** A bucket is a named collection of JSON records — similar to a table in SQL or a collection in MongoDB.
-
- ### Step 3 — Grab your API Keys
-
- HydrousDB uses three separate keys, each scoped to a service:
+ From the dashboard **Settings → API Keys**, create three key types:
 
  | Key | Prefix | Used for |
  |---|---|---|
- | **Auth Key** | `hk_auth_…` | All `/auth/*` routes — signup, login, sessions |
+ | **Auth Key** | `hk_auth_…` | All auth routes — signup, login, sessions |
  | **Bucket Security Key** | `hk_bucket_…` | Records and analytics |
  | **Storage Key(s)** | `ssk_…` | File storage — one key per storage bucket |
 
- 1. In the dashboard go to **Settings → API Keys**.
- 2. Generate each key type you need.
- 3. Copy them — you'll use all three when initialising the client.
-
- > ⚠️ **These keys are your credentials.** Treat them like passwords. Never commit them to Git. Use environment variables.
+ > ⚠️ Never commit these to Git. Store them in environment variables.
 
- ### Step 4 — Install the SDK
+ ### 3. Install
 
  ```bash
  npm install hydrousdb
- # or
- yarn add hydrousdb
- # or
- pnpm add hydrousdb
+ # or: yarn add hydrousdb / pnpm add hydrousdb
  ```
152
 
159
153
  **Requirements:** Node.js 18+ (uses the native `fetch` API).
160
154
 
161
- ### Step 5 Your first record
155
+ ### 4. Create the client and write your first record
162
156
 
163
157
  ```typescript
164
158
  import { createClient } from 'hydrousdb';
165
159
 
166
- // Create the client once — reuse it everywhere
160
+ // Create once — reuse everywhere in your app
167
161
  const db = createClient({
168
- authKey: process.env.HYDROUS_AUTH_KEY!, // hk_auth_…
169
- bucketSecurityKey: process.env.HYDROUS_BUCKET_KEY!, // hk_bucket_…
162
+ authKey: process.env.HYDROUS_AUTH_KEY!,
163
+ bucketSecurityKey: process.env.HYDROUS_BUCKET_KEY!,
170
164
  storageKeys: {
171
- main: process.env.HYDROUS_STORAGE_MAIN!, // ssk_…
165
+ main: process.env.HYDROUS_STORAGE_MAIN!,
172
166
  },
173
167
  });
174
168
 
175
- // Write a record to your bucket
176
- const post = await db.records('my-first-bucket').create({
169
+ // Write
170
+ const post = await db.records('my-bucket').create({
177
171
  title: 'Hello, HydrousDB!',
178
- body: 'My first record.',
179
172
  published: false,
180
173
  });
181
174
 
182
175
  console.log(post.id); // "260601-rec_01JA2XYZ"
183
176
  console.log(post.createdAt); // 1717200000000
184
177
 
185
- // Read it back — zero database reads, path computed from ID
186
- const fetched = await db.records('my-first-bucket').get(post.id);
187
- console.log(fetched.title); // "Hello, HydrousDB!"
178
+ // Read back — zero database reads, path computed from ID
179
+ const fetched = await db.records('my-bucket').get(post.id);
180
+
181
+ // Update
182
+ await db.records('my-bucket').patch(post.id, { published: true });
183
+
184
+ // Delete
185
+ await db.records('my-bucket').delete(post.id);
186
+ ```
187
+
188
+ ---
189
+
190
+ ## Installation
191
+
192
+ ```bash
193
+ npm install hydrousdb
194
+ ```
195
+
196
+ ### Module Formats — ESM & CommonJS
197
+
198
+ The package ships both ESM (`.mjs`) and CommonJS (`.cjs`) builds. Your toolchain picks the right one automatically based on your `import` or `require` call.
188
199
 
189
- // Update it
190
- await db.records('my-first-bucket').patch(post.id, { published: true });
200
+ ```typescript
201
+ // ESM — Next.js, Vite, modern Node, TypeScript
202
+ import { createClient } from 'hydrousdb';
203
+ ```
191
204
 
192
- // Delete it
193
- await db.records('my-first-bucket').delete(post.id);
205
+ ```javascript
206
+ // CommonJS — legacy Node, Jest without transform, older tooling
207
+ const { createClient } = require('hydrousdb');
194
208
  ```
195
209
 
196
- 🎉 **That's it.** You're live.
210
+ Both exports are listed explicitly in `package.json` under the `exports` field:
211
+
212
+ ```json
213
+ {
214
+ "exports": {
215
+ ".": {
216
+ "import": "./dist/index.mjs",
217
+ "require": "./dist/index.cjs",
218
+ "types": "./dist/index.d.ts"
219
+ }
220
+ }
221
+ }
222
+ ```
223
+
224
+ **Using with Next.js?** If Next.js resolves the CJS build instead of ESM (common with the Pages Router or older Next configs), add this to `next.config.js`:
225
+
226
+ ```javascript
227
+ const nextConfig = {
228
+ webpack(config) {
229
+ config.resolve.conditionNames = ['import', 'module', 'require', 'default'];
230
+ return config;
231
+ },
232
+ };
233
+ ```
197
234
 
198
235
  ---
 
  ## Records
 
  Records are JSON objects stored in named buckets. Every record automatically gets:
- - `id` — date-prefixed unique identifier (e.g. `"260601-rec_01JA2XYZ"`) — encodes storage path
+
+ - `id` — date-prefixed unique identifier (`"260601-rec_01JA2XYZ"`) — encodes the GCS path
  - `createdAt` — Unix timestamp in milliseconds
- - `updatedAt` — Unix timestamp in milliseconds (updated on every write)
+ - `updatedAt` — Unix timestamp in milliseconds
 
- Records are gzip-compressed before storage. An 850 KB EMR chart becomes ~255 KB on disk. You never manage this — it's always on.
+ ```typescript
+ const posts = db.records('blog-posts');
+ // or typed:
+ const orders = db.records<Order>('orders');
+ ```
 
  ### Create a Record
 
  ```typescript
- const products = db.records('products');
-
- const product = await products.create({
-   name: 'Wireless Headphones',
-   price: 79.99,
-   inStock: true,
-   tags: ['audio', 'wireless'],
+ const post = await posts.create({
+   title: 'My First Post',
+   body: 'Hello world.',
+   status: 'draft',
+   views: 0,
  });
 
- // product.id, product.createdAt, product.updatedAt are added automatically
+ // post.id, post.createdAt, post.updatedAt are added automatically
+ console.log(post.id); // "260601-rec_01JA2XYZ"
  ```
 
+ **With queryable fields** — fields you want to filter on server-side must be declared at write time:
+
+ ```typescript
+ const post = await posts.create(
+   {
+     title: 'My First Post',
+     status: 'draft',
+     authorId: 'usr_abc',
+   },
+   {
+     queryableFields: ['status', 'authorId'], // index these for filtering
+     userEmail: 'alice@example.com',          // optional audit trail
+   },
+ );
+ ```
+
+ > 💡 **Why declare queryable fields?** HydrousDB stores records as compressed blobs. Fields you want to filter or sort by need to be registered in a lightweight index at write time. You only pay index overhead for the fields you actually query.
+
  ### Read a Record
 
  ```typescript
- // Get by ID — the storage path is derived from the ID in memory, no index read
- const product = await products.get('rec_abc123');
+ // Path computed from the ID in memory — zero index reads
+ const post = await posts.get('260601-rec_01JA2XYZ');
 
- // Throws HydrousError with code RECORD_NOT_FOUND if missing
+ // Throws HydrousError (code: RECORD_NOT_FOUND) if the ID doesn't exist
  ```
 
  ### Update a Record
 
+ **`patch(id, data)` — merge update.** Only the fields you provide are changed. All other fields on the record are left untouched.
+
  ```typescript
- // Patch (merge) — only the specified fields are changed
- const updated = await products.patch('rec_abc123', {
-   price: 69.99,
-   inStock: false,
+ const updated = await posts.patch('260601-rec_01JA2XYZ', {
+   status: 'published',
+   views: 1,
  });
+ ```
 
- // Set (full replace) — the entire record is replaced
- const replaced = await products.set('rec_abc123', {
-   name: 'Wireless Headphones v2',
-   price: 89.99,
-   inStock: true,
-   tags: ['audio', 'wireless', 'premium'],
+ **`set(id, data)` — full replace.** The entire record is replaced with the new data.
+
+ ```typescript
+ const replaced = await posts.set('260601-rec_01JA2XYZ', {
+   title: 'Updated Title',
+   body: 'New content.',
+   status: 'published',
+   views: 42,
  });
  ```
 
+ **Disable merge** (force field removal):
+
+ ```typescript
+ // merge: false means fields not in `data` are removed
+ await posts.patch('260601-rec_01JA2XYZ', { status: 'archived' }, { merge: false });
+ ```
+
  ### Delete a Record
 
  ```typescript
- await products.delete('rec_abc123');
+ await posts.delete('260601-rec_01JA2XYZ');
  ```
 
  ### Query Records
 
  ```typescript
- // Get all records (up to 100 by default)
- const { records } = await products.query();
+ // All records (up to 100 by default)
+ const { records } = await posts.query();
 
  // With filters
- const { records: affordableStock } = await products.query({
+ const { records: published } = await posts.query({
    filters: [
-     { field: 'inStock', op: '==', value: true },
-     { field: 'price', op: '<', value: 100 },
+     { field: 'status', op: '==', value: 'published' },
    ],
  });
 
- // Sort and paginate
- const { records, hasMore, nextCursor } = await products.query({
-   orderBy: 'price',
-   order: 'asc',
+ // Multiple filters, sort, limit
+ const { records, hasMore, nextCursor } = await posts.query({
+   filters: [
+     { field: 'status', op: '==', value: 'published' },
+     { field: 'views', op: '>', value: 100 },
+   ],
+   orderBy: 'createdAt',
+   order: 'desc',
    limit: 20,
  });
 
  // Next page
  if (hasMore) {
-   const page2 = await products.query({
-     orderBy: 'price',
-     order: 'asc',
+   const page2 = await posts.query({
+     orderBy: 'createdAt',
+     order: 'desc',
      limit: 20,
      startAfter: nextCursor,
    });
  }
 
- // Select specific fields only
- const { records: lightRecords } = await products.query({
-   fields: 'name,price,inStock',
+ // Select only specific fields (reduces payload size)
+ const { records: light } = await posts.query({
+   fields: 'id,title,status,createdAt',
  });
 
- // Filter by date range
- const { records: recent } = await products.query({
+ // Date range
+ const { records: thisWeek } = await posts.query({
    dateRange: {
-     start: Date.now() - 7 * 24 * 60 * 60 * 1000, // 7 days ago
+     start: Date.now() - 7 * 24 * 60 * 60 * 1000,
      end: Date.now(),
    },
  });
  ```
 
- **Available filter operators:**
+ **Filter operators:**
 
  | Operator | Meaning |
  |---|---|
@@ -309,68 +382,162 @@ const { records: recent } = await products.query({
  | `<` | Less than |
  | `>=` | Greater than or equal |
  | `<=` | Less than or equal |
- | `CONTAINS` | String contains (case-sensitive) |
+ | `CONTAINS` | String contains |
+
+ > ⚠️ You can only filter on fields declared as `queryableFields` when the record was created. Filtering on an un-indexed field returns no results.
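The cursor pagination shown above generalizes to a drain-all loop. A sketch with the page fetcher injected, so the loop logic is independent of the SDK (`fetchPage` is a hypothetical wrapper you would implement around `posts.query`):

```typescript
// Shape of one page, matching the query() result fields used above.
type Page<T> = { records: T[]; hasMore: boolean; nextCursor?: string };

// Drain every page of a cursor-paginated query into one array.
async function queryAll<T>(
  fetchPage: (startAfter?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.records);
    cursor = page.hasMore ? page.nextCursor : undefined;
  } while (cursor);
  return all;
}

// Assumed real-world usage (untested sketch):
// const everything = await queryAll((startAfter) =>
//   posts.query({ orderBy: 'createdAt', order: 'desc', limit: 100, startAfter }),
// );
```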
+
+ ### Count Records
+
+ ```typescript
+ // Total records in the bucket
+ const total = await posts.count();
+
+ // Records matching filters
+ const publishedCount = await posts.count([
+   { field: 'status', op: '==', value: 'published' },
+ ]);
+ ```
+
+ ### Batch Create
 
- ### Batch Operations
+ Up to 500 records per call.
 
  ```typescript
- // Create multiple records at once
- const created = await products.batchCreate([
-   { name: 'Item A', price: 10.00, inStock: true },
-   { name: 'Item B', price: 20.00, inStock: false },
-   { name: 'Item C', price: 30.00, inStock: true },
+ const created = await posts.batchCreate(
+   [
+     { title: 'Post A', status: 'draft' },
+     { title: 'Post B', status: 'draft' },
+     { title: 'Post C', status: 'published' },
+   ],
+   {
+     queryableFields: ['status'],
+     userEmail: 'alice@example.com',
+   },
+ );
+ // → [{ id: '…', title: 'Post A', … }, { id: '…', title: 'Post B', … }, …]
+ ```
+
+ Each record in the batch can optionally carry a `_customRecordId` for upsert behaviour:
+
+ ```typescript
+ await posts.batchCreate([
+   { _customRecordId: '260601-post_welcome', title: 'Welcome', status: 'published' },
+   { title: 'Auto-ID post', status: 'draft' },
  ]);
- // → [{ id: '260601-rec_1', ... }, { id: '260601-rec_2', ... }, ...]
+ ```
+
+ ### Batch Delete
 
- // Count records
- const total = await products.count();
- const inStock = await products.count([{ field: 'inStock', op: '==', value: true }]);
+ Up to 500 records per call.
 
- // Get all records without filters (shortcut for query)
- const all = await products.getAll({ orderBy: 'price', order: 'asc' });
+ ```typescript
+ const { deleted, failed } = await posts.batchDelete([
+   '260601-rec_01JA',
+   '260601-rec_02JB',
+   '260601-rec_03JC',
+ ]);
 
- // Delete multiple records
- const { deleted, failed } = await products.batchDelete(['rec_1', 'rec_2', 'rec_3']);
+ console.log(`Deleted: ${deleted}, Failed: ${failed.length}`);
  ```
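Since both batch endpoints cap at 500 records per call, larger workloads need chunking. A small illustrative helper (not part of the SDK):

```typescript
// Split an array into batches of at most `size` items.
function chunk<T>(items: T[], size = 500): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Assumed usage with the batch APIs above (sketch):
// for (const batch of chunk(allNewPosts)) {
//   await posts.batchCreate(batch, { queryableFields: ['status'] });
// }
```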
 
  ### Version History
 
- Every write to a record creates a new version, so you can travel back in time.
+ Every write creates a new version — you can restore any record to any previous state.
+
+ ```typescript
+ // Get the full history list (most recent first)
+ const history = await posts.getHistory('260601-rec_01JA2XYZ');
+ // [{ id, version: 3, createdAt, data }, { version: 2, … }, { version: 1, … }]
+
+ // Restore to version 1 (the original)
+ const restored = await posts.restoreVersion('260601-rec_01JA2XYZ', history[2]!.version);
+ ```
+
+ ### Write-Filter Sentinels
+
+ For atomic server-side field operations — use these inside `patch()` to avoid race conditions.
+
+ ```typescript
+ await posts.patch('260601-rec_01JA', {
+   // Increment / decrement a numeric field atomically
+   views: { __op: 'increment', delta: 1 },
+   credits: { __op: 'decrement', delta: 5 },
+
+   // Set a field only if it doesn't already have a value
+   slug: { __op: 'setOnce', value: 'my-first-post' },
+
+   // Set a field only if a condition is met
+   discount: { __op: 'setIf', value: 10, cond: { op: '>=', value: 100 } },
+
+   // Add to an array (no duplicates)
+   tags: { __op: 'appendUnique', item: 'featured' },
+
+   // Clamp a numeric value between min and max
+   rating: { __op: 'clamp', value: 6, min: 0, max: 5 },
+
+   // Multiply a numeric field
+   price: { __op: 'multiplyBy', factor: 1.1 },
+
+   // Flip a boolean
+   active: { __op: 'toggleBool' },
+
+   // Set field to the server's current timestamp
+   lastSeen: { __op: 'serverTimestamp' },
+ } as any);
+
+ // Remove from an array — a separate call, since an object literal
+ // can only carry one sentinel per field
+ await posts.patch('260601-rec_01JA', {
+   tags: { __op: 'removeFromArray', item: 'draft' },
+ } as any);
+ ```
+
+ ### Custom Record IDs
+
+ Provide your own ID instead of using an auto-generated one. If the ID already exists, the record is upserted.
 
  ```typescript
- // Get the full version history of a record
- const history = await products.getHistory('rec_abc123');
- // history[0] is the latest version, history[1] is one write before, etc.
+ // Single record
+ const post = await posts.create(
+   { title: 'Welcome', status: 'published' },
+   { customRecordId: '260601-post_welcome' },
+ );
 
- // Restore to a specific version
- const restored = await products.restoreVersion('rec_abc123', history[2]!.version);
+ // Batch — set _customRecordId on individual items
+ await posts.batchCreate([
+   { _customRecordId: '260601-post_welcome', title: 'Welcome' },
+   { title: 'Auto-ID post' },
+ ]);
  ```
 
+ Custom IDs must match `^[a-zA-Z_][a-zA-Z0-9_.\-]{0,200}$`.
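A client-side pre-check against that pattern can fail fast before making a network call (illustrative only — the pattern quoted above is the source of truth, and the helper name is hypothetical):

```typescript
// Pattern from the docs: a letter or underscore first,
// then up to 200 more characters from [a-zA-Z0-9_.-].
const CUSTOM_ID_RE = /^[a-zA-Z_][a-zA-Z0-9_.\-]{0,200}$/;

function isValidCustomId(id: string): boolean {
  return CUSTOM_ID_RE.test(id);
}

console.log(isValidCustomId('post_welcome')); // true
console.log(isValidCustomId('!bad id'));      // false
```

Note that under this pattern an ID cannot start with a digit or a hyphen, and it tops out at 201 characters total.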
+
 
  ---
 
  ## Authentication
 
- HydrousDB has a built-in user auth system. Your users live in a bucket you create (e.g. `"app-users"`). You get sessions, refresh tokens, password reset, email verification, and admin controls out of the box.
+ HydrousDB has a complete user auth system. Your users live in a bucket you name (e.g. `"app-users"`). You get sessions, refresh tokens, password reset, email verification, and admin controls all built in.
 
  ```typescript
  const auth = db.auth('app-users');
  ```
 
- ### Sign Up Users
+ ### Sign Up
 
  ```typescript
  const { user, session } = await auth.signup({
    email: 'alice@example.com',
-   password: 'hunter2', // min 8 characters, validated server-side
+   password: 'hunter2', // validated server-side
    fullName: 'Alice Wonderland',
    // Any extra fields are stored on the user record:
    plan: 'pro',
    referral: 'friend123',
  });
 
- // user.id → "usr_xxxxxxxxxx"
- // session.sessionId → persist this in your app
- // session.refreshToken → persist this for long-lived sessions
+ // Persist these in your app / session store:
+ // session.sessionId
+ // session.refreshToken
+ // session.expiresAt
+
+ console.log(user.id); // "usr_xxxxxxxxxxxx"
+ console.log(user.emailVerified); // false — send a verification email
  ```
 
  ### Log In / Log Out
@@ -382,56 +549,61 @@ const { user, session } = await auth.login({
    password: 'hunter2',
  });
 
- // Log out (invalidates the session server-side)
+ // Log out — invalidates the session server-side
  await auth.logout({ sessionId: session.sessionId });
+
+ // Log out from all devices at once
+ await auth.logout({ sessionId: session.sessionId, allDevices: true });
  ```
 
  ### Session Management
 
- Sessions expire after **24 hours**. Use the refresh token to get a new session — refresh tokens last **30 days**.
+ Sessions expire after **24 hours**. Refresh tokens last **30 days**.
 
  ```typescript
- // Refresh the session before it expires
+ // Refresh before expiry to get a new session
  const newSession = await auth.refreshSession({
    refreshToken: session.refreshToken,
  });
  // Store newSession.sessionId and newSession.refreshToken
 
- // Get the current user
+ // Get a user by ID
  const user = await auth.getUser({ userId: session.userId });
  ```
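With a 24-hour session lifetime, a common client pattern is to refresh shortly before expiry rather than after a failed request. A sketch of the expiry check (assumes `expiresAt` is a millisecond timestamp, as the other timestamps in this README are; the helper name is illustrative):

```typescript
// True when the session is within `marginMs` of expiring (or already expired).
function shouldRefresh(
  expiresAt: number,
  now: number = Date.now(),
  marginMs = 5 * 60_000, // refresh 5 minutes early
): boolean {
  return expiresAt - now <= marginMs;
}

// Assumed usage in a request path (sketch):
// if (shouldRefresh(session.expiresAt)) {
//   session = await auth.refreshSession({ refreshToken: session.refreshToken });
// }
```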
 
- ### Update User Profile
+ ### Validate a Session
+
+ Use this on your backend to verify an incoming session is still active.
+
+ ```typescript
+ const { user, session: activeSession } = await auth.validateSession({
+   sessionId: session.sessionId,
+ });
+
+ console.log(user.id); // "usr_xxxxxxxxxxxx"
+ console.log(activeSession.expiresAt); // timestamp
+ ```
+
+ ### Update Profile
 
  ```typescript
  const updated = await auth.updateUser({
    sessionId: session.sessionId,
    userId: user.id,
-   data: {
+   updates: {
      fullName: 'Alice Smith',
      plan: 'enterprise',
+     // Any field on the user record can be updated here
      avatar: 'https://example.com/avatar.jpg',
    },
  });
  ```
 
- ### Password Reset Flow
+ > ⚠️ The `updates` key is required — it wraps the fields to change. Fields not included in `updates` are left untouched.
 
- ```typescript
- // 1. User requests a reset (always returns success — prevents email enumeration)
- await auth.requestPasswordReset({ email: 'alice@example.com' });
+ ### Change Password
 
- // 2. User receives an email with a reset token
-
- // 3. User submits the new password
- await auth.confirmPasswordReset({
-   resetToken: 'tok_from_email',
-   newPassword: 'correcthorsebatterystaple',
- });
- // All existing sessions for this user are automatically revoked
- ```
-
- ### Change Password (authenticated)
+ Requires an active session — so a stolen old password alone is not enough.
 
  ```typescript
@@ -440,61 +612,112 @@ await auth.changePassword({
    currentPassword: 'hunter2',
    newPassword: 'correcthorsebatterystaple',
  });
+ // All existing sessions for this user are automatically revoked
+ ```
+
+ ### Password Reset Flow
+
+ ```typescript
+ // 1. User requests a reset — always returns success (prevents email enumeration)
+ await auth.requestPasswordReset({ email: 'alice@example.com' });
+
+ // 2. User receives the reset token via email (handled by your email provider)
+
+ // 3. User submits the token + new password
+ await auth.confirmPasswordReset({
+   resetToken: 'tok_from_email',
+   newPassword: 'correcthorsebatterystaple',
+ });
+ // All existing sessions are automatically revoked
  ```
 
  ### Email Verification
 
  ```typescript
- // 1. Send verification email
+ // 1. Send the verification email
  await auth.requestEmailVerification({ userId: user.id });
 
- // 2. User clicks link in email, your app extracts the token
+ // 2. User clicks link in their inbox — your app extracts the token from the URL
 
  // 3. Confirm the token
  await auth.confirmEmailVerification({ verifyToken: 'tok_from_email' });
  ```
 
- ### Admin Operations
+ ### Admin — List Users
 
  Admin operations require a valid session from a user with `role: 'admin'`.
 
  ```typescript
- // List all users
- const { users, total } = await auth.listUsers({
+ // Paginated list — uses cursor-based pagination
+ const { users, hasMore, nextCursor } = await auth.listUsers({
    sessionId: adminSession.sessionId,
    limit: 50,
-   offset: 0,
  });
 
+ if (hasMore) {
+   const page2 = await auth.listUsers({
+     sessionId: adminSession.sessionId,
+     limit: 50,
+     cursor: nextCursor!,
+   });
+ }
+ ```
+
+ Each user in the list includes:
+
+ ```typescript
+ {
670
+ id: 'usr_xxxxxxxxxxxx',
671
+ email: 'alice@example.com',
672
+ fullName: 'Alice Wonderland',
673
+ emailVerified: true,
674
+ accountStatus: 'active', // 'active' | 'locked' | 'suspended'
675
+ role: 'user', // 'user' | 'admin'
676
+ createdAt: 1717200000000,
677
+ updatedAt: 1717200000000,
678
+ // ...any extra fields stored at signup
679
+ }
680
+ ```
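When you need every user rather than one page, the cursor loop above generalizes to a small drain helper. A sketch under assumptions: `fetchAll` and `Page` are our own names, and `fetchPage` is a stand-in for any cursor-paginated call shaped like `listUsers` (`hasMore` + `nextCursor`).

```typescript
// Generic cursor drainer — fetchPage stands in for a cursor-paginated call
// such as auth.listUsers. Our helper, not part of the SDK.
type Page<T> = { items: T[]; hasMore: boolean; nextCursor?: string };

async function fetchAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.items);
    cursor = page.nextCursor;
    if (!page.hasMore) break;
  } while (cursor);
  return all;
}
```

With the SDK you would adapt each page, e.g. `fetchAll(cursor => auth.listUsers({ sessionId, limit: 50, cursor }).then(r => ({ items: r.users, hasMore: r.hasMore, nextCursor: r.nextCursor })))`.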
681
+
682
+ ### Admin — Lock / Unlock
683
+
684
+ ```typescript
469
685
  // Lock an account (prevents login)
470
- await auth.lockAccount({
686
+ const { lockedUntil, unlockTime } = await auth.lockAccount({
471
687
  sessionId: adminSession.sessionId,
472
688
  userId: 'usr_abc123',
473
- duration: 60 * 60 * 1000, // lock for 1 hour (default: 15 minutes)
689
+ duration: 60 * 60 * 1000, // 1 hour in ms (default: 15 minutes)
474
690
  });
475
691
 
476
- // Unlock an account
692
+ console.log(`Account locked until ${unlockTime}`);
693
+
694
+ // Unlock manually
477
695
  await auth.unlockAccount({
478
696
  sessionId: adminSession.sessionId,
479
697
  userId: 'usr_abc123',
480
698
  });
699
+ ```
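Since `duration` is a raw millisecond count, a few named constants keep call sites readable. This is purely our own convention; the SDK just takes a number.

```typescript
// Millisecond duration constants for lockAccount({ duration }).
// Our convenience names — the SDK accepts any plain number.
const MINUTE = 60_000;
const HOUR = 60 * MINUTE;
const DAY = 24 * HOUR;

const oneHourLock = 1 * HOUR; // 3600000 ms, same as the example above
```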
481
700
 
482
- // Soft-delete a user (marks as deleted, keeps data)
701
+ ### Admin — Delete Users
702
+
703
+ ```typescript
704
+ // Soft-delete — marks the account as deleted, keeps the data
483
705
  await auth.deleteUser({
484
706
  sessionId: adminSession.sessionId,
485
707
  userId: 'usr_abc123',
486
708
  });
487
709
 
488
- // Hard-delete a user (permanent irreversible)
710
+ // Hard-delete: permanent, irreversible
489
711
  await auth.hardDeleteUser({
490
712
  sessionId: adminSession.sessionId,
491
713
  userId: 'usr_abc123',
492
714
  });
493
715
 
494
- // Bulk delete multiple users
495
- const { deleted, failed } = await auth.bulkDeleteUsers({
716
+ // Bulk delete up to 500 users, soft or hard
717
+ const { succeeded, failed } = await auth.bulkDeleteUsers({
496
718
  sessionId: adminSession.sessionId,
497
719
  userIds: ['usr_a', 'usr_b', 'usr_c'],
720
+ hard: false, // set true for permanent deletion
498
721
  });
499
722
  ```
500
723
 
@@ -502,66 +725,74 @@ const { deleted, failed } = await auth.bulkDeleteUsers({
502
725
 
503
726
  ## File Storage
504
727
 
505
- HydrousDB Storage is backed by Google Cloud Storage. Storage keys (`ssk_…`) are scoped per bucket, so you can give different parts of your app different levels of access.
728
+ HydrousDB Storage is backed by Google Cloud Storage. Storage keys (`ssk_…`) are scoped per bucket, so you can give different parts of your app different permissions.
506
729
 
507
730
  ```typescript
508
- // Pick a storage key by the name you gave it in storageKeys
509
- const files = db.storage('main');
731
+ const main = db.storage('main');
510
732
  const avatars = db.storage('avatars');
511
733
  const documents = db.storage('documents');
512
734
  ```
513
735
 
514
736
  ### Simple Upload
515
737
 
516
- For files up to **500 MB** when you don't need upload progress:
738
+ Server-buffered upload for files up to **500 MB**. No progress bar; use [Large File Upload](#large-file-upload-with-progress) if you need one.
517
739
 
518
740
  ```typescript
519
- // Browser: upload from a file input
520
- const file = document.querySelector('input[type="file"]').files[0];
521
-
741
+ // Browser: upload from a file input
742
+ const file = document.querySelector('input[type="file"]').files[0];
522
743
  const result = await db.storage('main').upload(file, `uploads/${file.name}`, {
523
- isPublic: true, // publicly accessible without auth
524
- overwrite: false, // throw if the file already exists
744
+ isPublic: true, // publicly accessible without auth (default: false)
745
+ overwrite: false, // throw if file already exists (default: false)
746
+ mimeType: 'image/jpeg', // optional — auto-detected from extension if omitted
525
747
  });
526
748
 
527
- console.log(result.publicUrl); // CDN URL — usable anywhere
528
- console.log(result.downloadUrl); // null (it's public)
749
+ console.log(result.publicUrl); // CDN URL — use anywhere
750
+ console.log(result.downloadUrl); // null when isPublic: true
529
751
  console.log(result.size); // bytes
530
- console.log(result.mimeType); // auto-detected from extension
752
+ console.log(result.mimeType); // 'image/jpeg'
531
753
 
532
- // Node.js: upload from a Buffer
754
+ // Node.js: upload from a Buffer or file path
533
755
  import { readFileSync } from 'fs';
534
- const buffer = readFileSync('./report.pdf');
535
- const result = await db.storage('documents').upload(buffer, 'reports/q3.pdf');
536
- console.log(result.downloadUrl); // requires X-Storage-Key to access
756
+ const buf = readFileSync('./report.pdf');
757
+ const result = await db.storage('documents').upload(buf, 'reports/q3.pdf');
758
+ console.log(result.downloadUrl); // auth-required download URL
537
759
  ```
538
760
 
539
761
  ### Upload Raw JSON or Text
540
762
 
541
763
  ```typescript
764
+ // Upload a JS object as JSON
542
765
  const result = await db.storage('main').uploadRaw(
543
- { theme: 'dark', language: 'en' },
544
- 'user-config/alice.json',
766
+ { theme: 'dark', language: 'en', version: 3 },
767
+ 'settings/alice.json',
545
768
  { isPublic: false },
546
769
  );
770
+
771
+ // Upload a plain string
772
+ await db.storage('main').uploadRaw(
773
+ '<html><body>Hello</body></html>',
774
+ 'exports/page.html',
775
+ { mimeType: 'text/html', isPublic: true },
776
+ );
547
777
  ```
548
778
 
549
- ### Large File Upload (with progress)
779
+ ### Large File Upload with Progress
550
780
 
551
- For files over 10 MB or when you need a progress bar. The file goes directly to GCS — your server never buffers it.
781
+ For files over ~10 MB or when you need a progress bar. The file goes **directly to GCS** — your server never buffers the bytes.
552
782
 
553
783
  ```typescript
554
784
  const storage = db.storage('main');
555
785
 
556
- // Step 1: Get a signed upload URL
786
+ // Step 1: Get a signed GCS upload URL
557
787
  const { uploadUrl, path } = await storage.getUploadUrl({
558
- path: 'videos/product-demo.mp4',
559
- mimeType: 'video/mp4',
560
- size: file.size,
561
- isPublic: true,
788
+ path: 'videos/product-demo.mp4',
789
+ mimeType: 'video/mp4',
790
+ size: file.size,
791
+ isPublic: true,
792
+ expiresInSeconds: 900, // how long the signed URL is valid (default: 900 = 15 min)
562
793
  });
563
794
 
564
- // Step 2: Upload directly to GCS with progress tracking
795
+ // Step 2 Upload directly to GCS with real progress
565
796
  await storage.uploadToSignedUrl(
566
797
  uploadUrl,
567
798
  file,
@@ -572,54 +803,84 @@ await storage.uploadToSignedUrl(
572
803
  },
573
804
  );
574
805
 
575
- // Step 3: Confirm the upload (registers metadata server-side)
806
+ // Step 3: Confirm the upload (registers metadata on the server)
576
807
  const result = await storage.confirmUpload({
577
808
  path: path,
578
809
  mimeType: 'video/mp4',
579
810
  isPublic: true,
580
811
  });
581
812
 
582
- console.log(result.publicUrl); // ready to use
813
+ console.log(result.publicUrl); // live CDN URL
583
814
  ```
584
815
 
585
816
  ### Batch Upload
586
817
 
818
+ Upload up to 50 files at once.
819
+
587
820
  ```typescript
588
821
  const storage = db.storage('main');
589
822
 
590
- // Get signed URLs for up to 50 files at once
823
+ // Step 1 — Get signed URLs for all files
591
824
  const { files } = await storage.getBatchUploadUrls([
592
825
  { path: 'gallery/photo1.jpg', mimeType: 'image/jpeg', size: 204800, isPublic: true },
593
826
  { path: 'gallery/photo2.jpg', mimeType: 'image/jpeg', size: 153600, isPublic: true },
827
+ { path: 'gallery/photo3.png', mimeType: 'image/png', size: 98304, isPublic: true },
594
828
  ]);
595
829
 
596
- // Upload each one directly to GCS
830
+ // Step 2 — Upload each directly to GCS
597
831
  for (const f of files) {
598
832
  await storage.uploadToSignedUrl(f.uploadUrl, blobs[f.index], f.mimeType);
599
833
  }
600
834
 
601
- // Confirm all at once
602
- const results = await storage.batchConfirmUploads(
835
+ // Step 3 — Confirm all at once
836
+ const { succeeded, failed } = await storage.batchConfirmUploads(
603
837
  files.map(f => ({ path: f.path, mimeType: f.mimeType, isPublic: true })),
604
838
  );
839
+
840
+ console.log(`${succeeded.length} uploaded, ${failed.length} failed`);
841
+ for (const f of succeeded) {
842
+ console.log(f.publicUrl);
843
+ }
605
844
  ```
606
845
 
607
- ### Download Files
846
+ ### Download
608
847
 
609
848
  ```typescript
610
849
  // Private files require authentication — returns ArrayBuffer
611
850
  const buffer = await db.storage('documents').download('reports/q3.pdf');
612
851
  const blob = new Blob([buffer], { type: 'application/pdf' });
613
852
 
614
- // Trigger a browser download
853
+ // Trigger browser download
615
854
  const url = URL.createObjectURL(blob);
616
855
  const a = document.createElement('a');
617
856
  a.href = url;
618
857
  a.download = 'q3.pdf';
619
858
  a.click();
859
+ ```
860
+
861
+ > 💡 **Public files:** Use `result.publicUrl` directly — no SDK call needed. `<img src={result.publicUrl} />` just works.
620
862
 
621
- // Public files: use publicUrl directly — no SDK needed
622
- // <img src={result.publicUrl} />
863
+ ### Batch Download
864
+
865
+ Download up to 20 files in one call. Content is returned as base64 strings.
866
+
867
+ ```typescript
868
+ const { succeeded, failed } = await db.storage('documents').batchDownload([
869
+ 'reports/q1.pdf',
870
+ 'reports/q2.pdf',
871
+ 'reports/q3.pdf',
872
+ ]);
873
+
874
+ for (const f of succeeded) {
875
+ console.log(f.path); // 'reports/q1.pdf'
876
+ console.log(f.mimeType); // 'application/pdf'
877
+ console.log(f.size); // bytes
878
+ const bytes = Buffer.from(f.content, 'base64');
879
+ }
880
+
881
+ for (const f of failed) {
882
+ console.error(`${f.path}: ${f.error}`);
883
+ }
623
884
  ```
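Because batch downloads arrive as base64 strings, turning a result back into bytes is a one-liner in Node. A small sketch (a pure helper of our own, no SDK call involved; `content` and `size` refer to the fields shown above):

```typescript
// Decode a base64 payload (as in succeeded[i].content) back to raw bytes,
// and sanity-check it against the reported size. Our helper, not the SDK's.
function decodeBase64(content: string): Uint8Array {
  return new Uint8Array(Buffer.from(content, 'base64'));
}

function sizeMatches(content: string, reportedSize: number): boolean {
  return decodeBase64(content).byteLength === reportedSize;
}
```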
624
885
 
625
886
  ### List Files
@@ -628,22 +889,25 @@ a.click();
628
889
  const storage = db.storage('main');
629
890
 
630
891
  // List everything at the root
631
- const { files, folders } = await storage.list();
892
+ const { files, folders, hasMore, nextCursor } = await storage.list();
632
893
 
633
894
  // List a specific folder
634
- const { files, folders, hasMore, nextCursor } = await storage.list({
635
- prefix: 'gallery/',
636
- limit: 50,
637
- recursive: false,
895
+ const result = await storage.list({
896
+ prefix: 'gallery/',
897
+ limit: 50,
638
898
  });
639
899
 
640
900
  // Paginate
641
- if (hasMore) {
642
- const page2 = await storage.list({ prefix: 'gallery/', cursor: nextCursor });
901
+ if (result.hasMore) {
902
+ const page2 = await storage.list({
903
+ prefix: 'gallery/',
904
+ cursor: result.nextCursor,
905
+ });
643
906
  }
644
907
  ```
645
908
 
646
- Each file entry includes:
909
+ Each file entry:
910
+
647
911
  ```typescript
648
912
  {
649
913
  name: 'photo1.jpg',
@@ -651,7 +915,7 @@ Each file entry includes:
651
915
  size: 204800,
652
916
  mimeType: 'image/jpeg',
653
917
  isPublic: true,
654
- publicUrl: 'https://storage.googleapis.com/...',
918
+ publicUrl: 'https://storage.googleapis.com/…',
655
919
  downloadUrl: null,
656
920
  updatedAt: '2025-06-01T12:00:00.000Z',
657
921
  }
@@ -659,88 +923,125 @@ Each file entry includes:
659
923
 
660
924
  ### Scoped Storage
661
925
 
662
- Working within a specific folder? Use `.scope()` to avoid repeating the prefix on every call.
926
+ Prefix every operation with a folder path; great for per-user isolation.
663
927
 
664
928
  ```typescript
665
- // All operations in the "user-avatars/" folder
666
- const avatars = db.storage('avatars').scope('user-avatars');
929
+ const userDocs = db.storage('documents').scope(`users/${userId}/`);
667
930
 
668
- await avatars.upload(file, `${userId}.jpg`, { isPublic: true });
669
- // uploads to "user-avatars/{userId}.jpg"
931
+ // Uploads to: users/{userId}/contract.pdf
932
+ await userDocs.upload(pdfBuffer, 'contract.pdf');
670
933
 
671
- const { files } = await avatars.list();
672
- // lists files under "user-avatars/"
934
+ // Lists: users/{userId}/
935
+ const { files } = await userDocs.list();
673
936
 
674
- await avatars.deleteFile(`${userId}.jpg`);
675
- // → deletes "user-avatars/{userId}.jpg"
937
+ // Deletes: users/{userId}/contract.pdf
938
+ await userDocs.deleteFile('contract.pdf');
676
939
 
677
940
  // Nest scopes
678
- const thumbnails = avatars.scope('thumbnails');
679
- // all operations under "user-avatars/thumbnails/"
941
+ const userThumbs = userDocs.scope('thumbnails/');
942
+ // All ops under: users/{userId}/thumbnails/
680
943
  ```
681
944
 
682
- ### Share & Visibility
945
+ `ScopedStorage` has the same full API as `StorageManager` — every method listed in the reference is available.
946
+
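The path arithmetic behind scoping is simple prefix concatenation. A sketch of what the examples above imply — this is our own illustration, not the SDK's internal implementation:

```typescript
// How scope prefixes compose (illustrative only — not the SDK's code).
// A trailing slash is normalized so nested scopes concatenate cleanly.
function joinScope(prefix: string, path: string): string {
  const p = prefix === '' || prefix.endsWith('/') ? prefix : prefix + '/';
  return p + path.replace(/^\/+/, '');
}

// Nested scopes just stack prefixes:
const nested = joinScope(joinScope('users/usr_abc/', 'thumbnails/'), 'a.jpg');
// 'users/usr_abc/thumbnails/a.jpg'
```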
947
+ ### File Metadata
683
948
 
684
949
  ```typescript
685
- const storage = db.storage('documents');
950
+ const meta = await db.storage('documents').getMetadata('reports/q3.pdf');
951
+
952
+ console.log(meta.size); // bytes
953
+ console.log(meta.mimeType); // 'application/pdf'
954
+ console.log(meta.isPublic); // false
955
+ console.log(meta.downloadUrl); // auth-required URL
956
+ console.log(meta.createdAt); // ISO string
957
+ console.log(meta.updatedAt); // ISO string
958
+ ```
686
959
 
687
- // Get file metadata (size, MIME type, URLs, visibility)
688
- const meta = await storage.getMetadata('reports/q3.pdf');
960
+ ### Signed Share URLs
689
961
 
690
- // Generate a time-limited share link for a private file
691
- // (no auth key needed to use the link)
692
- const { signedUrl, expiresAt } = await storage.getSignedUrl(
693
- 'reports/q3.pdf',
694
- 3600, // expires in 1 hour (default)
695
- );
962
+ Generate a time-limited link for a private file — no `X-Storage-Key` required to use it.
963
+
964
+ ```typescript
965
+ const { signedUrl, expiresAt, expiresIn } = await db.storage('documents')
966
+ .getSignedUrl('reports/q3.pdf', 3600); // expires in 1 hour (default)
696
967
 
697
- // Toggle visibility after upload
698
- await storage.setVisibility('reports/q3.pdf', true); // make public
699
- await storage.setVisibility('reports/q3.pdf', false); // make private
968
+ // Share signedUrl with the recipient — it expires automatically
969
+ console.log(`Link valid until: ${new Date(expiresAt).toLocaleString()}`);
700
970
  ```
701
971
 
702
- ### File Operations
972
+ > ⚠️ Downloads via signed URLs bypass the server — **download stats are not tracked** for those requests. Use `downloadUrl` for tracked downloads.
973
+
974
+ ### Visibility
975
+
976
+ ```typescript
977
+ // Make a private file public
978
+ const result = await db.storage('main').setVisibility('docs/report.pdf', true);
979
+ console.log(result.publicUrl); // now has a CDN URL
980
+
981
+ // Make a public file private
982
+ const result2 = await db.storage('main').setVisibility('docs/report.pdf', false);
983
+ console.log(result2.downloadUrl); // now requires auth
984
+ ```
985
+
986
+ ### Move, Copy, Delete
703
987
 
704
988
  ```typescript
705
989
  const storage = db.storage('main');
706
990
 
707
- // Rename / move a file
991
+ // Rename a file
708
992
  await storage.move('drafts/report.pdf', 'published/report-2025.pdf');
709
993
 
710
- // Copy a file
994
+ // Move to a different folder
995
+ await storage.move('inbox/data.csv', 'archive/2025/data.csv');
996
+
997
+ // Copy
711
998
  await storage.copy('templates/invoice.html', 'invoices/inv-001.html');
712
999
 
713
- // Create a folder
1000
+ // Create a folder placeholder
714
1001
  await storage.createFolder('archive/2025/');
715
1002
 
716
- // Delete a file
1003
+ // Delete a single file
717
1004
  await storage.deleteFile('temp/scratch.txt');
718
1005
 
719
- // Delete a folder and all its contents
1006
+ // Delete a folder and all its contents recursively
720
1007
  await storage.deleteFolder('temp/');
1008
+ ```
721
1009
 
722
- // Get key-level stats
723
- const stats = await storage.getStats();
724
- // → { totalFiles: 842, totalBytes: 1073741824, uploadCount: 1200, ... }
1010
+ ### Storage Stats
1011
+
1012
+ ```typescript
1013
+ const stats = await db.storage('main').getStats();
1014
+ // {
1015
+ // totalFiles: 842,
1016
+ // totalBytes: 1073741824,
1017
+ // uploadCount: 1200,
1018
+ // downloadCount: 4830,
1019
+ // deleteCount: 58,
1020
+ // }
1021
+
1022
+ // Ping — no auth required
1023
+ const { ok, storageRoot } = await db.storage('main').info();
725
1024
  ```
726
1025
 
727
1026
  ---
728
1027
 
729
1028
  ## Analytics
730
1029
 
731
- HydrousDB Analytics runs queries directly against BigQuery on your GCS data — zero ETL, no data duplication, live results. Fast even on billions of records.
1030
+ HydrousDB Analytics runs directly against BigQuery on your GCS data — zero ETL, no duplication, live results. Fast even on billions of records.
732
1031
 
733
1032
  ```typescript
734
1033
  const analytics = db.analytics('orders');
735
1034
  ```
736
1035
 
1036
+ All `dateRange` values are Unix timestamps in milliseconds: `{ start?: number, end?: number }`.
1037
+
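Since every range is plain milliseconds, a tiny builder avoids repeating the arithmetic. This is our own convenience helper; the SDK just takes the plain object:

```typescript
// Build the millisecond-based dateRange shape for "the last N days".
// Our helper — pass the result straight to any analytics call.
function lastNDays(n: number, now: number = Date.now()): { start: number; end: number } {
  const DAY = 24 * 60 * 60 * 1000;
  return { start: now - n * DAY, end: now };
}
```

Used as, e.g., `analytics.count({ dateRange: lastNDays(7) })`.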
737
1038
  ### Count
738
1039
 
739
1040
  ```typescript
740
1041
  // Total records
741
1042
  const { count } = await analytics.count();
742
1043
 
743
- // Records in a date range
1044
+ // In a date range
744
1045
  const { count: lastWeek } = await analytics.count({
745
1046
  dateRange: {
746
1047
  start: Date.now() - 7 * 24 * 60 * 60 * 1000,
@@ -754,8 +1055,12 @@ const { count: lastWeek } = await analytics.count({
754
1055
  How many records have each unique value for a field?
755
1056
 
756
1057
  ```typescript
757
- const rows = await analytics.distribution({ field: 'status', limit: 10, order: 'desc' });
758
- // → [
1058
+ const rows = await analytics.distribution({
1059
+ field: 'status',
1060
+ limit: 10,
1061
+ order: 'desc', // 'asc' | 'desc'
1062
+ });
1063
+ // [
759
1064
  // { value: 'completed', count: 8234 },
760
1065
  // { value: 'pending', count: 1203 },
761
1066
  // { value: 'refunded', count: 412 },
@@ -766,56 +1071,61 @@ const rows = await analytics.distribution({ field: 'status', limit: 10, order: '
766
1071
 
767
1072
  ```typescript
768
1073
  // Total revenue
769
- const rows = await analytics.sum({ field: 'amount' });
770
- // → [{ sum: 198432.50 }]
1074
+ const [{ sum: total }] = await analytics.sum({ field: 'amount' });
771
1075
 
772
- // Revenue grouped by country
1076
+ // Revenue by country
773
1077
  const byCountry = await analytics.sum({
774
1078
  field: 'amount',
775
1079
  groupBy: 'country',
776
1080
  limit: 10,
777
1081
  });
778
- // [{ group: 'US', sum: 120000 }, { group: 'UK', sum: 45000 }, ...]
1082
+ // [{ group: 'US', sum: 120000 }, { group: 'UK', sum: 45000 }, …]
779
1083
  ```
780
1084
 
781
1085
  ### Time Series
782
1086
 
783
- Record counts over time — ideal for activity and growth charts.
1087
+ Record counts bucketed over time — ideal for activity and growth charts.
784
1088
 
785
1089
  ```typescript
786
1090
  const rows = await analytics.timeSeries({
787
- granularity: 'day', // 'hour' | 'day' | 'week' | 'month' | 'year'
1091
+ granularity: 'day', // 'hour' | 'day' | 'week' | 'month' | 'year'
788
1092
  dateRange: {
789
1093
  start: new Date('2025-01-01').getTime(),
790
1094
  end: new Date('2025-06-01').getTime(),
791
1095
  },
792
1096
  });
793
- // [{ date: '2025-01-01', count: 42 }, { date: '2025-01-02', count: 67 }, ...]
1097
+ // [{ date: '2025-01-01', count: 42 }, { date: '2025-01-02', count: 67 }, …]
794
1098
  ```
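Days with zero records are typically absent from aggregated results, which leaves gaps in charts. A sketch that zero-fills a daily series — assumes `YYYY-MM-DD` date strings as in the example output, and that whether empty buckets are omitted depends on your data:

```typescript
// Zero-fill missing days in a daily time series so charts render gaps.
// Assumes 'YYYY-MM-DD' date strings; dates are treated as UTC.
type DayRow = { date: string; count: number };

function fillDays(rows: DayRow[], start: string, end: string): DayRow[] {
  const byDate = new Map<string, number>();
  for (const r of rows) byDate.set(r.date, r.count);

  const out: DayRow[] = [];
  const DAY = 86_400_000;
  for (let t = Date.parse(start); t <= Date.parse(end); t += DAY) {
    const date = new Date(t).toISOString().slice(0, 10);
    out.push({ date, count: byDate.get(date) ?? 0 });
  }
  return out;
}
```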
795
1099
 
796
- Aggregate a numeric field over time:
1100
+ ### Field Time Series
1101
+
1102
+ Aggregate a numeric field over time.
797
1103
 
798
1104
  ```typescript
799
1105
  const revenue = await analytics.fieldTimeSeries({
800
1106
  field: 'amount',
801
- aggregation: 'sum', // 'sum' | 'avg' | 'min' | 'max' | 'count'
1107
+ aggregation: 'sum', // 'sum' | 'avg' | 'min' | 'max' | 'count'
802
1108
  granularity: 'week',
1109
+ dateRange: {
1110
+ start: new Date('2025-01-01').getTime(),
1111
+ end: Date.now(),
1112
+ },
803
1113
  });
804
- // [{ date: '2025-W01', value: 12340.50 }, ...]
1114
+ // [{ date: '2025-W01', value: 12340.50 }, { date: '2025-W02', value: 9872.00 }, …]
805
1115
  ```
806
1116
 
807
1117
  ### Top N
808
1118
 
809
- Most common values for a field:
1119
+ Most common values for a field.
810
1120
 
811
1121
  ```typescript
812
1122
  const topProducts = await analytics.topN({
813
1123
  field: 'productId',
814
- labelField: 'productName', // optional: include a human-readable label
1124
+ labelField: 'productName', // optional human-readable label alongside the value
815
1125
  n: 5,
816
1126
  order: 'desc',
817
1127
  });
818
- // [
1128
+ // [
819
1129
  // { value: 'prod_123', label: 'Widget Pro', count: 892 },
820
1130
  // { value: 'prod_456', label: 'Gizmo Plus', count: 743 },
821
1131
  // ]
@@ -823,41 +1133,48 @@ const topProducts = await analytics.topN({
823
1133
 
824
1134
  ### Field Stats
825
1135
 
826
- Statistical summary for any numeric field:
1136
+ Statistical summary for any numeric field.
827
1137
 
828
1138
  ```typescript
829
1139
  const stats = await analytics.stats({ field: 'orderValue' });
830
- // {
831
- // min: 4.99, max: 9999.99, avg: 87.23,
832
- // sum: 420948.27, count: 4823, stddev: 143.2
1140
+ // {
1141
+ // min: 4.99,
1142
+ // max: 9999.99,
1143
+ // avg: 87.23,
1144
+ // sum: 420948.27,
1145
+ // count: 4823,
1146
+ // stddev: 143.2,
833
1147
  // }
834
1148
  ```
835
1149
 
836
1150
  ### Multi-Metric Dashboard
837
1151
 
838
- Calculate several aggregations in a single BigQuery query:
1152
+ Run several aggregations in a single BigQuery query — one network call.
839
1153
 
840
1154
  ```typescript
841
1155
  const dashboard = await analytics.multiMetric({
842
1156
  metrics: [
843
- { field: 'amount', name: 'totalRevenue', aggregation: 'sum' },
844
- { field: 'amount', name: 'avgOrderValue', aggregation: 'avg' },
845
- { field: 'amount', name: 'maxOrder', aggregation: 'max' },
846
- { field: 'userId', name: 'totalOrders', aggregation: 'count' },
1157
+ { field: 'amount', name: 'totalRevenue', aggregation: 'sum' },
1158
+ { field: 'amount', name: 'avgOrderValue', aggregation: 'avg' },
1159
+ { field: 'amount', name: 'maxOrder', aggregation: 'max' },
1160
+ { field: 'userId', name: 'uniqueOrders', aggregation: 'count' },
847
1161
  ],
848
- dateRange: { start: new Date('2025-01-01').getTime(), end: Date.now() },
1162
+ dateRange: {
1163
+ start: new Date('2025-01-01').getTime(),
1164
+ end: Date.now(),
1165
+ },
849
1166
  });
850
- // {
1167
+ // {
851
1168
  // totalRevenue: 198432.50,
852
1169
  // avgOrderValue: 87.23,
853
1170
  // maxOrder: 9999.99,
854
- // totalOrders: 2275,
1171
+ // uniqueOrders: 2275,
855
1172
  // }
856
1173
  ```
857
1174
 
858
- ### Filtered Records (BigQuery)
1175
+ ### Filtered Records via BigQuery
859
1176
 
860
- Query raw records at full BigQuery speed:
1177
+ Fetch raw records using the BigQuery engine — useful for large-scale filtered exports.
861
1178
 
862
1179
  ```typescript
863
1180
  const records = await analytics.records({
@@ -866,15 +1183,20 @@ const records = await analytics.records({
866
1183
  { field: 'amount', op: '>', value: 100 },
867
1184
  ],
868
1185
  selectFields: ['orderId', 'amount', 'userId', 'createdAt'],
869
- orderBy: 'amount',
870
- order: 'desc',
871
- limit: 50,
1186
+ orderBy: 'amount',
1187
+ order: 'desc',
1188
+ limit: 50,
1189
+ offset: 0,
1190
+ dateRange: {
1191
+ start: new Date('2025-01-01').getTime(),
1192
+ end: Date.now(),
1193
+ },
872
1194
  });
873
1195
  ```
874
1196
 
875
1197
  ### Cross-Bucket Comparison
876
1198
 
877
- Compare the same metric across multiple buckets in one query:
1199
+ Compare the same metric across multiple buckets in one query.
878
1200
 
879
1201
  ```typescript
880
1202
  const comparison = await analytics.crossBucket({
@@ -882,32 +1204,87 @@ const comparison = await analytics.crossBucket({
882
1204
  field: 'amount',
883
1205
  aggregation: 'sum',
884
1206
  });
885
- // [
1207
+ // [
886
1208
  // { bucket: 'orders-us', value: 120000 },
887
1209
  // { bucket: 'orders-eu', value: 45000 },
888
1210
  // { bucket: 'orders-apac', value: 33000 },
889
1211
  // ]
890
1212
  ```
891
1213
 
892
- > ⚠️ Your Bucket Security Key must have read access to **all** listed buckets.
1214
+ > ⚠️ Your Bucket Security Key must have read access to **every** bucket listed in `bucketKeys`.
893
1215
 
894
1216
  ### Storage Stats
895
1217
 
896
1218
  ```typescript
897
1219
  const stats = await analytics.storageStats();
898
- // { totalRecords: 48210, totalBytes: 921600000, avgBytes: 19112, minBytes: 128, maxBytes: 5242880 }
1220
+ // {
1221
+ // totalRecords: 48210,
1222
+ // totalBytes: 921600000,
1223
+ // avgBytes: 19112,
1224
+ // minBytes: 128,
1225
+ // maxBytes: 5242880,
1226
+ // }
1227
+ ```
1228
+
1229
+ ### Raw Query
1230
+
1231
+ Use the `query()` method when none of the typed helpers cover your use case.
1232
+
1233
+ ```typescript
1234
+ import type { AnalyticsQuery } from 'hydrousdb';
1235
+
1236
+ const result = await analytics.query<{ count: number }>({
1237
+ queryType: 'count',
1238
+ dateRange: { start: Date.now() - 86400000, end: Date.now() },
1239
+ });
1240
+
1241
+ console.log(result.queryType); // 'count'
1242
+ console.log(result.data); // { count: 142 }
899
1243
  ```
900
1244
 
901
1245
  ---
902
1246
 
903
- ## TypeScript Support
1247
+ ## TypeScript
904
1248
 
905
- The SDK is written in TypeScript and ships with full type definitions. Use generic type parameters to get full autocomplete and compile-time safety throughout your app.
1249
+ The SDK is written entirely in TypeScript and ships with full type definitions. Use generics for end-to-end type safety.
906
1250
 
907
1251
  ```typescript
908
1252
  import { createClient } from 'hydrousdb';
1253
+ import type {
1254
+ HydrousConfig,
1255
+ RecordResult,
1256
+ QueryFilter,
1257
+ QueryOptions,
1258
+ QueryResult,
1259
+ CreateRecordOptions,
1260
+ BatchCreateOptions,
1261
+ UploadResult,
1262
+ UploadUrlResult,
1263
+ ListResult,
1264
+ FileMetadata,
1265
+ SignedUrlResult,
1266
+ BatchDownloadResult,
1267
+ StorageStats,
1268
+ DateRange,
1269
+ AnalyticsQuery,
1270
+ AnalyticsResult,
1271
+ CountResult,
1272
+ DistributionRow,
1273
+ SumRow,
1274
+ TimeSeriesRow,
1275
+ FieldTimeSeriesRow,
1276
+ TopNRow,
1277
+ FieldStats,
1278
+ MultiMetricResult,
1279
+ StorageStatsResult,
1280
+ CrossBucketRow,
1281
+ UserRecord,
1282
+ Session,
1283
+ AuthResult,
1284
+ ListUsersResult,
1285
+ } from 'hydrousdb';
909
1286
 
910
- // Define your data models as plain interfaces — no index signature needed
1287
+ // Define your domain models
911
1288
  interface Order {
912
1289
  customerId: string;
913
1290
  items: Array<{ productId: string; qty: number; price: number }>;
@@ -929,260 +1306,261 @@ const db = createClient({
929
1306
  storageKeys: { main: process.env.HYDROUS_STORAGE_MAIN! },
930
1307
  });
931
1308
 
932
- // Fully typed clients
1309
+ // Fully typed — autocomplete, compile-time safety throughout
933
1310
  const orders = db.records<Order>('orders');
934
1311
  const customers = db.records<Customer>('customers');
935
1312
 
936
- // order.total, order.status, etc. are all type-safe
937
- const order = await orders.create({
938
- customerId: 'cust_abc',
939
- items: [{ productId: 'prod_1', qty: 2, price: 29.99 }],
940
- total: 59.98,
941
- status: 'pending',
942
- country: 'US',
943
- });
1313
+ const order = await orders.create(
1314
+ { customerId: 'cust_abc', items: [{ productId: 'prod_1', qty: 2, price: 29.99 }], total: 59.98, status: 'pending', country: 'US' },
1315
+ { queryableFields: ['status', 'country', 'customerId'] },
1316
+ );
944
1317
 
945
1318
  // TypeScript catches mistakes at compile time:
946
- // order.nonExistentField // ← TS error ✓
947
- // order.status = 'invalid' // ← TS error ✓
948
- ```
949
-
950
- All exported types are available for import:
951
-
952
- ```typescript
953
- import type {
954
- HydrousConfig,
955
- RecordResult,
956
- QueryFilter,
957
- QueryOptions,
- UploadResult,
- AnalyticsQuery,
- DateRange,
- // ... and many more
- } from 'hydrousdb';
+ // order.nonExistentField ← TS error ✓
+ // orders.create({ bad: 1 }) ← TS error ✓
  ```
 
  ---
 
  ## Error Handling
 
- All errors thrown by the SDK extend `HydrousError`, which carries:
-
- | Property | Type | Description |
- |---|---|---|
- | `message` | `string` | Human-readable description |
- | `code` | `string` | Machine-readable error code (e.g. `"RECORD_NOT_FOUND"`) |
- | `status` | `number` | HTTP status code |
- | `requestId` | `string` | Server request ID (for support tracing) |
- | `details` | `string[]` | Validation error details |
+ All SDK errors extend `HydrousError`. Specific subclasses let you handle different failure modes precisely.
 
  ```typescript
- import { HydrousError, NetworkError, AuthError } from 'hydrousdb';
+ import {
+   HydrousError,
+   AuthError,
+   RecordError,
+   StorageError,
+   AnalyticsError,
+   ValidationError,
+   NetworkError,
+ } from 'hydrousdb';
 
  try {
-   const { user } = await auth.login({ email: 'a@b.com', password: 'wrong' });
+   const record = await db.records('orders').get('bad-id');
  } catch (err) {
-   if (err instanceof AuthError) {
-     // Authentication-specific error
-     console.error(`Auth failed: ${err.code}`);
-     // err.code might be: INVALID_CREDENTIALS, ACCOUNT_LOCKED, EMAIL_NOT_VERIFIED
-   } else if (err instanceof NetworkError) {
+   if (err instanceof NetworkError) {
      // No internet / server unreachable
-     console.error('Cannot reach HydrousDB — check your internet connection');
+     console.error('Cannot reach HydrousDB:', err.message);
+
+   } else if (err instanceof AuthError) {
+     // Bad key, expired session, insufficient permissions
+     console.error(`Auth failed [${err.code}]:`, err.message);
+     // err.code: INVALID_CREDENTIALS | ACCOUNT_LOCKED | INVALID_SESSION | FORBIDDEN | …
+
+   } else if (err instanceof ValidationError) {
+     // Invalid input — check the details array
+     console.error('Validation failed:', err.details?.join(', '));
+
+   } else if (err instanceof RecordError) {
+     // Record-specific API error
+     console.error(`Record error [${err.code}]:`, err.message);
+
+   } else if (err instanceof StorageError) {
+     // Storage-specific error
+     console.error(`Storage error [${err.code}]:`, err.message);
+
    } else if (err instanceof HydrousError) {
      // Any other API error
-     console.error(`API error [${err.code}]: ${err.message}`);
-     console.error(`Request ID: ${err.requestId}`); // include in support tickets
+     console.error(`API error [${err.code}] ${err.status}:`, err.message);
+     console.error('Request ID:', err.requestId); // include in support tickets
    }
  }
  ```
 
+ **Error properties:**
+
+ | Property | Type | Description |
+ |---|---|---|
+ | `message` | `string` | Human-readable description |
+ | `code` | `string` | Machine-readable error code |
+ | `status` | `number` | HTTP status code |
+ | `requestId` | `string \| undefined` | Server request ID — include in support tickets |
+ | `details` | `string[] \| undefined` | Validation error details |
+
  **Common error codes:**
 
- | Code | Meaning |
- |---|---|
- | `RECORD_NOT_FOUND` | The requested record ID does not exist |
- | `INVALID_CREDENTIALS` | Wrong email or password |
- | `ACCOUNT_LOCKED` | The account is temporarily locked |
- | `INVALID_SESSION` | Session expired or revoked — re-authenticate |
- | `MISSING_API_KEY` | Key not provided |
- | `INVALID_SECURITY_KEY` | Key is wrong or revoked |
- | `FORBIDDEN` | Insufficient permissions |
- | `FILE_EXISTS` | File already exists at path (use `overwrite: true`) |
- | `LIMIT_EXCEEDED` | Storage quota or file size limit reached |
- | `SYSTEM_BUCKET_FORBIDDEN` | Cannot query system buckets via analytics |
- | `VALIDATION_ERROR` | Invalid input — check `err.details` |
- | `NETWORK_ERROR` | Failed to reach the API |
+ | Code | Status | Meaning |
+ |---|---|---|
+ | `RECORD_NOT_FOUND` | 404 | The requested record ID does not exist |
+ | `INVALID_CREDENTIALS` | 401 | Wrong email or password |
+ | `ACCOUNT_LOCKED` | 403 | Account is temporarily locked |
+ | `INVALID_SESSION` | 401 | Session expired or revoked — re-authenticate |
+ | `MISSING_API_KEY` | 401 | Key not provided in headers |
+ | `INVALID_SECURITY_KEY` | 401 | Key is wrong or has been revoked |
+ | `FORBIDDEN` | 403 | Insufficient permissions for this operation |
+ | `FILE_EXISTS` | 409 | File already exists — use `overwrite: true` |
+ | `SYSTEM_BUCKET_FORBIDDEN` | 403 | Cannot query system buckets via analytics |
+ | `CROSS_BUCKET_FORBIDDEN` | 403 | Key lacks read access to one of the requested buckets |
+ | `VALIDATION_ERROR` | 400 | Invalid input — check `err.details` |
+ | `NETWORK_ERROR` | — | Failed to reach the API |
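Of the codes above, only `NETWORK_ERROR` is generally safe to retry blindly, so a small backoff wrapper can sit around any SDK call. A minimal sketch — `NetworkError` is redefined locally here so the snippet is self-contained, and `withRetry`/`backoffMs` are illustrative names, not SDK exports:

```typescript
// Local stand-in for the NetworkError class exported by 'hydrousdb'.
class NetworkError extends Error {}

// Pure helper: exponential backoff delay for a 0-based attempt number.
function backoffMs(attempt: number, baseMs = 250): number {
  return baseMs * 2 ** attempt; // 250, 500, 1000, …
}

// Retry `fn` on NetworkError only; rethrow every other failure immediately.
async function withRetry<T>(fn: () => Promise<T>, retries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (!(err instanceof NetworkError) || attempt >= retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)));
    }
  }
}
```

Usage would look like `withRetry(() => db.records('orders').get(id))`; API errors such as `VALIDATION_ERROR` still surface on the first attempt.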
 
  ---
 
  ## Security Best Practices
 
- 1. **Never hard-code your keys.** Use environment variables:
+ **1. Never hard-code keys — use environment variables.**
 
- ```bash
- # .env (add to .gitignore)
- HYDROUS_AUTH_KEY=hk_auth_xxxxxxxxxxxxxxxxxxxx
- HYDROUS_BUCKET_KEY=hk_bucket_xxxxxxxxxxxxxxxxxxxx
- HYDROUS_STORAGE_MAIN=ssk_xxxxxxxxxxxxxxxxxxxx
- ```
+ ```bash
+ # .env (add .env to your .gitignore)
+ HYDROUS_AUTH_KEY=hk_auth_xxxxxxxxxxxxxxxxxxxx
+ HYDROUS_BUCKET_KEY=hk_bucket_xxxxxxxxxxxxxxxxxxxx
+ HYDROUS_STORAGE_MAIN=ssk_xxxxxxxxxxxxxxxxxxxx
+ ```
 
- ```typescript
- const db = createClient({
-   authKey: process.env.HYDROUS_AUTH_KEY!,
-   bucketSecurityKey: process.env.HYDROUS_BUCKET_KEY!,
-   storageKeys: { main: process.env.HYDROUS_STORAGE_MAIN! },
- });
- ```
+ ```typescript
+ const db = createClient({
+   authKey: process.env.HYDROUS_AUTH_KEY!,
+   bucketSecurityKey: process.env.HYDROUS_BUCKET_KEY!,
+   storageKeys: { main: process.env.HYDROUS_STORAGE_MAIN! },
+ });
+ ```
 
- 2. **Never expose keys to browsers.** For browser-side apps, route requests through your own backend, or use per-user session tokens from `auth.login()`.
+ **2. Never expose keys to browsers.** For browser-side apps, route requests through your own backend, or use per-user sessions from `auth.login()`.
 
- 3. **Keys are sent via request headers — never in URLs.** The SDK enforces this automatically, so keys never appear in server logs or browser history.
+ **3. Keys travel in headers — never in URLs.** The SDK enforces this automatically. Keys never appear in server access logs, GCP audit trails, or browser history.
 
- 4. **Rotate keys periodically.** Revoke old keys from the dashboard after rotation.
+ **4. Rotate keys periodically.** Revoke old keys from the dashboard after rotation.
 
- 5. **Use scoped storage** (`db.storage('keyName').scope('prefix/')`) to isolate access by user or feature, reducing the blast radius of any misconfiguration.
+ **5. Use scoped storage** (`db.storage('key').scope('prefix/')`) to isolate access per user or feature, reducing the blast radius of any misconfiguration.
 
- 6. **Use `isPublic: false` (the default) for sensitive files.** Use signed URLs for time-limited sharing instead of making files permanently public.
+ **6. Keep files private by default.** `isPublic` defaults to `false`. Use `getSignedUrl()` for time-limited external sharing rather than making files permanently public.
 
  ---
 
- ## API Reference
+ ## Full API Reference
+
+ ### `createClient(config)` → `HydrousClient`
 
- ### `createClient(config)`
+ | Config field | Type | Required | Description |
+ |---|---|---|---|
+ | `authKey` | `string` | ✅ | `hk_auth_…` — used for all auth routes |
+ | `bucketSecurityKey` | `string` | ✅ | `hk_bucket_…` — used for records & analytics |
+ | `storageKeys` | `Record<string, string>` | ✅ | Named `ssk_…` keys — at least one entry |
+ | `baseUrl` | `string` | — | Override the API endpoint (for self-hosting or testing) |
 
- Creates and returns a `HydrousClient` instance. Call this once and reuse it everywhere.
+ ---
+
+ ### `db.records<T>(bucketKey)` → `RecordsClient<T>`
+
+ | Method | Signature | Description |
+ |---|---|---|
+ | `create` | `(data: T, opts?: CreateRecordOptions) → T & RecordResult` | Create a record. `opts.queryableFields` indexes fields for filtering. `opts.customRecordId` enables upsert. |
+ | `get` | `(id: string) → T & RecordResult` | Fetch by ID. Zero index reads. |
+ | `set` | `(id: string, data: T) → T & RecordResult` | Full replace. |
+ | `patch` | `(id: string, data: Partial<T>, opts?) → T & RecordResult` | Partial update. `opts.merge` (default `true`) controls whether missing fields are removed. |
+ | `delete` | `(id: string) → void` | Permanent delete. |
+ | `query` | `(opts?: QueryOptions) → QueryResult<T>` | Filtered, sorted, paginated query. |
+ | `getAll` | `(opts?) → (T & RecordResult)[]` | Query without filters — convenience shortcut. |
+ | `count` | `(filters?) → number` | Count matching records. |
+ | `batchCreate` | `(items: T[], opts?: BatchCreateOptions) → (T & RecordResult)[]` | Up to 500 records. |
+ | `batchDelete` | `(ids: string[]) → { deleted, failed }` | Up to 500 records. |
+ | `getHistory` | `(id: string) → RecordHistoryEntry[]` | Full version list, most recent first. |
+ | `restoreVersion` | `(id: string, version: number) → T & RecordResult` | Roll back to any version. |
+
+ **`QueryOptions`:**
 
  ```typescript
- const db = createClient({
-   authKey: 'hk_auth_…',              // Required — auth routes
-   bucketSecurityKey: 'hk_bucket_…',  // Required — records & analytics
-   storageKeys: {                     // Required — at least one entry
-     main: 'ssk_main_…',
-     avatars: 'ssk_avatars_…',
-     documents: 'ssk_docs_…',
-   },
-   baseUrl: 'https://...',            // Optional — defaults to official endpoint
- });
+ {
+   filters?: QueryFilter[];   // [{ field, op, value }, …]
+   fields?: string;           // comma-separated field names
+   orderBy?: string;
+   order?: 'asc' | 'desc';
+   limit?: number;            // default 100, max 1000
+   offset?: number;
+   startAfter?: string;       // cursor for next page
+   startAt?: string;
+   endAt?: string;
+   dateRange?: DateRange;     // { start?: number, end?: number }
+ }
  ```
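Because `limit` tops out at 1000, reading a large bucket means looping on the `startAfter` cursor. A hedged sketch of that loop — the page shape (`records`, `nextCursor`) is assumed for illustration, not taken from the SDK's typings:

```typescript
// Assumed page shape for illustration only.
type Page = { records: { id: string }[]; nextCursor?: string };

// Drain every page by feeding each response's cursor into the next request.
// `queryPage` stands in for a call like db.records('orders').query({ startAfter }).
async function fetchAllPages(
  queryPage: (startAfter?: string) => Promise<Page>,
): Promise<{ id: string }[]> {
  const all: { id: string }[] = [];
  let cursor: string | undefined;
  do {
    const page = await queryPage(cursor); // one request per page
    all.push(...page.records);
    cursor = page.nextCursor;             // undefined once the last page is reached
  } while (cursor);
  return all;
}
```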
 
- ### `db.records<T>(bucketKey)`
+ ---
 
- Returns a `RecordsClient<T>` for the named bucket. Uses `bucketSecurityKey` automatically.
+ ### `db.auth(bucketKey)` → `AuthClient`
 
- | Method | Description |
- |---|---|
- | `create(data)` | Create a new record |
- | `get(id)` | Get a record by ID |
- | `set(id, data)` | Full replace |
- | `patch(id, data, opts?)` | Partial update (merge by default) |
- | `delete(id)` | Delete a record |
- | `query(opts?)` | Query with filters, sort, pagination |
- | `getAll(opts?)` | Shortcut for query without filters |
- | `count(filters?)` | Count matching records |
- | `batchCreate(items)` | Create multiple records |
- | `batchDelete(ids)` | Delete multiple records |
- | `getHistory(id)` | Get version history |
- | `restoreVersion(id, version)` | Restore to a previous version |
-
- ### `db.auth(bucketKey)`
-
- Returns an `AuthClient` for the named user bucket. Uses `authKey` automatically.
-
- | Method | Description |
- |---|---|
- | `signup(opts)` | Register a new user |
- | `login(opts)` | Authenticate and create a session |
- | `logout({ sessionId })` | Invalidate a session |
- | `refreshSession({ refreshToken })` | Extend a session |
- | `getUser({ userId })` | Get user by ID |
- | `updateUser(opts)` | Update user fields |
- | `changePassword(opts)` | Change password (authenticated) |
- | `requestPasswordReset(opts)` | Send reset email |
- | `confirmPasswordReset(opts)` | Apply new password from reset token |
- | `requestEmailVerification(opts)` | Send verification email |
- | `confirmEmailVerification(opts)` | Verify email with token |
- | `listUsers(opts)` | List all users (admin) |
- | `lockAccount(opts)` | Lock a user account (admin) |
- | `unlockAccount(opts)` | Unlock a user account (admin) |
- | `deleteUser(opts)` | Soft-delete a user (admin) |
- | `hardDeleteUser(opts)` | Permanently delete a user (admin) |
- | `bulkDeleteUsers(opts)` | Bulk delete users (admin) |
-
- ### `db.analytics(bucketKey)`
-
- Returns an `AnalyticsClient` for the named bucket. Uses `bucketSecurityKey` automatically.
-
- | Method | Description |
- |---|---|
- | `count(opts?)` | Count records |
- | `distribution(opts)` | Value distribution for a field |
- | `sum(opts)` | Sum with optional groupBy |
- | `timeSeries(opts?)` | Record counts over time |
- | `fieldTimeSeries(opts)` | Field aggregation over time |
- | `topN(opts)` | Top N values for a field |
- | `stats(opts)` | Statistical summary for a numeric field |
- | `records(opts?)` | Filtered raw records via BigQuery |
- | `multiMetric(opts)` | Multiple aggregations in one query |
- | `storageStats(opts?)` | Bucket storage statistics |
- | `crossBucket(opts)` | Compare a metric across multiple buckets |
- | `query(query)` | Raw analytics query |
-
- ### `db.storage(keyName)`
-
- Returns a `StorageManager` for the named storage key. The name must match a key you defined in `storageKeys` when calling `createClient`. Uses the corresponding `ssk_…` key via `X-Storage-Key`.
+ | Method | Signature | Description |
+ |---|---|---|
+ | `signup` | `(opts: SignupOptions) → AuthResult` | Register user + create session. Extra fields beyond `email/password/fullName` are stored on the user record. |
+ | `login` | `(opts: LoginOptions) → AuthResult` | Authenticate + create session. |
+ | `logout` | `({ sessionId, allDevices? }) → void` | Revoke one or all sessions. |
+ | `refreshSession` | `({ refreshToken }) → Session` | Extend an expiring session. |
+ | `validateSession` | `({ sessionId }) → { user, session }` | Check if a session is still active. |
+ | `getUser` | `({ userId }) → UserRecord` | Fetch a user by ID. |
+ | `updateUser` | `(opts: UpdateUserOptions) → UserRecord` | Update user fields — changes must be wrapped in `updates: { … }`. |
+ | `changePassword` | `(opts: ChangePasswordOptions) → void` | Authenticated password change. Field is `currentPassword` in the SDK (maps to `oldPassword` on the wire). |
+ | `requestPasswordReset` | `({ email }) → void` | Trigger reset email. |
+ | `confirmPasswordReset` | `({ resetToken, newPassword }) → void` | Apply new password. |
+ | `requestEmailVerification` | `({ userId }) → void` | Send verification email. |
+ | `confirmEmailVerification` | `({ verifyToken }) → void` | Complete verification. |
+ | `listUsers` | `(opts: ListUsersOptions) → ListUsersResult` | Paginated user list. Uses cursor-based pagination — `nextCursor` replaces `offset`. |
+ | `lockAccount` | `({ sessionId, userId, duration? }) → { lockedUntil, unlockTime }` | Admin only. |
+ | `unlockAccount` | `({ sessionId, userId }) → void` | Admin only. |
+ | `deleteUser` | `({ sessionId, userId }) → void` | Soft delete. Admin required unless deleting own account. |
+ | `hardDeleteUser` | `({ sessionId, userId }) → void` | Permanent. Admin required unless deleting own account. |
+ | `bulkDeleteUsers` | `({ sessionId, userIds, hard? }) → { succeeded, failed }` | Up to 500 users. Admin only. |
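`refreshSession` only helps if it runs before the session lapses. One way to time it client-side, sketched under the assumption that the session object exposes an expiry timestamp in epoch milliseconds (the field name `expiresAt` is a guess, not documented above):

```typescript
// How long to wait before refreshing, leaving a safety margin before expiry.
// Returns 0 when the session is already inside the margin (refresh now).
function msUntilRefresh(expiresAtMs: number, nowMs: number, marginMs = 60_000): number {
  return Math.max(0, expiresAtMs - nowMs - marginMs);
}
```

A caller would then do something like `setTimeout(() => auth.refreshSession({ refreshToken }), msUntilRefresh(session.expiresAt, Date.now()))`.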
 
- ```typescript
- const storage = db.storage('avatars');
- const scoped = db.storage('avatars').scope('user-uploads/');
- ```
+ ---
 
- | Method | Description |
- |---|---|
- | `upload(data, path, opts?)` | Simple server-buffered upload (up to 500 MB) |
- | `uploadRaw(data, path, opts?)` | Upload JSON or text data |
- | `getUploadUrl(opts)` | Step 1: Get a signed GCS upload URL |
- | `uploadToSignedUrl(url, data, mime, onProgress?)` | Step 2: Upload directly to GCS |
- | `confirmUpload(opts)` | Step 3: Register upload metadata |
- | `getBatchUploadUrls(files)` | Get signed URLs for up to 50 files at once |
- | `batchConfirmUploads(items)` | Confirm multiple uploads at once |
- | `download(path)` | Download a private file as ArrayBuffer |
- | `batchDownload(paths)` | Download multiple files |
- | `list(opts?)` | List files and folders |
- | `getMetadata(path)` | Get file metadata |
- | `getSignedUrl(path, expiresIn?)` | Generate a time-limited share URL |
- | `setVisibility(path, isPublic)` | Toggle public / private |
- | `createFolder(path)` | Create a folder |
- | `deleteFile(path)` | Delete a file |
- | `deleteFolder(path)` | Delete a folder and all its contents |
- | `move(from, to)` | Move or rename a file |
- | `copy(from, to)` | Copy a file |
- | `getStats()` | Key-level storage statistics |
- | `info()` | Ping the storage service (no auth required) |
- | `scope(prefix)` | Get a `ScopedStorage` instance pre-fixed to a folder |
+ ### `db.storage(keyName)` → `StorageManager`
+
+ | Method | Signature | Description |
+ |---|---|---|
+ | `upload` | `(data, path, opts?) → UploadResult` | Server-buffered upload. Up to 500 MB. |
+ | `uploadRaw` | `(data, path, opts?) → UploadResult` | Upload a JS object or string as a file. |
+ | `getUploadUrl` | `(opts) → UploadUrlResult` | Step 1 of direct upload. `expiresInSeconds` controls URL TTL. |
+ | `uploadToSignedUrl` | `(signedUrl, data, mimeType, onProgress?) → void` | Step 2 — upload to GCS directly. `onProgress` callback fires with 0–100. |
+ | `confirmUpload` | `(opts) → UploadResult` | Step 3 — register metadata. |
+ | `getBatchUploadUrls` | `(files: BatchUploadItem[]) → BatchUploadUrlResult` | Up to 50 files. Returns `{ files: [...] }` — only succeeded items included. |
+ | `batchConfirmUploads` | `(items) → { succeeded, failed }` | Confirm multiple uploads. Both arrays are returned. |
+ | `download` | `(path) → ArrayBuffer` | Download private file. |
+ | `batchDownload` | `(paths) → BatchDownloadResult` | Up to 20 files. Returns `{ succeeded, failed }` — content is base64. |
+ | `list` | `(opts?) → ListResult` | Returns `{ files, folders, hasMore, nextCursor }`. |
+ | `getMetadata` | `(path) → FileMetadata` | Size, MIME type, visibility, URLs. |
+ | `getSignedUrl` | `(path, expiresIn?) → SignedUrlResult` | Time-limited share link. Default: 3600s. |
+ | `setVisibility` | `(path, isPublic) → { path, isPublic, publicUrl, downloadUrl }` | Toggle public/private. |
+ | `createFolder` | `(path) → { path }` | Create a GCS prefix placeholder. |
+ | `deleteFile` | `(path) → void` | Delete a single file. |
+ | `deleteFolder` | `(path) → void` | Delete folder + all contents. |
+ | `move` | `(from, to) → { from, to }` | Move or rename. |
+ | `copy` | `(from, to) → { from, to }` | Copy to a new path. |
+ | `getStats` | `() → StorageStats` | Key-level totals: files, bytes, upload/download/delete counts. |
+ | `info` | `() → { ok, storageRoot }` | Healthcheck — no auth required. |
+ | `scope` | `(prefix) → ScopedStorage` | Create a path-prefixed sub-client with the full StorageManager API. |
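Conceptually, a `ScopedStorage` prepends its prefix to every path before delegating to the parent key. A hypothetical illustration of that path join — the helper name and normalization rules are assumptions, not the SDK's actual internals:

```typescript
// Join a scope prefix and a caller-supplied path into one storage path.
function scopedPath(prefix: string, path: string): string {
  const normalized = prefix.endsWith('/') ? prefix : prefix + '/';
  return normalized + path.replace(/^\/+/, ''); // avoid double slashes
}
```

So a client scoped to `user-uploads/` asked to upload to `2024/pic.png` would actually write `user-uploads/2024/pic.png`.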
 
  ---
 
- ## Contributing
+ ### `db.analytics(bucketKey)` → `AnalyticsClient`
 
- We love contributions! Please see [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines.
+ | Method | Signature | Description |
+ |---|---|---|
+ | `count` | `(opts?) → CountResult` | Total record count. |
+ | `distribution` | `(opts) → DistributionRow[]` | Per-value counts for a field. |
+ | `sum` | `(opts) → SumRow[]` | Sum of a numeric field, optional groupBy. |
+ | `timeSeries` | `(opts?) → TimeSeriesRow[]` | Record counts over time. |
+ | `fieldTimeSeries` | `(opts) → FieldTimeSeriesRow[]` | Field aggregation over time. |
+ | `topN` | `(opts) → TopNRow[]` | Most frequent values. `labelField` adds a human-readable label. |
+ | `stats` | `(opts) → FieldStats` | min / max / avg / sum / count / stddev for a numeric field. |
+ | `records` | `(opts?) → (T & RecordResult)[]` | Filtered raw records via BigQuery. |
+ | `multiMetric` | `(opts) → MultiMetricResult` | Multiple aggregations in one query. Each metric gets a named alias. |
+ | `storageStats` | `(opts?) → StorageStatsResult` | Record count + byte totals for the bucket. |
+ | `crossBucket` | `(opts) → CrossBucketRow[]` | Compare a metric across multiple buckets. |
+ | `query` | `(query: AnalyticsQuery) → AnalyticsResult<T>` | Raw query — for cases not covered by the typed methods above. |
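For intuition, `distribution` behaves like a per-value counter over a single field, except the server computes it in BigQuery instead of shipping rows to the client. A local, hypothetical equivalent:

```typescript
// Count how often each value of `field` occurs across `rows`.
function localDistribution<T>(rows: T[], field: keyof T): Map<T[keyof T], number> {
  const counts = new Map<T[keyof T], number>();
  for (const row of rows) {
    const value = row[field];
    counts.set(value, (counts.get(value) ?? 0) + 1);
  }
  return counts;
}
```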
+
+ ---
+
+ ## Contributing
 
  ```bash
- # Clone the repo
  git clone https://github.com/hydrousdb/hydrousdb-js.git
  cd hydrousdb-js
 
- # Install dependencies
- npm install
-
- # Run tests
- npm test
-
- # Build
- npm run build
-
- # Run tests in watch mode
- npm run test:watch
+ npm install          # install dependencies
+ npm run build        # compile ESM + CJS + type declarations
+ npm test             # run the full test suite (68 tests)
+ npm run test:watch   # watch mode
+ npm run lint         # TypeScript type check
  ```
 
  ---
@@ -1197,4 +1575,4 @@ MIT — see [LICENSE](./LICENSE) for details.
  Built with ❤️ by the <a href="https://hydrousdb.com">HydrousDB</a> team.<br>
  Questions? <a href="mailto:support@hydrousdb.com">support@hydrousdb.com</a> ·
  <a href="https://github.com/hydrousdb/hydrousdb-js/issues">Open an issue</a>
- </p>
+ </p>