hydrousdb 3.0.0 → 3.0.2
This diff shows the published contents of two package versions as they appear in their public registry, and is provided for informational purposes only.
- package/README.md +246 -172
- package/dist/index.cjs +157 -610
- package/dist/index.cjs.map +1 -1
- package/dist/index.d.cts +142 -602
- package/dist/index.d.ts +142 -602
- package/dist/index.js +157 -610
- package/dist/index.js.map +1 -1
- package/package.json +8 -8
package/README.md
CHANGED
@@ -1,7 +1,8 @@
 # HydrousDB JavaScript / TypeScript SDK
 
 <p align="center">
-  <strong>
+  <strong>Store, retrieve, and query massive JSON records in milliseconds — with auth, file storage, and analytics built in.</strong><br><br>
+  <a href="https://hydrousdb.com/dashboard"><strong>→ Create a free account and run your first query in 5 minutes</strong></a>
 </p>
 
 <p align="center">
@@ -16,12 +17,8 @@
 ## Table of Contents
 
 - [What is HydrousDB?](#what-is-hydrousdb)
+- [How It Works](#how-it-works)
 - [Quick Start (5 minutes)](#quick-start-5-minutes)
-  - [Step 1 — Create your account](#step-1--create-your-account)
-  - [Step 2 — Create your first bucket](#step-2--create-your-first-bucket)
-  - [Step 3 — Grab your Security Key](#step-3--grab-your-security-key)
-  - [Step 4 — Install the SDK](#step-4--install-the-sdk)
-  - [Step 5 — Your first record](#step-5--your-first-record)
 - [Records](#records)
   - [Create](#create-a-record)
   - [Read](#read-a-record)
@@ -48,11 +45,14 @@
 - [Analytics](#analytics)
   - [Count](#count)
   - [Distribution](#distribution)
+  - [Sum](#sum)
   - [Time Series](#time-series)
   - [Top N](#top-n)
   - [Field Stats](#field-stats)
   - [Multi-Metric Dashboard](#multi-metric-dashboard)
+  - [Filtered Records](#filtered-records-bigquery)
   - [Cross-Bucket Comparison](#cross-bucket-comparison)
+  - [Storage Stats](#storage-stats)
 - [TypeScript Support](#typescript-support)
 - [Error Handling](#error-handling)
 - [Security Best Practices](#security-best-practices)
@@ -64,19 +64,54 @@
 
 ## What is HydrousDB?
 
-
+Traditional databases start choking when your JSON records get large. Postgres hits row-size limits. Firestore charges per field read. MongoDB Atlas buckles under millions of 500 KB+ documents. They were designed for structured rows and small payloads — not the kind of deeply nested, real-world JSON that modern applications actually produce.
+
+HydrousDB is built specifically for that problem. It stores every record as a compressed GCS blob, retrieves any record in a single network call (no index lookups — the storage path is computed directly from the record ID), and runs analytics at BigQuery scale without ETL. The bigger and messier your JSON, the more it outperforms traditional databases.
+
+**Systems that benefit immediately:**
+
+| Domain | Example records | Why traditional DBs struggle |
+|---|---|---|
+| 🏥 **Hospital / EMR** | Full patient charts — vitals history, medication lists, clinical notes, imaging metadata | 850 KB+ per chart, millions of patients, strict audit trails |
+| 🎓 **School management** | Student portfolios — all grades, attendance, assessments, teacher notes across years | Deep nesting, bursty writes at term-end, long-term archival |
+| 🏭 **IoT / Industrial** | Sensor telemetry — time-stamped readings, device state, calibration metadata | Billions of records, append-heavy, rarely updated |
+| 🛒 **E-commerce** | Order records — line items, fulfilment events, return history, custom attributes | Highly variable shape, needs fast analytics across date ranges |
+| ⚖️ **Legal / compliance** | Case files — filings, correspondence, version history, linked documents | 1 MB+ records, immutable audit log, cross-case analytics |
+| 🎮 **Gaming** | Player save states — inventory, quest progress, achievement history, replay data | Large payloads, millions of concurrent users, burst writes |
+| 📡 **Logistics / tracking** | Shipment records — full event timeline, customs data, carrier metadata | Append-only events, heavy querying by date range and status |
+
+**What you get out of the box:**
 
 | Feature | What it does |
 |---|---|
-| **Records** | Schemaless JSON
+| **Records** | Schemaless JSON store. Billion-scale, gzip-compressed, date-encoded IDs for zero-lookup retrieval. Up to 1 MB per record. |
 | **Auth** | Full user authentication — signup, login, sessions, password reset, email verification, and admin controls. |
-| **Storage** | File uploads
-| **Analytics** | BigQuery-powered aggregations — counts, distributions, time series, top-N, multi-metric dashboards,
+| **Storage** | File uploads backed by Google Cloud Storage. Direct-to-GCS uploads, public/private visibility, signed share URLs. |
+| **Analytics** | BigQuery-powered aggregations — counts, distributions, time series, top-N, multi-metric dashboards, cross-bucket comparisons. Zero ETL. |
 
-
+---
+
+## How It Works
+
+Every HydrousDB record ID encodes its creation date as a prefix (e.g. `260203-rec_01JA2XYZ`). This means the full storage path to any record can be computed in memory — no index lookup, no pointer chase. Just math.
 
-
--
+```
+260203-rec_01JA2XYZ
+  ↓ parse date prefix
+YY=26 MM=02 DD=03
+  ↓ compute path in memory
+projects/pid/buckets/bk/records/26/02/03/rec_01JA.json.gz
+  ↓ fetch from GCS directly
+0 index reads ✓
+```
+
+Records are gzip-compressed on write (typically 60–80% size reduction). A full 850 KB hospital patient chart compresses to ~255 KB on disk — automatically, every time. Records age through storage tiers (Standard → Nearline → Coldline → Archive) as they get older, keeping historical data accessible without manual lifecycle management.
+
+This architecture means HydrousDB handles what breaks other databases:
+- **Huge records** — up to 1 MB per document, compressed
+- **Append-heavy workloads** — IoT telemetry, audit logs, event streams
+- **Date-range queries at scale** — the ID prefix enables efficient folder scans without a full table scan
+- **Long-term retention** — billions of records stay queryable via BigQuery without any migration
 
 ---
 
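Editor's note: the zero-lookup path computation that the new "How It Works" section diagrams can be sketched in a few lines of TypeScript. This is an illustration only — `recordPath` is not an SDK export, the exact layout is internal to HydrousDB, and the diagram above appears to truncate the record suffix (`rec_01JA` vs. `rec_01JA2XYZ`); the sketch keeps it whole.

```typescript
// Hypothetical helper, not part of the SDK — mirrors the diagram above.
// The "YYMMDD" prefix alone determines the folder; no index is consulted.
function recordPath(projectId: string, bucket: string, recordId: string): string {
  const dash = recordId.indexOf('-');
  const prefix = recordId.slice(0, dash);   // "260203" → YYMMDD
  const rec = recordId.slice(dash + 1);     // "rec_01JA2XYZ"
  const yy = prefix.slice(0, 2);
  const mm = prefix.slice(2, 4);
  const dd = prefix.slice(4, 6);
  return `projects/${projectId}/buckets/${bucket}/records/${yy}/${mm}/${dd}/${rec}.json.gz`;
}

console.log(recordPath('pid', 'bk', '260203-rec_01JA2XYZ'));
// → projects/pid/buckets/bk/records/26/02/03/rec_01JA2XYZ.json.gz
```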
@@ -95,13 +130,21 @@ Go to **[https://hydrousdb.com](https://hydrousdb.com)** and sign up for a free
 
 > 💡 **What is a bucket?** A bucket is a named collection of JSON records — similar to a table in SQL or a collection in MongoDB.
 
-### Step 3 — Grab your
+### Step 3 — Grab your API Keys
+
+HydrousDB uses three separate keys, each scoped to a service:
+
+| Key | Prefix | Used for |
+|---|---|---|
+| **Auth Key** | `hk_auth_…` | All `/auth/*` routes — signup, login, sessions |
+| **Bucket Security Key** | `hk_bucket_…` | Records and analytics |
+| **Storage Key(s)** | `ssk_…` | File storage — one key per storage bucket |
 
-1. In the dashboard
-2.
-3. Copy
+1. In the dashboard go to **Settings → API Keys**.
+2. Generate each key type you need.
+3. Copy them — you'll use all three when initialising the client.
 
-> ⚠️ **
+> ⚠️ **These keys are your credentials.** Treat them like passwords. Never commit them to Git. Use environment variables.
 
 ### Step 4 — Install the SDK
 
@@ -122,7 +165,11 @@ import { createClient } from 'hydrousdb';
 
 // Create the client once — reuse it everywhere
 const db = createClient({
-
+  authKey: process.env.HYDROUS_AUTH_KEY!,             // hk_auth_…
+  bucketSecurityKey: process.env.HYDROUS_BUCKET_KEY!, // hk_bucket_…
+  storageKeys: {
+    main: process.env.HYDROUS_STORAGE_MAIN!,          // ssk_…
+  },
 });
 
 // Write a record to your bucket
@@ -132,41 +179,43 @@ const post = await db.records('my-first-bucket').create({
   published: false,
 });
 
-console.log(post.id); // "
+console.log(post.id);        // "260601-rec_01JA2XYZ"
 console.log(post.createdAt); // 1717200000000
 
-// Read it back
+// Read it back — zero database reads, path computed from ID
 const fetched = await db.records('my-first-bucket').get(post.id);
 console.log(fetched.title); // "Hello, HydrousDB!"
 
 // Update it
-
+await db.records('my-first-bucket').patch(post.id, { published: true });
 
 // Delete it
 await db.records('my-first-bucket').delete(post.id);
 ```
 
-🎉 **That's it
+🎉 **That's it.** You're live.
 
 ---
 
 ## Records
 
 Records are JSON objects stored in named buckets. Every record automatically gets:
-- `id` — unique
+- `id` — date-prefixed unique identifier (e.g. `"260601-rec_01JA2XYZ"`) — encodes storage path
 - `createdAt` — Unix timestamp in milliseconds
 - `updatedAt` — Unix timestamp in milliseconds (updated on every write)
 
+Records are gzip-compressed before storage. A 850 KB EMR chart becomes ~255 KB on disk. You never manage this — it's always on.
+
 ### Create a Record
 
 ```typescript
 const products = db.records('products');
 
 const product = await products.create({
-  name:
-  price:
-  inStock:
-  tags:
+  name: 'Wireless Headphones',
+  price: 79.99,
+  inStock: true,
+  tags: ['audio', 'wireless'],
 });
 
 // product.id, product.createdAt, product.updatedAt are added automatically
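Editor's note: the 60–80% gzip figure quoted in the changed README is easy to sanity-check locally with Node's built-in `zlib`; highly repetitive telemetry-style JSON often compresses even further. Nothing here touches the SDK, and the record shape is made up for the demonstration.

```typescript
import { gzipSync } from 'node:zlib';

// Build a repetitive, telemetry-style JSON record (hypothetical shape).
const record = JSON.stringify({
  vitals: Array.from({ length: 5000 }, (_, i) => ({
    t: 1717200000000 + i * 60_000, hr: 72, spo2: 98, bp: '120/80',
  })),
});

// Compress the serialized record, as the service would on write.
const compressed = gzipSync(Buffer.from(record));
const saved = 100 * (1 - compressed.length / record.length);
console.log(`${record.length} B → ${compressed.length} B (${saved.toFixed(0)}% smaller)`);
```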
@@ -175,16 +224,16 @@ const product = await products.create({
 ### Read a Record
 
 ```typescript
-// Get by ID
+// Get by ID — the storage path is derived from the ID in memory, no index read
 const product = await products.get('rec_abc123');
 
-//
+// Throws HydrousError with code RECORD_NOT_FOUND if missing
 ```
 
 ### Update a Record
 
 ```typescript
-// Patch (merge) — only the
+// Patch (merge) — only the specified fields are changed
 const updated = await products.patch('rec_abc123', {
   price: 69.99,
   inStock: false,
@@ -214,8 +263,8 @@ const { records } = await products.query();
 // With filters
 const { records: affordableStock } = await products.query({
   filters: [
-    { field: 'inStock', op: '==', value: true
-    { field: 'price', op: '<', value: 100
+    { field: 'inStock', op: '==', value: true },
+    { field: 'price', op: '<', value: 100 },
   ],
 });
 
@@ -236,7 +285,7 @@ if (hasMore) {
   });
 }
 
-// Select
+// Select specific fields only
 const { records: lightRecords } = await products.query({
   fields: 'name,price,inStock',
 });
@@ -267,23 +316,26 @@ const { records: recent } = await products.query({
 ```typescript
 // Create multiple records at once
 const created = await products.batchCreate([
-  { name: 'Item A', price: 10.00, inStock: true
+  { name: 'Item A', price: 10.00, inStock: true },
   { name: 'Item B', price: 20.00, inStock: false },
-  { name: 'Item C', price: 30.00, inStock: true
+  { name: 'Item C', price: 30.00, inStock: true },
 ]);
-// → [{ id: 'rec_1', ... }, { id: 'rec_2', ... },
+// → [{ id: '260601-rec_1', ... }, { id: '260601-rec_2', ... }, ...]
 
 // Count records
-const total
+const total = await products.count();
 const inStock = await products.count([{ field: 'inStock', op: '==', value: true }]);
 
+// Get all records without filters (shortcut for query)
+const all = await products.getAll({ orderBy: 'price', order: 'asc' });
+
 // Delete multiple records
 const { deleted, failed } = await products.batchDelete(['rec_1', 'rec_2', 'rec_3']);
 ```
 
 ### Version History
 
-Every write to a record creates a new version
+Every write to a record creates a new version, so you can travel back in time.
 
 ```typescript
 // Get the full version history of a record
@@ -298,9 +350,7 @@ const restored = await products.restoreVersion('rec_abc123', history[2]!.version
 
 ## Authentication
 
-HydrousDB has a built-in user auth system. Your users live in a bucket you create
-(e.g. `"app-users"`). You get sessions, refresh tokens, password reset, email
-verification, and admin controls out of the box.
+HydrousDB has a built-in user auth system. Your users live in a bucket you create (e.g. `"app-users"`). You get sessions, refresh tokens, password reset, email verification, and admin controls out of the box.
 
 ```typescript
 const auth = db.auth('app-users');
@@ -311,15 +361,15 @@ const auth = db.auth('app-users');
 ```typescript
 const { user, session } = await auth.signup({
   email: 'alice@example.com',
-  password: 'hunter2',
+  password: 'hunter2',       // min 8 characters, validated server-side
   fullName: 'Alice Wonderland',
   // Any extra fields are stored on the user record:
   plan: 'pro',
   referral: 'friend123',
 });
 
-// user.id
-// session.sessionId
+// user.id → "usr_xxxxxxxxxx"
+// session.sessionId → persist this in your app
 // session.refreshToken → persist this for long-lived sessions
 ```
 
@@ -338,7 +388,7 @@ await auth.logout({ sessionId: session.sessionId });
 
 ### Session Management
 
-Sessions expire after **24 hours**. Use the refresh token to get a new session
+Sessions expire after **24 hours**. Use the refresh token to get a new session — refresh tokens last **30 days**.
 
 ```typescript
 // Refresh the session before it expires
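Editor's note: given the 24-hour session lifetime stated in this hunk, a client typically schedules its refresh call some margin before expiry. A minimal sketch follows; the `refreshDelayMs` helper and the 90% margin are assumptions for illustration, not SDK behaviour.

```typescript
const SESSION_TTL_MS = 24 * 60 * 60 * 1000; // documented 24 h session lifetime

// Hypothetical helper: how long to wait before refreshing a session
// issued at `issuedAt`, refreshing at 90% of the lifetime by default.
function refreshDelayMs(issuedAt: number, now: number, margin = 0.9): number {
  return Math.max(0, issuedAt + SESSION_TTL_MS * margin - now);
}

// Usage sketch (auth/refreshToken come from the signup/login examples):
// setTimeout(() => auth.refreshSession({ refreshToken }), refreshDelayMs(issuedAt, Date.now()));
```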
@@ -371,7 +421,7 @@ const updated = await auth.updateUser({
 // 1. User requests a reset (always returns success — prevents email enumeration)
 await auth.requestPasswordReset({ email: 'alice@example.com' });
 
-// 2. User receives an email with a reset token
+// 2. User receives an email with a reset token
 
 // 3. User submits the new password
 await auth.confirmPasswordReset({
@@ -452,11 +502,14 @@ const { deleted, failed } = await auth.bulkDeleteUsers({
 
 ## File Storage
 
-HydrousDB Storage is backed by Google Cloud Storage.
-
-
+HydrousDB Storage is backed by Google Cloud Storage. Storage keys (`ssk_…`) are scoped per bucket, so you can give different parts of your app different levels of access.
+
+```typescript
+// Pick a storage key by the name you gave it in storageKeys
+const files = db.storage('main');
+const avatars = db.storage('avatars');
+const documents = db.storage('documents');
 ```
-You never see or specify the owner prefix — the SDK handles it transparently.
 
 ### Simple Upload
 
@@ -464,10 +517,9 @@ For files up to **500 MB** when you don't need upload progress:
 
 ```typescript
 // Browser: upload from a file input
-const
-const file = fileInput.files[0];
+const file = document.querySelector('input[type="file"]').files[0];
 
-const result = await db.storage.upload(file, `uploads/${file.name}`, {
+const result = await db.storage('main').upload(file, `uploads/${file.name}`, {
   isPublic: true,   // publicly accessible without auth
   overwrite: false, // throw if the file already exists
 });
@@ -480,14 +532,14 @@ console.log(result.mimeType); // auto-detected from extension
 // Node.js: upload from a Buffer
 import { readFileSync } from 'fs';
 const buffer = readFileSync('./report.pdf');
-const result = await db.storage.upload(buffer, 'reports/q3.pdf');
+const result = await db.storage('documents').upload(buffer, 'reports/q3.pdf');
 console.log(result.downloadUrl); // requires X-Storage-Key to access
 ```
 
 ### Upload Raw JSON or Text
 
 ```typescript
-const result = await db.storage.uploadRaw(
+const result = await db.storage('main').uploadRaw(
   { theme: 'dark', language: 'en' },
   'user-config/alice.json',
   { isPublic: false },
@@ -496,20 +548,21 @@ const result = await db.storage.uploadRaw(
 
 ### Large File Upload (with progress)
 
-For files over 10 MB or when you need a progress bar. The file goes directly
-to GCS — your server never buffers it.
+For files over 10 MB or when you need a progress bar. The file goes directly to GCS — your server never buffers it.
 
 ```typescript
+const storage = db.storage('main');
+
 // Step 1: Get a signed upload URL
-const { uploadUrl, path } = await
+const { uploadUrl, path } = await storage.getUploadUrl({
   path: 'videos/product-demo.mp4',
   mimeType: 'video/mp4',
   size: file.size,
   isPublic: true,
 });
 
-// Step 2: Upload directly to GCS with progress
-await
+// Step 2: Upload directly to GCS with progress tracking
+await storage.uploadToSignedUrl(
   uploadUrl,
   file,
   'video/mp4',
@@ -519,8 +572,8 @@ await db.storage.uploadToSignedUrl(
   },
 );
 
-// Step 3: Confirm the upload (registers metadata)
-const result = await
+// Step 3: Confirm the upload (registers metadata server-side)
+const result = await storage.confirmUpload({
   path: path,
   mimeType: 'video/mp4',
   isPublic: true,
@@ -532,49 +585,53 @@ console.log(result.publicUrl); // ready to use
 ### Batch Upload
 
 ```typescript
-
-
+const storage = db.storage('main');
+
+// Get signed URLs for up to 50 files at once
+const { files } = await storage.getBatchUploadUrls([
   { path: 'gallery/photo1.jpg', mimeType: 'image/jpeg', size: 204800, isPublic: true },
   { path: 'gallery/photo2.jpg', mimeType: 'image/jpeg', size: 153600, isPublic: true },
 ]);
 
-// Upload each one
+// Upload each one directly to GCS
 for (const f of files) {
-  await
+  await storage.uploadToSignedUrl(f.uploadUrl, blobs[f.index], f.mimeType);
 }
 
 // Confirm all at once
-const results = await
-  files.map(f => ({ path: f.path, mimeType: f.mimeType, isPublic: true }))
+const results = await storage.batchConfirmUploads(
+  files.map(f => ({ path: f.path, mimeType: f.mimeType, isPublic: true })),
 );
 ```
 
 ### Download Files
 
 ```typescript
-// Private files require authentication —
-const buffer = await db.storage.download('reports/q3.pdf');
+// Private files require authentication — returns ArrayBuffer
+const buffer = await db.storage('documents').download('reports/q3.pdf');
 const blob = new Blob([buffer], { type: 'application/pdf' });
 
-//
-const url
-const a
-a.href
+// Trigger a browser download
+const url = URL.createObjectURL(blob);
+const a = document.createElement('a');
+a.href = url;
 a.download = 'q3.pdf';
 a.click();
 
-// Public files:
+// Public files: use publicUrl directly — no SDK needed
 // <img src={result.publicUrl} />
 ```
 
 ### List Files
 
 ```typescript
+const storage = db.storage('main');
+
 // List everything at the root
-const { files, folders } = await
+const { files, folders } = await storage.list();
 
 // List a specific folder
-const { files, folders, hasMore, nextCursor } = await
+const { files, folders, hasMore, nextCursor } = await storage.list({
   prefix: 'gallery/',
   limit: 50,
   recursive: false,
@@ -582,7 +639,7 @@ const { files, folders, hasMore, nextCursor } = await db.storage.list({
 
 // Paginate
 if (hasMore) {
-  const page2 = await
+  const page2 = await storage.list({ prefix: 'gallery/', cursor: nextCursor });
 }
 ```
 
@@ -602,11 +659,11 @@ Each file entry includes:
 
 ### Scoped Storage
 
-Working within a specific folder? Use `.scope()` to avoid
+Working within a specific folder? Use `.scope()` to avoid repeating the prefix on every call.
 
 ```typescript
 // All operations in the "user-avatars/" folder
-const avatars = db.storage.scope('user-avatars');
+const avatars = db.storage('avatars').scope('user-avatars');
 
 await avatars.upload(file, `${userId}.jpg`, { isPublic: true });
 // → uploads to "user-avatars/{userId}.jpg"
@@ -625,42 +682,45 @@ const thumbnails = avatars.scope('thumbnails');
 ### Share & Visibility
 
 ```typescript
-
-
+const storage = db.storage('documents');
+
+// Get file metadata (size, MIME type, URLs, visibility)
+const meta = await storage.getMetadata('reports/q3.pdf');
 
 // Generate a time-limited share link for a private file
 // (no auth key needed to use the link)
-const { signedUrl, expiresAt } = await
+const { signedUrl, expiresAt } = await storage.getSignedUrl(
   'reports/q3.pdf',
-  3600,
+  3600, // expires in 1 hour (default)
 );
-// Share signedUrl with whoever needs it
 
 // Toggle visibility after upload
-
-
+await storage.setVisibility('reports/q3.pdf', true);  // make public
+await storage.setVisibility('reports/q3.pdf', false); // make private
 ```
 
 ### File Operations
 
 ```typescript
+const storage = db.storage('main');
+
 // Rename / move a file
-await
+await storage.move('drafts/report.pdf', 'published/report-2025.pdf');
 
 // Copy a file
-await
+await storage.copy('templates/invoice.html', 'invoices/inv-001.html');
 
 // Create a folder
-await
+await storage.createFolder('archive/2025/');
 
 // Delete a file
-await
+await storage.deleteFile('temp/scratch.txt');
 
 // Delete a folder and all its contents
-await
+await storage.deleteFolder('temp/');
 
 // Get key-level stats
-const stats = await
+const stats = await storage.getStats();
 // → { totalFiles: 842, totalBytes: 1073741824, uploadCount: 1200, ... }
 ```
 
@@ -668,8 +728,7 @@ const stats = await db.storage.getStats();
 
 ## Analytics
 
-HydrousDB Analytics runs
-on millions of records. All queries accept an optional `dateRange` filter.
+HydrousDB Analytics runs queries directly against BigQuery on your GCS data — zero ETL, no data duplication, live results. Fast even on billions of records.
 
 ```typescript
 const analytics = db.analytics('orders');
@@ -710,7 +769,7 @@ const rows = await analytics.distribution({ field: 'status', limit: 10, order: '
 const rows = await analytics.sum({ field: 'amount' });
 // → [{ sum: 198432.50 }]
 
-// Revenue by country
+// Revenue grouped by country
 const byCountry = await analytics.sum({
   field: 'amount',
   groupBy: 'country',
@@ -721,11 +780,11 @@ const byCountry = await analytics.sum({
 
 ### Time Series
 
-Record counts over time — ideal for activity charts.
+Record counts over time — ideal for activity and growth charts.
 
 ```typescript
 const rows = await analytics.timeSeries({
-  granularity: 'day',
+  granularity: 'day', // 'hour' | 'day' | 'week' | 'month' | 'year'
   dateRange: {
     start: new Date('2025-01-01').getTime(),
     end: new Date('2025-06-01').getTime(),
@@ -734,12 +793,12 @@ const rows = await analytics.timeSeries({
 // → [{ date: '2025-01-01', count: 42 }, { date: '2025-01-02', count: 67 }, ...]
 ```
 
-Aggregate a field over time:
+Aggregate a numeric field over time:
 
 ```typescript
 const revenue = await analytics.fieldTimeSeries({
   field: 'amount',
-  aggregation: 'sum',
+  aggregation: 'sum', // 'sum' | 'avg' | 'min' | 'max' | 'count'
   granularity: 'week',
 });
 // → [{ date: '2025-W01', value: 12340.50 }, ...]
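Editor's note: conceptually, a `granularity: 'day'` query buckets record timestamps by calendar day before counting. The real aggregation runs inside BigQuery, but the grouping can be sketched client-side as a plain map; the sample timestamps below are illustrative.

```typescript
// Illustration only — the SDK performs this grouping in BigQuery, not locally.
function dayBucket(ts: number): string {
  return new Date(ts).toISOString().slice(0, 10); // e.g. "2025-01-01" (UTC)
}

// Two records on 2025-01-01 and one on 2025-01-02 (epoch milliseconds).
const timestamps = [1735689600000, 1735689660000, 1735776000000];
const counts = new Map<string, number>();
for (const ts of timestamps) {
  const key = dayBucket(ts);
  counts.set(key, (counts.get(key) ?? 0) + 1);
}
console.log([...counts]); // [['2025-01-01', 2], ['2025-01-02', 1]]
```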
@@ -752,7 +811,7 @@ Most common values for a field:
 ```typescript
 const topProducts = await analytics.topN({
   field: 'productId',
-  labelField: 'productName',
+  labelField: 'productName', // optional: include a human-readable label
   n: 5,
   order: 'desc',
 });
@@ -781,10 +840,10 @@ Calculate several aggregations in a single BigQuery query:
 ```typescript
 const dashboard = await analytics.multiMetric({
   metrics: [
-    { field: 'amount',
-    { field: 'amount',
-    { field: 'amount',
-    { field: 'userId',
+    { field: 'amount', name: 'totalRevenue', aggregation: 'sum' },
+    { field: 'amount', name: 'avgOrderValue', aggregation: 'avg' },
+    { field: 'amount', name: 'maxOrder', aggregation: 'max' },
+    { field: 'userId', name: 'totalOrders', aggregation: 'count' },
   ],
   dateRange: { start: new Date('2025-01-01').getTime(), end: Date.now() },
 });
@@ -798,7 +857,7 @@ const dashboard = await analytics.multiMetric({
 
 ### Filtered Records (BigQuery)
 
-Query raw records
+Query raw records at full BigQuery speed:
 
 ```typescript
 const records = await analytics.records({
@@ -830,7 +889,7 @@ const comparison = await analytics.crossBucket({
 // ]
 ```
 
-> ⚠️ Your Security Key must have read access to **all** buckets
+> ⚠️ Your Bucket Security Key must have read access to **all** listed buckets.
 
 ### Storage Stats
 
@@ -843,13 +902,12 @@ const stats = await analytics.storageStats();
 
 ## TypeScript Support
 
-The SDK is written in TypeScript and ships with full type definitions. Use generic
-type parameters to describe the shape of your records and get autocomplete throughout.
+The SDK is written in TypeScript and ships with full type definitions. Use generic type parameters to get full autocomplete and compile-time safety throughout your app.
 
 ```typescript
 import { createClient } from 'hydrousdb';
 
-// Define your data models
+// Define your data models as plain interfaces — no index signature needed
 interface Order {
   customerId: string;
   items: Array<{ productId: string; qty: number; price: number }>;
@@ -865,7 +923,11 @@ interface Customer {
   credits: number;
 }
 
-const db = createClient({
+const db = createClient({
+  authKey: process.env.HYDROUS_AUTH_KEY!,
+  bucketSecurityKey: process.env.HYDROUS_BUCKET_KEY!,
+  storageKeys: { main: process.env.HYDROUS_STORAGE_MAIN! },
+});
 
 // Fully typed clients
 const orders = db.records<Order>('orders');
@@ -881,7 +943,7 @@ const order = await orders.create({
 });
 
 // TypeScript catches mistakes at compile time:
-// order.nonExistentField
+// order.nonExistentField // ← TS error ✓
 // order.status = 'invalid' // ← TS error ✓
 ```
 
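To see how the generic parameter in `db.records<Order>('orders')` propagates, here is a tiny in-memory stand-in with the same generic shape. Purely illustrative: this is not the SDK, just a model of how `T` flows through a typed records client:

```typescript
// Illustrative stand-in, not the hydrousdb SDK. Models how a generic
// records client carries the record type T through create/get.
class InMemoryRecords<T> {
  private store = new Map<string, T>();
  private nextId = 1;

  create(data: T): { id: string; data: T } {
    const id = String(this.nextId++);
    this.store.set(id, data);
    return { id, data };
  }

  get(id: string): T | undefined {
    return this.store.get(id);
  }
}

interface Note { title: string; pinned: boolean }
const notes = new InMemoryRecords<Note>();
const created = notes.create({ title: 'hello', pinned: false });
// created.data.title is typed as string; a typo like created.data.titel fails to compile.
```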
@@ -918,19 +980,19 @@ All errors thrown by the SDK extend `HydrousError`, which carries:
 import { HydrousError, NetworkError, AuthError } from 'hydrousdb';
 
 try {
-  const user = await auth.login({ email: 'a@b.com', password: 'wrong' });
+  const { user } = await auth.login({ email: 'a@b.com', password: 'wrong' });
 } catch (err) {
   if (err instanceof AuthError) {
     // Authentication-specific error
     console.error(`Auth failed: ${err.code}`);
-    // err.code might be: INVALID_CREDENTIALS, ACCOUNT_LOCKED, EMAIL_NOT_VERIFIED
+    // err.code might be: INVALID_CREDENTIALS, ACCOUNT_LOCKED, EMAIL_NOT_VERIFIED
   } else if (err instanceof NetworkError) {
     // No internet / server unreachable
     console.error('Cannot reach HydrousDB — check your internet connection');
   } else if (err instanceof HydrousError) {
     // Any other API error
     console.error(`API error [${err.code}]: ${err.message}`);
-    console.error(`Request ID: ${err.requestId}`); // include
+    console.error(`Request ID: ${err.requestId}`); // include in support tickets
   }
 }
 ```
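The `NetworkError` branch above often represents a transient failure, and a retry wrapper is a common companion to that `try/catch`. A synchronous sketch of the pattern; the `withRetry` helper and the attempt count are illustrative, not part of the SDK, and real code would wrap the async SDK calls instead:

```typescript
// Illustrative retry pattern, not part of the hydrousdb SDK.
// Calls fn up to `attempts` times, rethrowing the last error if all fail.
function withRetry<T>(fn: () => T, attempts = 3): T {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Example: fails twice, then succeeds on the third attempt.
let calls = 0;
const result = withRetry(() => {
  calls++;
  if (calls < 3) throw new Error('transient');
  return 'ok';
});
```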
@@ -943,8 +1005,8 @@ try {
 | `INVALID_CREDENTIALS` | Wrong email or password |
 | `ACCOUNT_LOCKED` | The account is temporarily locked |
 | `INVALID_SESSION` | Session expired or revoked — re-authenticate |
-| `MISSING_API_KEY` |
-| `INVALID_SECURITY_KEY` |
+| `MISSING_API_KEY` | Key not provided |
+| `INVALID_SECURITY_KEY` | Key is wrong or revoked |
 | `FORBIDDEN` | Insufficient permissions |
 | `FILE_EXISTS` | File already exists at path (use `overwrite: true`) |
 | `LIMIT_EXCEEDED` | Storage quota or file size limit reached |
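Apps that surface these codes to end users often keep a small lookup from code to friendly message. A minimal sketch; the `friendlyMessage` helper and the message strings are illustrative, not part of the SDK:

```typescript
// Illustrative lookup, not part of the hydrousdb SDK. Maps error codes
// from the table above to user-facing messages, with a generic fallback.
const ERROR_MESSAGES: Record<string, string> = {
  INVALID_CREDENTIALS: 'Wrong email or password.',
  ACCOUNT_LOCKED: 'This account is temporarily locked.',
  INVALID_SESSION: 'Your session has expired. Please sign in again.',
  FORBIDDEN: 'You do not have permission to do that.',
  LIMIT_EXCEEDED: 'Storage quota or file size limit reached.',
};

function friendlyMessage(code: string): string {
  return ERROR_MESSAGES[code] ?? 'Something went wrong. Please try again.';
}
```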
@@ -956,31 +1018,32 @@ try {
 
 ## Security Best Practices
 
-1. **Never hard-code your
+1. **Never hard-code your keys.** Use environment variables:
 
 ```bash
 # .env (add to .gitignore)
-
+HYDROUS_AUTH_KEY=hk_auth_xxxxxxxxxxxxxxxxxxxx
+HYDROUS_BUCKET_KEY=hk_bucket_xxxxxxxxxxxxxxxxxxxx
+HYDROUS_STORAGE_MAIN=ssk_xxxxxxxxxxxxxxxxxxxx
 ```
 
 ```typescript
-const db = createClient({
+const db = createClient({
+  authKey: process.env.HYDROUS_AUTH_KEY!,
+  bucketSecurityKey: process.env.HYDROUS_BUCKET_KEY!,
+  storageKeys: { main: process.env.HYDROUS_STORAGE_MAIN! },
+});
 ```
 
-2. **Never expose
-route requests through your own backend, or use per-user session tokens.
+2. **Never expose keys to browsers.** For browser-side apps, route requests through your own backend, or use per-user session tokens from `auth.login()`.
 
-3. **
-The SDK enforces this automatically, so keys never appear in server logs or
-browser history.
+3. **Keys are sent via request headers — never in URLs.** The SDK enforces this automatically, so keys never appear in server logs or browser history.
 
 4. **Rotate keys periodically.** Revoke old keys from the dashboard after rotation.
 
-5. **Use scoped storage** (`db.storage.scope(
-feature, reducing the blast radius of any misconfiguration.
+5. **Use scoped storage** (`db.storage('keyName').scope('prefix/')`) to isolate access by user or feature, reducing the blast radius of any misconfiguration.
 
-6. **Use `isPublic: false` (the default) for sensitive files.** Use signed URLs
-for time-limited sharing instead of making files permanently public.
+6. **Use `isPublic: false` (the default) for sensitive files.** Use signed URLs for time-limited sharing instead of making files permanently public.
 
 ---
 
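A companion to practice 1 above: reading keys through a checked lookup fails fast with a clear error instead of passing `undefined` to `createClient` behind a non-null assertion. A minimal sketch; the `requireEnv` helper is illustrative, not part of the SDK:

```typescript
// Illustrative fail-fast loader, not part of the hydrousdb SDK.
// Takes the env record as a parameter so it works anywhere;
// in Node you would call it as requireEnv(process.env, 'HYDROUS_AUTH_KEY').
function requireEnv(env: Record<string, string | undefined>, name: string): string {
  const value = env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}
```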
@@ -988,18 +1051,24 @@ try {
 
 ### `createClient(config)`
 
-Creates and returns a `HydrousClient` instance. Call this once and reuse
+Creates and returns a `HydrousClient` instance. Call this once and reuse it everywhere.
 
 ```typescript
 const db = createClient({
-
-
+  authKey: 'hk_auth_…',              // Required — auth routes
+  bucketSecurityKey: 'hk_bucket_…',  // Required — records & analytics
+  storageKeys: {                     // Required — at least one entry
+    main: 'ssk_main_…',
+    avatars: 'ssk_avatars_…',
+    documents: 'ssk_docs_…',
+  },
+  baseUrl: 'https://...',            // Optional — defaults to official endpoint
 });
 ```
 
 ### `db.records<T>(bucketKey)`
 
-Returns a `RecordsClient<T>` for the named bucket.
+Returns a `RecordsClient<T>` for the named bucket. Uses `bucketSecurityKey` automatically.
 
 | Method | Description |
 |---|---|
@@ -1014,78 +1083,83 @@ Returns a `RecordsClient<T>` for the named bucket.
 | `batchCreate(items)` | Create multiple records |
 | `batchDelete(ids)` | Delete multiple records |
 | `getHistory(id)` | Get version history |
-| `restoreVersion(id, version)` | Restore to a version |
+| `restoreVersion(id, version)` | Restore to a previous version |
 
 ### `db.auth(bucketKey)`
 
-Returns an `AuthClient` for the named user bucket.
+Returns an `AuthClient` for the named user bucket. Uses `authKey` automatically.
 
 | Method | Description |
 |---|---|
 | `signup(opts)` | Register a new user |
-| `login(opts)` | Authenticate and create session |
-| `logout({ sessionId })` | Invalidate session |
+| `login(opts)` | Authenticate and create a session |
+| `logout({ sessionId })` | Invalidate a session |
 | `refreshSession({ refreshToken })` | Extend a session |
 | `getUser({ userId })` | Get user by ID |
 | `updateUser(opts)` | Update user fields |
-| `deleteUser(opts)` | Soft-delete a user |
-| `hardDeleteUser(opts)` | Permanently delete a user (admin) |
-| `listUsers(opts)` | List all users (admin) |
-| `bulkDeleteUsers(opts)` | Bulk delete (admin) |
-| `lockAccount(opts)` | Lock a user (admin) |
-| `unlockAccount(opts)` | Unlock a user (admin) |
 | `changePassword(opts)` | Change password (authenticated) |
 | `requestPasswordReset(opts)` | Send reset email |
-| `confirmPasswordReset(opts)` | Apply new password |
+| `confirmPasswordReset(opts)` | Apply new password from reset token |
 | `requestEmailVerification(opts)` | Send verification email |
 | `confirmEmailVerification(opts)` | Verify email with token |
+| `listUsers(opts)` | List all users (admin) |
+| `lockAccount(opts)` | Lock a user account (admin) |
+| `unlockAccount(opts)` | Unlock a user account (admin) |
+| `deleteUser(opts)` | Soft-delete a user (admin) |
+| `hardDeleteUser(opts)` | Permanently delete a user (admin) |
+| `bulkDeleteUsers(opts)` | Bulk delete users (admin) |
 
 ### `db.analytics(bucketKey)`
 
-Returns an `AnalyticsClient` for the named bucket.
+Returns an `AnalyticsClient` for the named bucket. Uses `bucketSecurityKey` automatically.
 
 | Method | Description |
 |---|---|
 | `count(opts?)` | Count records |
 | `distribution(opts)` | Value distribution for a field |
-| `sum(opts)` | Sum
-| `timeSeries(opts?)` |
+| `sum(opts)` | Sum with optional groupBy |
+| `timeSeries(opts?)` | Record counts over time |
 | `fieldTimeSeries(opts)` | Field aggregation over time |
 | `topN(opts)` | Top N values for a field |
-| `stats(opts)` | Statistical summary for a field |
-| `records(opts?)` | Filtered raw records
+| `stats(opts)` | Statistical summary for a numeric field |
+| `records(opts?)` | Filtered raw records via BigQuery |
 | `multiMetric(opts)` | Multiple aggregations in one query |
 | `storageStats(opts?)` | Bucket storage statistics |
-| `crossBucket(opts)` | Compare across multiple buckets |
+| `crossBucket(opts)` | Compare a metric across multiple buckets |
 | `query(query)` | Raw analytics query |
 
-### `db.storage`
+### `db.storage(keyName)`
 
-
+Returns a `StorageManager` for the named storage key. The name must match a key you defined in `storageKeys` when calling `createClient`. Uses the corresponding `ssk_…` key via `X-Storage-Key`.
+
+```typescript
+const storage = db.storage('avatars');
+const scoped = db.storage('avatars').scope('user-uploads/');
+```
 
 | Method | Description |
 |---|---|
-| `upload(data, path, opts?)` | Simple server-buffered upload |
-| `uploadRaw(data, path, opts?)` | Upload JSON
-| `getUploadUrl(opts)` | Step 1: Get signed GCS upload URL |
-| `uploadToSignedUrl(url, data, mime, onProgress?)` | Step 2: Upload to GCS
+| `upload(data, path, opts?)` | Simple server-buffered upload (up to 500 MB) |
+| `uploadRaw(data, path, opts?)` | Upload JSON or text data |
+| `getUploadUrl(opts)` | Step 1: Get a signed GCS upload URL |
+| `uploadToSignedUrl(url, data, mime, onProgress?)` | Step 2: Upload directly to GCS |
 | `confirmUpload(opts)` | Step 3: Register upload metadata |
-| `getBatchUploadUrls(files)` |
-| `batchConfirmUploads(items)` | Confirm
-| `download(path)` | Download private file |
-| `batchDownload(paths)` |
+| `getBatchUploadUrls(files)` | Get signed URLs for up to 50 files at once |
+| `batchConfirmUploads(items)` | Confirm multiple uploads at once |
+| `download(path)` | Download a private file as ArrayBuffer |
+| `batchDownload(paths)` | Download multiple files |
 | `list(opts?)` | List files and folders |
-| `getMetadata(path)` |
-| `getSignedUrl(path, expiresIn?)` |
-| `setVisibility(path, isPublic)` | Toggle public/private |
+| `getMetadata(path)` | Get file metadata |
+| `getSignedUrl(path, expiresIn?)` | Generate a time-limited share URL |
+| `setVisibility(path, isPublic)` | Toggle public / private |
 | `createFolder(path)` | Create a folder |
 | `deleteFile(path)` | Delete a file |
-| `deleteFolder(path)` | Delete a folder
-| `move(from, to)` | Move
-| `copy(from, to)` | Copy |
-| `getStats()` | Key-level
-| `info()` |
-| `scope(prefix)` | Get a ScopedStorage instance |
+| `deleteFolder(path)` | Delete a folder and all its contents |
+| `move(from, to)` | Move or rename a file |
+| `copy(from, to)` | Copy a file |
+| `getStats()` | Key-level storage statistics |
+| `info()` | Ping the storage service (no auth required) |
+| `scope(prefix)` | Get a `ScopedStorage` instance pre-fixed to a folder |
 
 ---
 
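Regarding the `scope(prefix)` entry in the storage table above: a scoped client resolves every path under the given folder. The path joining this implies can be sketched as follows; the `joinScope` helper is illustrative and is not the SDK's actual implementation:

```typescript
// Illustrative model of how a scoped storage client might prefix paths.
// Not the hydrousdb SDK's implementation.
function joinScope(prefix: string, path: string): string {
  // Ensure exactly one slash between the scope prefix and the path.
  const normalized = prefix.endsWith('/') ? prefix : prefix + '/';
  return normalized + path.replace(/^\/+/, '');
}
```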
@@ -1121,6 +1195,6 @@ MIT — see [LICENSE](./LICENSE) for details.
 
 <p align="center">
   Built with ❤️ by the <a href="https://hydrousdb.com">HydrousDB</a> team.<br>
-  Questions? <a href="mailto:support@hydrousdb.com">support@hydrousdb.com</a> ·
+  Questions? <a href="mailto:support@hydrousdb.com">support@hydrousdb.com</a> ·
 <a href="https://github.com/hydrousdb/hydrousdb-js/issues">Open an issue</a>
-</p>
+</p>