s3db.js 12.3.0 → 13.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,917 @@
# MemoryClient - Ultra-Fast In-Memory Client

Pure in-memory S3-compatible client for blazing-fast tests and development. **100-1000x faster** than LocalStack, with no dependency on Docker, LocalStack, or AWS.

## Features

✅ **100-1000x Faster** than LocalStack - all operations run in memory
✅ **Zero Dependencies** - no Docker, MinIO, or AWS required
✅ **Full Compatibility** - drop-in replacement for the real S3 Client
✅ **Snapshot/Restore** - perfect for test isolation
✅ **Optional Persistence** - save/load state to disk
✅ **BackupPlugin Compatible** - export/import in JSONL format
✅ **Configurable Limits** - simulate S3 limits (2KB metadata, etc.)
✅ **Complete AWS SDK Support** - all commands implemented

## Quick Start

```javascript
import { S3db, MemoryClient } from 's3db.js';

// Create a database backed by the memory client
const db = new S3db({
  client: new MemoryClient()
});

await db.connect();

// Use it normally - everything works in memory!
const users = await db.createResource({
  name: 'users',
  attributes: {
    id: 'string|required',
    name: 'string|required',
    email: 'string|required|email'
  }
});

await users.insert({ id: 'u1', name: 'Alice', email: 'alice@test.com' });
const user = await users.get('u1');
```

## Usage

### Basic Usage

```javascript
import { S3db, MemoryClient } from 's3db.js';

const client = new MemoryClient({
  bucket: 'my-bucket',
  keyPrefix: 'databases/app',
  verbose: true
});

const db = new S3db({ client });
await db.connect();
```

### Test Helper

Use the built-in test helper for ultra-fast tests:

```javascript
import { createMemoryDatabaseForTest } from '../config.js';

describe('My Tests', () => {
  let database;

  beforeEach(async () => {
    // Creates an isolated in-memory database
    database = createMemoryDatabaseForTest('my-test');
    await database.connect();
  });

  afterEach(async () => {
    await database.disconnect();
  });

  it('should work blazingly fast', async () => {
    const users = await database.createResource({
      name: 'users',
      attributes: { id: 'string', name: 'string' }
    });

    await users.insert({ id: 'u1', name: 'Alice' });
    const user = await users.get('u1');

    expect(user.name).toBe('Alice');
  });
});
```

### Snapshot/Restore

Perfect for test isolation and state management:

```javascript
import { S3db, MemoryClient } from 's3db.js';

const client = new MemoryClient();
const db = new S3db({ client });
await db.connect();

// ... create resources and insert data ...

// Save the current state
const snapshot = client.snapshot();

// Modify data
await users.update('u1', { name: 'Modified' });

// Restore the original state
client.restore(snapshot);

// Data is back to its original state!
```
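Under the hood, a snapshot is conceptually just a deep copy of the in-memory object map. This self-contained sketch (plain JavaScript, not the library's actual implementation) shows why `restore()` brings data back even after later mutations:

```javascript
// Minimal in-memory store with snapshot/restore semantics.
class ToyStore {
  constructor() {
    this.objects = new Map();
  }
  put(key, value) {
    this.objects.set(key, value);
  }
  get(key) {
    return this.objects.get(key);
  }
  // Deep-copy the current state so later mutations don't leak into it.
  snapshot() {
    return structuredClone(Array.from(this.objects.entries()));
  }
  restore(snapshot) {
    this.objects = new Map(structuredClone(snapshot));
  }
}

const store = new ToyStore();
store.put('u1', { name: 'Alice' });

const snap = store.snapshot();
store.put('u1', { name: 'Modified' });

store.restore(snap);
console.log(store.get('u1').name); // 'Alice'
```

The deep copy (here via `structuredClone`) is the important part: a shallow copy would let in-place mutations of stored objects corrupt old snapshots.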

### Persistence

Optionally persist the in-memory state to disk for debugging:

```javascript
const client = new MemoryClient({
  persistPath: '/tmp/db-snapshot.json',
  autoPersist: true // Auto-save on changes
});

// Manual save/load
await client.saveToDisk();
await client.loadFromDisk();

// Or use a custom path
await client.saveToDisk('/tmp/my-snapshot.json');
await client.loadFromDisk('/tmp/my-snapshot.json');
```

### BackupPlugin Compatibility

MemoryClient supports **two ways** to back up and restore data in BackupPlugin format:

#### Method 1: Direct MemoryClient Methods (Recommended for MemoryClient)

Use `exportBackup()` and `importBackup()` directly on the MemoryClient:

```javascript
import { S3db, MemoryClient } from 's3db.js';

const client = new MemoryClient();
const db = new S3db({ client });
await db.connect();

// Create some data
const users = await db.createResource({
  name: 'users',
  attributes: { id: 'string', name: 'string', email: 'string' }
});

await users.insert({ id: 'u1', name: 'Alice', email: 'alice@test.com' });
await users.insert({ id: 'u2', name: 'Bob', email: 'bob@test.com' });

// ✅ METHOD 1: Export using MemoryClient directly
await client.exportBackup('/tmp/backup', {
  compress: true,      // Use gzip compression (.jsonl.gz)
  database: db,        // Include resource schemas in s3db.json
  resources: ['users'] // Optional: filter specific resources
});

// Result:
// /tmp/backup/
// ├── s3db.json      - Metadata with schemas and stats
// └── users.jsonl.gz - Compressed data (one JSON document per line)

// ✅ METHOD 1: Import using MemoryClient directly
const importStats = await client.importBackup('/tmp/backup', {
  clear: true,         // Clear existing data first
  database: db,        // Recreate resources from schemas
  resources: ['users'] // Optional: import specific resources only
});

console.log(importStats);
// {
//   resourcesImported: 1,
//   recordsImported: 2,
//   errors: []
// }
```

#### Method 2: Using BackupPlugin (Works with MemoryClient AND S3Client)

Use the BackupPlugin for advanced features like scheduling, rotation, and multi-driver support:

```javascript
import { S3db, MemoryClient, BackupPlugin } from 's3db.js';

const client = new MemoryClient();
const db = new S3db({
  client,
  plugins: [
    new BackupPlugin({
      driver: 'filesystem',
      backupDir: '/tmp/backups',
      compress: true,
      schedule: '0 2 * * *', // Daily at 2 AM
      retention: 7           // Keep 7 backups
    })
  ]
});

await db.connect();

// Create resources and data...
const users = await db.createResource({
  name: 'users',
  attributes: { id: 'string', name: 'string', email: 'string' }
});

await users.insert({ id: 'u1', name: 'Alice', email: 'alice@test.com' });

// ✅ METHOD 2: Backup using BackupPlugin
const backupPath = await db.plugins.backup.backup();
console.log(`Backup created: ${backupPath}`);
// Output: /tmp/backups/backup-2025-10-25T14-30-00-abc123/

// ✅ METHOD 2: Restore using BackupPlugin
await db.plugins.backup.restore(backupPath);
console.log('Database restored!');

// List all backups
const backups = await db.plugins.backup.listBackups();
console.log('Available backups:', backups);
```

**Comparison:**

| Feature | MemoryClient Direct | BackupPlugin |
|---------|---------------------|--------------|
| **Export/Import** | ✅ Manual control | ✅ Manual + scheduled |
| **Compression** | ✅ Gzip | ✅ Gzip |
| **Resource Filtering** | ✅ Yes | ✅ Yes |
| **Scheduling** | ❌ No | ✅ Cron support |
| **Retention/Rotation** | ❌ No | ✅ Auto-cleanup |
| **Multi-Driver** | ❌ Filesystem only | ✅ S3 + filesystem |
| **Works with S3Client** | ❌ No | ✅ Yes |
| **Simplicity** | ✅ Simpler API | ⚠️ More features |

**When to use each:**

- **Use the direct MemoryClient methods** when you:
  - Work exclusively with MemoryClient
  - Need simple one-time exports/imports
  - Are in testing or development scenarios
  - Want minimal configuration

- **Use BackupPlugin** when you:
  - Need scheduled backups
  - Want automatic retention/rotation
  - Need to back up to S3 or multiple locations
  - Work with the real S3Client (production)
  - Need a consistent backup strategy across environments

**BackupPlugin Format Details:**

Both methods create the **same directory structure**, ensuring full compatibility:

```
/backup-directory/
├── s3db.json              # Metadata file
│   {
│     "version": "1.0",
│     "timestamp": "2025-10-25T...",
│     "bucket": "my-bucket",
│     "keyPrefix": "",
│     "compressed": true,
│     "resources": {
│       "users": {
│         "schema": {
│           "attributes": {...},
│           "partitions": {...},
│           "behavior": "body-overflow"
│         },
│         "stats": {
│           "recordCount": 2,
│           "fileSize": 1024
│         }
│       }
│     },
│     "totalRecords": 2,
│     "totalSize": 1024
│   }

└── users.jsonl.gz         # Compressed JSON Lines (newline-delimited)
    {"id":"u1","name":"Alice","email":"alice@test.com"}
    {"id":"u2","name":"Bob","email":"bob@test.com"}
```

**Cross-Compatibility Examples:**

```javascript
// Example 1: Export with MemoryClient, import with BackupPlugin
const memClient = new MemoryClient();
const memDb = new S3db({ client: memClient });
await memDb.connect();

// Create data in memory
const users = await memDb.createResource({
  name: 'users',
  attributes: { id: 'string', name: 'string' }
});
await users.insert({ id: 'u1', name: 'Alice' });

// Export using MemoryClient
await memClient.exportBackup('/tmp/backup');

// Import using BackupPlugin on a different database
const s3Db = new S3db({
  connectionString: 's3://...',
  plugins: [new BackupPlugin({ driver: 'filesystem' })]
});
await s3Db.connect();
await s3Db.plugins.backup.restore('/tmp/backup');
// ✅ Data is now in S3!

// Example 2: Back up S3 with BackupPlugin, test with MemoryClient
const prodDb = new S3db({
  connectionString: 's3://prod-bucket',
  plugins: [new BackupPlugin({ driver: 'filesystem', backupDir: '/backups' })]
});
await prodDb.connect();

// Create a production backup
const backupPath = await prodDb.plugins.backup.backup();

// Load the backup into a MemoryClient for local testing
const testClient = new MemoryClient();
const testDb = new S3db({ client: testClient });
await testDb.connect();

await testClient.importBackup(backupPath, { database: testDb });
// ✅ Production data is now in memory for testing!
```

**Use Cases:**

- **Migrate data** between MemoryClient and real S3
- **Share test fixtures** between projects and developers
- **Debug production data** locally without AWS access
- **Create portable snapshots** for CI/CD pipelines
- **Test with real data** in the fast in-memory client
- **Disaster recovery** with automated backups
- **Analyze data** with BigQuery/Athena/Spark (JSONL format)

### Enforce S3 Limits

Validate that your code respects S3 limits:

```javascript
const client = new MemoryClient({
  enforceLimits: true,
  metadataLimit: 2048,                   // 2KB, like S3
  maxObjectSize: 5 * 1024 * 1024 * 1024  // 5GB
});

// This will throw if metadata > 2KB
await resource.insert({ id: '1', largeMetadata: '...' });
// Error: Metadata size (3000 bytes) exceeds limit of 2048 bytes
```
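S3 counts the UTF-8 bytes of all user-defined metadata keys and values against the limit, so multi-byte characters cost more than their character count suggests. A sketch of how such a check can be computed (illustrative only, not MemoryClient's actual code):

```javascript
// Sum the UTF-8 byte length of every metadata key and value.
function metadataSize(metadata) {
  let bytes = 0;
  for (const [key, value] of Object.entries(metadata)) {
    bytes += Buffer.byteLength(key, 'utf8') + Buffer.byteLength(String(value), 'utf8');
  }
  return bytes;
}

// Throw in the same spirit as enforceLimits when the total exceeds the limit.
function assertWithinLimit(metadata, limit = 2048) {
  const size = metadataSize(metadata);
  if (size > limit) {
    throw new Error(`Metadata size (${size} bytes) exceeds limit of ${limit} bytes`);
  }
  return size;
}

console.log(metadataSize({ role: 'admin' })); // 9: 'role' (4 bytes) + 'admin' (5 bytes)
```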

### Storage Statistics

Get insights into memory usage:

```javascript
const stats = client.getStats();

console.log(stats);
// {
//   objectCount: 150,
//   totalSize: 1024000,
//   totalSizeFormatted: '1000 KB',
//   keys: ['key1', 'key2', ...],
//   bucket: 'my-bucket'
// }
```

## Configuration Options

```javascript
new MemoryClient({
  // Basic configuration
  bucket: 'my-bucket',          // Bucket name (default: 's3db')
  keyPrefix: 'databases/app',   // Key prefix (default: '')
  region: 'us-east-1',          // Region (default: 'us-east-1')
  verbose: true,                // Log operations (default: false)

  // Performance
  parallelism: 10,              // Parallel operations (default: 10)

  // Limits enforcement
  enforceLimits: true,          // Enforce S3 limits (default: false)
  metadataLimit: 2048,          // Metadata limit in bytes (default: 2048)
  maxObjectSize: 5 * 1024 ** 3, // Max object size (default: 5GB)

  // Persistence
  persistPath: '/tmp/db.json',  // Snapshot file path (default: none)
  autoPersist: true             // Auto-save on changes (default: false)
})
```

## API Reference

### Client Methods

#### Core Operations

- `putObject({ key, metadata, body, ... })` - Store an object
- `getObject(key)` - Retrieve an object
- `headObject(key)` - Get metadata only
- `copyObject({ from, to, ... })` - Copy an object
- `deleteObject(key)` - Delete an object
- `deleteObjects(keys)` - Batch delete
- `listObjects({ prefix, ... })` - List objects
- `exists(key)` - Check whether a key exists
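To make the listing semantics concrete, here is a toy Map-backed subset of this interface with S3-style prefix filtering in `listObjects` (a sketch, not MemoryClient's implementation):

```javascript
// Toy subset of the core interface over a Map.
class ToyClient {
  constructor() {
    this.objects = new Map();
  }
  putObject({ key, body, metadata = {} }) {
    this.objects.set(key, { body, metadata });
  }
  getObject(key) {
    if (!this.objects.has(key)) throw new Error(`NoSuchKey: ${key}`);
    return this.objects.get(key);
  }
  exists(key) {
    return this.objects.has(key);
  }
  deleteObject(key) {
    this.objects.delete(key);
  }
  // S3-style listing: return the keys that start with the given prefix.
  listObjects({ prefix = '' } = {}) {
    return Array.from(this.objects.keys())
      .filter((k) => k.startsWith(prefix))
      .sort();
  }
}

const toy = new ToyClient();
toy.putObject({ key: 'users/u1', body: '{"name":"Alice"}' });
toy.putObject({ key: 'users/u2', body: '{"name":"Bob"}' });
toy.putObject({ key: 'posts/p1', body: '{}' });

console.log(toy.listObjects({ prefix: 'users/' })); // ['users/u1', 'users/u2']
```

Like S3, a missing key is an error for `getObject` but simply `false` for `exists`, and listing is a pure prefix match over the flat key space.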

#### Snapshot/Restore

- `snapshot()` - Create a state snapshot
- `restore(snapshot)` - Restore from a snapshot

#### Persistence

- `saveToDisk(path?)` - Save state to disk
- `loadFromDisk(path?)` - Load state from disk
- `exportBackup(outputDir, options?)` - Export to BackupPlugin format
- `importBackup(backupDir, options?)` - Import from BackupPlugin format

#### Utilities

- `getStats()` - Get storage statistics
- `clear()` - Clear all objects

### Events

MemoryClient emits the same events as the real Client:

```javascript
client.on('command.request', (commandName, input) => {
  console.log(`Executing: ${commandName}`);
});

client.on('command.response', (commandName, response, input) => {
  console.log(`Completed: ${commandName}`);
});

client.on('putObject', (error, params) => {
  console.log('Object stored:', params.key);
});
```

## Performance

### Benchmark Results

Compared to LocalStack on a MacBook Pro M1:

| Operation | LocalStack | MemoryClient | Speedup |
|-----------|------------|--------------|---------|
| Insert 100 records | 2.5s | 0.01s | **250x** |
| Read 100 records | 1.8s | 0.005s | **360x** |
| List 1000 records | 3.2s | 0.002s | **1600x** |
| Full test suite (2600 tests) | ~90s | ~5s | **18x** |

### Memory Usage

- ~100 bytes of overhead per object
- Plus the actual data size (body + metadata)
- No external processes required

## Use Cases

### 1. Unit Tests

```javascript
// Super fast tests with isolation
describe('User Service', () => {
  let db;

  beforeEach(async () => {
    db = createMemoryDatabaseForTest('user-service');
    await db.connect();
  });

  it('should create user', async () => {
    // Test runs in milliseconds!
  });
});
```

### 2. Integration Tests with Snapshots

```javascript
it('should handle complex workflow', async () => {
  const db = createMemoryDatabaseForTest('workflow');
  await db.connect();

  // Set up the initial state
  await setupTestData(db);

  // Save state
  const snapshot = db.client.snapshot();

  // Test scenario 1
  await testScenario1(db);
  db.client.restore(snapshot);

  // Test scenario 2 (fresh state)
  await testScenario2(db);
});
```

### 3. CI/CD Pipelines

No Docker required, tests run 10-100x faster, and it works out of the box on GitHub Actions, GitLab CI, etc.:

```yaml
# .github/workflows/test.yml
- name: Run Tests
  run: pnpm test
  # That's it! No LocalStack setup needed
```

### 4. Local Development

```javascript
import { S3db, MemoryClient } from 's3db.js';

// Instant startup, no waiting for Docker
const db = new S3db({
  client: new MemoryClient({ verbose: true })
});

// Iterate rapidly with hot reload
// See exactly what's happening with verbose logs
```

### 5. Demos and Prototypes

Show off s3db.js features instantly, with no AWS credentials or infrastructure needed. Perfect for workshops, tutorials, and demos.

### 6. Backup & Restore Production Data Locally

Export production S3 data and test locally with MemoryClient:

```javascript
// Step 1: Back up the production database (run on the server)
import { S3db, BackupPlugin } from 's3db.js';

const prodDb = new S3db({
  connectionString: process.env.PROD_S3_CONNECTION,
  plugins: [
    new BackupPlugin({
      driver: 'filesystem',
      backupDir: '/backups',
      compress: true
    })
  ]
});

await prodDb.connect();
const backupPath = await prodDb.plugins.backup.backup();
// Creates: /backups/backup-2025-10-25T14-30-00-abc123/

// Step 2: Download the backup to your local machine
// $ scp -r server:/backups/backup-2025-10-25T14-30-00-abc123/ ./local-backup/

// Step 3: Load it into a MemoryClient for local testing
import { S3db, MemoryClient } from 's3db.js';

const localClient = new MemoryClient();
const localDb = new S3db({ client: localClient });
await localDb.connect();

// Import the production backup
await localClient.importBackup('./local-backup/backup-2025-10-25T14-30-00-abc123/', {
  database: localDb
});

// Now test against real production data locally!
const users = localDb.resources.users;
const user = await users.get('prod-user-id-123');
console.log('Testing with real production user:', user);

// Runs 100x faster than S3, no AWS costs, perfect for debugging!
```

### 7. Share Test Fixtures Between Teams

Create reusable test data for the entire team:

```javascript
// Create the test fixture (run once by one developer)
import { S3db, MemoryClient } from 's3db.js';

const client = new MemoryClient();
const db = new S3db({ client });
await db.connect();

// Create comprehensive test data
const users = await db.createResource({
  name: 'users',
  attributes: {
    id: 'string|required',
    name: 'string|required',
    email: 'string|required|email',
    role: 'string|required'
  }
});

const posts = await db.createResource({
  name: 'posts',
  attributes: {
    id: 'string|required',
    userId: 'string|required',
    title: 'string|required',
    content: 'string|required'
  }
});

// Add test data
await users.insert({ id: 'admin', name: 'Admin User', email: 'admin@test.com', role: 'admin' });
await users.insert({ id: 'user1', name: 'John Doe', email: 'john@test.com', role: 'user' });
await posts.insert({ id: 'post1', userId: 'user1', title: 'First Post', content: 'Hello!' });

// Export the fixture
await client.exportBackup('./fixtures/test-data-v1', {
  database: db,
  compress: true
});

// Commit it to the repo
// $ git add fixtures/test-data-v1
// $ git commit -m "Add test fixtures v1"

// Now any team member can use it in any test file:
const testClient = new MemoryClient();
const testDb = new S3db({ client: testClient });
await testDb.connect();

await testClient.importBackup('./fixtures/test-data-v1', {
  database: testDb
});

// All tests start with the same clean data!
const admin = await testDb.resources.users.get('admin');
expect(admin.role).toBe('admin');
```

### 8. Migrate Between Environments

Seamlessly move data between development, staging, and production:

```javascript
// Scenario: migrate staging data to a new production instance

// Step 1: Export from staging
import { S3db, BackupPlugin } from 's3db.js';

const stagingDb = new S3db({
  connectionString: 's3://staging-bucket',
  plugins: [new BackupPlugin({ driver: 'filesystem' })]
});
await stagingDb.connect();

const stagingBackup = await stagingDb.plugins.backup.backup();
// Output: /backups/staging-backup-2025-10-25.../

// Step 2: Test the migration locally first (recommended!)
import { MemoryClient } from 's3db.js';

const testClient = new MemoryClient();
const testDb = new S3db({ client: testClient });
await testDb.connect();

await testClient.importBackup(stagingBackup, { database: testDb });

// Run validation scripts
const recordCount = testClient.getStats().objectCount;
console.log(`Testing ${recordCount} records...`);

// Verify data integrity
const users = testDb.resources.users;
const allUsers = await users.query({});
console.log(`Found ${allUsers.length} users`);

// Step 3: If the tests pass, import to production
const prodDb = new S3db({
  connectionString: 's3://prod-bucket',
  plugins: [new BackupPlugin({ driver: 'filesystem' })]
});
await prodDb.connect();

await prodDb.plugins.backup.restore(stagingBackup);
console.log('✅ Migration complete!');
```

### 9. CI/CD with Real Test Data

Use production snapshots in CI without exposing credentials:

```yaml
# .github/workflows/test.yml
- name: Download fixtures
  run: |
    curl -L https://fixtures.example.com/prod-snapshot.tar.gz | tar xz

- name: Run tests
  run: pnpm test
```

```javascript
// In your test suite:
import { S3db, MemoryClient } from 's3db.js';

describe('Business Logic Tests', () => {
  let db;

  beforeAll(async () => {
    const client = new MemoryClient();
    db = new S3db({ client });
    await db.connect();

    // Load the production snapshot (sanitized, of course!)
    await client.importBackup('./fixtures/prod-snapshot', {
      database: db
    });
  });

  it('should handle real-world user data', async () => {
    const users = db.resources.users;
    const activeUsers = await users.query({ status: 'active' });

    // Test against real production patterns
    expect(activeUsers.length).toBeGreaterThan(0);
  });

  it('should calculate metrics correctly', async () => {
    // Test business logic against the production data distribution
    const orders = db.resources.orders;
    const totalRevenue = await calculateRevenue(orders);

    expect(totalRevenue).toBeGreaterThan(0);
  });
});
```

### 10. Cross-Format Data Analysis

Export for analysis in BigQuery, Athena, or Spark:

```javascript
import { S3db, MemoryClient } from 's3db.js';

const client = new MemoryClient();
const db = new S3db({ client });
await db.connect();

// Load your data...
const users = await db.createResource({
  name: 'users',
  attributes: { id: 'string', name: 'string', signupDate: 'string' }
});

// ... populate data ...

// Export to JSONL for BigQuery
await client.exportBackup('/tmp/bigquery-import', {
  compress: false, // BigQuery prefers uncompressed JSONL
  database: db
});

// Now load users.jsonl into BigQuery:
// $ bq load --source_format=NEWLINE_DELIMITED_JSON \
//     mydataset.users \
//     /tmp/bigquery-import/users.jsonl

// Or use it with Athena, Spark, pandas, DuckDB, etc.
// The JSONL format is universally supported!
```

## Compatibility

MemoryClient implements the **complete** Client interface:

✅ All CRUD operations
✅ Metadata encoding/decoding
✅ All behaviors (body-overflow, body-only, etc.)
✅ Partitions
✅ Timestamps
✅ Encryption (secret fields)
✅ Embeddings and special types
✅ Event emission
✅ Parallel operations

## Limitations

⚠️ **Not for production** - memory-only; data is lost on restart
⚠️ **Single process** - no multi-process synchronization
⚠️ **No versioning** - S3 versioning is not supported
⚠️ **No S3 events** - no Lambda triggers, etc.

Use MemoryClient for:

- ✅ Testing
- ✅ Development
- ✅ Prototyping
- ✅ CI/CD

Use the real S3 Client for:

- ✅ Production
- ✅ Multi-process apps
- ✅ Long-term persistence

## Migration Guide

### From LocalStack to MemoryClient

**Before:**

```javascript
import { S3db } from 's3db.js';

// Required: Docker, with LocalStack running
const db = new S3db({
  connectionString: 'http://test:test@localhost:4566/bucket'
});
```

**After:**

```javascript
import { S3db, MemoryClient } from 's3db.js';

// Zero infrastructure!
const db = new S3db({
  client: new MemoryClient()
});
```

### From Real S3 to MemoryClient (for tests)

**Before:**

```javascript
import { S3db } from 's3db.js';

const db = new S3db({
  connectionString: process.env.BUCKET_CONNECTION_STRING
});
```

**After:**

```javascript
import { createMemoryDatabaseForTest } from './tests/config.js';

// Use the helper
const db = createMemoryDatabaseForTest('my-test');
```

## Examples

See `tests/clients/memory-client.test.js` for comprehensive examples.

## Troubleshooting

### Memory Leaks in Tests

**Problem:** Tests accumulate memory over time.

**Solution:** Clear the client after each test:

```javascript
afterEach(async () => {
  await database.disconnect();
  database.client.clear(); // Free the in-memory objects
});
```

### Snapshot Size Too Large

**Problem:** Snapshots are huge.

**Solution:** Only snapshot what you need, or create a fresh client for each test instead:

```javascript
beforeEach(() => {
  db = createMemoryDatabaseForTest('test');
});
```

### Tests Pass with MemoryClient but Fail with Real S3

**Problem:** Real S3 has limits that MemoryClient doesn't enforce by default.

**Solution:** Enable limit enforcement:

```javascript
const db = createMemoryDatabaseForTest('test', {
  enforceLimits: true // Catches metadata > 2KB errors
});
```

## Contributing

Found a bug? Have a feature request?

Open an issue at: https://github.com/forattini-dev/s3db.js/issues

## License

MIT - same as s3db.js

---

**Made with ❤️ for the s3db.js community**

🚀 **Happy (fast) testing!**