s3mini 0.9.1 β†’ 0.9.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -9,18 +9,18 @@
  ## Features
 
  - 🚀 Light and fast: averages ≈15% more ops/s and only ~20 KB (minified, not gzipped).
- - 🔧 Zero dependencies; supports AWS SigV4 (no pre-signed requests) and SSE-C headers (tested only on Cloudflare)
+ - 🔧 Zero dependencies; supports AWS SigV4, pre-signed URLs, and SSE-C headers (tested on Cloudflare)
  - 🟠 Works on Cloudflare Workers; ideal for edge computing, Node, and Bun (no browser support).
  - 🔑 Only the essential S3 APIs—improved list, put, get, delete, and a few more.
  - 🛠️ Supports multipart uploads.
  - 🎄 Tree-shakeable ES module.
  - 🎯 TypeScript support with type definitions.
- - 📚 Poorly-documented with examples and tests - But widely tested on various S3-compatible services! (Contributions welcome!)
+ - 📚 Documented with examples and tests, and widely tested on various S3-compatible services! (Contributions welcome!)
  - 📦 **BYOS3** — _Bring your own S3-compatible bucket_ (tested on Cloudflare R2, Backblaze B2, DigitalOcean Spaces, MinIO, Garage, Micro/Ceph and Oracle Object Storage, Scaleway).
 
  #### Tested On
 
- ![Tested On](testedon.png)
+ ![Tested On](testedon.png) and more ...
 
  Contributions welcome!
 
  Dev:
@@ -46,39 +46,25 @@ Dev:
  ## Table of Contents
 
- - [Supported Ops](#supported-ops)
  - [Installation](#installation)
- - [Usage](#usage)
+ - [Quick Start](#quick-start)
+ - [Configuration](#configuration)
+ - [Uploading Objects](#uploading-objects)
+ - [Downloading Objects](#downloading-objects)
+ - [Listing Objects](#listing-objects)
+ - [Deleting Objects](#deleting-objects)
+ - [Copy and Move](#copy-and-move)
+ - [Conditional Requests](#conditional-requests)
+ - [Pre-signed URLs](#pre-signed-urls)
+ - [Server-Side Encryption (SSE-C)](#server-side-encryption-sse-c)
+ - [API Reference](#api-reference)
+ - [Error Handling](#error-handling)
+ - [Cloudflare Workers](#cloudflare-workers)
+ - [Supported Operations](#supported-operations)
  - [Security Notes](#security-notes)
  - [💙 Contributions welcomed!](#contributions-welcomed)
  - [License](#license)
 
- ## Supported Ops
-
- The library supports a subset of S3 operations, focusing on essential features, making it suitable for environments with limited resources.
-
- #### Bucket ops
-
- - ✅ HeadBucket (bucketExists)
- - ✅ CreateBucket (createBucket)
-
- #### Objects ops
-
- - ✅ ListObjectsV2 (listObjects, listObjectsPaged)
- - ✅ GetObject (getObject, getObjectResponse, getObjectWithETag, getObjectRaw, getObjectArrayBuffer, getObjectJSON)
- - ✅ PutObject (putObject)
- - ✅ DeleteObject (deleteObject)
- - ✅ DeleteObjects (deleteObjects)
- - ✅ HeadObject (objectExists, getEtag, getContentLength)
- - ✅ ListMultipartUploads (listMultipartUploads)
- - ✅ CreateMultipartUpload (getMultipartUploadId)
- - ✅ CompleteMultipartUpload (completeMultipartUpload)
- - ✅ AbortMultipartUpload (abortMultipartUpload)
- - ✅ UploadPart (uploadPart)
- - ✅ CopyObject: local copyObject/moveObject (copyObject with delete)
-
- Put/Get objects with SSE-C (server-side encryption with customer-provided keys) is supported, but only tested on Cloudflare R2!
-
  ## Installation
 
  ```bash
@@ -105,151 +91,628 @@ mv example.env .env
  > **⚠️ Environment Support Notice**
  >
  > This library is designed to run in environments like **Node.js**, **Bun**, and **Cloudflare Workers**. It does **not support browser environments** due to the use of Node.js APIs and polyfills.
- >
- > **Cloudflare Workers:** Now works without `nodejs_compat` compatibility flag, using native WebCrypto!
 
- ## Usage
+ ## Quick Start
+
+ ```typescript
+ import { S3mini } from 's3mini';
+
+ const s3 = new S3mini({
+   accessKeyId: process.env.S3_ACCESS_KEY,
+   secretAccessKey: process.env.S3_SECRET_KEY,
+   endpoint: 'https://bucket.region.r2.cloudflarestorage.com',
+   region: 'auto',
+ });
+
+ // Upload (auto-selects single PUT or multipart based on size)
+ await s3.putAnyObject('photos/vacation.jpg', fileBuffer, 'image/jpeg');
+
+ // Download
+ const data = await s3.getObject('photos/vacation.jpg');
+
+ // List
+ const objects = await s3.listObjects('/', 'photos/');
 
- > [!WARNING]
- > `s3mini` was a deprecated alias removed in a recent `0.5.0` release. Please migrate to the new `S3mini` class.
+ // Delete
+ await s3.deleteObject('photos/vacation.jpg');
+ ```
+
+ ## Configuration
 
  ```typescript
- import { S3mini, sanitizeETag } from 's3mini';
-
- const s3client = new S3mini({
-   accessKeyId: config.accessKeyId,
-   secretAccessKey: config.secretAccessKey,
-   endpoint: config.endpoint, // e.g., 'https://<your-bucket>.<your-region>.digitaloceanspaces.com'
-   region: config.region,
-   // ?requestSizeInBytes = default is 8 MB
-   // ?requestAbortTimeout = default is no timeout
-   // ?logger = default is undefined (no logging)
-   // ?fetch = default is globalThis.fetch (you can provide your own fetch implementation)
+ const s3 = new S3mini({
+   // Required
+   accessKeyId: string,
+   secretAccessKey: string,
+   endpoint: string, // Full URL: https://bucket.region.provider.com
+
+   // Optional
+   region: string, // Default: 'auto'
+   minPartSize: number, // Default: 8MB, threshold for multipart
+   requestSizeInBytes: number, // Default: 8MB, chunk size for range requests
+   requestAbortTimeout: number, // Timeout in ms (undefined = no timeout)
+   logger: Logger, // Custom logger with info/warn/error methods
+   fetch: typeof fetch, // Custom fetch implementation
  });
+ ```
 
- // Basic bucket ops
- let exists: boolean = false;
- try {
-   // Check if the bucket exists
-   exists = await s3client.bucketExists();
- } catch (err) {
-   throw new Error(`Failed bucketExists() call, wrong credentials maybe: ${err.message}`);
- }
- if (!exists) {
-   // Create the bucket based on the endpoint bucket name
-   await s3client.createBucket();
+ **Endpoint formats:**
+
+ ```typescript
+ // Path-style (bucket in path)
+ 'https://s3.us-east-1.amazonaws.com/my-bucket';
+
+ // Virtual-hosted-style (bucket in subdomain)
+ 'https://my-bucket.s3.us-east-1.amazonaws.com';
+
+ // Provider-specific
+ 'https://my-bucket.nyc3.digitaloceanspaces.com';
+ 'https://account-id.r2.cloudflarestorage.com/my-bucket';
+ ```
+
+ ---
+
+ ## Uploading Objects
+
+ ### putObject — Simple Upload
+
+ Direct single-request upload. Use it for small files or when you need fine-grained control.
+
+ ```typescript
+ const response = await s3.putObject(
+   key: string, // Object key/path
+   data: string | Buffer | Uint8Array | Blob | File | ReadableStream,
+   contentType?: string, // Default: 'application/octet-stream'
+   ssecHeaders?: SSECHeaders, // Optional encryption headers
+   additionalHeaders?: AWSHeaders, // Optional x-amz-* headers
+   contentLength?: number, // Optional, auto-detected for most types
+ );
+
+ // Returns: Response object
+ const etag = response.headers.get('etag');
+ ```
+
+ **Examples:**
+
+ ```typescript
+ // String content
+ await s3.putObject('config.json', JSON.stringify({ key: 'value' }), 'application/json');
+
+ // Buffer/Uint8Array
+ const buffer = await fs.readFile('image.png');
+ await s3.putObject('images/photo.png', buffer, 'image/png');
+
+ // Blob (browser File API or Node 18+)
+ const blob = new Blob(['Hello'], { type: 'text/plain' });
+ await s3.putObject('hello.txt', blob, 'text/plain');
+
+ // With custom headers
+ await s3.putObject('data.bin', buffer, 'application/octet-stream', undefined, {
+   'x-amz-meta-author': 'john',
+   'x-amz-meta-version': '1.0',
+ });
+ ```
+
+ ### putAnyObject — Smart Upload (Recommended)
+
+ Automatically chooses a single PUT or a multipart upload based on data size. **This is the recommended method for most use cases.**
+
+ ```typescript
+ const response = await s3.putAnyObject(
+   key: string,
+   data: string | Buffer | Uint8Array | Blob | File | ReadableStream,
+   contentType?: string,
+   ssecHeaders?: SSECHeaders,
+   additionalHeaders?: AWSHeaders,
+   contentLength?: number,
+ );
+ ```
+
+ **Behavior:**
+
+ - **≤ minPartSize (8MB default):** Single PUT request
+ - **> minPartSize:** Automatic multipart upload with:
+   - Parallel part uploads (4 concurrent by default)
+   - Automatic retries with exponential backoff (3 retries)
+   - Proper cleanup on failure (aborts incomplete uploads)
+
+ **Examples:**
+
+ ```typescript
+ // Small file — uses a single PUT internally
+ await s3.putAnyObject('small.txt', 'Hello World');
+
+ // Large file — automatically uses multipart
+ const largeBuffer = await fs.readFile('video.mp4'); // 500MB
+ await s3.putAnyObject('videos/movie.mp4', largeBuffer, 'video/mp4');
+
+ // Blob (zero-copy slicing for memory efficiency)
+ const file = new File([largeArrayBuffer], 'data.bin');
+ await s3.putAnyObject('uploads/data.bin', file);
+
+ // ReadableStream (uploads as data arrives)
+ const stream = fs.createReadStream('huge-file.dat');
+ await s3.putAnyObject('backups/data.dat', Readable.toWeb(stream));
+ ```
+
+ **Memory efficiency with Blobs:**
+
+ For large files, a `Blob` or `File` is more memory-efficient than a `Uint8Array`:
+
+ ```typescript
+ // ❌ Loads the entire file into memory
+ const buffer = await fs.readFile('large-video.mp4');
+ await s3.putAnyObject('video.mp4', buffer);
+
+ // ✅ Zero-copy slicing — only reads data when uploading each part
+ const file = Bun.file('large-video.mp4'); // Bun
+ await s3.putAnyObject('video.mp4', file);
+
+ // In Node, wrap the bytes in a Blob (note: this still reads the whole file first)
+ const blob = new Blob([await fs.readFile('large-video.mp4')]);
+ await s3.putAnyObject('video.mp4', blob);
+ ```
+
+ ### Manual Multipart Upload
+
+ For advanced control over multipart uploads (progress tracking, resumable uploads, custom concurrency).
+
+ ```typescript
+ // 1. Initialize the upload
+ const uploadId = await s3.getMultipartUploadId(
+   key: string,
+   contentType?: string,
+   ssecHeaders?: SSECHeaders,
+   additionalHeaders?: AWSHeaders,
+ );
+
+ // 2. Upload parts (each must be ≥ 5MB except the last)
+ const partSize = 8 * 1024 * 1024; // 8MB
+ const totalParts = Math.ceil(buffer.byteLength / partSize);
+ const parts: UploadPart[] = [];
+
+ for (let i = 0; i < totalParts; i++) {
+   const partData = buffer.subarray(i * partSize, (i + 1) * partSize);
+   const part = await s3.uploadPart(
+     key,
+     uploadId,
+     partData,
+     i + 1, // partNumber: 1-indexed, max 10,000
+   );
+   parts.push(part);
+   console.log(`Uploaded part ${i + 1}/${totalParts}`);
  }
 
- // Basic object ops
- // key is the name of the object in the bucket
- const smallObjectKey: string = 'small-object.txt';
- // content is the data you want to store in the object
- // it can be a string or Buffer (recommended for large objects)
- const smallObjectContent: string = 'Hello, world!';
-
- // check if the object exists
- const objectExists: boolean = await s3client.objectExists(smallObjectKey);
- let etag: string | null = null;
- if (!objectExists) {
-   // put/upload the object, content can be a string or Buffer
-   // to add object into "folder", use "folder/filename.txt" as key
-   // Third argument is optional, it can be used to set content type ... default is 'application/octet-stream'
-   const resp: Response = await s3client.putObject(smallObjectKey, smallObjectContent);
-   // example with content type:
-   // const resp: Response = await s3client.putObject(smallObjectKey, smallObjectContent, 'image/png');
-   // you can also get etag via getEtag method
-   // const etag: string = await s3client.getEtag(smallObjectKey);
-   etag = sanitizeETag(resp.headers.get('etag'));
+ // 3. Complete the upload
+ const result = await s3.completeMultipartUpload(key, uploadId, parts);
+ console.log('Final ETag:', result.etag);
+ ```
+
+ **Parallel uploads with progress:**
+
+ ```typescript
+ import { runInBatches } from 's3mini';
+
+ const PART_SIZE = 8 * 1024 * 1024; // 8MB
+ const CONCURRENCY = 6;
+
+ async function uploadWithProgress(key: string, data: Uint8Array) {
+   const uploadId = await s3.getMultipartUploadId(key);
+   const totalParts = Math.ceil(data.byteLength / PART_SIZE);
+   let completed = 0;
+
+   const tasks = Array.from({ length: totalParts }, (_, i) => async () => {
+     const start = i * PART_SIZE;
+     const end = Math.min(start + PART_SIZE, data.byteLength);
+     const part = await s3.uploadPart(key, uploadId, data.subarray(start, end), i + 1);
+     completed++;
+     console.log(`Progress: ${((completed / totalParts) * 100).toFixed(1)}%`);
+     return part;
+   });
+
+   const results = await runInBatches(tasks, CONCURRENCY);
+   const parts = results
+     .filter((r): r is PromiseFulfilledResult<UploadPart> => r.status === 'fulfilled')
+     .map(r => r.value)
+     .sort((a, b) => a.partNumber - b.partNumber);
+
+   return s3.completeMultipartUpload(key, uploadId, parts);
  }
+ ```
+
+ **Abort an incomplete upload:**
+
+ ```typescript
+ await s3.abortMultipartUpload(key, uploadId);
+ ```
 
- // get the object, null if not found
- const objectData: string | null = await s3client.getObject(smallObjectKey);
- console.log('Object data:', objectData);
-
- // get the object with ETag, null if not found
- const response2: Response = await S3mini.getObject(smallObjectKey, { 'if-none-match': etag });
- if (response2) {
-   // ETag changed so we can get the object data and new ETag
-   // Note: ETag is not guaranteed to be the same as the MD5 hash of the object
-   // ETag is sanitized to remove quotes
-   const etag2: string = sanitizeETag(response2.headers.get('etag'));
-   console.log('Object data with ETag:', response2.body, 'ETag:', etag2);
- } else {
-   console.log('Object not found or ETag does match.');
+ **List pending multipart uploads:**
+
+ ```typescript
+ const pending = await s3.listMultipartUploads();
+ // Clean up orphaned uploads
+ for (const upload of pending.Upload || []) {
+   await s3.abortMultipartUpload(upload.Key, upload.UploadId);
  }
+ ```
 
- // list objects in the bucket, null if bucket is empty
- // Note: listObjects uses the ListObjectsV2 API and iterates over all pages
- // so it will return all objects in the bucket, which can take a while
- // If you want to limit the number of objects returned, use the maxKeys option
- // If you want to list objects in a specific "folder", use "folder/" as prefix
- // Example: s3client.listObjects('/', 'myfolder/')
- const list: object[] | null = await s3client.listObjects();
- if (list) {
-   console.log('List of objects:', list);
- } else {
-   console.log('No objects found in the bucket.');
+ ---
+
+ ## Downloading Objects
+
+ ```typescript
+ // As string
+ const text = await s3.getObject('file.txt');
+
+ // As ArrayBuffer
+ const buffer = await s3.getObjectArrayBuffer('image.png');
+
+ // As JSON (auto-parsed)
+ const json = await s3.getObjectJSON('config.json');
+
+ // Full Response object (for headers, streaming)
+ const response = await s3.getObjectResponse('video.mp4');
+ const stream = response.body; // ReadableStream
+
+ // With ETag for caching
+ const { etag, data } = await s3.getObjectWithETag('file.txt');
+
+ // Range request (partial download)
+ const rangeResponse = await s3.getObjectRaw(
+   'large-file.bin',
+   false, // wholeFile: false for a range request
+   0, // rangeFrom
+   1024 * 1024, // rangeTo (first 1MB)
+ );
+ ```
+
+ ---
+
+ ## Listing Objects
+
+ ```typescript
+ // List all objects (auto-paginates)
+ const objects = await s3.listObjects();
+
+ // With prefix filter (list a "folder")
+ const photos = await s3.listObjects('/', 'photos/');
+
+ // With a max keys limit
+ const first100 = await s3.listObjects('/', '', 100);
+
+ // Manual pagination
+ let token: string | undefined;
+ do {
+   const { objects, nextContinuationToken } = await s3.listObjectsPaged(
+     '/', // delimiter
+     'uploads/', // prefix
+     100, // maxKeys per page
+     token, // continuation token
+   );
+   console.log(objects);
+   token = nextContinuationToken;
+ } while (token);
+ ```
+
+ **Response shape:**
+
+ ```typescript
+ interface ListObject {
+   Key: string;
+   Size: number;
+   LastModified: Date;
+   ETag: string;
+   StorageClass: string;
  }
+ ```
+
+ ---
+
+ ## Deleting Objects
+
+ ```typescript
+ // Single object
+ const deleted = await s3.deleteObject('file.txt'); // boolean
+
+ // Multiple objects (batched, max 1000 per request)
+ const keys = ['a.txt', 'b.txt', 'c.txt'];
+ const results = await s3.deleteObjects(keys); // boolean[] in same order
+ ```
+
+ ---
+
+ ## Copy and Move
 
- // list objects in the bucket, 10 at a time using a pagination token
- let results = await s3.listObjectsPaged('/', undefined, 10, undefined);
- while (results?.objects?.length) {
-   console.log('List of objects in this page:', results);
-   results = await s3.listObjectsPaged('/', undefined, 10, results.nextContinuationToken);
+ Server-side copy (no data transfer through the client):
+
+ ```typescript
+ // Copy within the same bucket
+ const result = await s3.copyObject('source.txt', 'backup/source.txt');
+
+ // Copy with new metadata
+ await s3.copyObject('report.pdf', 'archive/report.pdf', {
+   metadataDirective: 'REPLACE',
+   metadata: {
+     'archived-at': new Date().toISOString(),
+   },
+   contentType: 'application/pdf',
+ });
+
+ // Move (copy + delete source)
+ await s3.moveObject('temp/upload.tmp', 'files/document.pdf');
+ ```
+
+ **Options:**
+
+ ```typescript
+ interface CopyObjectOptions {
+   metadataDirective?: 'COPY' | 'REPLACE';
+   metadata?: Record<string, string>;
+   contentType?: string;
+   storageClass?: string;
+   taggingDirective?: 'COPY' | 'REPLACE';
+   sourceSSECHeaders?: SSECHeaders;
+   destinationSSECHeaders?: SSECHeaders;
+   additionalHeaders?: AWSHeaders;
  }
+ ```
 
- // delete the object
- const wasDeleted: boolean = await s3client.deleteObject(smallObjectKey);
- // to delete multiple objects, use the deleteObjects method
- // const keysToDelete: string[] = ['object1.txt', 'object2.txt'];
- // const deletedArray: boolean[] = await s3client.deleteObjects(keysToDelete);
- // Note: deleteObjects returns an array of booleans, one for each key, indicating if the object was deleted or not
-
- // Multipart upload
- const multipartKey = 'multipart-object.txt';
- const large_buffer = new Uint8Array(1024 * 1024 * 15); // 15 MB buffer
- const partSize = 8 * 1024 * 1024; // 8 MB
- const totalParts = Math.ceil(large_buffer.length / partSize);
- // Beware! This will always return a new uploadId
- // if you want to use the same uploadId, you need to store it somewhere
- const uploadId = await s3client.getMultipartUploadId(multipartKey);
- const uploadPromises = [];
- for (let i = 0; i < totalParts; i++) {
-   const partBuffer = large_buffer.subarray(i * partSize, (i + 1) * partSize);
-   // upload each part
-   // Note: uploadPart returns a promise, so you can use Promise.all to upload all parts in parallel
-   // but be careful with the number of parallel uploads, it can cause throttling
-   // or errors if you upload too many parts at once
-   // You can also use generator functions to upload parts in batches
-   uploadPromises.push(s3client.uploadPart(multipartKey, uploadId, partBuffer, i + 1));
+ ---
+
+ ## Conditional Requests
+
+ Use If-\* headers to avoid unnecessary transfers:
+
+ ```typescript
+ // Only download if changed (returns null if the ETag matches)
+ const data = await s3.getObject('file.txt', {
+   'if-none-match': '"abc123"',
+ });
+
+ // Only download if modified since a date
+ const updated = await s3.getObject('file.txt', {
+   'if-modified-since': 'Wed, 21 Oct 2024 07:28:00 GMT',
+ });
+
+ // Check existence with conditions
+ const exists = await s3.objectExists('file.txt', {
+   'if-match': '"abc123"',
+ }); // null if ETag mismatch, true/false otherwise
+ ```
+
+ ---
+
+ ## Pre-signed URLs
+
+ Generate time-limited URLs that let unauthenticated HTTP clients upload or download objects directly — no credentials needed on the client side.
+
+ ```typescript
+ // Download URL (valid for 1 hour by default)
+ const downloadUrl = await s3.getPresignedUrl('GET', 'photos/vacation.jpg');
+
+ // Upload URL (valid for 5 minutes)
+ const uploadUrl = await s3.getPresignedUrl('PUT', 'uploads/file.bin', 300);
+ ```
+
+ **Client-side usage (no SDK or credentials required):**
+
+ ```typescript
+ // Upload via pre-signed URL
+ await fetch(uploadUrl, {
+   method: 'PUT',
+   body: fileData,
+   headers: { 'Content-Type': 'image/jpeg' },
+ });
+
+ // Download via pre-signed URL
+ const response = await fetch(downloadUrl);
+ const data = await response.arrayBuffer();
+ ```
+
+ **Custom response headers:**
+
+ ```typescript
+ // Force a download with a specific filename
+ const url = await s3.getPresignedUrl('GET', 'report.pdf', 3600, {
+   'response-content-disposition': 'attachment; filename="report.pdf"',
+   'response-content-type': 'application/pdf',
+ });
+ ```
+
+ **Signed headers (enforce headers on the client request):**
+
+ ```typescript
+ // Upload URL that requires Content-Type — the client MUST send this exact header
+ const url = await s3.getPresignedUrl('PUT', 'uploads/data.json', 300, {}, {
+   'Content-Type': 'application/json',
+ });
+
+ await fetch(url, {
+   method: 'PUT',
+   body: JSON.stringify({ ok: true }),
+   headers: { 'Content-Type': 'application/json' },
+ });
+ ```
+
+ **Method signature:**
+
+ ```typescript
+ getPresignedUrl(
+   method: 'GET' | 'PUT',
+   key: string,
+   expiresIn?: number, // Default: 3600 (1 hour), max: 604800 (7 days)
+   queryParams?: Record<string, string>,
+   headers?: Record<string, string>, // HTTP headers to sign (e.g. Content-Type)
+ ): Promise<string>
+ ```
+
+ **Notes:**
+
+ - `expiresIn` must be between 1 and 604800 seconds (7 days); non-integer values are floored.
+ - Works with both virtual-hosted-style and path-style endpoints.
+ - Special characters and unicode in keys are handled automatically.
+ - Throws a `TypeError` for empty keys or an out-of-range `expiresIn`.
+ - When `headers` are provided, they are included in `X-Amz-SignedHeaders` and the signature. The client consuming the URL must send those exact headers with matching values. The `host` header is always signed automatically.
+
+ ---
+
+ ## Server-Side Encryption (SSE-C)
+
+ Customer-provided encryption keys (tested on Cloudflare R2):
+
+ ```typescript
+ const ssecHeaders = {
+   'x-amz-server-side-encryption-customer-algorithm': 'AES256',
+   'x-amz-server-side-encryption-customer-key': base64Key,
+   'x-amz-server-side-encryption-customer-key-md5': base64KeyMd5,
+ };
+
+ // Upload encrypted
+ await s3.putObject('secret.dat', data, 'application/octet-stream', ssecHeaders);
+
+ // Download encrypted (must provide the same key)
+ const decrypted = await s3.getObject('secret.dat', {}, ssecHeaders);
+
+ // Copy an encrypted object
+ await s3.copyObject('secret.dat', 'backup/secret.dat', {
+   sourceSSECHeaders: {
+     'x-amz-copy-source-server-side-encryption-customer-algorithm': 'AES256',
+     'x-amz-copy-source-server-side-encryption-customer-key': base64Key,
+     'x-amz-copy-source-server-side-encryption-customer-key-md5': base64KeyMd5,
+   },
+   destinationSSECHeaders: ssecHeaders,
+ });
+ ```
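The snippets above assume a `base64Key` and `base64KeyMd5` already exist. As an illustrative sketch (not part of s3mini), one way to derive them with Node's built-in `node:crypto`:

```typescript
import { randomBytes, createHash } from 'node:crypto';

// Generate a random 256-bit (32-byte) key. SSE-C providers store only
// ciphertext: if you lose this key, the data is unrecoverable.
const keyBytes = randomBytes(32);
const base64Key = keyBytes.toString('base64');

// SSE-C also requires the base64-encoded MD5 digest of the raw key bytes.
const base64KeyMd5 = createHash('md5').update(keyBytes).digest('base64');

const ssecHeaders = {
  'x-amz-server-side-encryption-customer-algorithm': 'AES256',
  'x-amz-server-side-encryption-customer-key': base64Key,
  'x-amz-server-side-encryption-customer-key-md5': base64KeyMd5,
};
```

Store the key in a secrets manager rather than alongside the data; the header names match the SSE-C example above.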
+
+ ---
+
+ ## API Reference
+
+ ### Constructor
+
+ | Parameter             | Type           | Default            | Description                |
+ | --------------------- | -------------- | ------------------ | -------------------------- |
+ | `accessKeyId`         | `string`       | required           | AWS access key             |
+ | `secretAccessKey`     | `string`       | required           | AWS secret key             |
+ | `endpoint`            | `string`       | required           | Full S3 endpoint URL       |
+ | `region`              | `string`       | `'auto'`           | AWS region                 |
+ | `minPartSize`         | `number`       | `8388608`          | Multipart threshold (8MB)  |
+ | `requestSizeInBytes`  | `number`       | `8388608`          | Range-request chunk size   |
+ | `requestAbortTimeout` | `number`       | `undefined`        | Request timeout in ms      |
+ | `logger`              | `Logger`       | `undefined`        | Custom logger              |
+ | `fetch`               | `typeof fetch` | `globalThis.fetch` | Custom fetch               |
+
+ ### Methods
+
+ | Method | Returns | Description |
+ | ------ | ------- | ----------- |
+ | `bucketExists()` | `Promise<boolean>` | Check if bucket exists |
+ | `createBucket()` | `Promise<boolean>` | Create bucket |
+ | `listObjects(delimiter?, prefix?, maxKeys?)` | `Promise<ListObject[] \| null>` | List all objects |
+ | `listObjectsPaged(delimiter?, prefix?, maxKeys?, token?)` | `Promise<{objects, nextContinuationToken}>` | Paginated list |
+ | `getObject(key, opts?, ssec?)` | `Promise<string \| null>` | Get object as string |
+ | `getObjectArrayBuffer(key, opts?, ssec?)` | `Promise<ArrayBuffer \| null>` | Get as ArrayBuffer |
+ | `getObjectJSON<T>(key, opts?, ssec?)` | `Promise<T \| null>` | Get as parsed JSON |
+ | `getObjectResponse(key, opts?, ssec?)` | `Promise<Response \| null>` | Get full Response |
+ | `getObjectWithETag(key, opts?, ssec?)` | `Promise<{etag, data}>` | Get with ETag |
+ | `getObjectRaw(key, wholeFile?, from?, to?, opts?, ssec?)` | `Promise<Response>` | Range request |
+ | `putObject(key, data, type?, ssec?, headers?, length?)` | `Promise<Response>` | Simple upload |
+ | `putAnyObject(key, data, type?, ssec?, headers?, length?)` | `Promise<Response>` | Smart upload |
+ | `deleteObject(key)` | `Promise<boolean>` | Delete single object |
+ | `deleteObjects(keys)` | `Promise<boolean[]>` | Delete multiple |
+ | `objectExists(key, opts?)` | `Promise<boolean \| null>` | Check existence |
+ | `getEtag(key, opts?, ssec?)` | `Promise<string \| null>` | Get ETag only |
+ | `getContentLength(key, ssec?)` | `Promise<number>` | Get size in bytes |
+ | `copyObject(source, dest, opts?)` | `Promise<CopyObjectResult>` | Server-side copy |
+ | `moveObject(source, dest, opts?)` | `Promise<CopyObjectResult>` | Copy + delete |
+ | `getPresignedUrl(method, key, expiresIn?, queryParams?, headers?)` | `Promise<string>` | Generate pre-signed URL |
+ | `getMultipartUploadId(key, type?, ssec?, headers?)` | `Promise<string>` | Init multipart |
+ | `uploadPart(key, uploadId, data, partNum, opts?, ssec?, headers?)` | `Promise<UploadPart>` | Upload part |
+ | `completeMultipartUpload(key, uploadId, parts)` | `Promise<CompleteResult>` | Complete multipart |
+ | `abortMultipartUpload(key, uploadId, ssec?)` | `Promise<object>` | Abort multipart |
+ | `listMultipartUploads(delimiter?, prefix?, method?, opts?)` | `Promise<object>` | List pending |
+ | `sanitizeETag(etag)` | `string` | Remove quotes from ETag |
+
+ ### Utility Functions
+
+ ```typescript
+ import { runInBatches, sanitizeETag } from 's3mini';
+
+ // Run async tasks with concurrency control
+ const results = await runInBatches(
+   tasks: Iterable<() => Promise<T>>,
+   batchSize?: number, // Default: 30
+   minIntervalMs?: number // Default: 0 (no delay between batches)
+ );
+
+ // Clean an ETag value
+ const clean = sanitizeETag('"abc123"'); // 'abc123'
+ ```
+
+ ---
+
+ ## Error Handling
+
+ ```typescript
+ import { S3ServiceError, S3NetworkError } from 's3mini';
+
+ try {
+   await s3.getObject('missing.txt');
+ } catch (err) {
+   if (err instanceof S3ServiceError) {
+     console.error(`S3 error ${err.status}: ${err.serviceCode}`);
+     console.error('Response body:', err.body);
+   } else if (err instanceof S3NetworkError) {
+     console.error(`Network error: ${err.code}`); // ENOTFOUND, ETIMEDOUT, etc.
+   }
  }
- const uploadResponses = await Promise.all(uploadPromises);
- const parts = uploadResponses.map((response, index) => ({
-   partNumber: index + 1,
-   etag: response.etag,
- }));
- // Complete the multipart upload
- const completeResponse = await s3client.completeMultipartUpload(multipartKey, uploadId, parts);
- const completeEtag = completeResponse.etag;
-
- // List multipart uploads
- // returns object with uploadId and key
- const multipartUploads: object = await s3client.listMultipartUploads();
- // Abort the multipart upload
- const abortResponse = await s3client.abortMultipartUpload(multipartUploads.key, multipartUploads.uploadId);
-
- // Multipart download
- // lets test getObjectRaw with range
- const rangeStart = 2048 * 1024; // 2 MB
- const rangeEnd = 8 * 1024 * 1024 * 2; // 16 MB
- const rangeResponse = await s3client.getObjectRaw(multipartKey, false, rangeStart, rangeEnd);
- const rangeData = await rangeResponse.arrayBuffer();
-
- // Local copyObject example
- const result = await s3.copyObject('report-2024.pdf', 'archive/report-2024.pdf');
- ```
-
- For more check [USAGE.md](USAGE.md) file, examples and tests.
+ ```
+
+ **Error classes:**
+
+ - `S3Error` — Base error class
+ - `S3ServiceError` — S3 returned an error response (4xx, 5xx)
+ - `S3NetworkError` — Network-level failure (DNS, timeout, connection refused)
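A network-level failure is often transient (DNS hiccup, timeout), so callers may want to retry. The helper below is an illustrative sketch, not part of s3mini; the attempt count and backoff values are arbitrary, and a real version might check `err instanceof S3NetworkError` before retrying:

```typescript
// Retry an async operation with exponential backoff between attempts.
async function withRetry<T>(
  op: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastErr = err;
      // Wait baseDelayMs * 2^i before the next attempt (skip after the last).
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastErr;
}

// Example (assuming an `s3` client as configured above):
// const text = await withRetry(() => s3.getObject('file.txt'));
```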
+
+ ---
+
+ ## Cloudflare Workers
+
+ Works natively without `nodejs_compat`:
+
+ ```typescript
+ import { S3mini } from 's3mini';
+
+ export default {
+   async fetch(request: Request, env: Env): Promise<Response> {
+     const s3 = new S3mini({
+       accessKeyId: env.R2_ACCESS_KEY,
+       secretAccessKey: env.R2_SECRET_KEY,
+       endpoint: env.R2_ENDPOINT,
+     });
+
+     const data = await s3.getObject('hello.txt');
+     return new Response(data);
+   },
+ };
+ ```
+
+ ---
+
+ ## Supported Operations
+
+ | Operation               | Method                                                                                                                     |
+ | ----------------------- | -------------------------------------------------------------------------------------------------------------------------- |
+ | HeadBucket              | `bucketExists()`                                                                                                           |
+ | CreateBucket            | `createBucket()`                                                                                                           |
+ | ListObjectsV2           | `listObjects()`, `listObjectsPaged()`                                                                                      |
+ | GetObject               | `getObject()`, `getObjectArrayBuffer()`, `getObjectJSON()`, `getObjectResponse()`, `getObjectWithETag()`, `getObjectRaw()` |
+ | PutObject               | `putObject()`, `putAnyObject()`                                                                                            |
+ | DeleteObject            | `deleteObject()`                                                                                                           |
+ | DeleteObjects           | `deleteObjects()`                                                                                                          |
+ | HeadObject              | `objectExists()`, `getEtag()`, `getContentLength()`                                                                        |
+ | CopyObject              | `copyObject()`, `moveObject()`                                                                                             |
+ | CreateMultipartUpload   | `getMultipartUploadId()`                                                                                                   |
+ | UploadPart              | `uploadPart()`                                                                                                             |
+ | CompleteMultipartUpload | `completeMultipartUpload()`                                                                                                |
+ | AbortMultipartUpload    | `abortMultipartUpload()`                                                                                                   |
+ | ListMultipartUploads    | `listMultipartUploads()`                                                                                                   |
+ | Pre-signed URLs         | `getPresignedUrl()`                                                                                                        |
+
+ ---
 
  ## Security Notes