s3mini 0.9.0 → 0.9.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,6 +1,6 @@
  # s3mini | Tiny & fast S3 client for node and edge platforms.

- `s3mini` is an ultra-lightweight Typescript client (~18 KB minified, ≈15 % more ops/s) for S3-compatible object storage. It runs on Node, Bun, Cloudflare Workers, and other edge platforms. It has been tested on Cloudflare R2, Backblaze B2, DigitalOcean Spaces, Ceph, Oracle, Garage and MinIO. (No Browser support!)
+ `s3mini` is an ultra-lightweight Typescript client (~20 KB minified, ≈15 % more ops/s) for S3-compatible object storage. It runs on Node, Bun, Cloudflare Workers, and other edge platforms. It has been tested on Cloudflare R2, Backblaze B2, DigitalOcean Spaces, Ceph, Oracle, Garage and MinIO. (No Browser support!)

  [[github](https://github.com/good-lly/s3mini)]
  [[issues](https://github.com/good-lly/s3mini/issues)]
@@ -8,18 +8,19 @@

  ## Features

- - 🚀 Light and fast: averages ≈15 % more ops/s and only ~18 KB (minified, not gzipped).
- - 🔧 Zero dependencies; supports AWS SigV4 (no pre-signed requests) and SSE-C headers (tested only on Cloudflare)
+ - 🚀 Light and fast: averages ≈15 % more ops/s and only ~20 KB (minified, not gzipped).
+ - 🔧 Zero dependencies; supports AWS SigV4, pre-signed URLs, and SSE-C headers (tested on Cloudflare)
  - 🟠 Works on Cloudflare Workers; ideal for edge computing, Node, and Bun (no browser support).
  - 🔑 Only the essential S3 APIs—improved list, put, get, delete, and a few more.
  - 🛠️ Supports multipart uploads.
+ - 🎄 Tree-shakeable ES module.
  - 🎯 TypeScript support with type definitions.
- - 📚 Poorly-documented with examples and tests - But widely tested on various S3-compatible services! (Contributions welcome!)
+ - 📚 Documented with examples, tests and widely tested on various S3-compatible services! (Contributions welcome!)
  - 📦 **BYOS3** — _Bring your own S3-compatible bucket_ (tested on Cloudflare R2, Backblaze B2, DigitalOcean Spaces, MinIO, Garage, Micro/Ceph and Oracle Object Storage, Scaleway).

  #### Tested On

- ![Tested On](testedon.png)
+ ![Tested On](testedon.png) and more ...

  Contributions welcome!

  Dev:
@@ -43,44 +44,27 @@ Dev:

  <a href="https://github.com/good-lly/s3mini/issues/"> <img src="https://img.shields.io/badge/contributions-welcome-brightgreen.svg" alt="Contributions welcome" /></a>

- Performance tests was done on local Minio instance. Your results may vary depending on environment and network conditions, so take it with a grain of salt.
- ![performance-image](https://raw.githubusercontent.com/good-lly/s3mini/dev/performance-screenshot.png)
-
  ## Table of Contents

- - [Supported Ops](#supported-ops)
  - [Installation](#installation)
- - [Usage](#usage)
+ - [Quick Start](#quick-start)
+ - [Configuration](#configuration)
+ - [Uploading Objects](#uploading-objects)
+ - [Downloading Objects](#downloading-objects)
+ - [Listing Objects](#listing-objects)
+ - [Deleting Objects](#deleting-objects)
+ - [Copy and Move](#copy-and-move)
+ - [Conditional Requests](#conditional-requests)
+ - [Pre-signed URLs](#pre-signed-urls)
+ - [Server-Side Encryption (SSE-C)](#server-side-encryption-sse-c)
+ - [API Reference](#api-reference)
+ - [Error Handling](#error-handling)
+ - [Cloudflare Workers](#cloudflare-workers)
+ - [Supported Operations](#supported-operations)
  - [Security Notes](#security-notes)
  - [💙 Contributions welcomed!](#contributions-welcomed)
  - [License](#license)

- ## Supported Ops
-
- The library supports a subset of S3 operations, focusing on essential features, making it suitable for environments with limited resources.
-
- #### Bucket ops
-
- - ✅ HeadBucket (bucketExists)
- - ✅ createBucket (createBucket)
-
- #### Objects ops
-
- - ✅ ListObjectsV2 (listObjects, listObjectsPaged)
- - ✅ GetObject (getObject, getObjectResponse, getObjectWithETag, getObjectRaw, getObjectArrayBuffer, getObjectJSON)
- - ✅ PutObject (putObject)
- - ✅ DeleteObject (deleteObject)
- - ✅ DeleteObjects (deleteObjects)
- - ✅ HeadObject (objectExists, getEtag, getContentLength)
- - ✅ listMultipartUploads
- - ✅ CreateMultipartUpload (getMultipartUploadId)
- - ✅ completeMultipartUpload
- - ✅ abortMultipartUpload
- - ✅ uploadPart
- - ✅ CopyObject: Local copyObject/moveObject(copyObject w delete)
-
- Put/Get objects with SSE-C (server-side encryption with customer-provided keys) is supported, but only tested on Cloudflare R2!
-
  ## Installation

  ```bash
@@ -107,151 +91,611 @@ mv example.env .env
  > **⚠️ Environment Support Notice**
  >
  > This library is designed to run in environments like **Node.js**, **Bun**, and **Cloudflare Workers**. It does **not support browser environments** due to the use of Node.js APIs and polyfills.
- >
- > **Cloudflare Workers:** Now works without `nodejs_compat` compatibility flag, using native WebCrypto!

- ## Usage
+ ## Quick Start

- > [!WARNING]
- > `s3mini` was a deprecated alias removed in a recent `0.5.0` release. Please migrate to the new `S3mini` class.
+ ```typescript
+ import { S3mini } from 's3mini';
+
+ const s3 = new S3mini({
+   accessKeyId: process.env.S3_ACCESS_KEY,
+   secretAccessKey: process.env.S3_SECRET_KEY,
+   endpoint: 'https://bucket.region.r2.cloudflarestorage.com',
+   region: 'auto',
+ });
+
+ // Upload (auto-selects single PUT or multipart based on size)
+ await s3.putAnyObject('photos/vacation.jpg', fileBuffer, 'image/jpeg');
+
+ // Download
+ const data = await s3.getObject('photos/vacation.jpg');
+
+ // List
+ const objects = await s3.listObjects('/', 'photos/');
+
+ // Delete
+ await s3.deleteObject('photos/vacation.jpg');
+ ```
+
+ ## Configuration

  ```typescript
- import { S3mini, sanitizeETag } from 's3mini';
-
- const s3client = new S3mini({
-   accessKeyId: config.accessKeyId,
-   secretAccessKey: config.secretAccessKey,
-   endpoint: config.endpoint, // e.g., 'https://<your-bucket>.<your-region>.digitaloceanspaces.com'
-   region: config.region,
-   // ?requestSizeInBytes = default is 8 MB
-   // ?requestAbortTimeout = default is no timeout
-   // ?logger = default is undefined (no logging)
-   // ?fetch = default is globalThis.fetch (you can provide your own fetch implementation)
+ const s3 = new S3mini({
+   // Required
+   accessKeyId: string,
+   secretAccessKey: string,
+   endpoint: string, // Full URL: https://bucket.region.provider.com
+
+   // Optional
+   region: string, // Default: 'auto'
+   minPartSize: number, // Default: 8MB threshold for multipart
+   requestSizeInBytes: number, // Default: 8MB chunk size for range requests
+   requestAbortTimeout: number, // Timeout in ms (undefined = no timeout)
+   logger: Logger, // Custom logger with info/warn/error methods
+   fetch: typeof fetch, // Custom fetch implementation
  });
+ ```

- // Basic bucket ops
- let exists: boolean = false;
- try {
-   // Check if the bucket exists
-   exists = await s3client.bucketExists();
- } catch (err) {
-   throw new Error(`Failed bucketExists() call, wrong credentials maybe: ${err.message}`);
- }
- if (!exists) {
-   // Create the bucket based on the endpoint bucket name
-   await s3client.createBucket();
+ **Endpoint formats:**
+
+ ```typescript
+ // Path-style (bucket in path)
+ 'https://s3.us-east-1.amazonaws.com/my-bucket';
+
+ // Virtual-hosted-style (bucket in subdomain)
+ 'https://my-bucket.s3.us-east-1.amazonaws.com';
+
+ // Provider-specific
+ 'https://my-bucket.nyc3.digitaloceanspaces.com';
+ 'https://account-id.r2.cloudflarestorage.com/my-bucket';
+ ```
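The two styles differ only in where the bucket name appears. As a quick illustration, a hypothetical helper (`endpointStyle`, not part of s3mini) could classify an endpoint by checking whether the URL carries a path component:

```typescript
// Hypothetical helper (not part of s3mini): classify an endpoint URL as
// path-style (bucket in the path) or virtual-hosted-style (bucket in the
// subdomain). Assumes any non-root path component is the bucket name.
function endpointStyle(endpoint: string): 'path' | 'virtual-hosted' {
  const { pathname } = new URL(endpoint);
  return pathname !== '/' && pathname !== '' ? 'path' : 'virtual-hosted';
}
```

For example, `endpointStyle('https://my-bucket.s3.us-east-1.amazonaws.com')` returns `'virtual-hosted'`.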
+
+ ---
+
+ ## Uploading Objects
+
+ ### putObject — Simple Upload
+
+ Direct single-request upload. Use for small files or when you need fine control.
+
+ ```typescript
+ const response = await s3.putObject(
+   key: string, // Object key/path
+   data: string | Buffer | Uint8Array | Blob | File | ReadableStream,
+   contentType?: string, // Default: 'application/octet-stream'
+   ssecHeaders?: SSECHeaders, // Optional encryption headers
+   additionalHeaders?: AWSHeaders, // Optional x-amz-* headers
+   contentLength?: number, // Optional, auto-detected for most types
+ );
+
+ // Returns: Response object
+ const etag = response.headers.get('etag');
+ ```
+
+ **Examples:**
+
+ ```typescript
+ // String content
+ await s3.putObject('config.json', JSON.stringify({ key: 'value' }), 'application/json');
+
+ // Buffer/Uint8Array
+ const buffer = await fs.readFile('image.png');
+ await s3.putObject('images/photo.png', buffer, 'image/png');
+
+ // Blob (browser File API or Node 18+)
+ const blob = new Blob(['Hello'], { type: 'text/plain' });
+ await s3.putObject('hello.txt', blob, 'text/plain');
+
+ // With custom headers
+ await s3.putObject('data.bin', buffer, 'application/octet-stream', undefined, {
+   'x-amz-meta-author': 'john',
+   'x-amz-meta-version': '1.0',
+ });
+ ```
+
+ ### putAnyObject — Smart Upload (Recommended)
+
+ Automatically chooses single PUT or multipart based on data size. **This is the recommended method for most use cases.**
+
+ ```typescript
+ const response = await s3.putAnyObject(
+   key: string,
+   data: string | Buffer | Uint8Array | Blob | File | ReadableStream,
+   contentType?: string,
+   ssecHeaders?: SSECHeaders,
+   additionalHeaders?: AWSHeaders,
+   contentLength?: number,
+ );
+ ```
+
+ **Behavior:**
+
+ - **≤ minPartSize (8MB default):** Single PUT request
+ - **> minPartSize:** Automatic multipart upload with:
+   - Parallel part uploads (4 concurrent by default)
+   - Automatic retries with exponential backoff (3 retries)
+   - Proper cleanup on failure (aborts incomplete uploads)
+
+ **Examples:**
+
+ ```typescript
+ // Small file — uses single PUT internally
+ await s3.putAnyObject('small.txt', 'Hello World');
+
+ // Large file — automatically uses multipart
+ const largeBuffer = await fs.readFile('video.mp4'); // 500MB
+ await s3.putAnyObject('videos/movie.mp4', largeBuffer, 'video/mp4');
+
+ // Blob (zero-copy slicing for memory efficiency)
+ const file = new File([largeArrayBuffer], 'data.bin');
+ await s3.putAnyObject('uploads/data.bin', file);
+
+ // ReadableStream (uploads as data arrives)
+ const stream = fs.createReadStream('huge-file.dat');
+ await s3.putAnyObject('backups/data.dat', Readable.toWeb(stream));
+ ```
+
+ **Memory efficiency with Blobs:**
+
+ For large files, using `Blob` or `File` is more memory-efficient than `Uint8Array`:
+
+ ```typescript
+ // ❌ Loads entire file into memory
+ const buffer = await fs.readFile('large-video.mp4');
+ await s3.putAnyObject('video.mp4', buffer);
+
+ // ✅ Zero-copy slicing — only reads data when uploading each part
+ const file = Bun.file('large-video.mp4'); // Bun
+ // or
+ const blob = new Blob([await fs.readFile('large-video.mp4')]); // Node
+ await s3.putAnyObject('video.mp4', file);
+ ```
+
+ ### Manual Multipart Upload
+
+ For advanced control over multipart uploads (progress tracking, resumable uploads, custom concurrency).
+
+ ```typescript
+ // 1. Initialize upload
+ const uploadId = await s3.getMultipartUploadId(
+   key: string,
+   contentType?: string,
+   ssecHeaders?: SSECHeaders,
+   additionalHeaders?: AWSHeaders,
+ );
+
+ // 2. Upload parts (must be ≥ 5MB except last part)
+ const parts: UploadPart[] = [];
+
+ for (let i = 0; i < totalParts; i++) {
+   const partData = buffer.subarray(i * partSize, (i + 1) * partSize);
+   const part = await s3.uploadPart(
+     key,
+     uploadId,
+     partData,
+     i + 1, // partNumber: 1-indexed, max 10,000
+   );
+   parts.push(part);
+   console.log(`Uploaded part ${i + 1}/${totalParts}`);
  }

- // Basic object ops
- // key is the name of the object in the bucket
- const smallObjectKey: string = 'small-object.txt';
- // content is the data you want to store in the object
- // it can be a string or Buffer (recommended for large objects)
- const smallObjectContent: string = 'Hello, world!';
-
- // check if the object exists
- const objectExists: boolean = await s3client.objectExists(smallObjectKey);
- let etag: string | null = null;
- if (!objectExists) {
-   // put/upload the object, content can be a string or Buffer
-   // to add object into "folder", use "folder/filename.txt" as key
-   // Third argument is optional, it can be used to set content type ... default is 'application/octet-stream'
-   const resp: Response = await s3client.putObject(smallObjectKey, smallObjectContent);
-   // example with content type:
-   // const resp: Response = await s3client.putObject(smallObjectKey, smallObjectContent, 'image/png');
-   // you can also get etag via getEtag method
-   // const etag: string = await s3client.getEtag(smallObjectKey);
-   etag = sanitizeETag(resp.headers.get('etag'));
+ // 3. Complete upload
+ const result = await s3.completeMultipartUpload(key, uploadId, parts);
+ console.log('Final ETag:', result.etag);
+ ```
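The constraints noted in the comments above (every part except the last at least 5 MB, part numbers 1-indexed up to 10,000) can be captured in a small planner. `planParts` is a hypothetical helper sketched here for illustration, not part of s3mini:

```typescript
interface PartRange {
  partNumber: number; // 1-indexed; S3 allows at most 10,000 parts
  start: number; // inclusive byte offset
  end: number; // exclusive byte offset
}

// Hypothetical planner (not part of s3mini): split `totalBytes` into ranges
// of `partSize` bytes each; only the last part may be smaller.
function planParts(totalBytes: number, partSize: number): PartRange[] {
  const MIN_PART = 5 * 1024 * 1024; // S3 minimum for all but the last part
  const MAX_PARTS = 10_000;
  if (partSize < MIN_PART) throw new RangeError('part size below S3 minimum');
  const count = Math.max(1, Math.ceil(totalBytes / partSize));
  if (count > MAX_PARTS) throw new RangeError('too many parts; increase partSize');
  return Array.from({ length: count }, (_, i) => ({
    partNumber: i + 1,
    start: i * partSize,
    end: Math.min((i + 1) * partSize, totalBytes),
  }));
}
```

For a 15 MB buffer and an 8 MB part size this yields two ranges, with only the second one under 8 MB.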
+
+ **Parallel uploads with progress:**
+
+ ```typescript
+ import { runInBatches } from 's3mini';
+
+ const PART_SIZE = 8 * 1024 * 1024; // 8MB
+ const CONCURRENCY = 6;
+
+ async function uploadWithProgress(key: string, data: Uint8Array) {
+   const uploadId = await s3.getMultipartUploadId(key);
+   const totalParts = Math.ceil(data.byteLength / PART_SIZE);
+   let completed = 0;
+
+   const tasks = Array.from({ length: totalParts }, (_, i) => async () => {
+     const start = i * PART_SIZE;
+     const end = Math.min(start + PART_SIZE, data.byteLength);
+     const part = await s3.uploadPart(key, uploadId, data.subarray(start, end), i + 1);
+     completed++;
+     console.log(`Progress: ${((completed / totalParts) * 100).toFixed(1)}%`);
+     return part;
+   });
+
+   const results = await runInBatches(tasks, CONCURRENCY);
+   const parts = results
+     .filter((r): r is PromiseFulfilledResult<UploadPart> => r.status === 'fulfilled')
+     .map(r => r.value)
+     .sort((a, b) => a.partNumber - b.partNumber);
+
+   return s3.completeMultipartUpload(key, uploadId, parts);
  }
+ ```
+
+ **Abort an incomplete upload:**
+
+ ```typescript
+ await s3.abortMultipartUpload(key, uploadId);
+ ```
+
+ **List pending multipart uploads:**

- // get the object, null if not found
- const objectData: string | null = await s3client.getObject(smallObjectKey);
- console.log('Object data:', objectData);
-
- // get the object with ETag, null if not found
- const response2: Response = await S3mini.getObject(smallObjectKey, { 'if-none-match': etag });
- if (response2) {
-   // ETag changed so we can get the object data and new ETag
-   // Note: ETag is not guaranteed to be the same as the MD5 hash of the object
-   // ETag is sanitized to remove quotes
-   const etag2: string = sanitizeETag(response2.headers.get('etag'));
-   console.log('Object data with ETag:', response2.body, 'ETag:', etag2);
- } else {
-   console.log('Object not found or ETag does match.');
+ ```typescript
+ const pending = await s3.listMultipartUploads();
+ // Clean up orphaned uploads
+ for (const upload of pending.Upload || []) {
+   await s3.abortMultipartUpload(upload.Key, upload.UploadId);
  }
+ ```
+
+ ---
+
+ ## Downloading Objects
+
+ ```typescript
+ // As string
+ const text = await s3.getObject('file.txt');

- // list objects in the bucket, null if bucket is empty
- // Note: listObjects uses listObjectsV2 API and iterate over all pages
- // so it will return all objects in the bucket which can take a while
- // If you want to limit the number of objects returned, use the maxKeys option
- // If you want to list objects in a specific "folder", use "folder/" as prefix
- // Example s3client.listObjects({"/" "myfolder/"})
- const list: object[] | null = await s3client.listObjects();
- if (list) {
-   console.log('List of objects:', list);
- } else {
-   console.log('No objects found in the bucket.');
+ // As ArrayBuffer
+ const buffer = await s3.getObjectArrayBuffer('image.png');
+
+ // As JSON (auto-parsed)
+ const data = await s3.getObjectJSON('config.json');
+
+ // Full Response object (for headers, streaming)
+ const response = await s3.getObjectResponse('video.mp4');
+ const stream = response.body; // ReadableStream
+
+ // With ETag for caching
+ const { etag, data } = await s3.getObjectWithETag('file.txt');
+
+ // Range request (partial download)
+ const response = await s3.getObjectRaw(
+   'large-file.bin',
+   false, // wholeFile: false for range request
+   0, // rangeFrom
+   1024 * 1024, // rangeTo (first 1MB)
+ );
+ ```
+
+ ---
+
+ ## Listing Objects
+
+ ```typescript
+ // List all objects (auto-paginates)
+ const objects = await s3.listObjects();
+
+ // With prefix filter (list "folder")
+ const photos = await s3.listObjects('/', 'photos/');
+
+ // With max keys limit
+ const first100 = await s3.listObjects('/', '', 100);
+
+ // Manual pagination
+ let token: string | undefined;
+ do {
+   const { objects, nextContinuationToken } = await s3.listObjectsPaged(
+     '/', // delimiter
+     'uploads/', // prefix
+     100, // maxKeys per page
+     token, // continuation token
+   );
+   console.log(objects);
+   token = nextContinuationToken;
+ } while (token);
+ ```
+
+ **Response shape:**
+
+ ```typescript
+ interface ListObject {
+   Key: string;
+   Size: number;
+   LastModified: Date;
+   ETag: string;
+   StorageClass: string;
  }
+ ```
+
+ ---
+
+ ## Deleting Objects
+
+ ```typescript
+ // Single object
+ const deleted = await s3.deleteObject('file.txt'); // boolean
+
+ // Multiple objects (batched, max 1000 per request)
+ const keys = ['a.txt', 'b.txt', 'c.txt'];
+ const results = await s3.deleteObjects(keys); // boolean[] in same order
+ ```
+
+ ---
+
+ ## Copy and Move
+
+ Server-side copy (no data transfer through client):
+
+ ```typescript
+ // Copy within same bucket
+ const result = await s3.copyObject('source.txt', 'backup/source.txt');
+
+ // Copy with new metadata
+ await s3.copyObject('report.pdf', 'archive/report.pdf', {
+   metadataDirective: 'REPLACE',
+   metadata: {
+     'archived-at': new Date().toISOString(),
+   },
+   contentType: 'application/pdf',
+ });
+
+ // Move (copy + delete source)
+ await s3.moveObject('temp/upload.tmp', 'files/document.pdf');
+ ```
+
+ **Options:**

- // list objects in the bucket, 10 at a time using pagination token
- let results = await s3.listObjectsPaged('/', undefined, 10, undefined);
- while (results?.objects?.length) {
-   console.log('List of objects in this page:', results);
-   results = await s3.listObjectsPaged('/', undefined, 10, results.nextContinuationToken);
+ ```typescript
+ interface CopyObjectOptions {
+   metadataDirective?: 'COPY' | 'REPLACE';
+   metadata?: Record<string, string>;
+   contentType?: string;
+   storageClass?: string;
+   taggingDirective?: 'COPY' | 'REPLACE';
+   sourceSSECHeaders?: SSECHeaders;
+   destinationSSECHeaders?: SSECHeaders;
+   additionalHeaders?: AWSHeaders;
  }
+ ```

- // delete the object
- const wasDeleted: boolean = await s3client.deleteObject(smallObjectKey);
- // to delete multiple objects, use deleteObjects method
- // const keysToDelete: string[] = ['object1.txt', 'object2.txt'];
- // const deletedArray: boolean[] = await s3client.deleteObjects(keysToDelete);
- // Note: deleteObjects returns an array of booleans, one for each key, indicating if the object was deleted or not
-
- // Multipart upload
- const multipartKey = 'multipart-object.txt';
- const large_buffer = new Uint8Array(1024 * 1024 * 15); // 15 MB buffer
- const partSize = 8 * 1024 * 1024; // 8 MB
- const totalParts = Math.ceil(large_buffer.length / partSize);
- // Beware! This will return always a new uploadId
- // if you want to use the same uploadId, you need to store it somewhere
- const uploadId = await s3client.getMultipartUploadId(multipartKey);
- const uploadPromises = [];
- for (let i = 0; i < totalParts; i++) {
-   const partBuffer = large_buffer.subarray(i * partSize, (i + 1) * partSize);
-   // upload each part
-   // Note: uploadPart returns a promise, so you can use Promise.all to upload all parts in parallel
-   // but be careful with the number of parallel uploads, it can cause throttling
-   // or errors if you upload too many parts at once
-   // You can also use generator functions to upload parts in batches
-   uploadPromises.push(s3client.uploadPart(multipartKey, uploadId, partBuffer, i + 1));
+ ---
+
+ ## Conditional Requests
+
+ Use If-\* headers to avoid unnecessary transfers:
+
+ ```typescript
+ // Only download if changed (returns null if ETag matches)
+ const data = await s3.getObject('file.txt', {
+   'if-none-match': '"abc123"',
+ });
+
+ // Only download if modified since date
+ const data = await s3.getObject('file.txt', {
+   'if-modified-since': 'Wed, 21 Oct 2024 07:28:00 GMT',
+ });
+
+ // Check existence with conditions
+ const exists = await s3.objectExists('file.txt', {
+   'if-match': '"abc123"',
+ }); // null if ETag mismatch, true/false otherwise
+ ```
+
+ ---
+
+ ## Pre-signed URLs
+
+ Generate time-limited URLs that allow unauthenticated HTTP clients to upload or download objects directly — no credentials needed on the client side.
+
+ ```typescript
+ // Download URL (valid for 1 hour by default)
+ const downloadUrl = await s3.getPresignedUrl('GET', 'photos/vacation.jpg');
+
+ // Upload URL (valid for 5 minutes)
+ const uploadUrl = await s3.getPresignedUrl('PUT', 'uploads/file.bin', 300);
+ ```
+
+ **Client-side usage (no SDK or credentials required):**
+
+ ```typescript
+ // Upload via pre-signed URL
+ await fetch(uploadUrl, {
+   method: 'PUT',
+   body: fileData,
+   headers: { 'Content-Type': 'image/jpeg' },
+ });
+
+ // Download via pre-signed URL
+ const response = await fetch(downloadUrl);
+ const data = await response.arrayBuffer();
+ ```
+
+ **Custom response headers:**
+
+ ```typescript
+ // Force download with a specific filename
+ const url = await s3.getPresignedUrl('GET', 'report.pdf', 3600, {
+   'response-content-disposition': 'attachment; filename="report.pdf"',
+   'response-content-type': 'application/pdf',
+ });
+ ```
+
+ **Method signature:**
+
+ ```typescript
+ getPresignedUrl(
+   method: 'GET' | 'PUT',
+   key: string,
+   expiresIn?: number, // Default: 3600 (1 hour), max: 604800 (7 days)
+   queryParams?: Record<string, string>,
+ ): Promise<string>
+ ```
+
+ **Notes:**
+
+ - `expiresIn` must be between 1 and 604800 seconds (7 days); non-integer values are floored.
+ - Works with both virtual-hosted-style and path-style endpoints.
+ - Special characters and unicode in keys are handled automatically.
+ - Throws `TypeError` for empty keys or out-of-range `expiresIn`.
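The expiry rules in the notes above can be expressed compactly. This sketch assumes the documented behavior (flooring non-integers, the 1–604800 range, `TypeError` on violation) and is not the library's actual implementation:

```typescript
// Sketch of the documented expiresIn validation (assumed behavior, not
// s3mini source): floor non-integers, reject values outside 1..604800.
function normalizeExpiresIn(expiresIn = 3600): number {
  const MAX = 604_800; // 7 days in seconds
  const floored = Math.floor(expiresIn);
  if (!Number.isFinite(floored) || floored < 1 || floored > MAX) {
    throw new TypeError(`expiresIn must be between 1 and ${MAX} seconds`);
  }
  return floored;
}
```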
+
+ ---
+
+ ## Server-Side Encryption (SSE-C)
+
+ Customer-provided encryption keys (tested on Cloudflare R2):
+
+ ```typescript
+ const ssecHeaders = {
+   'x-amz-server-side-encryption-customer-algorithm': 'AES256',
+   'x-amz-server-side-encryption-customer-key': base64Key,
+   'x-amz-server-side-encryption-customer-key-md5': base64KeyMd5,
+ };
+
+ // Upload encrypted
+ await s3.putObject('secret.dat', data, 'application/octet-stream', ssecHeaders);
+
+ // Download encrypted (must provide same key)
+ const decrypted = await s3.getObject('secret.dat', {}, ssecHeaders);
+
+ // Copy encrypted object
+ await s3.copyObject('secret.dat', 'backup/secret.dat', {
+   sourceSSECHeaders: {
+     'x-amz-copy-source-server-side-encryption-customer-algorithm': 'AES256',
+     'x-amz-copy-source-server-side-encryption-customer-key': base64Key,
+     'x-amz-copy-source-server-side-encryption-customer-key-md5': base64KeyMd5,
+   },
+   destinationSSECHeaders: ssecHeaders,
+ });
+ ```
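The snippet above assumes `base64Key` and `base64KeyMd5` already exist. In Node they can be derived from raw key bytes; this is a sketch (the MD5 is computed over the raw 32-byte key, then base64-encoded), not an s3mini API:

```typescript
import { createHash, randomBytes } from 'node:crypto';

// Derive SSE-C header material from a raw 256-bit key (Node-only sketch).
function makeSSECMaterial(key: Buffer = randomBytes(32)) {
  if (key.length !== 32) throw new RangeError('SSE-C requires a 256-bit key');
  return {
    base64Key: key.toString('base64'),
    base64KeyMd5: createHash('md5').update(key).digest('base64'),
  };
}
```

Store the raw key safely: the provider keeps no copy, so losing it makes the object unreadable.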
+
+ ---
+
+ ## API Reference
+
+ ### Constructor
+
+ | Parameter | Type | Default | Description |
+ | --------------------- | -------------- | ------------------ | ------------------------- |
+ | `accessKeyId` | `string` | required | AWS access key |
+ | `secretAccessKey` | `string` | required | AWS secret key |
+ | `endpoint` | `string` | required | Full S3 endpoint URL |
+ | `region` | `string` | `'auto'` | AWS region |
+ | `minPartSize` | `number` | `8388608` | Multipart threshold (8MB) |
+ | `requestAbortTimeout` | `number` | `undefined` | Request timeout in ms |
+ | `logger` | `Logger` | `undefined` | Custom logger |
+ | `fetch` | `typeof fetch` | `globalThis.fetch` | Custom fetch |
+
+ ### Methods
+
+ | Method | Returns | Description |
+ | ------------------------------------------------------------------ | ------------------------------------------- | ----------------------- |
+ | `bucketExists()` | `Promise<boolean>` | Check if bucket exists |
+ | `createBucket()` | `Promise<boolean>` | Create bucket |
+ | `listObjects(delimiter?, prefix?, maxKeys?)` | `Promise<ListObject[] \| null>` | List all objects |
+ | `listObjectsPaged(delimiter?, prefix?, maxKeys?, token?)` | `Promise<{objects, nextContinuationToken}>` | Paginated list |
+ | `getObject(key, opts?, ssec?)` | `Promise<string \| null>` | Get object as string |
+ | `getObjectArrayBuffer(key, opts?, ssec?)` | `Promise<ArrayBuffer \| null>` | Get as ArrayBuffer |
+ | `getObjectJSON<T>(key, opts?, ssec?)` | `Promise<T \| null>` | Get as parsed JSON |
+ | `getObjectResponse(key, opts?, ssec?)` | `Promise<Response \| null>` | Get full Response |
+ | `getObjectWithETag(key, opts?, ssec?)` | `Promise<{etag, data}>` | Get with ETag |
+ | `getObjectRaw(key, wholeFile?, from?, to?, opts?, ssec?)` | `Promise<Response>` | Range request |
+ | `putObject(key, data, type?, ssec?, headers?, length?)` | `Promise<Response>` | Simple upload |
+ | `putAnyObject(key, data, type?, ssec?, headers?, length?)` | `Promise<Response>` | Smart upload |
+ | `deleteObject(key)` | `Promise<boolean>` | Delete single object |
+ | `deleteObjects(keys)` | `Promise<boolean[]>` | Delete multiple |
+ | `objectExists(key, opts?)` | `Promise<boolean \| null>` | Check existence |
+ | `getEtag(key, opts?, ssec?)` | `Promise<string \| null>` | Get ETag only |
+ | `getContentLength(key, ssec?)` | `Promise<number>` | Get size in bytes |
+ | `copyObject(source, dest, opts?)` | `Promise<CopyObjectResult>` | Server-side copy |
+ | `moveObject(source, dest, opts?)` | `Promise<CopyObjectResult>` | Copy + delete |
+ | `getPresignedUrl(method, key, expiresIn?, queryParams?)` | `Promise<string>` | Generate pre-signed URL |
+ | `getMultipartUploadId(key, type?, ssec?, headers?)` | `Promise<string>` | Init multipart |
+ | `uploadPart(key, uploadId, data, partNum, opts?, ssec?, headers?)` | `Promise<UploadPart>` | Upload part |
+ | `completeMultipartUpload(key, uploadId, parts)` | `Promise<CompleteResult>` | Complete multipart |
+ | `abortMultipartUpload(key, uploadId, ssec?)` | `Promise<object>` | Abort multipart |
+ | `listMultipartUploads(delimiter?, prefix?, method?, opts?)` | `Promise<object>` | List pending |
+ | `sanitizeETag(etag)` | `string` | Remove quotes from ETag |
+
+ ### Utility Functions
+
+ ```typescript
+ import { runInBatches, sanitizeETag } from 's3mini';
+
+ // Run async tasks with concurrency control
+ const results = await runInBatches(
+   tasks: Iterable<() => Promise<unknown>>,
+   batchSize?: number, // Default: 30
+   minIntervalMs?: number // Default: 0 (no delay between batches)
+ );
+
+ // Clean ETag value
+ const clean = sanitizeETag('"abc123"'); // 'abc123'
+ ```
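The batching behavior documented here can be approximated in a few lines. This sketch assumes `runInBatches` runs fixed-size batches sequentially and returns `Promise.allSettled`-shaped results (which the `status === 'fulfilled'` filter earlier in this README suggests); it is an illustration, not the library's source:

```typescript
// Minimal sketch of the batching pattern (assumed semantics, not s3mini
// source): run tasks in fixed-size batches, optionally pausing between
// batches, and collect Promise.allSettled-style results.
async function runBatched<T>(
  tasks: Array<() => Promise<T>>,
  batchSize = 30,
  minIntervalMs = 0,
): Promise<PromiseSettledResult<T>[]> {
  const results: PromiseSettledResult<T>[] = [];
  for (let i = 0; i < tasks.length; i += batchSize) {
    const batch = tasks.slice(i, i + batchSize).map(task => task());
    results.push(...(await Promise.allSettled(batch)));
    // Throttle between batches when requested
    if (minIntervalMs > 0 && i + batchSize < tasks.length) {
      await new Promise(resolve => setTimeout(resolve, minIntervalMs));
    }
  }
  return results;
}
```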
+
+ ---
+
+ ## Error Handling
+
+ ```typescript
+ import { S3ServiceError, S3NetworkError } from 's3mini';
+
+ try {
+   await s3.getObject('missing.txt');
+ } catch (err) {
+   if (err instanceof S3ServiceError) {
+     console.error(`S3 error ${err.status}: ${err.serviceCode}`);
+     console.error('Response body:', err.body);
+   } else if (err instanceof S3NetworkError) {
+     console.error(`Network error: ${err.code}`); // ENOTFOUND, ETIMEDOUT, etc.
+   }
  }

- const uploadResponses = await Promise.all(uploadPromises);
- const parts = uploadResponses.map((response, index) => ({
-   partNumber: index + 1,
-   etag: response.etag,
- }));
- // Complete the multipart upload
- const completeResponse = await s3client.completeMultipartUpload(multipartKey, uploadId, parts);
- const completeEtag = completeResponse.etag;
-
- // List multipart uploads
- // returns object with uploadId and key
- const multipartUploads: object = await s3client.listMultipartUploads();
- // Abort the multipart upload
- const abortResponse = await s3client.abortMultipartUpload(multipartUploads.key, multipartUploads.uploadId);
-
- // Multipart download
- // lets test getObjectRaw with range
- const rangeStart = 2048 * 1024; // 2 MB
- const rangeEnd = 8 * 1024 * 1024 * 2; // 16 MB
- const rangeResponse = await s3client.getObjectRaw(multipartKey, false, rangeStart, rangeEnd);
- const rangeData = await rangeResponse.arrayBuffer();
-
- // Local copyObject example
- const result = await s3.copyObject('report-2024.pdf', 'archive/report-2024.pdf');
  ```

- For more check [USAGE.md](USAGE.md) file, examples and tests.
+ **Error classes:**
+
+ - `S3Error` — Base error class
+ - `S3ServiceError` — S3 returned an error response (4xx, 5xx)
+ - `S3NetworkError` — Network-level failure (DNS, timeout, connection refused)
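Because `S3NetworkError` usually signals a transient failure, these classes pair naturally with a retry wrapper. A sketch — the `isTransient` predicate is where an `err instanceof S3NetworkError` check would go:

```typescript
// Retry wrapper sketch: retries only errors the caller marks transient,
// doubling the backoff delay after each failed attempt.
async function withRetry<T>(
  fn: () => Promise<T>,
  isTransient: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts || !isTransient(err)) throw err;
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

With s3mini this might be called as `withRetry(() => s3.getObject('file.txt'), err => err instanceof S3NetworkError)`.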
+
+ ---
+
+ ## Cloudflare Workers
+
+ Works natively without `nodejs_compat`:
+
+ ```typescript
+ export default {
+   async fetch(request: Request, env: Env): Promise<Response> {
+     const s3 = new S3mini({
+       accessKeyId: env.R2_ACCESS_KEY,
+       secretAccessKey: env.R2_SECRET_KEY,
+       endpoint: env.R2_ENDPOINT,
+     });
+
+     const data = await s3.getObject('hello.txt');
+     return new Response(data);
+   },
+ };
+ ```
+
+ ---
+
+ ## Supported Operations
+
+ | Operation | Method |
+ | ----------------------- | -------------------------------------------------------------------------------------------------------------------------- |
+ | HeadBucket | `bucketExists()` |
+ | CreateBucket | `createBucket()` |
+ | ListObjectsV2 | `listObjects()`, `listObjectsPaged()` |
+ | GetObject | `getObject()`, `getObjectArrayBuffer()`, `getObjectJSON()`, `getObjectResponse()`, `getObjectWithETag()`, `getObjectRaw()` |
+ | PutObject | `putObject()`, `putAnyObject()` |
+ | DeleteObject | `deleteObject()` |
+ | DeleteObjects | `deleteObjects()` |
+ | HeadObject | `objectExists()`, `getEtag()`, `getContentLength()` |
+ | CopyObject | `copyObject()`, `moveObject()` |
+ | CreateMultipartUpload | `getMultipartUploadId()` |
+ | UploadPart | `uploadPart()` |
+ | CompleteMultipartUpload | `completeMultipartUpload()` |
+ | AbortMultipartUpload | `abortMultipartUpload()` |
+ | ListMultipartUploads | `listMultipartUploads()` |
+ | Pre-signed URLs | `getPresignedUrl()` |
+
+ ---

  ## Security Notes