express-storage 1.0.0 → 1.1.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +519 -348
- package/dist/drivers/azure.driver.d.ts +88 -0
- package/dist/drivers/azure.driver.d.ts.map +1 -0
- package/dist/drivers/azure.driver.js +367 -0
- package/dist/drivers/azure.driver.js.map +1 -0
- package/dist/drivers/base.driver.d.ts +125 -24
- package/dist/drivers/base.driver.d.ts.map +1 -1
- package/dist/drivers/base.driver.js +248 -62
- package/dist/drivers/base.driver.js.map +1 -1
- package/dist/drivers/gcs.driver.d.ts +60 -13
- package/dist/drivers/gcs.driver.d.ts.map +1 -1
- package/dist/drivers/gcs.driver.js +242 -41
- package/dist/drivers/gcs.driver.js.map +1 -1
- package/dist/drivers/local.driver.d.ts +89 -12
- package/dist/drivers/local.driver.d.ts.map +1 -1
- package/dist/drivers/local.driver.js +533 -45
- package/dist/drivers/local.driver.js.map +1 -1
- package/dist/drivers/s3.driver.d.ts +64 -13
- package/dist/drivers/s3.driver.d.ts.map +1 -1
- package/dist/drivers/s3.driver.js +269 -41
- package/dist/drivers/s3.driver.js.map +1 -1
- package/dist/factory/driver.factory.d.ts +35 -29
- package/dist/factory/driver.factory.d.ts.map +1 -1
- package/dist/factory/driver.factory.js +119 -59
- package/dist/factory/driver.factory.js.map +1 -1
- package/dist/index.d.ts +23 -22
- package/dist/index.d.ts.map +1 -1
- package/dist/index.js +26 -46
- package/dist/index.js.map +1 -1
- package/dist/storage-manager.d.ts +205 -52
- package/dist/storage-manager.d.ts.map +1 -1
- package/dist/storage-manager.js +644 -73
- package/dist/storage-manager.js.map +1 -1
- package/dist/types/storage.types.d.ts +243 -18
- package/dist/types/storage.types.d.ts.map +1 -1
- package/dist/utils/config.utils.d.ts +28 -4
- package/dist/utils/config.utils.d.ts.map +1 -1
- package/dist/utils/config.utils.js +121 -47
- package/dist/utils/config.utils.js.map +1 -1
- package/dist/utils/file.utils.d.ts +111 -14
- package/dist/utils/file.utils.d.ts.map +1 -1
- package/dist/utils/file.utils.js +215 -32
- package/dist/utils/file.utils.js.map +1 -1
- package/package.json +51 -27
- package/dist/drivers/oci.driver.d.ts +0 -37
- package/dist/drivers/oci.driver.d.ts.map +0 -1
- package/dist/drivers/oci.driver.js +0 -84
- package/dist/drivers/oci.driver.js.map +0 -1
package/README.md
CHANGED
````diff
@@ -1,490 +1,661 @@
 # Express Storage
 
-
+**Secure, unified file uploads for Express.js — one API for all cloud providers.**
 
-
+Stop writing separate upload code for every storage provider. Express Storage gives you a single, secure interface that works with AWS S3, Google Cloud Storage, Azure Blob Storage, and local disk. Switch providers by changing one environment variable. No code changes required.
 
-
-
-
-- **Flexible File Handling**: Support for single and multiple file uploads
-- **Automatic File Organization**: Files stored in month/year directories for local storage
-- **Unique File Naming**: Unix timestamp-based unique filenames with sanitization
-- **Environment-based Configuration**: Simple setup using environment variables
-- **Class-based API**: Clean, object-oriented interface with `StorageManager`
-- **Comprehensive Testing**: Full test coverage with Jest
-- **Error Handling**: Consistent error responses with detailed messages
+[](https://www.npmjs.com/package/express-storage)
+[](https://www.typescriptlang.org/)
+[](https://opensource.org/licenses/MIT)
 
-
+---
+
+## Why Express Storage?
+
+Every application needs file uploads. And every application gets it wrong at first.
+
+You start with local storage, then realize you need S3 for production. You copy-paste upload code from Stack Overflow, then discover it's vulnerable to path traversal attacks. You build presigned URL support, then learn Azure handles it completely differently than AWS.
+
+**Express Storage solves these problems once, so you don't have to.**
+
+### What Makes It Different
+
+- **One API, Four Providers** — Write upload code once. Deploy to any cloud.
+- **Security Built In** — Path traversal prevention, filename sanitization, file validation, and null byte protection come standard.
+- **Presigned URLs Done Right** — Client-side uploads that bypass your server, with proper validation for each provider's quirks.
+- **TypeScript Native** — Full type safety with intelligent autocomplete. No `any` types hiding bugs.
+- **Zero Config Switching** — Change `FILE_DRIVER=local` to `FILE_DRIVER=s3` and you're done.
+
+---
+
+## Quick Start
+
+### Installation
 
 ```bash
 npm install express-storage
 ```
 
-
+### Basic Setup
+
+```typescript
+import express from "express";
+import multer from "multer";
+import { StorageManager } from "express-storage";
+
+const app = express();
+const upload = multer();
+const storage = new StorageManager();
+
+app.post("/upload", upload.single("file"), async (req, res) => {
+  const result = await storage.uploadFile(req.file, {
+    maxSize: 10 * 1024 * 1024, // 10MB limit
+    allowedMimeTypes: ["image/jpeg", "image/png", "application/pdf"],
+  });
 
-
+  if (result.success) {
+    res.json({ url: result.fileUrl });
+  } else {
+    res.status(400).json({ error: result.error });
+  }
+});
+```
 
-
+### Environment Configuration
+
+Create a `.env` file:
 
 ```env
-#
+# Choose your storage provider
 FILE_DRIVER=local
 
-# For local storage
-LOCAL_PATH=
+# For local storage
+LOCAL_PATH=uploads
 
-# For
+# For AWS S3
 FILE_DRIVER=s3
 BUCKET_NAME=my-bucket
 AWS_REGION=us-east-1
-AWS_ACCESS_KEY=your-
-AWS_SECRET_KEY=your-secret
+AWS_ACCESS_KEY=your-key
+AWS_SECRET_KEY=your-secret
+
+# For Google Cloud Storage
+FILE_DRIVER=gcs
+BUCKET_NAME=my-bucket
+GCS_PROJECT_ID=my-project
 
-#
-
+# For Azure Blob Storage
+FILE_DRIVER=azure
+BUCKET_NAME=my-container
+AZURE_CONNECTION_STRING=your-connection-string
 ```
 
-
+That's it. Your upload code stays the same regardless of which provider you choose.
````
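The `.env` samples in the updated README assume those variables are already present in `process.env` before `StorageManager` is constructed; the package itself does not load the file. A minimal sketch using the common `dotenv` package (an assumption on my part, not something express-storage documents) would be:

```typescript
// Load .env into process.env before anything reads FILE_DRIVER, BUCKET_NAME, etc.
import "dotenv/config";
import { StorageManager } from "express-storage";

// With FILE_DRIVER=local (or s3 / gcs / azure) set in .env,
// the manager picks the driver up from the environment here.
const storage = new StorageManager();
```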
````diff
+
+---
+
+## Supported Storage Providers
+
+| Provider         | Direct Upload | Presigned URLs    | Best For                  |
+| ---------------- | ------------- | ----------------- | ------------------------- |
+| **Local Disk**   | `local`       | —                 | Development, small apps   |
+| **AWS S3**       | `s3`          | `s3-presigned`    | Most production apps      |
+| **Google Cloud** | `gcs`         | `gcs-presigned`   | GCP-hosted applications   |
+| **Azure Blob**   | `azure`       | `azure-presigned` | Azure-hosted applications |
+
+---
+
+## Security Features
+
+File uploads are one of the most exploited attack vectors in web applications. Express Storage protects you by default.
+
+### Path Traversal Prevention
+
+Attackers try filenames like `../../../etc/passwd` to escape your upload directory. We block this:
 
 ```typescript
-
-
-
+// These malicious filenames are automatically rejected
+"../secret.txt"; // Blocked: path traversal
+"..\\config.json"; // Blocked: Windows path traversal
+"file\0.txt"; // Blocked: null byte injection
+```
````
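The README lists the rejected patterns but not the check itself. A minimal sketch of that kind of guard (a hypothetical helper written for illustration, not the package's actual implementation) could look like:

```typescript
import path from "node:path";

// Hypothetical guard illustrating the rejections shown above.
function isSafeFilename(name: string): boolean {
  if (name.includes("\0")) return false;                       // null byte injection
  if (name.includes("..")) return false;                       // ../ and ..\ traversal
  if (name.includes("/") || name.includes("\\")) return false; // embedded path separators
  return path.basename(name) === name;                         // must be a bare file name
}

isSafeFilename("../secret.txt"); // false
isSafeFilename("photo.jpg");     // true
```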
````diff
 
-
-const upload = multer();
+### Automatic Filename Sanitization
 
-
-const storage = new StorageManager();
+User-provided filenames can't be trusted. We transform them into safe, unique identifiers:
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-}
+```
+User uploads: "My Photo (1).jpg"
+Stored as:    "1706123456789_a1b2c3d4e5_my_photo_1_.jpg"
+```
+
+The format `{timestamp}_{random}_{sanitized_name}` prevents collisions and removes dangerous characters.
````
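How that `{timestamp}_{random}_{sanitized_name}` shape might be produced, sketched purely for illustration (the package's own sanitizer may differ in detail):

```typescript
import crypto from "node:crypto";

// Illustrative only: mirrors the documented {timestamp}_{random}_{sanitized_name} shape.
function buildStoredName(original: string): string {
  const dot = original.lastIndexOf(".");
  const ext = dot > 0 ? original.slice(dot).toLowerCase() : "";
  const base = (dot > 0 ? original.slice(0, dot) : original)
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "_");                       // collapse anything risky to underscores
  const random = crypto.randomBytes(5).toString("hex"); // 10 hex characters
  return `${Date.now()}_${random}_${base}${ext}`;
}

buildStoredName("My Photo (1).jpg"); // e.g. "1706123456789_3f2a9c01de_my_photo_1_.jpg"
```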
````diff
+
+### File Validation
+
+Validate before processing. Reject before storing.
+
+```typescript
+await storage.uploadFile(file, {
+  maxSize: 5 * 1024 * 1024, // 5MB limit
+  allowedMimeTypes: ["image/jpeg", "image/png"],
+  allowedExtensions: [".jpg", ".png"],
 });
+```
+
+### Presigned URL Security
+
+For S3 and GCS, file constraints are enforced at the URL level — clients physically cannot upload the wrong file type or size. For Azure (which doesn't support URL-level constraints), we validate after upload and automatically delete invalid files.
+
+---
+
+## Presigned URLs: Client-Side Uploads
+
+Large files shouldn't flow through your server. Presigned URLs let clients upload directly to cloud storage.
+
+### The Flow
+
+```
+1. Client → Your Server: "I want to upload photo.jpg (2MB, image/jpeg)"
+2. Your Server → Client: "Here's a presigned URL, valid for 10 minutes"
+3. Client → Cloud Storage: Uploads directly (your server never touches the bytes)
+4. Client → Your Server: "Upload complete, please verify"
+5. Your Server: Confirms file exists, returns permanent URL
+```
+
+### Implementation
+
+```typescript
+// Step 1: Generate upload URL
+app.post("/upload/init", async (req, res) => {
+  const { fileName, contentType, fileSize } = req.body;
+
+  const result = await storage.generateUploadUrl(
+    fileName,
+    contentType,
+    fileSize,
+    "user-uploads", // Optional folder
+  );
 
-// Multiple files upload
-app.post('/upload-multiple', upload.array('files', 10), async (req, res) => {
-  try {
-    const results = await storage.uploadFiles(req.files as Express.Multer.File[]);
-
-    const successful = results.filter(r => r.success);
-    const failed = results.filter(r => !r.success);
-
   res.json({
-
-
-      failed: failed.length,
-      results
+    uploadUrl: result.uploadUrl,
+    reference: result.reference, // Save this for later
   });
-
-
-
+});
+
+// Step 2: Confirm upload
+app.post("/upload/confirm", async (req, res) => {
+  const { reference, expectedContentType, expectedFileSize } = req.body;
+
+  const result = await storage.validateAndConfirmUpload(reference, {
+    expectedContentType,
+    expectedFileSize,
+  });
+
+  if (result.success) {
+    res.json({ viewUrl: result.viewUrl });
+  } else {
+    res.status(400).json({ error: result.error });
+  }
 });
 ```
 
-
+### Provider-Specific Behavior
+
+| Provider | Content-Type Enforced | File Size Enforced | Post-Upload Validation |
+| -------- | --------------------- | ------------------ | ---------------------- |
+| S3       | At URL level          | At URL level       | Optional               |
+| GCS      | At URL level          | At URL level       | Optional               |
+| Azure    | **Not enforced**      | **Not enforced**   | **Required**           |
+
+For Azure, always call `validateAndConfirmUpload()` with expected values. Invalid files are automatically deleted.
 
-
-
-
-| `s3` | Direct | AWS S3 direct upload | `BUCKET_NAME`, `AWS_REGION`, `AWS_ACCESS_KEY`, `AWS_SECRET_KEY` |
-| `s3-presigned` | Presigned | AWS S3 presigned URLs | `BUCKET_NAME`, `AWS_REGION`, `AWS_ACCESS_KEY`, `AWS_SECRET_KEY` |
-| `gcs` | Direct | Google Cloud Storage direct upload | `BUCKET_NAME`, `GCS_PROJECT_ID`, `GCS_CREDENTIALS` |
-| `gcs-presigned` | Presigned | Google Cloud Storage presigned URLs | `BUCKET_NAME`, `GCS_PROJECT_ID`, `GCS_CREDENTIALS` |
-| `oci` | Direct | Oracle Cloud Infrastructure direct upload | `BUCKET_NAME`, `OCI_REGION`, `OCI_CREDENTIALS` |
-| `oci-presigned` | Presigned | Oracle Cloud Infrastructure presigned URLs | `BUCKET_NAME`, `OCI_REGION`, `OCI_CREDENTIALS` |
+---
+
+## Large File Uploads
 
-
+For files larger than 100MB, we recommend using **presigned URLs** instead of direct server uploads. Here's why:
 
-###
-- `FILE_DRIVER` (required): Storage driver to use
-- `BUCKET_NAME`: Cloud storage bucket name
-- `LOCAL_PATH`: Local storage directory path (default: `public/express-storage`)
-- `PRESIGNED_URL_EXPIRY`: Presigned URL expiry in seconds (default: 600)
+### Memory Efficiency
 
-
-- `AWS_REGION`: AWS region (e.g., `us-east-1`)
-- `AWS_ACCESS_KEY`: AWS access key ID
-- `AWS_SECRET_KEY`: AWS secret access key
+When you upload through your server, the entire file must be buffered in memory (or stored temporarily on disk). For a 500MB video file, that's 500MB of RAM per concurrent upload. With presigned URLs, the file goes directly to cloud storage — your server only handles small JSON requests.
 
-###
-- `GCS_PROJECT_ID`: Google Cloud project ID
-- `GCS_CREDENTIALS`: Path to service account JSON file
+### Automatic Streaming
 
-
-- `OCI_REGION`: OCI region (e.g., `us-ashburn-1`)
-- `OCI_CREDENTIALS`: Path to OCI credentials file
+For files that must go through your server, Express Storage automatically uses streaming uploads for files larger than 100MB:
 
-
+- **S3**: Uses multipart upload with 10MB chunks
+- **GCS**: Uses resumable uploads with streaming
+- **Azure**: Uses block upload with streaming
 
-
+This happens transparently — you don't need to change your code.
````
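For reference, the kind of multipart, streamed S3 upload described above looks roughly like this when written directly against the AWS SDK's `@aws-sdk/lib-storage` helper. This is a sketch of the general technique, not express-storage's internal driver code:

```typescript
import { createReadStream } from "node:fs";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

// Streams the file in parts instead of buffering it all in memory.
const upload = new Upload({
  client: new S3Client({ region: "us-east-1" }),
  params: {
    Bucket: "my-bucket",
    Key: "videos/large-video.mp4",
    Body: createReadStream("./large-video.mp4"),
  },
  partSize: 10 * 1024 * 1024, // 10MB chunks, matching the behaviour described above
  queueSize: 4,               // number of parts uploaded concurrently
});

await upload.done();
```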
````diff
 
-
+### Recommended Approach for Large Files
 
-#### Constructor
 ```typescript
-
+// Frontend: Request presigned URL
+const { uploadUrl, reference } = await fetch("/api/upload/init", {
+  method: "POST",
+  body: JSON.stringify({
+    fileName: "large-video.mp4",
+    contentType: "video/mp4",
+    fileSize: 524288000, // 500MB
+  }),
+}).then((r) => r.json());
+
+// Frontend: Upload directly to cloud (bypasses your server!)
+await fetch(uploadUrl, {
+  method: "PUT",
+  body: file,
+  headers: { "Content-Type": "video/mp4" },
+});
+
+// Frontend: Confirm upload
+await fetch("/api/upload/confirm", {
+  method: "POST",
+  body: JSON.stringify({ reference }),
+});
 ```
 
-
+### Size Limits
 
-
-
-
-
+| Scenario                       | Recommended Limit | Reason                         |
+| ------------------------------ | ----------------- | ------------------------------ |
+| Direct upload (memory storage) | < 100MB           | Node.js memory constraints     |
+| Direct upload (disk storage)   | < 500MB           | Temp file management           |
+| Presigned URL upload           | 5GB+              | Limited only by cloud provider |
````
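The first two rows of that table concern how multer receives the file before express-storage ever sees it. Typical multer setups for each case, shown here as an assumed example using standard multer options rather than anything the package configures for you:

```typescript
import os from "node:os";
import multer from "multer";

// Memory storage: the whole file is buffered in RAM, so keep the limit small.
const memoryUpload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 100 * 1024 * 1024 }, // ~100MB, per the table above
});

// Disk storage: multer spills to a temp file instead of RAM, so larger limits are tolerable.
const diskUpload = multer({
  dest: os.tmpdir(),
  limits: { fileSize: 500 * 1024 * 1024 }, // ~500MB
});
```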
````diff
 
-
-const results = await storage.uploadFiles(files: Express.Multer.File[]): Promise<FileUploadResult[]>
+---
 
-
-const result = await storage.upload(input: FileInput): Promise<FileUploadResult | FileUploadResult[]>
-```
+## API Reference
 
-
-```typescript
-// Generate upload URL
-const result = await storage.generateUploadUrl(fileName: string): Promise<PresignedUrlResult>
+### StorageManager
 
-
-const result = await storage.generateViewUrl(fileName: string): Promise<PresignedUrlResult>
+The main class you'll interact with.
 
-
-
+```typescript
+import { StorageManager } from "express-storage";
 
-//
-const
+// Use environment variables
+const storage = new StorageManager();
+
+// Or configure programmatically
+const storage = new StorageManager({
+  driver: "s3",
+  credentials: {
+    bucketName: "my-bucket",
+    awsRegion: "us-east-1",
+    maxFileSize: 50 * 1024 * 1024, // 50MB
+  },
+  logger: console, // Optional: enable debug logging
+});
 ```
 
-
+### File Upload Methods
+
 ```typescript
-//
-const
+// Single file
+const result = await storage.uploadFile(file, validation?, options?);
 
-//
-const results = await storage.
+// Multiple files (processed in parallel with concurrency limits)
+const results = await storage.uploadFiles(files, validation?, options?);
+
+// Generic upload (auto-detects single vs multiple)
+const result = await storage.upload(input, validation?, options?);
 ```
 
-
+### Presigned URL Methods
+
 ```typescript
-//
-const
+// Generate upload URL with constraints
+const result = await storage.generateUploadUrl(fileName, contentType?, fileSize?, folder?);
+
+// Generate view URL for existing file
+const result = await storage.generateViewUrl(reference);
 
-//
-const
+// Validate upload (required for Azure, recommended for all)
+const result = await storage.validateAndConfirmUpload(reference, options?);
 
-//
-const
+// Batch operations
+const results = await storage.generateUploadUrls(files, folder?);
+const results = await storage.generateViewUrls(references);
 ```
 
-###
+### File Management
 
 ```typescript
-//
-const
-  driver: 's3',
-  bucketName: 'my-bucket',
-  awsRegion: 'us-east-1'
-});
+// Delete single file
+const success = await storage.deleteFile(reference);
 
-//
-const
+// Delete multiple files
+const results = await storage.deleteFiles(references);
 
-//
-
+// List files with pagination
+const result = await storage.listFiles(prefix?, maxResults?, continuationToken?);
 ```
 
-###
+### Upload Options
 
 ```typescript
-
-
-
-
-
-
-
-
-
-
-
-// Use default storage manager
-const result = await uploadFile(file);
-const results = await uploadFiles(files);
-const urlResult = await generateUploadUrl('filename.jpg');
-const success = await deleteFile('filename.jpg');
-
-// Initialize custom storage manager
-const storage = initializeStorageManager({
-  driver: 'local',
-  localPath: 'uploads'
+interface UploadOptions {
+  contentType?: string; // Override detected type
+  metadata?: Record<string, string>; // Custom metadata
+  cacheControl?: string; // e.g., 'max-age=31536000'
+  contentDisposition?: string; // e.g., 'attachment; filename="doc.pdf"'
+}
+
+// Example: Upload with caching headers
+await storage.uploadFile(file, undefined, {
+  cacheControl: "public, max-age=31536000",
+  metadata: { uploadedBy: "user-123" },
 });
 ```
 
-
+### Validation Options
 
-
-
-
-
-
-
-│   ├── 1703123456_image.jpg
-│   └── 1703123457_document.pdf
-├── february/
-│   └── 2024/
-│       └── 1705800000_video.mp4
-└── ...
+```typescript
+interface FileValidationOptions {
+  maxSize?: number; // Maximum file size in bytes
+  allowedMimeTypes?: string[]; // e.g., ['image/jpeg', 'image/png']
+  allowedExtensions?: string[]; // e.g., ['.jpg', '.png']
+}
 ```
 
-
-
-
-
-
-
-
-
+---
+
+## Environment Variables
+
+### Core Settings
+
+| Variable               | Description                         | Default                  |
+| ---------------------- | ----------------------------------- | ------------------------ |
+| `FILE_DRIVER`          | Storage driver to use               | `local`                  |
+| `BUCKET_NAME`          | Cloud storage bucket/container name | —                        |
+| `BUCKET_PATH`          | Default folder path within bucket   | `""` (root)              |
+| `LOCAL_PATH`           | Directory for local storage         | `public/express-storage` |
+| `PRESIGNED_URL_EXPIRY` | URL validity in seconds             | `600` (10 min)           |
+| `MAX_FILE_SIZE`        | Maximum upload size in bytes        | `5368709120` (5GB)       |
+
+### AWS S3
 
-
+| Variable         | Description                                     |
+| ---------------- | ----------------------------------------------- |
+| `AWS_REGION`     | AWS region (e.g., `us-east-1`)                  |
+| `AWS_ACCESS_KEY` | Access key ID (optional if using IAM roles)     |
+| `AWS_SECRET_KEY` | Secret access key (optional if using IAM roles) |
 
-
+### Google Cloud Storage
+
+| Variable          | Description                                      |
+| ----------------- | ------------------------------------------------ |
+| `GCS_PROJECT_ID`  | Google Cloud project ID                          |
+| `GCS_CREDENTIALS` | Path to service account JSON (optional with ADC) |
+
+### Azure Blob Storage
+
+| Variable                  | Description                          |
+| ------------------------- | ------------------------------------ |
+| `AZURE_CONNECTION_STRING` | Full connection string (recommended) |
+| `AZURE_ACCOUNT_NAME`      | Storage account name (alternative)   |
+| `AZURE_ACCOUNT_KEY`       | Storage account key (alternative)    |
+
+**Note**: Azure uses `BUCKET_NAME` for the container name (same as S3/GCS).
+
+---
+
+## Utilities
+
+Express Storage includes battle-tested utilities you can use directly.
+
+### Retry with Exponential Backoff
 
 ```typescript
-
-const uploadResult = await storage.generateUploadUrl('my-file.jpg');
-if (uploadResult.success) {
-  // Client can use uploadResult.uploadUrl to upload directly
-  console.log(uploadResult.uploadUrl);
-}
+import { withRetry } from "express-storage";
 
-
-
-
-
-
-}
+const result = await withRetry(() => storage.uploadFile(file), {
+  maxAttempts: 3,
+  baseDelay: 1000,
+  maxDelay: 10000,
+  exponentialBackoff: true,
+});
 ```
 
-
+### File Type Helpers
 
-
+```typescript
+import {
+  isImageFile,
+  isDocumentFile,
+  getFileExtension,
+  formatFileSize,
+} from "express-storage";
+
+isImageFile("image/jpeg"); // true
+isDocumentFile("application/pdf"); // true
+getFileExtension("photo.jpg"); // '.jpg'
+formatFileSize(1048576); // '1 MB'
+```
+
+### Custom Logging
 
 ```typescript
-import { StorageManager } from
-
-
-
-
-
-
-
-
-
-});
+import { StorageManager, Logger } from "express-storage";
+
+const logger: Logger = {
+  debug: (msg, ...args) => console.debug(`[Storage] ${msg}`, ...args),
+  info: (msg, ...args) => console.info(`[Storage] ${msg}`, ...args),
+  warn: (msg, ...args) => console.warn(`[Storage] ${msg}`, ...args),
+  error: (msg, ...args) => console.error(`[Storage] ${msg}`, ...args),
+};
+
+const storage = new StorageManager({ driver: "s3", logger });
 ```
 
-
+---
+
+## Real-World Examples
+
+### Profile Picture Upload
 
 ```typescript
-app.post(
-
-
-
+app.post("/users/:id/avatar", upload.single("avatar"), async (req, res) => {
+  const result = await storage.uploadFile(
+    req.file,
+    {
+      maxSize: 2 * 1024 * 1024, // 2MB
+      allowedMimeTypes: ["image/jpeg", "image/png", "image/webp"],
+    },
+    {
+      cacheControl: "public, max-age=86400",
+      metadata: { userId: req.params.id },
+    },
+  );
+
   if (result.success) {
-
-
-      fileName: result.fileName,
-      fileUrl: result.fileUrl
-    });
+    await db.users.update(req.params.id, { avatarUrl: result.fileUrl });
+    res.json({ avatarUrl: result.fileUrl });
   } else {
-
-      success: false,
-      error: result.error
-    });
+    res.status(400).json({ error: result.error });
   }
-  } catch (error) {
-    res.status(500).json({
-      success: false,
-      error: error instanceof Error ? error.message : 'Unknown error'
-    });
-  }
 });
 ```
 
-###
+### Document Upload with Presigned URLs
 
 ```typescript
-
-
-
-
-
-
-
-
+// Frontend requests upload URL
+app.post("/documents/request-upload", async (req, res) => {
+  const { fileName, fileSize } = req.body;
+
+  const result = await storage.generateUploadUrl(
+    fileName,
+    "application/pdf",
+    fileSize,
+    `documents/${req.user.id}`,
+  );
+
+  // Store pending upload in database
+  await db.documents.create({
+    reference: result.reference,
+    userId: req.user.id,
+    status: "pending",
   });
-
-
-
-
-  if (!allowedTypes.includes(file.mimetype)) {
-    return res.status(400).json({
-      success: false,
-      error: 'Invalid file type. Only JPEG, PNG, and GIF allowed.'
+
+  res.json({
+    uploadUrl: result.uploadUrl,
+    reference: result.reference,
   });
-  }
-
-  const result = await storage.uploadFile(file);
-  res.json(result);
 });
-```
 
-
+// Frontend confirms upload complete
+app.post("/documents/confirm-upload", async (req, res) => {
+  const { reference } = req.body;
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-  res.json(summary);
+  const result = await storage.validateAndConfirmUpload(reference, {
+    expectedContentType: "application/pdf",
+  });
+
+  if (result.success) {
+    await db.documents.update(
+      { reference },
+      {
+        status: "uploaded",
+        size: result.actualFileSize,
+      },
+    );
+    res.json({ success: true, viewUrl: result.viewUrl });
+  } else {
+    await db.documents.delete({ reference });
+    res.status(400).json({ error: result.error });
+  }
 });
 ```
 
-
+### Bulk File Upload
 
-
+```typescript
+app.post("/gallery/upload", upload.array("photos", 20), async (req, res) => {
+  const files = req.files as Express.Multer.File[];
 
-
-
-
+  const results = await storage.uploadFiles(files, {
+    maxSize: 10 * 1024 * 1024,
+    allowedMimeTypes: ["image/jpeg", "image/png"],
+  });
 
-
-
+  const successful = results.filter((r) => r.success);
+  const failed = results.filter((r) => !r.success);
 
-
-
+  res.json({
+    uploaded: successful.length,
+    failed: failed.length,
+    files: successful.map((r) => ({
+      fileName: r.fileName,
+      url: r.fileUrl,
+    })),
+    errors: failed.map((r) => r.error),
+  });
+});
 ```
 
-
+---
 
-
-- Node.js >= 16.0.0
-- TypeScript >= 5.1.6
+## Migrating Between Providers
 
-
+Moving from local development to cloud production? Or switching cloud providers? Here's how.
 
-
-# Install dependencies
-npm install
+### Local to S3
 
-
-
+```env
+# Before (development)
+FILE_DRIVER=local
+LOCAL_PATH=uploads
 
-#
-
+# After (production)
+FILE_DRIVER=s3
+BUCKET_NAME=my-app-uploads
+AWS_REGION=us-east-1
+```
 
-
-npm run clean
+Your code stays exactly the same. Files uploaded before migration remain in their original location — you'll need to migrate existing files separately if needed.
 
-
-npm run type-check
+### S3 to Azure
 
-
-
-
+```env
+# Before
+FILE_DRIVER=s3
+BUCKET_NAME=my-bucket
+AWS_REGION=us-east-1
 
-#
-
+# After
+FILE_DRIVER=azure
+BUCKET_NAME=my-container
+AZURE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=...
 ```
 
-
+**Important**: If using presigned URLs, remember that Azure requires post-upload validation. Add `validateAndConfirmUpload()` calls to your confirmation endpoints.
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+---
+
+## TypeScript Support
+
+Express Storage is written in TypeScript and exports all types:
+
+```typescript
+import {
+  StorageManager,
+  StorageDriver,
+  FileUploadResult,
+  PresignedUrlResult,
+  FileValidationOptions,
+  UploadOptions,
+  Logger,
+} from "express-storage";
+
+// Full autocomplete and type checking
+const result: FileUploadResult = await storage.uploadFile(file);
+
+if (result.success) {
+  console.log(result.fileName); // TypeScript knows this exists
+  console.log(result.fileUrl); // TypeScript knows this exists
+}
 ```
 
-
+---
 
-
-2. Create a feature branch (`git checkout -b feature/amazing-feature`)
-3. Commit your changes (`git commit -m 'Add amazing feature'`)
-4. Push to the branch (`git push origin feature/amazing-feature`)
-5. Open a Pull Request
+## Contributing
 
-
+Contributions are welcome! Please read our contributing guidelines before submitting a pull request.
 
-
-
-
-- Ensure all tests pass before submitting PR
+```bash
+# Clone the repository
+git clone https://github.com/th3hero/express-storage.git
 
-
+# Install dependencies
+npm install
+
+# Run in development mode
+npm run dev
 
-
+# Build for production
+npm run build
 
-
+# Run linting
+npm run lint
+```
 
-
-
-
+---
+
+## License
+
+MIT License — use it however you want.
+
+---
 
-##
+## Support
 
-
--
-- Support for local, S3, GCS, and OCI storage
-- Presigned URL generation
-- TypeScript-first implementation
-- Comprehensive test coverage
+- **Issues**: [GitHub Issues](https://github.com/th3hero/express-storage/issues)
+- **Author**: Alok Kumar ([@th3hero](https://github.com/th3hero))
 
 ---
 
-**Made
+**Made for developers who are tired of writing upload code from scratch.**
````