shrine 2.12.0 → 2.13.0

@@ -66,16 +66,32 @@ cached file in form of a JSON string, and assigns the cached result to record's
 attacher.assign(io)

 # writes the given cached file to the data column
- attacher.assign '{
-   "id": "9260ea09d8effd.jpg",
-   "storage": "cache",
-   "metadata": { ... }
- }'
+ attacher.assign('{"id":"9260ea09d8effd.jpg","storage":"cache","metadata":{ ... }}')
+ ```
+
+ When assigning an IO object, any additional options passed to `#assign` will be
+ forwarded to `Shrine#upload`. This allows you to do things like overriding
+ metadata, setting upload location, or passing upload options:
+
+ ```rb
+ attacher.assign io,
+   metadata: { "filename" => "myfile.txt" },
+   location: "custom/location",
+   upload_options: { acl: "public-read" }
+ ```
+
+ If you're attaching a cached file and want to override its metadata before
+ assignment, you can do it like so:
+
+ ```rb
+ cached_file = Shrine.uploaded_file('{"id":"9260ea09d8effd.jpg","storage":"cache","metadata":{ ... }}')
+ cached_file.metadata["filename"] = "myfile.txt"
+
+ attacher.assign(cached_file.to_json)
 ```

 For security reasons `#assign` doesn't accept files uploaded to permanent
- storage, but you can also use `#set` to attach any `Shrine::UploadedFile`
- object.
+ storage, but you can use `#set` to attach any `Shrine::UploadedFile` object.

 ```rb
 uploaded_file #=> #<Shrine::UploadedFile>
@@ -160,9 +176,20 @@ The `:action` parameter is optional; it can be used for triggering a certain
 processing block, and it is also automatically printed by the `logging` plugin
 to aid in debugging.

- Internally this calls `#swap`, which will update the record with any uploaded
- file, but will reload the record to check if the current attachment hasn't
- changed (if the `backgrounding` plugin is loaded).
+ As a matter of fact, all additional options passed to `#promote` will be
+ forwarded to `Shrine#upload`. So unless you're generating versions, you can do
+ things like override metadata, set upload location, or pass upload options:
+
+ ```rb
+ attacher.promote cached_file,
+   metadata: { "filename" => "myfile.txt" },
+   location: "custom/location",
+   upload_options: { acl: "public-read" }
+ ```
+
+ Internally `#promote` calls `#swap`, which will update the record with any
+ uploaded file, but will reload the record to check if the current attachment
+ hasn't changed (if the `backgrounding` plugin is loaded).

 ```rb
 attacher.swap(uploaded_file)
@@ -218,19 +245,29 @@ Normally you can upload and delete directly by using the uploader.
 ```rb
 uploader = ImageUploader.new(:store)
 uploaded_file = uploader.upload(image) # uploads the file to `:store` storage
- uploader.delete(uploaded_file) # deletes the file uploaded to `:store`
+ uploader.delete(uploaded_file) # deletes the uploaded file from `:store`
 ```

- The attacher has methods for "caching", "storing" and "deleting" files, which
- delegate to these uploader methods, but also pass in the `#context`:
+ But the attacher also has wrapper methods for uploading and deleting, which
+ automatically pass in the attacher `#context` (which includes `:record` and
+ `:name`):

 ```rb
- cached_file = attacher.cache!(image) # delegates to `Shrine#upload`
- stored_file = attacher.store!(image) # delegates to `Shrine#upload`
- attacher.delete!(stored_file) # delegates to `Shrine#delete`
+ attacher.cache!(file) # uploads file to temporary storage
+ # => #<Shrine::UploadedFile: @data={"storage" => "cache", ...}>
+ attacher.store!(file) # uploads file to permanent storage
+ # => #<Shrine::UploadedFile: @data={"storage" => "store", ...}>
+ attacher.delete!(uploaded_file) # deletes uploaded file from storage
 ```

- The `#cache!` and `#store!` only upload the file to the storage, they don't
- write to record's data column.
+ These methods only upload/delete files; they don't write to the record's data
+ column. You can also pass additional options for `Shrine#upload` and
+ `Shrine#delete`:
+
+ ```rb
+ attacher.cache!(file, upload_options: { acl: "public-read" })
+ attacher.store!(file, location: "custom/location")
+ attacher.delete!(uploaded_file, foo: "bar")
+ ```

 [file migrations]: https://shrinerb.com/rdoc/files/doc/migrating_storage_md.html
@@ -1,5 +1,7 @@
 # The Design of Shrine

+ *If you want an in-depth walkthrough of the Shrine codebase, see the
+ [Notes on study of shrine implementation] article by Jonathan Rochkind.*
+
 There are five main types of objects that you deal with in Shrine:

 * Storage
@@ -56,8 +58,9 @@ Storages are typically not used directly, but through `Shrine`.

 ## `Shrine`

- A `Shrine` object (also called an "uploader") acts as a wrapper around a
- storage. First the storage needs to be registered under a name:
+ A `Shrine` object (also called an "uploader") is essentially a wrapper around
+ the `#upload` storage method. First the storage needs to be registered under a
+ name:

 ```rb
 Shrine.storages[:file_system] = Shrine::Storage::FileSystem.new("uploads")
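For context, a minimal sketch of using such an uploader once the storage is registered (the file name is hypothetical):

```rb
# wrap the registered storage in an uploader and upload a file through it
uploader = Shrine.new(:file_system)
uploaded_file = uploader.upload(File.open("image.jpg", "rb")) # calls Storage#upload internally
uploaded_file #=> #<Shrine::UploadedFile>
```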
@@ -76,12 +79,17 @@ following:

 * generates a unique location
 * extracts metadata
- * uploads the file
+ * uploads the file (calls `Storage#upload`)
 * closes the file
 * creates a `Shrine::UploadedFile` from the data

- In applications it's common to create subclasses of `Shrine`, in order to allow
- having different uploading logic for different types of files.
+ The `Shrine` class and its subclasses are also used for loading plugins that
+ extend all core classes. Each `Shrine` subclass has its own subclass of each
+ of the core classes (`Shrine::UploadedFile`, `Shrine::Attacher`, and
+ `Shrine::Attachment`), which makes it possible to have different `Shrine`
+ subclasses with differently customized attachment logic. See the [Creating a
+ New Plugin] guide and the [Plugin system of Sequel and Roda] article for more
+ details on the design of Shrine's plugin system.
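A minimal sketch of the subclassing behaviour described above (the uploader name is hypothetical):

```rb
class ImageUploader < Shrine
  plugin :determine_mime_type # extends only ImageUploader and its own core classes
end

# each Shrine subclass carries its own copies of the core classes
ImageUploader::UploadedFile.superclass #=> Shrine::UploadedFile
ImageUploader::Attacher.superclass     #=> Shrine::Attacher
```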

 ## `Shrine::UploadedFile`

@@ -207,3 +215,6 @@ automatically:
 destroyed

 [Using Attacher]: https://shrinerb.com/rdoc/files/doc/attacher_md.html
+ [Notes on study of shrine implementation]: https://bibwild.wordpress.com/2018/09/12/notes-on-study-of-shrine-implementation/
+ [Creating a New Plugin]: https://shrinerb.com/rdoc/files/doc/creating_plugins_md.html
+ [Plugin system of Sequel and Roda]: https://twin.github.io/the-plugin-system-of-sequel-and-roda/
@@ -24,7 +24,7 @@ storage service is beneficial for several reasons:
 times out.

 To start, let's set both temporary and permanent storage to S3, with the
- temporary storage uploading to the `cache/` directory:
+ temporary storage uploading to the `cache/` prefix:

 ```rb
 # Gemfile
@@ -47,39 +47,41 @@ Shrine.storages = {
 }
 ```

- ## Enabling CORS
-
- In order to be able upload files directly to your S3 bucket, you need enable
- CORS. You can do that from the AWS S3 Console by going to your bucket, clicking
- on the "Permissions" tab, then on "CORS Configuration", and following the
- [guide for configuring CORS][CORS guide].
-
- Alternatively you can configure CORS via an [API call][CORS API]:
-
- ```rb
- require "aws-sdk-s3"
-
- client = Aws::S3::Client.new(
-   access_key_id:     "<YOUR KEY>",
-   secret_access_key: "<YOUR SECRET>",
-   region:            "<REGION>",
- )
-
- client.put_bucket_cors(
-   bucket: "<YOUR BUCKET>",
-   cors_configuration: {
-     cors_rules: [{
-       allowed_headers: ["Authorization", "Content-Type", "Origin"],
-       allowed_methods: ["GET", "POST", "PUT"],
-       allowed_origins: ["*"],
-       max_age_seconds: 3000,
-     }]
-   }
- )
+ ## Bucket CORS configuration
+
+ In order to be able to upload files directly to your S3 bucket, you'll need to
+ update your bucket's CORS configuration, as public uploads are not allowed by
+ default. You can do that from the AWS S3 Console by going to your bucket,
+ clicking on the "Permissions" tab and then on "CORS Configuration".
+
+ If you're using [Uppy], this is the recommended CORS configuration for the
+ [AWS S3 plugin][uppy aws s3] that should work for both POST and PUT uploads:
+
+ ```xml
+ <?xml version="1.0" encoding="UTF-8"?>
+ <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+   <CORSRule>
+     <AllowedOrigin>https://my-app.com</AllowedOrigin>
+     <AllowedMethod>GET</AllowedMethod>
+     <AllowedMethod>POST</AllowedMethod>
+     <AllowedMethod>PUT</AllowedMethod>
+     <MaxAgeSeconds>3000</MaxAgeSeconds>
+     <AllowedHeader>Authorization</AllowedHeader>
+     <AllowedHeader>x-amz-date</AllowedHeader>
+     <AllowedHeader>x-amz-content-sha256</AllowedHeader>
+     <AllowedHeader>content-type</AllowedHeader>
+   </CORSRule>
+   <CORSRule>
+     <AllowedOrigin>*</AllowedOrigin>
+     <AllowedMethod>GET</AllowedMethod>
+     <MaxAgeSeconds>3000</MaxAgeSeconds>
+   </CORSRule>
+ </CORSConfiguration>
 ```

- Note that due to DNS propagation it may take some time for the CORS update to
- be applied.
+ Replace `https://my-app.com` with the URL to your app (in development you can
+ set this to `*`). Once you've hit "Save", it may take some time for the
+ new CORS settings to be applied.

 ## Strategy A (dynamic)
@@ -87,13 +89,28 @@ be applied.
 * Single or multiple file uploads
 * Some JavaScript needed

- When the user selects a file in the form, on the client-side we asynchronously
- fetch the presign information from the server, and use this information to
- upload the file to S3. The `presign_endpoint` plugin gives us this presign
- route, so we just need to mount it in our application:
+ When the user selects a file in the form, on the client side we asynchronously
+ fetch the upload parameters from the server, and use them to upload the file
+ to S3. It's recommended to use [Uppy] for client-side uploads.
+
+ The `presign_endpoint` plugin provides a Rack application that generates these
+ upload parameters, which we can just mount in our application. We'll make our
+ presign endpoint also use the additional `type` and `filename` query parameters
+ to set `Content-Type` and `Content-Disposition` for the uploaded file, as well
+ as limit the upload size to 10 MB (see [`Shrine::Storage::S3#presign`] for the
+ list of available options).

 ```rb
- Shrine.plugin :presign_endpoint, presign_options: { method: :put }
+ Shrine.plugin :presign_endpoint, presign_options: -> (request) {
+   filename = request.params["filename"]
+   type     = request.params["type"]
+
+   {
+     content_disposition: "inline; filename=\"#{filename}\"", # set download filename
+     content_type: type, # set content type (required if using DigitalOcean Spaces)
+     content_length_range: 0..(10*1024*1024), # limit upload size to 10 MB
+   }
+ }
 ```
 ```rb
 # config.ru (Rack)
@@ -110,27 +127,32 @@ end
 ```

 The above will create a `GET /presign` route, which internally calls
- [`Shrine::Storage::S3#presign`], returning the HTTP verb (PUT) and the S3 URL
- to which the file should be uploaded, along with the required parameters (will
- only be present for POST presigns) and request headers.
+ [`Shrine::Storage::S3#presign`] to return the HTTP verb (POST) and the S3 URL
+ to which the file should be uploaded, along with the required POST parameters
+ and request headers.

 ```rb
 # GET /presign
 {
-   "method": "put",
-   "url": "https://my-bucket.s3.eu-central-1.amazonaws.com/cache/my-key?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIMDH2HTSB3RKB4WQ%2F20180424%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Date=20180424T212022Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=1036b9cefe52f0b46c1f257f6817fc3c55cd8d9004f87a38cf86177762359375",
-   "fields": {},
+   "method": "post",
+   "url": "https://my-bucket.s3-eu-west-1.amazonaws.com",
+   "fields": {
+     "key": "b7d575850ba61b44c8a9ff889dfdb14d88cdc25f8dd121004c8",
+     "policy": "eyJleHBpcmF0aW9uIjoiMjAxNS0QwMToxMToyOVoiLCJjb25kaXRpb25zIjpbeyJidWNrZXQiOiJ...",
+     "x-amz-credential": "AKIAIJF55TMZYT6Q/20151024/eu-west-1/s3/aws4_request",
+     "x-amz-algorithm": "AWS4-HMAC-SHA256",
+     "x-amz-date": "20151024T001129Z",
+     "x-amz-signature": "c1eb634f83f96b69bd675f535b3ff15ae184b102fcba51e4db5f4959b4ae26f4"
+   },
   "headers": {}
 }
 ```

- On the client side you can make it so that, when the user selects a file,
- upload parameters are fetched from presign endpoint, and are used to upload
- the selected file directly to S3. It's recommended to use [Uppy] for this.
-
- Once the file has been uploaded, you can generate a JSON representation of the
- uploaded file on the client-side, and write it to the hidden attachment field
- (or send it directly in an AJAX request).
+ Uppy's [AWS S3][uppy aws s3] plugin would then make a request to this endpoint
+ and use these parameters to upload the file directly to S3. Once the file has
+ been uploaded, you can generate a JSON representation of the uploaded file on
+ the client side, and write it to the hidden attachment field (or send it
+ directly in an AJAX request).

 ```rb
 {
@@ -148,11 +170,11 @@ uploaded file on the client-side, and write it to the hidden attachment field
 * `storage` – direct uploads typically use the `:cache` storage
 * `metadata` – hash of metadata extracted from the file

- Once submitted this JSON will then be assigned to the attachment attribute
- instead of the raw file. See [this walkthrough][direct S3 upload walkthrough]
- for adding dynamic direct S3 uploads from scratch using [Uppy], as well as the
- [Roda][roda demo] or [Rails][rails demo] demo app for a complete example of
- multiple direct S3 uploads.
+ Once the form is submitted, this JSON data will then be assigned to the
+ attachment attribute instead of the raw file. See [this walkthrough][direct S3
+ upload walkthrough] for adding dynamic direct S3 uploads from scratch, as well
+ as the [Roda][roda demo] / [Rails][rails demo] demo app for a complete example
+ of multiple direct S3 uploads.
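For illustration, a minimal sketch of what that server-side assignment might look like (the model and param names are hypothetical):

```rb
# the hidden "image" field contains the cached file JSON generated on the client
album = Album.new
album.image = params["album"]["image"] # '{"id":"...","storage":"cache","metadata":{...}}'
album.save
```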

 ## Strategy B (static)

@@ -166,13 +188,13 @@ generating the form can use [`Shrine::Storage::S3#presign`], which returns URL
 and form fields that should be used for the upload.

 ```rb
- presigned_data = Shrine.storages[:cache].presign(
+ presign_data = Shrine.storages[:cache].presign(
   SecureRandom.hex,
   success_action_redirect: new_album_url
 )

- form action: presigned_data[:url], method: "post", enctype: "multipart/form-data" do |f|
-   presigned_data[:fields].each do |name, value|
+ form action: presign_data[:url], method: "post", enctype: "multipart/form-data" do |f|
+   presign_data[:fields].each do |name, value|
     f.input :hidden, name: name, value: value
   end
   f.input :file, name: "file"
@@ -186,7 +208,7 @@ builder to generate this form, you might need to also tell S3 to ignore the
 additional `utf8` and `authenticity_token` fields that Rails generates:

 ```rb
- presigned_data = Shrine.storages[:cache].presign(
+ presign_data = Shrine.storages[:cache].presign(
   SecureRandom.hex,
   allow_any: ["utf8", "authenticity_token"],
   success_action_redirect: new_album_url
@@ -202,7 +224,7 @@ GET parameters in the URL, out of which we only need the `key` parameter:
 ```rb
 cached_file = {
   storage: "cache",
-   id: request.params[:key][/cache\/(.+)/, 1], # we subtract the storage prefix
+   id: params["key"][/^cache\/(.+)/, 1], # strip the storage prefix
   metadata: {},
 }

@@ -212,6 +234,17 @@ form @album, action: "/albums" do |f|
 end
 ```

+ ## Shrine metadata
+
+ When attaching a file that was uploaded directly to S3, by default Shrine will
+ not extract metadata from the file; instead it will simply copy over any
+ metadata assigned on the client side. This is the default behaviour because
+ extracting metadata requires retrieving file content, which in this case means
+ additional HTTP requests.
+
+ See [this section][metadata direct uploads] for the rationale and instructions
+ on how to opt in.
+
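For reference, opting in can be as simple as loading the `restore_cached_data` plugin (covered in the section this release removes further down), which re-extracts metadata from the cached file when it's assigned:

```rb
# re-extract metadata server-side on assignment; this opens the
# directly-uploaded S3 object for reading, adding HTTP requests
Shrine.plugin :restore_cached_data
```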
 ## Object data

 When the cached S3 object is copied to permanent storage, the destination S3
@@ -234,59 +267,6 @@ plugin :upload_options, store: -> (io, context) do
 end
 ```

- ## Shrine metadata
-
- With direct uploads any metadata has to be extracted on the client-side, since
- the file upload doesn't touch the application, so the Shrine uploader doesn't
- get a chance to extract the metadata. When directly uploaded file is promoted
- to permanent storage, Shrine's default behaviour is to just copy the received
- metadata.
-
- If you want to re-extract metadata on the server before file validation, you
- can load the `restore_cached_data`. That will make Shrine open the S3 file for
- reading, pass it for metadata extraction, and then override the metadata
- received from the client with the extracted ones.
-
- ```rb
- plugin :restore_cached_data
- ```
-
- Note that if you don't need this metadata before file validation, and you would
- like to have it extracted in a background job, you can do that with the
- following trick:
-
- ```rb
- class MyUploader < Shrine
-   plugin :processing
-   plugin :refresh_metadata
-
-   process(:store) do |io, context|
-     io.refresh_metadata!
-     io # return the same cached IO
-   end
- end
- ```
-
- ## Checksum
-
- To have AWS S3 verify the integrity of the uploaded data, you can use a
- checksum. For that you first need to tell AWS S3 that you're going to be
- including the `Content-MD5` request header in the upload request, by adding
- the `:content_md5` presign option.
-
- ```rb
- Shrine.plugin :presign_endpoint, presign_options: -> (request) do
-   {
-     content_md5: request.params["checksum"],
-     method: :put,
-   }
- end
- ```
-
- With the above setup, you can pass the MD5 hash of the file via the `checksum`
- query parameter in the request to the presign endpoint. See [this
- walkthrough][checksum walkthrough] for a complete JavaScript solution.
-
 ## Clearing cache

 Directly uploaded files won't automatically be deleted from your temporary

@@ -353,6 +333,42 @@ Shrine::Attacher.promote do |data|
 end
 ```

+ ## Checksums
+
+ You can have AWS S3 verify the integrity of the uploaded data by including a
+ checksum generated on the client side in the upload request. For that we'll
+ need to include the checksum in the presign request, which we can do via the
+ `checksum` query parameter. The `:content_md5` parameter is not supported in
+ POST presigns, so for this we'll need to switch to PUT.
+
+ ```rb
+ Shrine.plugin :presign_endpoint, presign_options: -> (request) do
+   {
+     method: :put,
+     content_md5: request.params["checksum"],
+   }
+ end
+ ```
+
+ See [this walkthrough][checksum walkthrough] for a complete JavaScript
+ implementation of checksums.
+
+ Note that PUT presigns don't support the `:content_length_range` option, but
+ they support `:content_length` instead. So, if you want to limit the upload
+ size during direct uploads, you can pass an additional `size` query parameter
+ to the presign request on the client side, and require it when generating
+ presign options:
+
+ ```rb
+ Shrine.plugin :presign_endpoint, presign_options: -> (request) do
+   {
+     method: :put,
+     content_length: request.params.fetch("size"),
+     content_md5: request.params["checksum"],
+   }
+ end
+ ```
+
 ## Testing

 To avoid network requests in your test and development environment, you can use
@@ -367,6 +383,8 @@ setup] guide.
 [roda demo]: https://github.com/shrinerb/shrine/tree/master/demo
 [rails demo]: https://github.com/erikdahlstrand/shrine-rails-example
 [Uppy]: https://uppy.io
+ [uppy aws s3]: https://uppy.io/docs/aws-s3/
+ [uppy aws-s3 cors]: https://uppy.io/docs/aws-s3/#S3-Bucket-configuration
 [Amazon S3 Data Consistency Model]: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyMode
 [CORS guide]: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
 [CORS API]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html#put_bucket_cors-instance_method
@@ -374,3 +392,4 @@ setup] guide.
 [lifecycle API]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html#put_bucket_lifecycle_configuration-instance_method
 [Minio]: https://minio.io
 [minio setup]: https://shrinerb.com/rdoc/files/doc/testing_md.html#label-Minio
+ [metadata direct uploads]: https://github.com/shrinerb/shrine/blob/master/doc/metadata.md#direct-uploads