shrine 2.12.0 → 2.13.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +30 -0
- data/README.md +153 -41
- data/doc/advantages.md +96 -106
- data/doc/attacher.md +55 -18
- data/doc/design.md +16 -5
- data/doc/direct_s3.md +132 -113
- data/doc/metadata.md +82 -27
- data/doc/multiple_files.md +76 -33
- data/doc/processing.md +2 -11
- data/doc/testing.md +2 -2
- data/lib/shrine.rb +18 -10
- data/lib/shrine/plugins/determine_mime_type.rb +6 -1
- data/lib/shrine/plugins/download_endpoint.rb +3 -0
- data/lib/shrine/plugins/infer_extension.rb +25 -10
- data/lib/shrine/plugins/module_include.rb +1 -1
- data/lib/shrine/plugins/presign_endpoint.rb +10 -8
- data/lib/shrine/plugins/rack_file.rb +8 -0
- data/lib/shrine/plugins/store_dimensions.rb +1 -1
- data/lib/shrine/plugins/upload_endpoint.rb +8 -4
- data/lib/shrine/plugins/versions.rb +1 -2
- data/lib/shrine/storage/file_system.rb +6 -4
- data/lib/shrine/storage/s3.rb +89 -82
- data/lib/shrine/version.rb +1 -1
- data/shrine.gemspec +3 -2
- metadata +22 -8
data/doc/attacher.md
CHANGED
@@ -66,16 +66,32 @@ cached file in form of a JSON string, and assigns the cached result to record's
 attacher.assign(io)
 
 # writes the given cached file to the data column
-attacher.assign '
+attacher.assign('{"id":"9260ea09d8effd.jpg","storage":"cache","metadata":{ ... }}')
+```
+
+When assigning an IO object, any additional options passed to `#assign` will be
+forwarded to `Shrine#upload`. This allows you to do things like overriding
+metadata, setting upload location, or passing upload options:
+
+```rb
+attacher.assign io,
+  metadata: { "filename" => "myfile.txt" },
+  location: "custom/location",
+  upload_options: { acl: "public-read" }
+```
+
+If you're attaching a cached file and want to override its metadata before
+assignment, you can do it like so:
+
+```rb
+cached_file = Shrine.uploaded_file('{"id":"9260ea09d8effd.jpg","storage":"cache","metadata":{ ... }}')
+cached_file.metadata["filename"] = "myfile.txt"
+
+attacher.assign(cached_file.to_json)
 ```
 
 For security reasons `#assign` doesn't accept files uploaded to permanent
-storage, but you can
-object.
+storage, but you can use `#set` to attach any `Shrine::UploadedFile` object.
 
 ```rb
 uploaded_file #=> #<Shrine::UploadedFile>
@@ -160,9 +176,20 @@ The `:action` parameter is optional; it can be used for triggering a certain
 processing block, and it is also automatically printed by the `logging` plugin
 to aid in debugging.
 
+As a matter of fact, all additional options passed to `#promote` will be
+forwarded to `Shrine#upload`. So unless you're generating versions, you can do
+things like override metadata, set upload location, or pass upload options:
+
+```rb
+attacher.promote cached_file,
+  metadata: { "filename" => "myfile.txt" },
+  location: "custom/location",
+  upload_options: { acl: "public-read" }
+```
+
+Internally `#promote` calls `#swap`, which will update the record with the
+uploaded file, but will reload the record to verify that the current attachment
+hasn't changed in the meantime (if the `backgrounding` plugin is loaded).
 
 ```rb
 attacher.swap(uploaded_file)
@@ -218,19 +245,29 @@ Normally you can upload and delete directly by using the uploader.
 ```rb
 uploader = ImageUploader.new(:store)
 uploaded_file = uploader.upload(image) # uploads the file to `:store` storage
-uploader.delete(uploaded_file) # deletes the file
+uploader.delete(uploaded_file) # deletes the uploaded file from `:store`
 ```
 
+But the attacher also has wrapper methods for uploading and deleting, which
+also automatically pass in the attacher `#context` (which includes `:record`
+and `:name`):
 
 ```rb
-attacher.
+attacher.cache!(file) # uploads file to temporary storage
+# => #<Shrine::UploadedFile: @data={"storage" => "cache", ...}>
+attacher.store!(file) # uploads file to permanent storage
+# => #<Shrine::UploadedFile: @data={"storage" => "store", ...}>
+attacher.delete!(uploaded_file) # deletes uploaded file from storage
 ```
 
+These methods only upload/delete files; they don't write to the record's data
+column. You can also pass additional options for `Shrine#upload` and
+`Shrine#delete`:
+
+```rb
+attacher.cache!(file, upload_options: { acl: "public-read" })
+attacher.store!(file, location: "custom/location")
+attacher.delete!(uploaded_file, foo: "bar")
+```
 
 [file migrations]: https://shrinerb.com/rdoc/files/doc/migrating_storage_md.html
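The attachment data that `#assign` and `#set` work with above is plain JSON, so the metadata-override flow can be sketched without Shrine at all. The file ID and metadata below are just the illustrative values from the examples:

```rb
require "json"

# The data column content: a JSON object with "id", "storage" and "metadata"
# keys, as in the `attacher.assign(...)` examples above.
data = '{"id":"9260ea09d8effd.jpg","storage":"cache","metadata":{"filename":"upload.jpg"}}'

cached = JSON.parse(data)
cached["metadata"]["filename"] = "myfile.txt" # override metadata before assignment

json = cached.to_json # this string is what would be passed to `attacher.assign`
```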
data/doc/design.md
CHANGED
@@ -1,5 +1,7 @@
 # The Design of Shrine
 
+*If you want an in-depth walkthrough of the Shrine codebase, see the [Notes on study of shrine implementation] article by Jonathan Rochkind.*
+
 There are five main types of objects that you deal with in Shrine:
 
 * Storage
@@ -56,8 +58,9 @@ Storages are typically not used directly, but through `Shrine`.
 
 ## `Shrine`
 
-A `Shrine` object (also called an "uploader")
-storage. First the storage needs to be registered under a
+A `Shrine` object (also called an "uploader") is essentially a wrapper around
+the `#upload` storage method. First the storage needs to be registered under a
+name:
 
 ```rb
 Shrine.storages[:file_system] = Shrine::Storage::FileSystem.new("uploads")
@@ -76,12 +79,17 @@ following:
 
 * generates a unique location
 * extracts metadata
-* uploads the file
+* uploads the file (calls `Storage#upload`)
 * closes the file
 * creates a `Shrine::UploadedFile` from the data
 
+The `Shrine` class and its subclasses are also used for loading plugins that
+extend all core classes. Each `Shrine` subclass has its own subclass of each of
+the core classes (`Shrine::UploadedFile`, `Shrine::Attacher`, and
+`Shrine::Attachment`), which makes it possible to have different `Shrine`
+subclasses with differently customized attachment logic. See the [Creating a
+New Plugin] guide and the [Plugin system of Sequel and Roda] article for more
+details on the design of Shrine's plugin system.
 
 ## `Shrine::UploadedFile`
 
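The subclass-per-core-class idea described above can be sketched in a few lines of plain Ruby. This is a toy illustration of the pattern, not Shrine's actual code; all class and module names here are made up:

```rb
class Uploader
  class UploadedFile; end

  # Give each subclass its own UploadedFile subclass, so plugins loaded into
  # one uploader don't leak into another.
  def self.inherited(subclass)
    super
    subclass.const_set(:UploadedFile, Class.new(self::UploadedFile))
  end

  def self.plugin(mod)
    self::UploadedFile.include(mod)
  end
end

module Dimensions
  def width
    100
  end
end

class ImageUploader < Uploader; end
class DocumentUploader < Uploader; end

ImageUploader.plugin(Dimensions)

ImageUploader::UploadedFile.new.respond_to?(:width)    #=> true
DocumentUploader::UploadedFile.new.respond_to?(:width) #=> false
```

Because each subclass carries its own copies of the core classes, two uploaders in the same app can load entirely different plugin sets without interfering with each other.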
@@ -207,3 +215,6 @@ automatically:
 destroyed
 
 [Using Attacher]: https://shrinerb.com/rdoc/files/doc/attacher_md.html
+[Notes on study of shrine implementation]: https://bibwild.wordpress.com/2018/09/12/notes-on-study-of-shrine-implementation/
+[Creating a New Plugin]: https://shrinerb.com/rdoc/files/doc/creating_plugins_md.html
+[Plugin system of Sequel and Roda]: https://twin.github.io/the-plugin-system-of-sequel-and-roda/
data/doc/direct_s3.md
CHANGED
@@ -24,7 +24,7 @@ storage service is beneficial for several reasons:
 times out.
 
 To start, let's set both temporary and permanent storage to S3, with the
-temporary storage uploading to the `cache/`
+temporary storage uploading to the `cache/` prefix:
 
 ```rb
 # Gemfile
@@ -47,39 +47,41 @@ Shrine.storages = {
 }
 ```
 
-##
-
-In order to be able upload files directly to your S3 bucket, you need
+## Bucket CORS configuration
+
+In order to be able to upload files directly to your S3 bucket, you'll need to
+update your bucket's CORS configuration, as public uploads are not allowed by
+default. You can do that from the AWS S3 Console by going to your bucket,
+clicking on the "Permissions" tab and then on "CORS Configuration".
+
+If you're using [Uppy], this is the recommended CORS configuration for the
+[AWS S3 plugin][uppy aws s3] that should work for both POST and PUT uploads:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+  <CORSRule>
+    <AllowedOrigin>https://my-app.com</AllowedOrigin>
+    <AllowedMethod>GET</AllowedMethod>
+    <AllowedMethod>POST</AllowedMethod>
+    <AllowedMethod>PUT</AllowedMethod>
+    <MaxAgeSeconds>3000</MaxAgeSeconds>
+    <AllowedHeader>Authorization</AllowedHeader>
+    <AllowedHeader>x-amz-date</AllowedHeader>
+    <AllowedHeader>x-amz-content-sha256</AllowedHeader>
+    <AllowedHeader>content-type</AllowedHeader>
+  </CORSRule>
+  <CORSRule>
+    <AllowedOrigin>*</AllowedOrigin>
+    <AllowedMethod>GET</AllowedMethod>
+    <MaxAgeSeconds>3000</MaxAgeSeconds>
+  </CORSRule>
+</CORSConfiguration>
 ```
 
+Replace `https://my-app.com` with the URL to your app (in development you can
+set this to `*`). Once you've hit "Save", it may take some time for the new
+CORS settings to be applied.
 
 ## Strategy A (dynamic)
 
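For reference, the same two CORS rules can be expressed as the params hash that the Ruby SDK's `Aws::S3::Client#put_bucket_cors` accepts (see the [CORS API] link at the bottom of the doc). The bucket name is a placeholder, and the client call itself is shown only as a comment:

```rb
cors_params = {
  bucket: "my-bucket", # placeholder bucket name
  cors_configuration: {
    cors_rules: [
      {
        allowed_origins: ["https://my-app.com"],
        allowed_methods: ["GET", "POST", "PUT"],
        allowed_headers: ["Authorization", "x-amz-date", "x-amz-content-sha256", "content-type"],
        max_age_seconds: 3000,
      },
      {
        allowed_origins: ["*"],
        allowed_methods: ["GET"],
        max_age_seconds: 3000,
      },
    ],
  },
}

# Aws::S3::Client.new.put_bucket_cors(cors_params) # would apply the rules
```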
@@ -87,13 +89,28 @@ be applied.
 
 * Single or multiple file uploads
 * Some JavaScript needed
 
-When the user selects a file in the form, on the client
-fetch the
+When the user selects a file in the form, on the client side we asynchronously
+fetch the upload parameters from the server and use them to upload the file to
+S3. It's recommended to use [Uppy] for client side uploads.
+
+The `presign_endpoint` plugin provides a Rack application that generates these
+upload parameters, which we can just mount in our application. We'll make our
+presign endpoint also use the additional `type` and `filename` query parameters
+to set `Content-Type` and `Content-Disposition` for the uploaded file, as well
+as limit the upload size to 10 MB (see [`Shrine::Storage::S3#presign`] for the
+list of available options).
 
 ```rb
-Shrine.plugin :presign_endpoint, presign_options:
+Shrine.plugin :presign_endpoint, presign_options: -> (request) {
+  filename = request.params["filename"]
+  type     = request.params["type"]
+
+  {
+    content_disposition: "inline; filename=\"#{filename}\"", # set download filename
+    content_type: type, # set content type (required if using DigitalOcean Spaces)
+    content_length_range: 0..(10*1024*1024), # limit upload size to 10 MB
+  }
+}
 ```
 ```rb
 # config.ru (Rack)
@@ -110,27 +127,32 @@ end
 ```
 
 The above will create a `GET /presign` route, which internally calls
-[`Shrine::Storage::S3#presign`]
-to which the file should be uploaded, along with the required parameters
+[`Shrine::Storage::S3#presign`] to return the HTTP verb (POST) and the S3 URL
+to which the file should be uploaded, along with the required POST parameters
+and request headers.
 
 ```rb
 # GET /presign
 {
-  "method": "
-  "url": "https://my-bucket.s3
-  "fields": {
+  "method": "post",
+  "url": "https://my-bucket.s3-eu-west-1.amazonaws.com",
+  "fields": {
+    "key": "b7d575850ba61b44c8a9ff889dfdb14d88cdc25f8dd121004c8",
+    "policy": "eyJleHBpcmF0aW9uIjoiMjAxNS0QwMToxMToyOVoiLCJjb25kaXRpb25zIjpbeyJidWNrZXQiOiJ...",
+    "x-amz-credential": "AKIAIJF55TMZYT6Q/20151024/eu-west-1/s3/aws4_request",
+    "x-amz-algorithm": "AWS4-HMAC-SHA256",
+    "x-amz-date": "20151024T001129Z",
+    "x-amz-signature": "c1eb634f83f96b69bd675f535b3ff15ae184b102fcba51e4db5f4959b4ae26f4"
+  },
   "headers": {}
 }
 ```
 
-upload
-uploaded file on the client-side, and write it to the hidden attachment field
-(or send it directly in an AJAX request).
+Uppy's [AWS S3][uppy aws s3] plugin would then make a request to this endpoint
+and use these parameters to upload the file directly to S3. Once the file has
+been uploaded, you can generate a JSON representation of the uploaded file on
+the client side, and write it to the hidden attachment field (or send it
+directly in an AJAX request).
 
 ```rb
 {
@@ -148,11 +170,11 @@ uploaded file on the client-side, and write it to the hidden attachment field
 
 * `storage` – direct uploads typically use the `:cache` storage
 * `metadata` – hash of metadata extracted from the file
 
-Once submitted this JSON will then be assigned to the
-instead of the raw file. See [this walkthrough][direct S3
-for adding dynamic direct S3 uploads from scratch
-[Roda][roda demo]
-multiple direct S3 uploads.
+Once the form is submitted, this JSON data will then be assigned to the
+attachment attribute instead of the raw file. See [this walkthrough][direct S3
+upload walkthrough] for adding dynamic direct S3 uploads from scratch, as well
+as the [Roda][roda demo] / [Rails][rails demo] demo app for a complete example
+of multiple direct S3 uploads.
 
 ## Strategy B (static)
 
|
@@ -166,13 +188,13 @@ generating the form can use [`Shrine::Storage::S3#presign`], which returns URL
|
|
166
188
|
and form fields that should be used for the upload.
|
167
189
|
|
168
190
|
```rb
|
169
|
-
|
191
|
+
presign_data = Shrine.storages[:cache].presign(
|
170
192
|
SecureRandom.hex,
|
171
193
|
success_action_redirect: new_album_url
|
172
194
|
)
|
173
195
|
|
174
|
-
form action:
|
175
|
-
|
196
|
+
form action: presign_data[:url], method: "post", enctype: "multipart/form-data" do |f|
|
197
|
+
presign_data[:fields].each do |name, value|
|
176
198
|
f.input :hidden, name: name, value: value
|
177
199
|
end
|
178
200
|
f.input :file, name: "file"
|
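The field iteration in the form above can be sketched as a plain helper. `hidden_inputs` is a hypothetical function (not part of Shrine or any form builder), and the field values below are illustrative; in practice they would come from `presign_data[:fields]`:

```rb
# Renders each presign field as a hidden input, mirroring the loop over
# presign_data[:fields] in the form above.
def hidden_inputs(fields)
  fields.map { |name, value|
    %(<input type="hidden" name="#{name}" value="#{value}">)
  }.join("\n")
end

html = hidden_inputs("key" => "cache/9260ea09d8effd.jpg", "policy" => "eyJleHBpcmF0aW9u...")
# html contains one hidden <input> per presign field
```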
@@ -186,7 +208,7 @@ builder to generate this form, you might need to also tell S3 to ignore the
 additional `utf8` and `authenticity_token` fields that Rails generates:
 
 ```rb
-
+presign_data = Shrine.storages[:cache].presign(
   SecureRandom.hex,
   allow_any: ["utf8", "authenticity_token"],
   success_action_redirect: new_album_url
@@ -202,7 +224,7 @@ GET parameters in the URL, out of which we only need the `key` parameter:
 
 ```rb
 cached_file = {
   storage: "cache",
-  id:
+  id: params["key"][/^cache\/(.+)/, 1], # we subtract the storage prefix
   metadata: {},
 }
 
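The `key` handling above relies on a small regexp; here it is in isolation (the key value is illustrative), showing that the storage prefix is stripped and non-matching keys yield `nil`:

```rb
# Strip the storage prefix from the S3 object key, leaving the ID that goes
# into the attachment data.
key = "cache/b7d575850ba61b44c8a9ff889dfdb14d"

id = key[/^cache\/(.+)/, 1]
# id #=> "b7d575850ba61b44c8a9ff889dfdb14d"
```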
@@ -212,6 +234,17 @@ form @album, action: "/albums" do |f|
 end
 ```
 
+## Shrine metadata
+
+When attaching a file that was uploaded directly to S3, by default Shrine will
+not extract metadata from the file; instead it will simply copy over any
+metadata assigned on the client side. This is the default behaviour because
+extracting metadata requires retrieving file content, which in this case means
+additional HTTP requests.
+
+See [this section][metadata direct uploads] for the rationale and instructions
+on how to opt in.
+
 ## Object data
 
 When the cached S3 object is copied to permanent storage, the destination S3
@@ -234,59 +267,6 @@ plugin :upload_options, store: -> (io, context) do
 end
 ```
 
-## Shrine metadata
-
-With direct uploads any metadata has to be extracted on the client-side, since
-the file upload doesn't touch the application, so the Shrine uploader doesn't
-get a chance to extract the metadata. When directly uploaded file is promoted
-to permanent storage, Shrine's default behaviour is to just copy the received
-metadata.
-
-If you want to re-extract metadata on the server before file validation, you
-can load the `restore_cached_data`. That will make Shrine open the S3 file for
-reading, pass it for metadata extraction, and then override the metadata
-received from the client with the extracted ones.
-
-```rb
-plugin :restore_cached_data
-```
-
-Note that if you don't need this metadata before file validation, and you would
-like to have it extracted in a background job, you can do that with the
-following trick:
-
-```rb
-class MyUploader < Shrine
-  plugin :processing
-  plugin :refresh_metadata
-
-  process(:store) do |io, context|
-    io.refresh_metadata!
-    io # return the same cached IO
-  end
-end
-```
-
-## Checksum
-
-To have AWS S3 verify the integrity of the uploaded data, you can use a
-checksum. For that you first need to tell AWS S3 that you're going to be
-including the `Content-MD5` request header in the upload request, by adding
-the `:content_md5` presign option.
-
-```rb
-Shrine.plugin :presign_endpoint, presign_options: -> (request) do
-  {
-    content_md5: request.params["checksum"],
-    method: :put,
-  }
-end
-```
-
-With the above setup, you can pass the MD5 hash of the file via the `checksum`
-query parameter in the request to the presign endpoint. See [this
-walkthrough][checksum walkthrough] for a complete JavaScript solution.
-
 ## Clearing cache
 
 Directly uploaded files won't automatically be deleted from your temporary
@@ -353,6 +333,42 @@ Shrine::Attacher.promote do |data|
 end
 ```
 
+## Checksums
+
+You can have AWS S3 verify the integrity of the uploaded data by including a
+checksum generated on the client side in the upload request. For that we'll
+need to include the checksum in the presign request, which we can pass in via
+the `checksum` query parameter. The `:content_md5` parameter is not supported
+in POST presigns, so for this we'll need to switch to PUT.
+
+```rb
+Shrine.plugin :presign_endpoint, presign_options: -> (request) do
+  {
+    method: :put,
+    content_md5: request.params["checksum"],
+  }
+end
+```
+
+See [this walkthrough][checksum walkthrough] for a complete JavaScript
+implementation of checksums.
+
+Note that PUT presigns don't support the `:content_length_range` option, but
+they support `:content_length` instead. So, if you want to limit the upload
+size during direct uploads, you can pass an additional `size` query parameter
+to the presign request on the client side, and require it when generating
+presign options:
+
+```rb
+Shrine.plugin :presign_endpoint, presign_options: -> (request) do
+  {
+    method: :put,
+    content_length: request.params.fetch("size"),
+    content_md5: request.params["checksum"],
+  }
+end
+```
+
 ## Testing
 
 To avoid network requests in your test and development environment, you can use
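On the client the checksum would be computed in JavaScript, but the value S3 expects in the `Content-MD5` header is simply the base64-encoded binary MD5 digest of the request body. In Ruby (with a stand-in string for the file body) that looks like:

```rb
require "digest"

file_content = "file body bytes" # stand-in for the file being uploaded

# This is the value to send as the `checksum` query parameter, which the
# presign options above place into the Content-MD5 header.
checksum = Digest::MD5.base64digest(file_content)
```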
@@ -367,6 +383,8 @@ setup] guide.
 [roda demo]: https://github.com/shrinerb/shrine/tree/master/demo
 [rails demo]: https://github.com/erikdahlstrand/shrine-rails-example
 [Uppy]: https://uppy.io
+[uppy aws s3]: https://uppy.io/docs/aws-s3/
+[uppy aws-s3 cors]: https://uppy.io/docs/aws-s3/#S3-Bucket-configuration
 [Amazon S3 Data Consistency Model]: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyMode
 [CORS guide]: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
 [CORS API]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html#put_bucket_cors-instance_method
@@ -374,3 +392,4 @@ setup] guide.
 [lifecycle API]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html#put_bucket_lifecycle_configuration-instance_method
 [Minio]: https://minio.io
 [minio setup]: https://shrinerb.com/rdoc/files/doc/testing_md.html#label-Minio
+[metadata direct uploads]: https://github.com/shrinerb/shrine/blob/master/doc/metadata.md#direct-uploads