shrine 3.1.0 → 3.2.0

@@ -1,5 +1,5 @@
  ---
- title: Shrine for Refile Users
+ title: Upgrading from Refile
  ---

  This guide is aimed at helping Refile users transition to Shrine, and it consists
@@ -110,13 +110,6 @@ into separate columns.
  Shrine provides on-the-fly processing via the
  [`derivation_endpoint`][derivation_endpoint] plugin:

- ```rb
- # config/routes.rb (Rails)
- Rails.application.routes.draw do
-   # ...
-   mount ImageUploader.derivation_endpoint => "/derivations/image"
- end
- ```
  ```rb
  require "image_processing/mini_magick"

@@ -132,8 +125,15 @@ class ImageUploader < Shrine
    end
  end
  ```
+ ```rb
+ # config/routes.rb (Rails)
+ Rails.application.routes.draw do
+   # ...
+   mount ImageUploader.derivation_endpoint => "/derivations/image"
+ end
+ ```

- Shrine also support processing up front using the [`derivatives`][derivatives]
+ Shrine also support eager processing using the [`derivatives`][derivatives]
  plugin.

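As a rough illustration of the eager approach mentioned in the hunk above (an editorial sketch, not part of the upstream diff), the `derivatives` plugin registers a processor block and uploads its results alongside the original file. The uploader name, the `:thumbnails` processor name and the `:small` derivative are hypothetical, and the `image_processing` gem is assumed to be installed:

```rb
require "image_processing/mini_magick"

class ImageUploader < Shrine
  plugin :derivatives

  # the processor block returns a hash of derivative files to upload
  Attacher.derivatives_processor :thumbnails do |original|
    magick = ImageProcessing::MiniMagick.source(original)

    { small: magick.resize_to_limit!(300, 300) }
  end
end

# derivatives are created explicitly, e.g. before saving the record
photo.image_attacher.create_derivatives(:thumbnails)
photo.image(:small) #=> #<Shrine::UploadedFile ...>
```
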
  ### Validation
@@ -0,0 +1,96 @@
+ ---
+ title: Shrine 3.2.0
+ ---
+
+ ## New features
+
+ * The `type_predicates` plugin has been added, which adds convenient predicate
+ methods to `Shrine::UploadedFile` based on the MIME type.
+
+ ```rb
+ # Gemfile
+ gem "mini_mime" # default dependency of type_predicates
+ ```
+ ```rb
+ Shrine.plugin :type_predicates
+ ```
+
+ The plugin adds four predicate methods based on the general type of the file:
+
+ ```rb
+ file.image? # returns true for any "image/*" MIME type
+ file.video? # returns true for any "video/*" MIME type
+ file.audio? # returns true for any "audio/*" MIME type
+ file.text? # returns true for any "text/*" MIME type
+ ```
+
+ You can also check for specific MIME type using the extension name:
+
+ ```rb
+ file.type?(:jpg) # returns true if MIME type is "image/jpeg"
+ file.type?(:svg) # returns true if MIME type is "image/svg+xml"
+ file.type?(:mov) # returns true if MIME type is "video/quicktime"
+ file.type?(:ppt) # returns true if MIME type is "application/vnd.ms-powerpoint"
+ ...
+ ```
+
+ For convenience, you can create predicate methods for specific file types:
+
+ ```rb
+ Shrine.plugin :type_predicates, methods: %i[jpg svg mov ppt]
+ ```
+ ```rb
+ file.jpg? # returns true if MIME type is "image/jpeg"
+ file.svg? # returns true if MIME type is "image/svg+xml"
+ file.mov? # returns true if MIME type is "video/quicktime"
+ file.ppt? # returns true if MIME type is "application/vnd.ms-powerpoint"
+ ```
+
+ * The `#add_metadata` method has been added to the `add_metadata` plugin for
+ adding new metadata to an existing file/attachment.
+
+ ```rb
+ attacher.file.metadata #=> { ... }
+ attacher.add_metadata("foo" => "bar")
+ attacher.file.metadata #=> { ..., "foo" => "bar" }
+ ```
+
+ ## Other improvements
+
+ * The `remove_invalid` plugin now works correctly with `derivatives` plugin.
+
+ * The `remove_invalid` plugin is now also activated when `Attacher#validate`
+ is called manually.
+
+ * The current attached file data can now be assigned back to the attachment
+ attribute, and this operation will be a no-op.
+
+ ```rb
+ photo.image #=> #<Shrine::UploadedFile id="foo" storage=:store metadata={...}>
+ photo.image = { "id" => "foo", "storage" => "store", "metadata" => { ... } } # no-op
+ ```
+
+ This allows treating the attachment attribute as a persistent attribute,
+ where the current value can be assigned back on record updates.
+
+ * When promoting derivatives, the `:derivative` parameter value was being
+ passed to the uploader as an array. This has been fixed, and the value is now
+ the same as when uploading derivatives directly to permanent storage.
+
+ * The `derivatives` plugin now includes additional `:io` and `:attacher` values
+ in the instrumentation event payload.
+
+ ## Backwards compatibility
+
+ * The `validation` plugin now runs validations on `Attacher#attach` and
+ `Attacher#attach_cached`. If you were using `Attacher#change` directly and
+ expecting the validations to be run automatically, you will need to update
+ your code.
+
+ * If you were updating the cached file metadata via file data assignment, this
+ will no longer work.
+
+ ```rb
+ photo.image #=> #<Shrine::UploadedFile id="foo" storage=:cache metadata={...}>
+ photo.image = { "id" => "foo", "storage" => "cache", "metadata" => { ... } } # no-op
+ ```
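
To make the two `remove_invalid` items above more concrete (an editorial sketch, not part of the upstream changelog), the plugin deletes and deassigns a newly assigned file when it fails validation, and as of this release that also applies when `Attacher#validate` is called manually. The uploader and the size limit below are hypothetical:

```rb
class ImageUploader < Shrine
  plugin :validation_helpers
  plugin :remove_invalid # invalid cached files are deleted and deassigned

  Attacher.validate do
    validate_max_size 10 * 1024 * 1024
  end
end

attacher.assign(oversized_file) # validation fails, cached upload is removed
attacher.file                   #=> nil
attacher.errors.any?            #=> true
```
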
@@ -1,6 +1,6 @@
  ---
  id: securing-uploads
- title: Securing uploads
+ title: Securing Uploads
  ---

  Shrine does a lot to make your file uploads secure, but there are still a lot
@@ -0,0 +1,19 @@
+ ---
+ title: Memory
+ ---
+
+ The Memory storage stores uploaded files in memory, which is suitable for
+ testing.
+
+ ```rb
+ Shrine.storages[:store] = Shrine::Storage::Memory.new
+ ```
+
+ By default, each storage instance uses a new Hash object for storing files,
+ but you can pass your own:
+
+ ```rb
+ my_store = Hash.new
+
+ Shrine.storages[:store] = Shrine::Storage::Memory.new(my_store)
+ ```
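
As a usage note (editorial, not part of the diff), the memory storage is typically swapped in for both temporary and permanent storage in a test suite. A minimal sketch, assuming a test helper file:

```rb
# test/test_helper.rb (hypothetical location)
require "shrine"
require "shrine/storage/memory"

Shrine.storages = {
  cache: Shrine::Storage::Memory.new, # temporary storage
  store: Shrine::Storage::Memory.new, # permanent storage
}
```
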
@@ -2,30 +2,38 @@
  title: AWS S3
  ---

- The S3 storage handles uploads to Amazon S3 service, using the [aws-sdk-s3]
+ The S3 storage handles uploads to [AWS S3] service (or any s3-compatible
+ service such as [DigitalOcean Spaces] or [MinIO]). It requires the [aws-sdk-s3]
  gem:

  ```rb
+ # Gemfile
  gem "aws-sdk-s3", "~> 1.14"
  ```

- It can be initialized by providing the bucket name and credentials:
+ ## Initialization
+
+ The storage is initialized by providing your bucket name, region and
+ credentials:

  ```rb
  require "shrine/storage/s3"

  s3 = Shrine::Storage::S3.new(
    bucket: "my-app", # required
+   region: "eu-west-1", # required
    access_key_id: "abc",
    secret_access_key: "xyz",
-   region: "eu-west-1",
  )
  ```

- The core features of this storage require the following AWS permissions:
- `s3:ListBucket`, `s3:PutObject`, `s3:GetObject`, and `s3:DeleteObject`. If you
- have additional upload options configured such as setting object ACLs, then
- additional permissions may be required.
+ The `:access_key_id` and `:secret_access_key` options are just one form of
+ authentication, see [`Aws::S3::Client#initialize`] docs for more details.
+
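For instance (an editorial sketch, not part of the diff), any options the storage does not consume are passed through to `Aws::S3::Client`, so omitting the static credentials lets the SDK fall back to its default credential provider chain (environment variables, the shared credentials file, or an instance profile):

```rb
require "shrine/storage/s3"

# credentials resolved by the AWS SDK itself, e.g. from
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, ~/.aws/credentials,
# or an EC2/ECS instance profile
s3 = Shrine::Storage::S3.new(bucket: "my-app", region: "eu-west-1")
```
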
+ > The core features of this storage require the following AWS permissions:
+ `s3:ListBucket`, `s3:PutObject`, `s3:GetObject`, and `s3:DeleteObject`. If you
+ have additional upload options configured such as setting object ACLs, then
+ additional permissions may be required.

  The storage exposes the underlying Aws objects:
 
@@ -45,14 +53,29 @@ s3.object("key") #=> #<Aws::S3::Object>

  By default, uploaded S3 objects will have private visibility, meaning they can
  only be accessed via signed expiring URLs generated using your private S3
- credentials. If you would like to generate public URLs, you can tell S3 storage
- to make uploads public:
+ credentials.

  ```rb
- s3 = Shrine::Storage::S3.new(public: true, **s3_options)
+ s3 = Shrine::Storage::S3.new(**s3_options)
+ s3.upload(io, "key") # uploads with default "private" ACL
+ s3.url("key") # https://my-bucket.s3.amazonaws.com/key?X-Amz-Expires=900&X-Amz-Signature=b22d37c37d...
+ ```

+ If you would like to generate public URLs, you can tell S3 storage to make
+ uploads public:
+
+ ```rb
+ s3 = Shrine::Storage::S3.new(public: true, **s3_options)
  s3.upload(io, "key") # uploads with "public-read" ACL
- s3.url("key") # returns public (unsigned) object URL
+ s3.url("key") # https://my-bucket.s3.amazonaws.com/key
+ ```
+
+ If you want to make only *some* uploads public, you can conditionally apply the
+ `:acl` upload option and `:public` URL option:
+
+ ```rb
+ Shrine.plugin :upload_options, store: -> (io, **) { { acl: "public-read" } }
+ Shrine.plugin :url_options, store: -> (io, **) { { public: true } }
  ```

  ## Prefix
@@ -80,13 +103,11 @@ You can also generate upload options per upload with the `upload_options`
  plugin

  ```rb
- class MyUploader < Shrine
-   plugin :upload_options, store: -> (io, derivative: nil, **) do
-     if derivative == :thumb
-       { acl: "public-read" }
-     else
-       { acl: "private" }
-     end
+ Shrine.plugin :upload_options, store: -> (io, derivative: nil, **) do
+   if derivative == :thumb
+     { acl: "public-read" }
+   else
+     { acl: "private" }
    end
  end
  ```
@@ -97,23 +118,9 @@ or when using the uploader directly
  uploader.upload(file, upload_options: { acl: "private" })
  ```

- Note that, unlike the `:upload_options` storage option, upload options given on
- the uploader level won't be forwarded for generating presigns, since presigns
- are generated using the storage directly.
-
- ## URL options
-
- Other than [`:host`](#url-host) and [`:public`](#public-uploads) URL options,
- all additional options are forwarded to [`Aws::S3::Object#presigned_url`].
-
- ```rb
- s3.url(
-   expires_in: 15,
-   response_content_disposition: ContentDisposition.attachment("my-filename"),
-   response_content_type: "foo/bar",
-   # ...
- )
- ```
+ > Unlike the `:upload_options` storage option, upload options given on
+ the uploader level won't be forwarded for generating presigns, since presigns
+ are generated using the storage directly.

  ## URL Host

@@ -133,12 +140,14 @@ s3.url("image.jpg", host: "https://your-s3-host.com/prefix/") # needs to end wit
  ```

  To have the `:host` option passed automatically for every URL, use the
- `url_options` plugin.
+ `url_options` plugin:

  ```rb
  plugin :url_options, store: { host: "http://abc123.cloudfront.net" }
  ```

+ ### Signer
+
  If you would like to [serve private content via CloudFront], you need to sign
  the object URLs with a special signer, such as [`Aws::CloudFront::UrlSigner`]
  provided by the `aws-sdk-cloudfront` gem. The S3 storage initializer accepts a
@@ -157,13 +166,27 @@ Shrine::Storage::S3.new(signer: signer.method(:signed_url))
  Shrine::Storage::S3.new(signer: -> (url, **options) { signer.signed_url(url, **options) })
  ```

+ ## URL options
+
+ Other than `:host` and `:public` URL options, all additional `S3#url` options
+ are forwarded to [`Aws::S3::Object#presigned_url`].
+
+ ```rb
+ s3.url(
+   expires_in: 15,
+   response_content_disposition: ContentDisposition.attachment("my-filename"),
+   response_content_type: "foo/bar",
+   # ...
+ )
+ ```
+
  ## Presigns

- The `#presign` method can be used for generating paramters for direct uploads
- to Amazon S3:
+ The `S3#presign` method can be used for generating parameters for direct upload
+ to S3:

  ```rb
- s3.presign("/path/to/file") #=>
+ s3.presign("key") #=>
  # {
  # url: "https://my-bucket.s3.amazonaws.com/...",
  # fields: { ... }, # blank for PUT presigns
@@ -172,11 +195,25 @@ s3.presign("/path/to/file") #=>
  # }
  ```

- Additional presign options can be given in three places:
+ By default, parameters for a POST upload is generated, but you can also
+ generate PUT upload parameters:

- * in `Storage::S3#presign` by forwarding options
- * in `:upload_options` option on this storage
- * in `presign_endpoint` plugin through `:presign_options`
+ ```rb
+ s3.presign("key", method: :put)
+ ```
+
+ Any additional options are forwarded to [`Aws::S3::Object#presigned_post`]
+ (for POST uploads) and [`Aws::S3::Object#presigned_url`] (for PUT uploads).
+
+ ```rb
+ s3.presign("key", method: :put, content_disposition: "attachment; filename=my-file.txt") #=>
+ # {
+ # url: "https://my-bucket.s3.amazonaws.com/...",
+ # fields: {},
+ # headers: { "Content-Disposition" => "attachment; filename=my-file.txt" },
+ # method :put,
+ # }
+ ```

  ## Large files

@@ -184,33 +221,24 @@ The aws-sdk-s3 gem has the ability to automatically use multipart upload/copy
  for larger files, splitting the file into multiple chunks and uploading/copying
  them in parallel.

- By default any files that are uploaded will use the multipart upload if they're
- larger than 15MB, and any files that are copied will use the multipart copy if
- they're larger than 150MB, but you can change the thresholds via
- `:multipart_threshold`.
+ By default, multipart upload will be used for files larger than 15MB, and
+ multipart copy for files larger than 100MB, but you can change the thresholds
+ via `:multipart_threshold`:

  ```rb
- thresholds = { upload: 30*1024*1024, copy: 200*1024*1024 }
- Shrine::Storage::S3.new(multipart_threshold: thresholds, **s3_options)
- ```
-
- If you want to change how many threads aws-sdk-s3 will use for multipart
- upload/copy, you can use the `upload_options` plugin to specify
- `:thread_count`.
-
- ```rb
- plugin :upload_options, store: -> (io, context) do
-   { thread_count: 5 }
- end
+ Shrine::Storage::S3.new(
+   multipart_threshold: { upload: 30*1024*1024, copy: 200*1024*1024 },
+   **s3_options,
+ )
  ```

  ## Encryption

- The easiest way to use server-side encryption for uploaded S3 objects is to
+ The easiest way to use **server-side** encryption for uploaded S3 objects is to
  configure default encryption for your S3 bucket. Alternatively, you can pass
  server-side encryption parameters to the API calls.

- The `#upload` method accepts `:sse_*` options:
+ The `S3#upload` method accepts `:sse_*` options:

  ```rb
  s3.upload(io, "key", sse_customer_algorithm: "AES256",
@@ -219,7 +247,7 @@ s3.upload(io, "key", sse_customer_algorithm: "AES256",
  ssekms_key_id: "key_id")
  ```

- The `#presign` method accepts `:server_side_encryption_*` options for POST
+ The `S3#presign` method accepts `:server_side_encryption_*` options for POST
  presigns, and the same `:sse_*` options as above for PUT presigns.

  ```rb
@@ -237,16 +265,9 @@ s3.open("key", sse_customer_algorithm: "AES256",
  sse_customer_key_md5: "secret_key_md5")
  ```

- If you want to use client-side encryption instead, you can instantiate the
- storage with an `Aws::S3::Encryption::Client` instance.
-
- ```rb
- client = Aws::S3::Encryption::Client.new(
-   kms_key_id: "alias/my-key"
- )
-
- Shrine::Storage::S3(client: client, bucket: "my-bucket")
- ```
+ If you want to use **client-side** encryption instead, note that it's still a
+ work in progress, see issue [#348] for some discussion and
+ [workarounds][client-side encryption workaround].

  ## Accelerate endpoint

@@ -279,20 +300,20 @@ Alternatively you can periodically call the `#clear!` method:
  s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 }
  ```

- ## Request Rate and Performance Guidelines
-
- Amazon S3 automatically scales to high request rates. For example, your
- application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests
- per second per prefix in a bucket (a prefix is a top-level "directory" in the
- bucket). If your app needs to support higher request rates to S3 than that, you
- can scale exponentially by using more prefixes.
-
+ [AWS S3]: https://aws.amazon.com/s3/
+ [MinIO]: https://min.io/
+ [DigitalOcean Spaces]: https://www.digitalocean.com/products/spaces/
+ [aws-sdk-s3]: https://rubygems.org/gems/aws-sdk-s3
  [uploading]: http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Object.html#put-instance_method
  [copying]: http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Object.html#copy_from-instance_method
  [presigning]: http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Object.html#presigned_post-instance_method
  [`Aws::S3::Object#presigned_url`]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Object.html#presigned_url-instance_method
- [aws-sdk-s3]: https://github.com/aws/aws-sdk-ruby/tree/master/gems/aws-sdk-s3
  [Transfer Acceleration]: http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
  [object lifecycle]: http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html
  [serve private content via CloudFront]: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
  [`Aws::CloudFront::UrlSigner`]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/CloudFront/UrlSigner.html
+ [`Aws::S3::Client#initialize`]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html#initialize-instance_method
+ [`Aws::S3::Object#presigned_post`]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Object.html#presigned_post-instance_method
+ [`Aws::S3::Object#presigned_url`]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Object.html#presigned_url-instance_method
+ [#348]: https://github.com/shrinerb/shrine/issues/348
+ [client-side encryption workaround]: https://github.com/shrinerb/shrine/issues/348#issuecomment-486445382