shrine 1.0.0 → 1.1.0
- checksums.yaml +4 -4
- data/README.md +101 -149
- data/doc/carrierwave.md +12 -16
- data/doc/changing_location.md +50 -0
- data/doc/creating_plugins.md +2 -2
- data/doc/creating_storages.md +70 -9
- data/doc/direct_s3.md +132 -61
- data/doc/migrating_storage.md +12 -10
- data/doc/paperclip.md +12 -17
- data/doc/refile.md +338 -0
- data/doc/regenerating_versions.md +75 -11
- data/doc/securing_uploads.md +172 -0
- data/lib/shrine.rb +21 -16
- data/lib/shrine/plugins/activerecord.rb +2 -2
- data/lib/shrine/plugins/background_helpers.rb +2 -148
- data/lib/shrine/plugins/backgrounding.rb +148 -0
- data/lib/shrine/plugins/backup.rb +88 -0
- data/lib/shrine/plugins/data_uri.rb +25 -4
- data/lib/shrine/plugins/default_url.rb +37 -0
- data/lib/shrine/plugins/delete_uploaded.rb +40 -0
- data/lib/shrine/plugins/determine_mime_type.rb +4 -2
- data/lib/shrine/plugins/direct_upload.rb +107 -62
- data/lib/shrine/plugins/download_endpoint.rb +157 -0
- data/lib/shrine/plugins/hooks.rb +19 -5
- data/lib/shrine/plugins/keep_location.rb +43 -0
- data/lib/shrine/plugins/moving.rb +11 -10
- data/lib/shrine/plugins/parallelize.rb +1 -5
- data/lib/shrine/plugins/parsed_json.rb +7 -1
- data/lib/shrine/plugins/pretty_location.rb +6 -0
- data/lib/shrine/plugins/rack_file.rb +7 -1
- data/lib/shrine/plugins/remove_invalid.rb +22 -0
- data/lib/shrine/plugins/sequel.rb +2 -2
- data/lib/shrine/plugins/upload_options.rb +41 -0
- data/lib/shrine/plugins/versions.rb +9 -7
- data/lib/shrine/storage/file_system.rb +46 -30
- data/lib/shrine/storage/linter.rb +48 -25
- data/lib/shrine/storage/s3.rb +89 -22
- data/lib/shrine/version.rb +1 -1
- data/shrine.gemspec +3 -3
- metadata +16 -5
data/doc/changing_location.md
ADDED
@@ -0,0 +1,50 @@
+# Changing Location of Files
+
+You have a production app with already uploaded attachments. However, you've
+realized that the existing store folder structure for attachments isn't working
+for you.
+
+The first step is to change the location, either by using the `pretty_location`
+plugin:
+
+```rb
+Shrine.plugin :pretty_location
+```
+
+Or by overriding `#generate_location`:
+
+```rb
+class MyUploader < Shrine
+  def generate_location(io, context)
+    "#{context[:record].class}/#{context[:record].id}/#{io.original_filename}"
+  end
+end
+```
+
+After you've deployed this change, all existing attachments on old locations
+will continue to work properly. The next step is to run a script that will
+move those to new locations. The easiest way to do that is to reupload them:
+
+```rb
+Shrine.plugin :migration_helpers # before the model is loaded
+Shrine.plugin :multi_delete # for deleting multiple files at once
+```
+```rb
+old_avatars = []
+
+User.paged_each do |user|
+  user.update_avatar do |avatar|
+    old_avatars << avatar
+    user.avatar_store.upload(avatar)
+  end
+end
+
+if old_avatars.any?
+  # you'll have to change this code slightly if you're using versions
+  uploader = old_avatars.first.uploader
+  uploader.delete(old_avatars)
+end
+```
+
+And now all your existing attachments should be happily living on new
+locations.
data/doc/creating_plugins.md
CHANGED
@@ -1,4 +1,4 @@
-# Creating a
+# Creating a New Plugin
 
 Shrine has a lot of plugins built-in, but you can also easily create your own.
 Simply put, a plugin is a module:
@@ -68,7 +68,7 @@ these modules, you can also make your plugin configurable:
 Shrine.plugin :my_plugin, foo: "bar"
 ```
 
-You can do this
+You can do this by adding a `.configure` method to your plugin, which will be
 given any passed in arguments or blocks. Typically you'll want to save these
 options into Shrine's `opts`, so that you can access them inside of Shrine's
 methods.
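The `.configure` hook this hunk describes can be sketched outside of Shrine with a toy plugin system. The names below (`Uploader`, `MyPlugin`) are illustrative, not Shrine's actual internals:

```ruby
# A minimal sketch of a plugin system with a .configure hook.
class Uploader
  def self.opts
    @opts ||= {}
  end

  # Loads a plugin: calls its .configure hook with any passed options,
  # then mixes in its InstanceMethods, as the guide describes.
  def self.plugin(mod, **options)
    mod.configure(self, **options) if mod.respond_to?(:configure)
    include mod::InstanceMethods if defined?(mod::InstanceMethods)
  end
end

module MyPlugin
  # Save the passed-in options into opts so they're accessible later.
  def self.configure(uploader, **options)
    uploader.opts[:my_plugin] = options
  end

  module InstanceMethods
    def foo_option
      self.class.opts[:my_plugin][:foo]
    end
  end
end

Uploader.plugin MyPlugin, foo: "bar"
Uploader.new.foo_option # => "bar"
```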
data/doc/creating_storages.md
CHANGED
@@ -1,4 +1,6 @@
-# Creating a
+# Creating a New Storage
+
+## Essentials
 
 Shrine ships with the FileSystem and S3 storages, but it's also easy to create
 your own. A storage is a class which has at least the following methods:
@@ -47,18 +49,46 @@ class Shrine
 end
 ```
 
-
-`
+If your storage doesn't control which id the uploaded file will have, you
+can modify the `id` variable:
 
 ```rb
-
+def upload(io, id, metadata = {})
+  actual_id = do_upload(io, id, metadata)
+  id.replace(actual_id)
+end
+```
 
-
-
+Likewise, if you need to save some information into the metadata after upload,
+you can modify the metadata hash:
+
+```rb
+def upload(io, id, metadata = {})
+  additional_metadata = do_upload(io, id, metadata)
+  metadata.merge!(additional_metadata)
+end
 ```
 
-
-
+## Streaming
+
+If your storage can stream files by yielding chunks, you can add an additional
+`#stream` method:
+
+```rb
+class Shrine
+  module Storage
+    class MyStorage
+      # ...
+
+      def stream(id)
+        # yields chunks of the file
+      end
+
+      # ...
+    end
+  end
+end
+```
 
 ## Moving
 
@@ -76,7 +106,7 @@ class Shrine
   end
 
   def movable?(io, id)
-    # whether the given `io` is movable
+    # whether the given `io` is movable to the location `id`
   end
 
   # ...
@@ -106,3 +136,34 @@ class Shrine
   end
 end
 ```
+
+## Linter
+
+To check that your storage implements all these methods correctly, you can use
+`Shrine::Storage::Linter`:
+
+```rb
+require "shrine/storage/linter"
+
+storage = Shrine::Storage::MyStorage.new(*args)
+linter = Shrine::Storage::Linter.new(storage)
+linter.call
+```
+
+The linter will test your methods with simple IO objects, and raise an error
+with an appropriate message if a part of the specification isn't satisfied.
+
+If you want to specify the IO object to use for testing (e.g. you need the IO
+to be an actual image), you can pass in a lambda which returns the IO when
+called:
+
+```rb
+linter.call(->{File.open("test/fixtures/image.jpg")})
+```
+
+If you don't want errors to be raised but rather only warnings, you can
+pass `action: :warn` when initializing:
+
+```rb
+linter = Shrine::Storage::Linter.new(storage, action: :warn)
+```
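The storage interface this guide describes can be exercised with a toy in-memory implementation. A sketch under the stated assumptions (the `Memory` class is illustrative and not shipped with Shrine; the exact set of required methods is what the linter checks):

```ruby
require "stringio"
require "tempfile"

class Shrine
  module Storage
    # An illustrative in-memory storage; method names follow the interface
    # described in the guide above.
    class Memory
      def initialize
        @store = {}
      end

      def upload(io, id, metadata = {})
        @store[id] = io.read
      end

      # Returns the file contents as a Tempfile.
      def download(id)
        tempfile = Tempfile.new("memory")
        tempfile.binmode
        tempfile.write(@store.fetch(id))
        tempfile.rewind
        tempfile
      end

      # Returns an IO-like object for the stored file.
      def open(id)
        StringIO.new(@store.fetch(id))
      end

      def exists?(id)
        @store.key?(id)
      end

      def delete(id)
        @store.delete(id)
      end

      def url(id, **options)
        "memory://#{id}" # a real storage would return a usable URL here
      end

      def clear!
        @store.clear
      end
    end
  end
end

storage = Shrine::Storage::Memory.new
storage.upload(StringIO.new("file content"), "foo")
storage.exists?("foo")        # => true
storage.open("foo").read      # => "file content"
```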
data/doc/direct_s3.md
CHANGED
@@ -1,13 +1,23 @@
-# Direct
+# Direct Uploads to S3
 
 Probably the best way to do file uploads is to upload them directly to S3, and
-
-
+then, when the file is moved to a permanent place upon saving the record, put
+that and any additional file processing in the background. The goal of this
+guide is to provide instructions and evaluate possible ways of doing this.
+
+```rb
+require "shrine/storage/s3"
+
+Shrine.storages = {
+  cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
+  store: Shrine::Storage::S3.new(prefix: "store", **s3_options),
+}
+```
 
 ## Enabling CORS
 
-First thing that
-by clicking on "Properties > Permissions > Add CORS Configuration", and
+First thing that you need to do is enable CORS on your S3 bucket. You can do
+that by clicking on "Properties > Permissions > Add CORS Configuration", and
 then just follow the Amazon documentation on how to write a CORS file.
 
 http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
@@ -15,50 +25,50 @@ http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
 Note that it may take some time for the CORS settings to be applied, due to
 DNS propagation.
 
-##
-
-If you're doing just a single file upload in your form, you can generate
-upfront the fields necessary for direct S3 uploads using
-`Shrine::Storage::S3#presign`. This method returns a [`Aws::S3::PresignedPost`]
-object, which has `#url` and `#fields`, which you could use like this:
+## File hash
 
-
-<% presign = Shrine.storages[:cache].presign(SecureRandom.hex) %>
+Shrine's JSON representation of an uploaded file looks like this:
 
-
-
-
-
-
-
+```rb
+{
+  "id": "349234854924394", # required
+  "storage": "cache", # required
+  "metadata": {
+    "size": 45461, # required
+    "filename": "foo.jpg", # optional
+    "mime_type": "image/jpeg", # optional
+  }
+}
 ```
 
-
+The `id`, `storage` and `metadata.size` fields are required, and the rest of
+the metadata is optional. After uploading the file to S3, you need to construct
+this JSON and assign it to the hidden attachment field in the form.
 
-
-Shrine.storages[:cache].presign(SecureRandom.hex,
-  content_length_range: 0..(5*1024*1024), # Limit of 5 MB
-  success_action_redirect: webhook_url, # Tell S3 where to redirect
-  # ...
-)
-```
+## Strategy A (dynamic)
 
-
+* Best user experience
+* Single or multiple file uploads
+* Some JavaScript needed
 
-
-
-plugins provides a route just for that:
+You can configure the `direct_upload` plugin to expose the presign route, and
+mount the endpoint:
 
 ```rb
 plugin :direct_upload, presign: true
 ```
+```rb
+Rails.application.routes.draw do
+  mount ImageUploader::UploadEndpoint => "attachments/image"
+end
+```
 
 This gives the endpoint a `GET /:storage/presign` route, which generates a
 presign object and returns it as JSON:
 
 ```rb
 {
-  "url" => "https://
+  "url" => "https://my-bucket.s3-eu-west-1.amazonaws.com",
   "fields" => {
     "key" => "b7d575850ba61b44c8a9ff889dfdb14d88cdc25f8dd121004c8",
     "policy" => "eyJleHBpcmF0aW9uIjoiMjAxNS0QwMToxMToyOVoiLCJjb25kaXRpb25zIjpbeyJidWNrZXQiOiJzaHJpbmUtdGVzdGluZyJ9LHsia2V5IjoiYjdkNTc1ODUwYmE2MWI0NGU3Y2M4YTliZmY4OGU5ZGZkYjE2NTQ0ZDk4OGNkYzI1ZjhkZDEyMTAwNGM4In0seyJ4LWFtei1jcmVkZW50aWFsIjoiQUtJQUlKRjU1VE1aWlk0NVVUNlEvMjAxNTEwMjQvZXUtd2VzdC0xL3MzL2F3czRfcmVxdWVzdCJ9LHsieC1hbXotYWxnb3JpdGhtIjoiQVdTNC1ITUFDLVNIQTI1NiJ9LHsieC1hbXotZGF0ZSI6IjIwMTUxMDI0VDAwMTEyOVoifV19",
@@ -70,47 +80,108 @@ presign object and returns it as JSON:
 }
 ```
 
-
-the
-
-
-
-a
+When the user attaches a file, you should first request the presign object from
+the direct endpoint, and then upload the file to the given URL with the given
+fields. For uploading to S3 you can use any of the great JavaScript libraries
+out there, [jQuery-File-Upload] for example.
+
+After the upload you create a JSON representation of the uploaded file and
+usually write it to the hidden attachment field in the form:
+
+```js
+var image = {
+  id: /cache\/(.+)/.exec(key)[1], // we have to remove the prefix part
+  storage: 'cache',
+  metadata: {
+    size: data.files[0].size,
+    filename: data.files[0].name,
+    mime_type: data.files[0].type,
+  }
+}
 
-
-plugin :direct_upload, presign: ->(request) do # yields a Roda request object
-  {success_action_redirect: "http://example.com/webhook"}
-end
+$('input[type=file]').prev().val(JSON.stringify(image))
 ```
 
-
+It's generally a good idea to disable the submit button until the file is
+uploaded, as well as display a progress bar. See the [example app] for the
+working implementation of multiple direct S3 uploads.
 
-
-the uploaded file which Shrine will understand. This is how a Shrine's uploaded
-file looks like:
+## Strategy B (static)
 
-
-
-
-
-
-
-
-
+* Basic user experience
+* Only for single uploads
+* No JavaScript needed
+
+An alternative to the previous strategy is generating a file upload form that
+submits synchronously to S3, and then redirects back to your application.
+For that you can use `Shrine::Storage::S3#presign`, which returns an
+[`Aws::S3::PresignedPost`] object, which has `#url` and `#fields`:
+
+```erb
+<% presign = Shrine.storages[:cache].presign(SecureRandom.hex, success_action_redirect: new_album_url) %>
+
+<form action="<%= presign.url %>" method="post" enctype="multipart/form-data">
+  <input type="file" name="file">
+  <% presign.fields.each do |name, value| %>
+    <input type="hidden" name="<%= name %>" value="<%= value %>">
+  <% end %>
+  <input type="submit" value="Upload">
+</form>
+```
+
+After the file is submitted, S3 will redirect to the URL you specified and
+include the object key as a query param:
+
+```erb
+<%
+  cached_file = {
+    storage: "cache",
+    id: params[:key][/cache\/(.+)/, 1], # we have to remove the prefix part
+    metadata: {
+      size: Shrine.storages[:cache].bucket.object(params[:key]).size,
+    }
 }
-
+%>
+
+<form action="/albums" method="post">
+  <input type="hidden" name="album[image]" value="<%= cached_file.to_json %>">
+  <input type="submit" value="Save">
+</form>
 ```
 
-
-
-
+Notice that we needed to fetch and assign the size of the uploaded file. This
+is because this hash is later transformed into an IO which requires `#size`
+to be non-nil (and it is read from the metadata field).
+
+## Eventual consistency
+
+When uploading objects to Amazon S3, sometimes they may not be available
+immediately. This can be a problem when using direct S3 uploads, because
+usually in this case you're using S3 for both cache and store, so the S3 object
+is moved to store soon after caching.
+
+> Amazon S3 provides eventual consistency for some operations, so it is
+> possible that new data will not be available immediately after the upload,
+> which could result in an incomplete data load or loading stale data. COPY
+> operations where the cluster and the bucket are in different regions are
+> eventually consistent. All regions provide read-after-write consistency for
+> uploads of new objects with unique object keys. For more information about
+> data consistency, see [Amazon S3 Data Consistency Model] in the *Amazon Simple
+> Storage Service Developer Guide*.
+
+This means that in certain cases copying from cache to store can fail if it
+happens immediately after uploading to cache. If you start noticing these
+errors, and you're using the `backgrounding` plugin, you can tell your
+backgrounding library to perform the job with a delay:
 
 ```rb
-
+Shrine.plugin :backgrounding
+Shrine::Attacher.promote do |data|
+  UploadJob.perform_in(60, data) # tells a Sidekiq worker to perform in 1 minute
+end
 ```
 
-In a form you can assign this to an appropriate "hidden" field.
-
 [`Aws::S3::PresignedPost`]: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Bucket.html#presigned_post-instance_method
 [example app]: https://github.com/janko-m/shrine-example
 [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
+[Amazon S3 Data Consistency Model]: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyMode
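Both strategies in the new direct_s3 guide build the uploaded-file hash by stripping the storage prefix from the S3 object key. A standalone sketch of that construction (the key and size below are made-up values):

```ruby
require "json"

# A made-up S3 object key, as returned after a direct upload to the
# "cache" storage.
key = "cache/49f10a56ac26a1a5c2a1e86f68e7a1a0"

cached_file = {
  storage: "cache",
  id: key[/cache\/(.+)/, 1], # strip the "cache/" prefix added by the storage
  metadata: {
    size: 45461, # in Strategy B this is fetched from S3 after the redirect
  },
}

cached_file.to_json # this JSON goes into the hidden attachment field
```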
data/doc/migrating_storage.md
CHANGED
@@ -1,4 +1,4 @@
-# Migrating to
+# Migrating to Another Storage
 
 While your application is live in production and performing uploads, it may
 happen that you decide you want to change your storage (the `:store`). Shrine
@@ -12,8 +12,8 @@ current store (let's say that you're migrating from FileSystem to S3):
 
 ```rb
 Shrine.storages = {
-  cache: Shrine::Storage::FileSystem.new("public",
-  store: Shrine::Storage::FileSystem.new("public",
+  cache: Shrine::Storage::FileSystem.new("public", prefix: "uploads/cache"),
+  store: Shrine::Storage::FileSystem.new("public", prefix: "uploads/store"),
   new_store: Shrine::Storage::S3.new(**s3_options),
 }
 
@@ -30,11 +30,12 @@ files to the new storage, and update the records. This is how you can do it
 if you're using Sequel:
 
 ```rb
-Shrine.plugin :migration_helpers
-
+Shrine.plugin :migration_helpers # before the model is loaded
+```
+```rb
 User.paged_each do |user|
   user.update_avatar do |avatar|
-    user.avatar_store.upload(avatar)
+    user.avatar_store.upload(avatar, {record: user, name: :avatar})
   end
 end
 
@@ -53,7 +54,7 @@ be `:store` again):
 
 ```rb
 Shrine.storages = {
-  cache: Shrine::Storage::FileSystem.new("public",
+  cache: Shrine::Storage::FileSystem.new("public", prefix: "uploads/cache"),
   store: Shrine::Storage::S3.new(**s3_options),
 }
 
@@ -64,11 +65,12 @@ Shrine.storages[:new_store] = Shrine.storages[:store]
 Sequel it would be something like:
 
 ```rb
-Shrine.plugin :
-
+Shrine.plugin :migration_helpers # before the model is loaded
+```
+```rb
 User.paged_each do |user|
   user.update_avatar do |avatar|
-    avatar.to_json.gsub('new_store', 'store')
+    avatar.to_json.gsub('"new_store"', '"store"')
   end
 end
 
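The quoting added to the `gsub` in this last hunk matters: replacing the bare substring would also rewrite any value that merely contains it. A quick standalone check (the id value here is a contrived example):

```ruby
require "json"

# A record whose id happens to contain the substring "new_store"
# (contrived, but it shows why the quotes matter).
data = { "id" => "new_store/image.jpg", "storage" => "new_store" }.to_json

# Naive replacement corrupts the id as well:
data.gsub('new_store', 'store')
# => '{"id":"store/image.jpg","storage":"store"}'

# Replacing the quoted value only rewrites the storage field:
data.gsub('"new_store"', '"store"')
# => '{"id":"new_store/image.jpg","storage":"store"}'
```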