shrine 1.0.0 → 1.1.0

Potentially problematic release.



Files changed (40)
  1. checksums.yaml +4 -4
  2. data/README.md +101 -149
  3. data/doc/carrierwave.md +12 -16
  4. data/doc/changing_location.md +50 -0
  5. data/doc/creating_plugins.md +2 -2
  6. data/doc/creating_storages.md +70 -9
  7. data/doc/direct_s3.md +132 -61
  8. data/doc/migrating_storage.md +12 -10
  9. data/doc/paperclip.md +12 -17
  10. data/doc/refile.md +338 -0
  11. data/doc/regenerating_versions.md +75 -11
  12. data/doc/securing_uploads.md +172 -0
  13. data/lib/shrine.rb +21 -16
  14. data/lib/shrine/plugins/activerecord.rb +2 -2
  15. data/lib/shrine/plugins/background_helpers.rb +2 -148
  16. data/lib/shrine/plugins/backgrounding.rb +148 -0
  17. data/lib/shrine/plugins/backup.rb +88 -0
  18. data/lib/shrine/plugins/data_uri.rb +25 -4
  19. data/lib/shrine/plugins/default_url.rb +37 -0
  20. data/lib/shrine/plugins/delete_uploaded.rb +40 -0
  21. data/lib/shrine/plugins/determine_mime_type.rb +4 -2
  22. data/lib/shrine/plugins/direct_upload.rb +107 -62
  23. data/lib/shrine/plugins/download_endpoint.rb +157 -0
  24. data/lib/shrine/plugins/hooks.rb +19 -5
  25. data/lib/shrine/plugins/keep_location.rb +43 -0
  26. data/lib/shrine/plugins/moving.rb +11 -10
  27. data/lib/shrine/plugins/parallelize.rb +1 -5
  28. data/lib/shrine/plugins/parsed_json.rb +7 -1
  29. data/lib/shrine/plugins/pretty_location.rb +6 -0
  30. data/lib/shrine/plugins/rack_file.rb +7 -1
  31. data/lib/shrine/plugins/remove_invalid.rb +22 -0
  32. data/lib/shrine/plugins/sequel.rb +2 -2
  33. data/lib/shrine/plugins/upload_options.rb +41 -0
  34. data/lib/shrine/plugins/versions.rb +9 -7
  35. data/lib/shrine/storage/file_system.rb +46 -30
  36. data/lib/shrine/storage/linter.rb +48 -25
  37. data/lib/shrine/storage/s3.rb +89 -22
  38. data/lib/shrine/version.rb +1 -1
  39. data/shrine.gemspec +3 -3
  40. metadata +16 -5
data/doc/changing_location.md CHANGED
@@ -0,0 +1,50 @@
+ # Changing Location of Files
+
+ You have a production app with already uploaded attachments. However, you've
+ realized that the existing store folder structure for attachments isn't working
+ for you.
+
+ The first step is to change the location, either by using the `pretty_location`
+ plugin:
+
+ ```rb
+ Shrine.plugin :pretty_location
+ ```
+
+ Or by overriding `#generate_location`:
+
+ ```rb
+ class MyUploader < Shrine
+   def generate_location(io, context)
+     "#{context[:record].class}/#{context[:record].id}/#{io.original_filename}"
+   end
+ end
+ ```
+
+ After you've deployed this change, all existing attachments on old locations
+ will continue to work properly. The next step is to run a script that will
+ move those to new locations. The easiest way to do that is to reupload them:
+
+ ```rb
+ Shrine.plugin :migration_helpers # before the model is loaded
+ Shrine.plugin :multi_delete # for deleting multiple files at once
+ ```
+ ```rb
+ old_avatars = []
+
+ User.paged_each do |user|
+   user.update_avatar do |avatar|
+     old_avatars << avatar
+     user.avatar_store.upload(avatar)
+   end
+ end
+
+ if old_avatars.any?
+   # you'll have to change this code slightly if you're using versions
+   uploader = old_avatars.first.uploader
+   uploader.delete(old_avatars)
+ end
+ ```
+
+ And now all your existing attachments should be happily living on new
+ locations.
data/doc/creating_plugins.md CHANGED
@@ -1,4 +1,4 @@
- # Creating a new plugin
+ # Creating a New Plugin
 
  Shrine has a lot of plugins built-in, but you can also easily create your own.
  Simply put, a plugin is a module:
@@ -68,7 +68,7 @@ these modules, you can also make your plugin configurable:
  Shrine.plugin :my_plugin, foo: "bar"
  ```
 
- You can do this my adding a `.configure` method to your plugin, which will be
+ You can do this by adding a `.configure` method to your plugin, which will be
  given any passed in arguments or blocks. Typically you'll want to save these
  options into Shrine's `opts`, so that you can access them inside of Shrine's
  methods.
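
The `creating_plugins.md` change above mentions a `.configure` method without showing one. As a rough sketch (the `MyPlugin` module and the `:my_plugin` opts key are made-up names, not part of Shrine), a configurable plugin might look like this:

```ruby
# Sketch of a configurable plugin: `.configure` receives the uploader class
# and any options passed to `Shrine.plugin :my_plugin, ...`, and saves them
# into the uploader's `opts` hash so plugin modules can read them later.
module MyPlugin
  def self.configure(uploader, options = {})
    uploader.opts[:my_plugin] = options
  end

  module InstanceMethods
    # instance-level accessor for the stored options
    def my_plugin_options
      self.class.opts[:my_plugin]
    end
  end
end
```

When the plugin is loaded, Shrine calls `.configure` with the uploader class and the passed options, and includes `InstanceMethods` into the uploader.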
data/doc/creating_storages.md CHANGED
@@ -1,4 +1,6 @@
- # Creating a new storage
+ # Creating a New Storage
+
+ ## Essentials
 
  Shrine ships with the FileSystem and S3 storages, but it's also easy to create
  your own. A storage is a class which has at least the following methods:
@@ -47,18 +49,46 @@ class Shrine
  end
  ```
 
- To check that your storage implements all these methods correctly, you can use
- `Shrine::Storage::Linter` in tests:
+ If your storage doesn't control which id the uploaded file will have, you
+ can modify the `id` variable:
 
  ```rb
- require "shrine/storage/linter"
+ def upload(io, id, metadata = {})
+   actual_id = do_upload(io, id, metadata)
+   id.replace(actual_id)
+ end
+ ```
 
- storage = Shrine::Storage::MyStorage.new(*args)
- Shrine::Storage::Linter.call(storage)
+ Likewise, if you need to save some information into the metadata after upload,
+ you can modify the metadata hash:
+
+ ```rb
+ def upload(io, id, metadata = {})
+   additional_metadata = do_upload(io, id, metadata)
+   metadata.merge!(additional_metadata)
+ end
  ```
 
- The linter will pass real files through your storage, and raise an error with
- an appropriate message if a part of the specification isn't satisfied.
+ ## Streaming
+
+ If your storage can stream files by yielding chunks, you can add an additional
+ `#stream` method:
+
+ ```rb
+ class Shrine
+   module Storage
+     class MyStorage
+       # ...
+
+       def stream(id)
+         # yields chunks of the file
+       end
+
+       # ...
+     end
+   end
+ end
+ ```
 
  ## Moving
 
@@ -76,7 +106,7 @@ class Shrine
  end
 
  def movable?(io, id)
-   # whether the given `io` is movable, to the location `id`
+   # whether the given `io` is movable to the location `id`
  end
 
  # ...
@@ -106,3 +136,34 @@ class Shrine
    end
  end
  ```
+
+ ## Linter
+
+ To check that your storage implements all these methods correctly, you can use
+ `Shrine::Storage::Linter`:
+
+ ```rb
+ require "shrine/storage/linter"
+
+ storage = Shrine::Storage::MyStorage.new(*args)
+ linter = Shrine::Storage::Linter.new(storage)
+ linter.call
+ ```
+
+ The linter will test your methods with simple IO objects, and raise an error
+ with an appropriate message if a part of the specification isn't satisfied.
+
+ If you want to specify the IO object to use for testing (e.g. you need the IO
+ to be an actual image), you can pass in a lambda which returns the IO when
+ called:
+
+ ```rb
+ linter.call(->{File.open("test/fixtures/image.jpg")})
+ ```
+
+ If you don't want errors to be raised but rather only warnings, you can
+ pass `action: :warn` when initializing:
+
+ ```rb
+ linter = Shrine::Storage::Linter.new(storage, action: :warn)
+ ```
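
To make the storage contract described in `creating_storages.md` concrete, here is a minimal pure-Ruby in-memory storage sketch. It is not part of Shrine; the method set follows the interface the guide describes, and the `memory://` URL scheme is purely illustrative:

```ruby
require "stringio"

class Shrine
  module Storage
    # A minimal in-memory storage sketch: files are kept in a plain Hash
    # keyed by id. A real storage would also persist data and build real URLs.
    class Memory
      def initialize
        @store = {}
      end

      def upload(io, id, metadata = {})
        @store[id] = io.read # store the IO's content under the given id
      end

      def download(id)
        StringIO.new(@store.fetch(id)) # return an IO with the file's content
      end

      def exists?(id)
        @store.key?(id)
      end

      def delete(id)
        @store.delete(id)
      end

      def url(id, **options)
        "memory://#{id}" # placeholder URL scheme for illustration
      end

      def clear!(confirm = nil)
        @store.clear
      end
    end
  end
end
```

Running such a class through `Shrine::Storage::Linter` (as shown in the guide) is the easiest way to verify the sketch satisfies the specification.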
data/doc/direct_s3.md CHANGED
@@ -1,13 +1,23 @@
- # Direct uploads to S3
+ # Direct Uploads to S3
 
  Probably the best way to do file uploads is to upload them directly to S3, and
- afterwards do processing in a background job. Direct S3 uploads are a bit more
- involved, so we'll explain the process.
+ then, upon saving the record, move the file to a permanent place, putting that
+ move and any additional file processing in the background. The goal of this
+ guide is to provide instructions, as well as to evaluate possible ways of
+ doing this.
+
+ ```rb
+ require "shrine/storage/s3"
+
+ Shrine.storages = {
+   cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
+   store: Shrine::Storage::S3.new(prefix: "store", **s3_options),
+ }
+ ```
 
  ## Enabling CORS
 
- First thing that we need to do is enable CORS on our S3 bucket. You can do that
- by clicking on "Properties > Permissions > Add CORS Configuration", and
+ First thing that you need to do is enable CORS on your S3 bucket. You can do
+ that by clicking on "Properties > Permissions > Add CORS Configuration", and
  then just follow the Amazon documentation on how to write a CORS file.
 
  http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
@@ -15,50 +25,50 @@ http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
  Note that it may take some time for the CORS settings to be applied, due to
  DNS propagation.
 
- ## Static upload
-
- If you're doing just a single file upload in your form, you can generate
- upfront the fields necessary for direct S3 uploads using
- `Shrine::Storage::S3#presign`. This method returns a [`Aws::S3::PresignedPost`]
- object, which has `#url` and `#fields`, which you could use like this:
+ ## File hash
 
- ```erb
- <% presign = Shrine.storages[:cache].presign(SecureRandom.hex) %>
+ Shrine's JSON representation of an uploaded file looks like this:
 
- <form action="<%= presign.url %>" method="post" enctype="multipart/form-data">
-   <input type="file" name="file">
-   <% presign.fields.each do |name, value| %>
-     <input type="hidden" name="<%= name %>" value="<%= value %>">
-   <% end %>
- </form>
+ ```rb
+ {
+   "id": "349234854924394", # required
+   "storage": "cache", # required
+   "metadata": {
+     "size": 45461, # required
+     "filename": "foo.jpg", # optional
+     "mime_type": "image/jpeg", # optional
+   }
+ }
  ```
 
- You can also pass additional options to `#presign`:
+ The `id`, `storage` and `metadata.size` fields are required, and the rest of
+ the metadata is optional. After uploading the file to S3, you need to construct
+ this JSON and assign it to the hidden attachment field in the form.
 
- ```rb
- Shrine.storages[:cache].presign(SecureRandom.hex,
-   content_length_range: 0..(5*1024*1024), # Limit of 5 MB
-   success_action_redirect: webhook_url, # Tell S3 where to redirect
-   # ...
- )
- ```
+ ## Strategy A (dynamic)
 
- ## Dynamic upload
+ * Best user experience
+ * Single or multiple file uploads
+ * Some JavaScript needed
 
- If the frontend is separate from the backend, or you want to do multiple file
- uploads, you need to generate these presigns dynamically. The `direct_upload`
- plugins provides a route just for that:
+ You can configure the `direct_upload` plugin to expose the presign route, and
+ mount the endpoint:
 
  ```rb
  plugin :direct_upload, presign: true
  ```
+ ```rb
+ Rails.application.routes.draw do
+   mount ImageUploader::UploadEndpoint => "attachments/image"
+ end
+ ```
 
  This gives the endpoint a `GET /:storage/presign` route, which generates a
  presign object and returns it as JSON:
 
  ```rb
  {
-   "url" => "https://shrine-testing.s3-eu-west-1.amazonaws.com",
+   "url" => "https://my-bucket.s3-eu-west-1.amazonaws.com",
    "fields" => {
      "key" => "b7d575850ba61b44c8a9ff889dfdb14d88cdc25f8dd121004c8",
      "policy" => "eyJleHBpcmF0aW9uIjoiMjAxNS0QwMToxMToyOVoiLCJjb25kaXRpb25zIjpbeyJidWNrZXQiOiJzaHJpbmUtdGVzdGluZyJ9LHsia2V5IjoiYjdkNTc1ODUwYmE2MWI0NGU3Y2M4YTliZmY4OGU5ZGZkYjE2NTQ0ZDk4OGNkYzI1ZjhkZDEyMTAwNGM4In0seyJ4LWFtei1jcmVkZW50aWFsIjoiQUtJQUlKRjU1VE1aWlk0NVVUNlEvMjAxNTEwMjQvZXUtd2VzdC0xL3MzL2F3czRfcmVxdWVzdCJ9LHsieC1hbXotYWxnb3JpdGhtIjoiQVdTNC1ITUFDLVNIQTI1NiJ9LHsieC1hbXotZGF0ZSI6IjIwMTUxMDI0VDAwMTEyOVoifV19",
@@ -70,47 +80,108 @@ presign object and returns it as JSON:
    }
  }
  ```
 
- You can use this data in a similar way as with static upload. See
- the [example app] for how multiple file upload to S3 can be done using
- [jQuery-File-Upload].
-
- If you want to pass additional options to `Storage::S3#presign`, you can pass
- a block to `:presign`:
+ When the user attaches a file, you should first request the presign object from
+ the direct endpoint, and then upload the file to the given URL with the given
+ fields. For uploading to S3 you can use any of the great JavaScript libraries
+ out there, [jQuery-File-Upload] for example.
+
+ After the upload you create a JSON representation of the uploaded file and
+ usually write it to the hidden attachment field in the form:
+
+ ```js
+ var image = {
+   id: /cache\/(.+)/.exec(key)[1], // we have to remove the prefix part
+   storage: 'cache',
+   metadata: {
+     size: data.files[0].size,
+     filename: data.files[0].name,
+     mime_type: data.files[0].type,
+   }
+ }
 
- ```rb
- plugin :direct_upload, presign: ->(request) do # yields a Roda request object
-   {success_action_redirect: "http://example.com/webhook"}
- end
+ $('input[type=file]').prev().val(JSON.stringify(image))
  ```
 
- ## File hash
+ It's generally a good idea to disable the submit button until the file is
+ uploaded, as well as display a progress bar. See the [example app] for a
+ working implementation of multiple direct S3 uploads.
 
- Once you've uploaded the file to S3, you need to create the representation of
- the uploaded file which Shrine will understand. This is how a Shrine's uploaded
- file looks like:
+ ## Strategy B (static)
 
- ```rb
- {
-   "id" => "349234854924394",
-   "storage" => "cache",
-   "metadata" => {
-     "size" => 45461,
-     "filename" => "foo.jpg", # optional
-     "mime_type" => "image/jpeg", # optional
+ * Basic user experience
+ * Only for single uploads
+ * No JavaScript needed
+
+ An alternative to the previous strategy is generating a file upload form that
+ submits synchronously to S3, and then redirects back to your application.
+ For that you can use `Shrine::Storage::S3#presign`, which returns an
+ [`Aws::S3::PresignedPost`] object, which has `#url` and `#fields`:
+
+ ```erb
+ <% presign = Shrine.storages[:cache].presign(SecureRandom.hex, success_action_redirect: new_album_url) %>
+
+ <form action="<%= presign.url %>" method="post" enctype="multipart/form-data">
+   <input type="file" name="file">
+   <% presign.fields.each do |name, value| %>
+     <input type="hidden" name="<%= name %>" value="<%= value %>">
+   <% end %>
+   <input type="submit" value="Upload">
+ </form>
+ ```
+
+ After the file is submitted, S3 will redirect to the URL you specified and
+ include the object key as a query param:
+
+ ```erb
+ <%
+   cached_file = {
+     storage: "cache",
+     id: params[:key][/cache\/(.+)/, 1], # we have to remove the prefix part
+     metadata: {
+       size: Shrine.storages[:cache].bucket.object(params[:key]).size,
+     }
    }
- }
+ %>
+
+ <form action="/albums" method="post">
+   <input type="hidden" name="album[image]" value="<%= cached_file.to_json %>">
+   <input type="submit" value="Save">
+ </form>
  ```
 
- The `id`, `storage` and `metadata.size` fields are required, and the rest of
- the metadata is optional. You need to assign a JSON representation of this
- hash to the model in place of the attachment.
+ Notice that we needed to fetch and assign the size of the uploaded file. This
+ is because this hash is later transformed into an IO which requires `#size`
+ to be non-nil (and it is read from the metadata field).
+
+ ## Eventual consistency
+
+ When uploading objects to Amazon S3, sometimes they may not be available
+ immediately. This can be a problem when using direct S3 uploads, because
+ usually in this case you're using S3 for both cache and store, so the S3 object
+ is moved to store soon after caching.
+
+ > Amazon S3 provides eventual consistency for some operations, so it is
+ > possible that new data will not be available immediately after the upload,
+ > which could result in an incomplete data load or loading stale data. COPY
+ > operations where the cluster and the bucket are in different regions are
+ > eventually consistent. All regions provide read-after-write consistency for
+ > uploads of new objects with unique object keys. For more information about
+ > data consistency, see [Amazon S3 Data Consistency Model] in the *Amazon Simple
+ > Storage Service Developer Guide*.
+
+ This means that in certain cases copying from cache to store can fail if it
+ happens immediately after uploading to cache. If you start noticing these
+ errors, and you're using the `backgrounding` plugin, you can tell your
+ backgrounding library to perform the job with a delay:
 
  ```rb
- user.avatar = '{"id":"43244656","storage":"cache",...}'
+ Shrine.plugin :backgrounding
+ Shrine::Attacher.promote do |data|
+   UploadJob.perform_in(60, data) # tells a Sidekiq worker to perform in 1 minute
+ end
  ```
 
- In a form you can assign this to an appropriate "hidden" field.
-
  [`Aws::S3::PresignedPost`]: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Bucket.html#presigned_post-instance_method
  [example app]: https://github.com/janko-m/shrine-example
  [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
+ [Amazon S3 Data Consistency Model]: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyMode
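
Both upload strategies in `direct_s3.md` strip the storage prefix from the S3 object key before building the attachment hash. As a small standalone sketch of that step (the `cache_id_from_key` helper name is made up; the regex mirrors the one used in the guide):

```ruby
# Hypothetical helper: extract the Shrine id from a full S3 object key by
# removing the storage prefix (e.g. "cache/") that S3 includes in the key.
def cache_id_from_key(key, prefix: "cache")
  key[/\A#{Regexp.escape(prefix)}\/(.+)/, 1] # capture everything after "prefix/"
end

cache_id_from_key("cache/b7d575850ba61b44c8a9ff.jpg")
# => "b7d575850ba61b44c8a9ff.jpg"
```

The captured value is what goes into the `id` field of the cached file hash, while the full key stays useful for S3 API calls such as fetching the object's size.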
data/doc/migrating_storage.md CHANGED
@@ -1,4 +1,4 @@
- # Migrating to another storage
+ # Migrating to Another Storage
 
  While your application is live in production and performing uploads, it may
  happen that you decide you want to change your storage (the `:store`). Shrine
@@ -12,8 +12,8 @@ current store (let's say that you're migrating from FileSystem to S3):
 
  ```rb
  Shrine.storages = {
-   cache: Shrine::Storage::FileSystem.new("public", subdirectory: "uploads/cache"),
-   store: Shrine::Storage::FileSystem.new("public", subdirectory: "uploads/store"),
+   cache: Shrine::Storage::FileSystem.new("public", prefix: "uploads/cache"),
+   store: Shrine::Storage::FileSystem.new("public", prefix: "uploads/store"),
    new_store: Shrine::Storage::S3.new(**s3_options),
  }
 
@@ -30,11 +30,12 @@ files to the new storage, and update the records. This is how you can do it
  if you're using Sequel:
 
  ```rb
- Shrine.plugin :migration_helpers
-
+ Shrine.plugin :migration_helpers # before the model is loaded
+ ```
+ ```rb
  User.paged_each do |user|
    user.update_avatar do |avatar|
-     user.avatar_store.upload(avatar)
+     user.avatar_store.upload(avatar, {record: user, name: :avatar})
    end
  end
 
@@ -53,7 +54,7 @@ be `:store` again):
 
  ```rb
  Shrine.storages = {
-   cache: Shrine::Storage::FileSystem.new("public", subdirectory: "uploads/cache"),
+   cache: Shrine::Storage::FileSystem.new("public", prefix: "uploads/cache"),
    store: Shrine::Storage::S3.new(**s3_options),
  }
 
@@ -64,11 +65,12 @@ Shrine.storages[:new_store] = Shrine.storages[:store]
  Sequel it would be something like:
 
  ```rb
- Shrine.plugin :migration_helpers
-
+ Shrine.plugin :migration_helpers # before the model is loaded
+ ```
+ ```rb
  User.paged_each do |user|
    user.update_avatar do |avatar|
-     avatar.to_json.gsub('new_store', 'store')
+     avatar.to_json.gsub('"new_store"', '"store"')
    end
  end