shrine 2.1.1 → 2.2.0


data/doc/carrierwave.md CHANGED
@@ -41,22 +41,21 @@ own options that are specific to them.
 
 ### Processing
 
-In Shrine processing is done instance-level in the `#process` method, and can
-be specified for each phase. You can return a single processed file or a hash of
-versions (with the `versions` plugin):
+In Shrine processing is defined and performed on the instance-level, which
+gives a lot of flexibility. You can return a single processed file or a hash of
+versions:
 
 ```rb
 require "image_processing/mini_magick" # part of the "image_processing" gem
 
 class ImageUploader < Shrine
   include ImageProcessing::MiniMagick
+  plugin :processing
   plugin :versions
 
-  def process(io, context)
-    if context[:phase] == :store
-      thumb = resize_to_limit(io.download, 300, 300)
-      {original: io, thumb: thumb}
-    end
+  process(:store) do |io, context|
+    thumbnail = resize_to_limit(io.download, 300, 300)
+    {original: io, thumbnail: thumbnail}
   end
 end
 ```
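The `process(:store)` block added above is registered at the class level but executed in the context of an uploader instance. As a rough illustration of how such a DSL can work (a toy sketch, not Shrine's actual implementation; all names here are made up):

```ruby
# Toy sketch of a class-level "process" DSL: blocks are registered per
# action and later executed in the context of an instance, so they can
# call instance methods. NOT Shrine's real implementation.
class ToyUploader
  def self.process_blocks
    @process_blocks ||= {}
  end

  # Register a processing block for the given action (e.g. :store).
  def self.process(action, &block)
    process_blocks[action] = block
  end

  # Run the registered block (if any) in the context of this instance.
  def process(io, action)
    block = self.class.process_blocks[action]
    instance_exec(io, &block) if block
  end
end

class ReverseUploader < ToyUploader
  process(:store) { |io| io.reverse }
end
```

Calling `ReverseUploader.new.process("abc", :store)` runs the block and returns `"cba"`, while actions with no registered block simply return `nil`.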
@@ -496,23 +495,13 @@ them to validation errors.
 
 #### `validate_processing`, `ignore_processing_errors`
 
-Shrine doesn't offer any built-in ways of rescuing processing errors, because
-it completely depends on how you do your processing. You can easily add your
-own rescuing:
-
-```rb
-class ImageUploader < Shrine
-  def process(io, context)
-    # processing
-  rescue SomeProcessingError
-    # handling
-  end
-end
-```
+In Shrine processing is performed *after* validations, and typically
+asynchronously in a background job, so it is expected that you validate files
+before processing.
 
 #### `enable_processing`
 
-You can just do conditionals inside if `Shrine#process`.
+You can just add conditionals in processing code.
 
 #### `ensure_multipart_form`
 
@@ -20,12 +20,12 @@ Shrine.plugin :delete_promoted
 
 User.paged_each do |user|
   attacher = user.avatar_attacher
-  attacher.promote(phase: :migrate) if attacher.stored?
-  # use `attacher._promote(phase: :migrate)` if you want promoting to be backgrounded
+  attacher.promote(action: :migrate) if attacher.stored?
+  # use `attacher._promote(action: :migrate)` if you want promoting to be backgrounded
 end
 ```
 
-The `:phase` is not mandatory, it's just for better introspection when
+The `:action` is not mandatory, it's just for better introspection when
 monitoring background jobs and logs.
 
 Now all your existing attachments should be happily living on new locations.
@@ -3,32 +3,25 @@
 ## Essentials
 
 Shrine ships with the FileSystem and S3 storages, but it's also easy to create
-your own. A storage is a class which has at least the following methods:
+your own. A storage is a class which needs to implement the following
+methods:
 
 ```rb
 class Shrine
   module Storage
     class MyStorage
-      def initialize(*args)
-        # initializing logic
-      end
-
-      def upload(io, id, shrine_metadata: {}, **upload_options)
+      def upload(io, id, **options)
         # uploads `io` to the location `id`
       end
 
-      def download(id)
-        # downloads the file from the storage
+      def url(id, **options)
+        # URL to the remote file, accepts options for customizing the URL
       end
 
       def open(id)
         # returns the remote file as an IO-like object
       end
 
-      def read(id)
-        # returns the file contents as a string
-      end
-
       def exists?(id)
         # checks if the file exists on the storage
       end
@@ -36,14 +29,6 @@ class Shrine
       def delete(id)
        # deletes the file from the storage
      end
-
-      def url(id, **options)
-        # URL to the remote file, accepts options for customizing the URL
-      end
-
-      def clear!
-        # deletes all the files in the storage
-      end
     end
   end
 end
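For illustration, the interface described above can be satisfied by a toy in-memory storage (a sketch only; `MemoryStorage` is a made-up name and not part of Shrine):

```ruby
require "stringio"

# Toy in-memory storage implementing the interface described above.
# Purely illustrative -- a real storage talks to a filesystem or service.
class MemoryStorage
  def initialize
    @store = {}
  end

  def upload(io, id, **options)
    @store[id] = io.read # store the IO's content under the location `id`
  end

  def url(id, **options)
    "memory://#{id}" # a fake URL scheme, just for demonstration
  end

  def open(id)
    StringIO.new(@store.fetch(id)) # return the stored content as an IO
  end

  def exists?(id)
    @store.key?(id)
  end

  def delete(id)
    @store.delete(id)
  end
end
```

After `storage.upload(StringIO.new("hello"), "foo")`, `storage.open("foo").read` returns `"hello"` and `storage.exists?("foo")` is true until `storage.delete("foo")` is called.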
@@ -76,6 +61,28 @@ def upload(io, id, shrine_metadata: {}, **upload_options)
 end
 ```
 
+## Download
+
+Shrine automatically downloads the file to a Tempfile using `#open`. However,
+if you would like to implement your own downloading, you can define `#download`
+and Shrine will use that instead:
+
+```rb
+class Shrine
+  module Storage
+    class MyStorage
+      # ...
+
+      def download(id)
+        # download the file to a Tempfile
+      end
+
+      # ...
+    end
+  end
+end
+```
+
 ## Update
 
 If your storage supports updating data of existing files (e.g. some metadata),
@@ -108,7 +115,7 @@ class Shrine
     class MyStorage
       # ...
 
-      def move(io, id, shrine_metadata: {}, **upload_options)
+      def move(io, id, **upload_options)
         # does the moving of the `io` to the location `id`
       end
 
@@ -144,6 +151,26 @@ class Shrine
 end
 ```
 
+## Clearing
+
+While this method is not used by Shrine, it is good to give users the
+possibility to delete all files in a storage, and the conventional name for
+this method is `#clear!`:
+
+```rb
+class Shrine
+  module Storage
+    class MyStorage
+      # ...
+
+      def clear!
+        # deletes all files in the storage
+      end
+
+      # ...
+    end
+  end
+end
+```
+
 ## Linter
 
 To check that your storage implements all these methods correctly, you can use
data/doc/direct_s3.md CHANGED
@@ -49,7 +49,7 @@ Shrine's JSON representation of an uploaded file looks like this:
 The `id`, `storage` fields are optional, while the `metadata` values are
 optional (`metadata.size` is only required to later upload that file to a
 non-S3 storage). After uploading the file to S3, you need to construct this
-JSON and assign it to the hidden attachment field in the form.
+JSON, and then you can assign it to the hidden attachment field in the form.
 
 ## Strategy A (dynamic)
 
@@ -57,22 +57,26 @@ JSON and assign it to the hidden attachment field in the form.
 * Single or multiple file uploads
 * Some JavaScript needed
 
-You can configure the `direct_upload` plugin to expose the presign route, and
-mount the endpoint:
+When the user selects the file, we dynamically request the presign from the
+server, and use this information to start uploading the file to S3. The
+direct_upload plugin gives us this presign route, so we just need to mount it
+in our application:
 
 ```rb
-plugin :direct_upload, presign: true
+plugin :direct_upload
 ```
 ```rb
 Rails.application.routes.draw do
-  mount ImageUploader::UploadEndpoint => "attachments/image"
+  mount ImageUploader::UploadEndpoint => "/image"
 end
 ```
 
-This gives the endpoint a `GET /:storage/presign` route, which generates a
-presign object and returns it as JSON:
+This gives your application a `GET /images/cache/presign` route, which
+returns the S3 URL which the file should be uploaded to, along with the
+necessary request parameters:
 
 ```rb
+# GET /images/cache/presign
 {
   "url" => "https://my-bucket.s3-eu-west-1.amazonaws.com",
   "fields" => {
@@ -86,30 +90,27 @@ presign object and returns it as JSON:
 }
 ```
 
-When the user attaches a file, you should first request the presign object from
-the direct endpoint, and then upload the file to the given URL with the given
-fields. For uploading to S3 you can use any of the great JavaScript libraries
-out there, [jQuery-File-Upload] for example.
-
-After the upload you create a JSON representation of the uploaded file and
-usually write it to the hidden attachment field in the form:
+For uploading to S3 you'll probably want to use a JavaScript file upload
+library like [jQuery-File-Upload] or [Dropzone]. After the upload you should
+create a JSON representation of the uploaded file, which you can write to
+the hidden attachment field:
 
 ```js
 var image = {
-  id: key.match(/cache\/(.+)/)[1], # we have to remove the prefix part
+  id: key.match(/cache\/(.+)/)[1], // we have to remove the prefix part
   storage: 'cache',
   metadata: {
     size: data.files[0].size,
-    filename: data.files[0].name,
-    mime_type: data.files[0].type,
+    filename: data.files[0].name.match(/[^\/\\]+$/)[0], // IE returns full path
+    mime_type: data.files[0].type
   }
 }
 
-$('input[type=file]').prev().value(JSON.stringify(image))
+$('input[type=file]').prev().val(JSON.stringify(image))
 ```
 
 It's generally a good idea to disable the submit button until the file is
-uploaded, as well as display a progress bar. See the [example app] for the
+uploaded, as well as display a progress bar. See the [example app] for a
 working implementation of multiple direct S3 uploads.
 
 ## Strategy B (static)
@@ -118,10 +119,11 @@ working implementation of multiple direct S3 uploads.
 * Only for single uploads
 * No JavaScript needed
 
-An alternative to the previous strategy is generating a file upload form that
-submits synchronously to S3, and then redirects back to your application.
-For that you can use `Shrine::Storage::S3#presign`, which returns a
-[`Aws::S3::PresignedPost`] object, which has `#url` and `#fields`:
+An alternative to the previous strategy is generating a file upload form
+immediately when the page is rendered, and then file upload can be either
+asynchronous or synchronous with redirection. For generating the form we can
+use `Shrine::Storage::S3#presign`, which returns a [`Aws::S3::PresignedPost`]
+object, which has `#url` and `#fields` methods:
 
 ```erb
 <% presign = Shrine.storages[:cache].presign(SecureRandom.hex, success_action_redirect: new_album_url) %>
@@ -135,8 +137,9 @@ For that you can use `Shrine::Storage::S3#presign`, which returns a
 </form>
 ```
 
-After the file is submitted, S3 will redirect to the URL you specified and
-include the object key as a query param:
+If you're doing synchronous upload with redirection, the redirect URL will
+include the object key in the query parameters, which you can use to generate
+Shrine's uploaded file representation:
 
 ```erb
 <%
@@ -153,10 +156,6 @@ include the object key as a query param:
 </form>
 ```
 
-Notice that we needed to fetch and assign the size of the uploaded file. This
-is because this hash is later transformed into an IO which requires `#size`
-to be non-nil (and it is read from the metadata field).
-
 ## Metadata
 
 With direct uploads any metadata has to be extracted on the client, since
@@ -170,6 +169,13 @@ load the restore_cached_data plugin.
 plugin :restore_cached_data
 ```
 
+## Clearing cache
+
+Since directly uploaded files will stay in your temporary storage, you will
+want to periodically delete the old ones that were already promoted. Luckily,
+Amazon provides [a built-in solution](http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html)
+for that.
+
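The built-in solution referred to above is an S3 bucket lifecycle rule. As a sketch, such a rule might look like the following (the `cache/` prefix and the 1-day expiration are assumptions; adjust them to your own setup):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>ExpireShrineCache</ID>
    <Prefix>cache/</Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```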
 ## Eventual consistency
 
 When uploading objects to Amazon S3, sometimes they may not be available
@@ -193,6 +199,7 @@ backgrounding library to perform the job with a delay:
 
 ```rb
 Shrine.plugin :backgrounding
+
 Shrine::Attacher.promote do |data|
   PromoteJob.perform_in(60, data) # tells a Sidekiq worker to perform in 1 minute
 end
@@ -200,5 +207,6 @@ end
 
 [`Aws::S3::PresignedPost`]: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Bucket.html#presigned_post-instance_method
 [example app]: https://github.com/janko-m/shrine-example
+[Dropzone]: https://github.com/enyo/dropzone
 [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
 [Amazon S3 Data Consistency Model]: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyMode
data/doc/paperclip.md CHANGED
@@ -46,23 +46,21 @@ uploaded_file.original_filename #=> "nature.jpg"
 
 ### Processing
 
-In Shrine you do processing inside the uploader's `#process` method, and unlike
-Paperclip, the processing is done on instance-level, so you have maximum
-flexibility. In Shrine you generate versions by simply returning a hash, and
-also loading the `versions` plugin to make your uploader recognize versions:
+Unlike Paperclip, in Shrine you define and perform processing on the
+instance-level, which gives a lot of flexibility. As a result you can return
+a single file or a hash of versions:
 
 ```rb
 require "image_processing/mini_magick" # part of the "image_processing" gem
 
 class ImageUploader < Shrine
   include ImageProcessing::MiniMagick
+  plugin :processing
   plugin :versions
 
-  def process(io, context)
-    if context[:phase] == :store
-      thumb = resize_to_limit(io.download, 300, 300)
-      {original: io, thumb: thumb}
-    end
+  process(:store) do |io, context|
+    thumbnail = resize_to_limit(io.download, 300, 300)
+    {original: io, thumbnail: thumbnail}
   end
 end
 ```
data/doc/refile.md CHANGED
@@ -107,20 +107,18 @@ on upload (like CarrierWave and Paperclip). However, there are storages which
 you can use which support on-the-fly processing, like [shrine-cloudinary] or
 [shrine-imgix].
 
-In Shrine you do processing by overriding the `#process` method on your
-uploader (for images you can use the [image_processing] gem):
+Processing is defined and performed on the instance level, and the result
+can be a single file or a hash of versions:
 
 ```rb
 require "image_processing/mini_magick"
 
 class ImageUploader < Shrine
   include ImageProcessing::MiniMagick
+  plugin :processing
 
-  def process(io, context)
-    case context[:phase]
-    when :store
-      resize_to_fit!(io.download, 700, 700)
-    end
+  process(:store) do |io, context|
+    resize_to_fit!(io.download, 700, 700)
   end
 end
 ```
@@ -292,12 +290,11 @@ Shrine.plugin :logging
 
 #### `.processors`, `.processor`
 
-In Shrine processing is done by overriding the `#process` method in your
-uploader:
-
 ```rb
 class MyUploader < Shrine
-  def process(io, context)
+  plugin :processing
+
+  process(:store) do |io, context|
     # ...
   end
 end
@@ -15,14 +15,11 @@ versions:
 
 ```rb
 class ImageUploader < Shrine
-  plugin :versions
-
-  def process(io, context)
-    case context[:phase]
-    when :store
-      thumb = process_thumb(io.download)
-      {original: io, thumb: thumb}
-    end
+  # ...
+
+  process(:store) do |io, context|
+    thumbnail = process_thumbnail(io.download)
+    {original: io, thumbnail: thumbnail}
   end
 end
 ```
@@ -75,15 +72,12 @@ update your processing code to generate it, and deploy it:
 
 ```rb
 class ImageUploader < Shrine
-  plugin :versions
-
-  def process(io, context)
-    case context[:phase]
-    when :store
-      # ...
-      new = some_processing(io.download, *args)
-      {small: small, medium: medium, new: new} # we generate the ":new" version
-    end
+  # ...
+
+  process(:store) do |io, context|
+    # ...
+    new = some_processing(io.download, *args)
+    {small: small, medium: medium, new: new} # we generate the ":new" version
   end
 end
 ```
@@ -119,8 +113,9 @@ old_versions = []
 User.paged_each do |user|
   attacher, attachment = user.avatar_attacher, user.avatar
   if attacher.stored? && attachment[:old_version]
-    old_versions << attachment.delete(:old_version)
-    attacher.swap(attachment)
+    old_version = attachment.delete(:old_version)
+    swapped = attacher.swap(attachment)
+    old_versions << old_version if swapped
   end
 end