shrine 3.0.0.beta2 → 3.0.0.beta3

Files changed (61)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +45 -1
  3. data/README.md +100 -106
  4. data/doc/advantages.md +90 -88
  5. data/doc/attacher.md +322 -152
  6. data/doc/carrierwave.md +105 -113
  7. data/doc/changing_derivatives.md +308 -0
  8. data/doc/changing_location.md +92 -21
  9. data/doc/changing_storage.md +107 -0
  10. data/doc/creating_plugins.md +1 -1
  11. data/doc/design.md +8 -9
  12. data/doc/direct_s3.md +3 -2
  13. data/doc/metadata.md +97 -78
  14. data/doc/multiple_files.md +3 -3
  15. data/doc/paperclip.md +89 -88
  16. data/doc/plugins/activerecord.md +3 -12
  17. data/doc/plugins/backgrounding.md +126 -100
  18. data/doc/plugins/derivation_endpoint.md +4 -5
  19. data/doc/plugins/derivatives.md +63 -32
  20. data/doc/plugins/download_endpoint.md +54 -1
  21. data/doc/plugins/entity.md +1 -0
  22. data/doc/plugins/form_assign.md +53 -0
  23. data/doc/plugins/mirroring.md +37 -16
  24. data/doc/plugins/multi_cache.md +22 -0
  25. data/doc/plugins/presign_endpoint.md +1 -1
  26. data/doc/plugins/remote_url.md +19 -4
  27. data/doc/plugins/validation.md +83 -0
  28. data/doc/processing.md +149 -133
  29. data/doc/refile.md +68 -63
  30. data/doc/release_notes/3.0.0.md +835 -0
  31. data/doc/securing_uploads.md +56 -36
  32. data/doc/storage/s3.md +2 -2
  33. data/doc/testing.md +104 -120
  34. data/doc/upgrading_to_3.md +538 -0
  35. data/doc/validation.md +48 -87
  36. data/lib/shrine.rb +7 -4
  37. data/lib/shrine/attacher.rb +16 -6
  38. data/lib/shrine/plugins/activerecord.rb +33 -14
  39. data/lib/shrine/plugins/atomic_helpers.rb +1 -1
  40. data/lib/shrine/plugins/backgrounding.rb +23 -89
  41. data/lib/shrine/plugins/data_uri.rb +13 -2
  42. data/lib/shrine/plugins/derivation_endpoint.rb +7 -11
  43. data/lib/shrine/plugins/derivatives.rb +44 -20
  44. data/lib/shrine/plugins/download_endpoint.rb +26 -0
  45. data/lib/shrine/plugins/form_assign.rb +6 -3
  46. data/lib/shrine/plugins/keep_files.rb +2 -2
  47. data/lib/shrine/plugins/mirroring.rb +62 -22
  48. data/lib/shrine/plugins/model.rb +2 -2
  49. data/lib/shrine/plugins/multi_cache.rb +27 -0
  50. data/lib/shrine/plugins/remote_url.rb +25 -10
  51. data/lib/shrine/plugins/remove_invalid.rb +1 -1
  52. data/lib/shrine/plugins/sequel.rb +39 -20
  53. data/lib/shrine/plugins/validation.rb +3 -0
  54. data/lib/shrine/storage/s3.rb +16 -1
  55. data/lib/shrine/uploaded_file.rb +1 -0
  56. data/lib/shrine/version.rb +1 -1
  57. data/shrine.gemspec +1 -1
  58. metadata +12 -7
  59. data/doc/migrating_storage.md +0 -76
  60. data/doc/regenerating_versions.md +0 -143
  61. data/lib/shrine/plugins/attacher_options.rb +0 -55
@@ -1,31 +1,102 @@
- # Migrating to Different Location
+ # Migrating File Locations

- You have a production app with already uploaded attachments. However, you've
- realized that the existing store folder structure for attachments isn't working
- for you.
+ This guide shows how to migrate the location of uploaded files on the same
+ storage in production, with zero downtime.

- The first step is to change the location, by overriding `#generate_location` or
- with the pretty_location plugin, and deploy that change. This will make any new
- files upload to the desired location, attachments on old locations will still
- continue to work normally.
-
- The next step is to run a script that will move old files to new locations. The
- easiest way to do that is to reupload them and delete them. Shrine has a method
- exactly for that, `Attacher#promote`, which also handles the situation when
- someone attaches a new file during "moving" (since we're running this script on
- live production).
+ Let's assume we have a `Photo` model with an `image` file attachment:

  ```rb
- Shrine.plugin :delete_promoted
+ Shrine.plugin :activerecord
+ ```
+ ```rb
+ class ImageUploader < Shrine
+   # ...
+ end
+ ```
+ ```rb
+ class Photo < ActiveRecord::Base
+   include ImageUploader::Attachment(:image)
+ end
+ ```
+
+ ## 1. Update the location generation

- User.paged_each do |user|
-   attacher = user.avatar_attacher
-   attacher.promote(action: :migrate) if attacher.stored?
-   # use `attacher._promote(action: :migrate)` if you want promoting to be backgrounded
+ Since Shrine generates the location only once during upload, it is safe to
+ change the `Shrine#generate_location` method. All the existing files will
+ continue to work with their previously stored URLs because they have not been
+ migrated.
+
+ ```rb
+ class ImageUploader < Shrine
+   def generate_location(io, **options)
+     # change location generation
+   end
  end
  ```

- The `:action` is not mandatory, it's just for better introspection when
- monitoring background jobs and logs.
+ We can now deploy this change to production so new file uploads will be stored
+ in the new location.
+
+ ## 2. Move existing files
+
+ To move existing files to the new location, run the following script. It
+ fetches the photos in batches, downloads each image, and re-uploads it to the
+ new location. Only the files in `:store` storage need to be migrated, as the
+ files in `:cache` storage will be uploaded to the new location on promotion.
+
+ ```rb
+ Photo.find_each do |photo|
+   attacher = photo.image_attacher
+
+   next unless attacher.stored? # move only attachments uploaded to permanent storage
+
+   old_attacher = attacher.dup
+
+   attacher.set attacher.upload(attacher.file)                                # reupload file
+   attacher.set_derivatives attacher.upload_derivatives(attacher.derivatives) # reupload derivatives (if any)
+
+   begin
+     attacher.atomic_persist # persist changes if attachment has not changed in the meantime
+     old_attacher.destroy    # delete files on old location
+   rescue Shrine::AttachmentChanged,    # attachment has changed during reuploading
+          ActiveRecord::RecordNotFound  # record has been deleted during reuploading
+     attacher.destroy # delete now orphaned files
+   end
+ end
+ ```

  Now all your existing attachments should be happily living on new locations.
+
+ ### Backgrounding
+
+ For faster migration, we can also delay moving files into a background job:
+
+ ```rb
+ Photo.find_each do |photo|
+   attacher = photo.image_attacher
+
+   next unless attacher.stored? # move only attachments uploaded to permanent storage
+
+   MoveFilesJob.perform_later(
+     attacher.class,
+     attacher.record,
+     attacher.name,
+     attacher.file_data,
+   )
+ end
+ ```
+ ```rb
+ class MoveFilesJob < ActiveJob::Base
+   def perform(attacher_class, record, name, file_data)
+     attacher = attacher_class.retrieve(model: record, name: name, file: file_data)
+     old_attacher = attacher.dup
+
+     attacher.set attacher.upload(attacher.file)
+     attacher.set_derivatives attacher.upload_derivatives(attacher.derivatives)
+
+     attacher.atomic_persist
+     old_attacher.destroy
+   rescue Shrine::AttachmentChanged, ActiveRecord::RecordNotFound
+     attacher&.destroy
+   end
+ end
+ ```
@@ -0,0 +1,107 @@
+ # Migrating File Storage
+
+ This guide shows how to move file attachments to a different storage in
+ production, with zero downtime.
+
+ Let's assume we have a `Photo` model with an `image` file attachment stored
+ in AWS S3 storage:
+
+ ```rb
+ Shrine.storages = {
+   cache: Shrine::Storage::S3.new(...),
+   store: Shrine::Storage::S3.new(...),
+ }
+
+ Shrine.plugin :activerecord
+ ```
+ ```rb
+ class ImageUploader < Shrine
+   # ...
+ end
+ ```
+ ```rb
+ class Photo < ActiveRecord::Base
+   include ImageUploader::Attachment(:image)
+ end
+ ```
+
+ Let's also assume that we're migrating from AWS S3 to Google Cloud Storage, and
+ we've added the new storage to `Shrine.storages`:
+
+ ```rb
+ Shrine.storages = {
+   ...
+   store: Shrine::Storage::S3.new(...),
+   gcs:   Shrine::Storage::GoogleCloudStorage.new(...),
+ }
+ ```
+
+ ## 1. Mirror upload and delete operations
+
+ The first step is to start mirroring uploads and deletes made on your current
+ storage to the new storage. We can do this by loading the `mirroring` plugin:
+
+ ```rb
+ Shrine.plugin :mirroring, mirror: { store: :gcs }
+ ```
+
+ Put the above code in an initializer and deploy it.
+
+ You can additionally delay the mirroring into a [background job][mirroring
+ backgrounding] for better performance.
+
+ ## 2. Copy the files
+
+ The next step is to copy all remaining files from the current storage into the
+ new storage using the following script. It fetches the photos in batches,
+ downloads each image, and re-uploads it to the new storage.
+
+ ```rb
+ Photo.find_each do |photo|
+   attacher = photo.image_attacher
+
+   next unless attacher.stored?
+
+   attacher.file.trigger_mirror_upload
+
+   # if using derivatives
+   attacher.map_derivative(attacher.derivatives) do |_, derivative|
+     derivative.trigger_mirror_upload
+   end
+ end
+ ```
+
+ Now the new storage should have all the files the current storage has, and new
+ uploads will continue being mirrored to the new storage.
+
+ ## 3. Update storage
+
+ Once all the files are copied over to the new storage, we can update the
+ storage in the Shrine configuration. We can keep mirroring in case the change
+ needs to be reverted.
+
+ ```rb
+ Shrine.storages = {
+   ...
+   store: Shrine::Storage::GoogleCloudStorage.new(...),
+   s3:    Shrine::Storage::S3.new(...),
+ }
+
+ Shrine.plugin :mirroring, mirror: { store: :s3 } # mirror to :s3 storage
+ ```
+
+ ## 4. Remove mirroring
+
+ Once everything is looking good, we can remove the mirroring:
+
+ ```diff
+ Shrine.storages = {
+   ...
+   store: Shrine::Storage::GoogleCloudStorage.new(...),
+ - s3:    Shrine::Storage::S3.new(...),
+ }
+
+ - Shrine.plugin :mirroring, mirror: { store: :s3 } # mirror to :s3 storage
+ ```
+
+ [mirroring backgrounding]: /doc/plugins/mirroring.md#backgrounding
@@ -94,7 +94,7 @@ If your plugin depends on other plugins, you can load them inside of
  ```rb
  module MyPlugin
    def self.load_dependencies(uploader, *)
-     uploader.plugin :versions # depends on the versions plugin
+     uploader.plugin :derivatives # depends on the derivatives plugin
    end
  end
  ```
@@ -63,14 +63,13 @@ the `#upload` storage method. First the storage needs to be registered under a
  name:

  ```rb
- Shrine.storages[:file_system] = Shrine::Storage::FileSystem.new("uploads")
+ Shrine.storages[:disk] = Shrine::Storage::FileSystem.new("uploads")
  ```

- Now we can instantiate an uploader with this identifier and upload files:
+ Now we can upload files to the registered storage:

  ```rb
- uploader = Shrine.new(:file_system)
- uploaded_file = uploader.upload(file)
+ uploaded_file = Shrine.upload(file, :disk)
  uploaded_file #=> #<Shrine::UploadedFile>
  ```

@@ -155,14 +154,14 @@ A `Shrine::Attacher` is instantiated with a model instance and an attachment
  name (an "image" attachment will be saved to `image_data` field):

  ```rb
- attacher = Shrine::Attacher.new(photo, :image)
+ attacher = Shrine::Attacher.from_model(photo, :image)

  attacher.assign(file)
- attacher.get #=> #<Shrine::UploadedFile>
+ attacher.file #=> #<Shrine::UploadedFile @storage_key=:cache ...>
  attacher.record.image_data #=> "{\"storage\":\"cache\",\"id\":\"9260ea09d8effd.jpg\",\"metadata\":{...}}"

- attacher._promote
- attacher.get #=> #<Shrine::UploadedFile>
+ attacher.finalize
+ attacher.file #=> #<Shrine::UploadedFile @storage_key=:store ...>
  attacher.record.image_data #=> "{\"storage\":\"store\",\"id\":\"ksdf02lr9sf3la.jpg\",\"metadata\":{...}}"
  ```

@@ -196,7 +195,7 @@ We can include this module to a model:

  ```rb
  class Photo
-   include Shrine::Attachment.new(:image)
+   include Shrine::Attachment(:image)
  end
  ```
  ```rb
@@ -308,8 +308,9 @@ backgrounding library to perform the job with a delay:
  ```rb
  Shrine.plugin :backgrounding

- Shrine::Attacher.promote do |data|
-   PromoteJob.perform_in(3, data) # tells a Sidekiq worker to perform in 3 seconds
+ Shrine::Attacher.promote_block do
+   # tells a Sidekiq worker to perform in 3 seconds
+   PromoteJob.perform_in(3, self.class, record.class, record.id, name, file_data)
  end
  ```

@@ -84,10 +84,10 @@ By default, the `mime_type` metadata will be copied over from the
  `#content_type` value comes from the `Content-Type` header of the upload
  request, it's *not guaranteed* to hold the actual MIME type of the file (browser
  determines this header based on file extension). Moreover, only
- `ActionDispatch::Http::UploadedFile`, `Shrine::Plugins::RackFile::UploadedFile`,
- and `Shrine::Plugins::DataUri::DataFile` objects have `#content_type` defined,
- so, when uploading simple file objects, `mime_type` will be nil. That makes
- relying on `#content_type` both a security risk and limiting.
+ `ActionDispatch::Http::UploadedFile`, `Shrine::RackFile`, and
+ `Shrine::DataFile` objects have `#content_type` defined, so, when uploading
+ simple file objects, `mime_type` will be nil. That makes relying on
+ `#content_type` both a security risk and limiting.

  To remedy that, Shrine comes with a
  [`determine_mime_type`][determine_mime_type] plugin which is able to extract
@@ -117,7 +117,11 @@ default, the plugin uses [FastImage] to analyze dimensions, but you can also
  have it use [MiniMagick] or [ruby-vips]:

  ```rb
- Shrine.plugin :store_dimensions, analyzer: :mini_magick
+ # Gemfile
+ gem "fastimage"
+ ```
+ ```rb
+ Shrine.plugin :store_dimensions
  ```
  ```rb
  uploaded_file = uploader.upload(image)
@@ -138,18 +142,18 @@ any custom metadata, using the `add_metadata` plugin (which extends
  from images:

  ```rb
- require "mini_magick"
+ # Gemfile
+ gem "exiftool"
+ ```
+ ```rb
+ require "exiftool"

  class ImageUploader < Shrine
    plugin :add_metadata

    add_metadata :exif do |io, context|
      Shrine.with_file(io) do |file|
-       begin
-         MiniMagick::Image.new(file.path).exif
-       rescue MiniMagick::Error
-         # not a valid image
-       end
+       Exiftool.new(file.path).to_hash
      end
    end
  end
@@ -163,6 +167,10 @@ uploaded_file.exif #=> {...}
  Or, if you're uploading videos, you might want to extract some video-specific
  metadata:

+ ```rb
+ # Gemfile
+ gem "streamio-ffmpeg"
+ ```
  ```rb
  require "streamio-ffmpeg"

@@ -192,11 +200,10 @@ uploaded_file.metadata #=>
  ```

  The yielded `io` object will not always be an object that responds to `#path`.
- If you're using the `data_uri` plugin, the `io` will be a `StringIO` wrapper.
- With `restore_cached_data` or `refresh_metadata` plugins, `io` might be a
- `Shrine::UploadedFile` object. If you're using a metadata analyzer that
- requires the source file to be on disk, you can use `Shrine.with_file` to
- ensure you have a file object.
+ For example, with the `data_uri` plugin the `io` can be a `StringIO` wrapper,
+ while with the `restore_cached_data` or `refresh_metadata` plugins the `io`
+ might be a `Shrine::UploadedFile` object. So we're using `Shrine.with_file` to
+ ensure we have a file object.

  ## Metadata columns

@@ -227,9 +234,7 @@ you want to validate the extracted metadata or have it immediately available
  for any other reason), you can load the `restore_cached_data` plugin:

  ```rb
- class ImageUploader < Shrine
-   plugin :restore_cached_data # automatically extract metadata from cached files on assignment
- end
+ Shrine.plugin :restore_cached_data # automatically extract metadata from cached files on assignment
  ```
  ```rb
  photo.image = '{"id":"ks9elsd.jpg","storage":"cache","metadata":{}}' # metadata is extracted
@@ -241,100 +246,113 @@ photo.image.metadata #=>
  # }
  ```

- On the other hand, if you're using backgrounding, you can extract metadata
- during background promotion using the `refresh_metadata` plugin (which the
- `restore_cached_data` plugin uses internally):
+ ### Backgrounding
+
+ If you're using [backgrounding], you can extract metadata during background
+ promotion using the `refresh_metadata` plugin (which the `restore_cached_data`
+ plugin uses internally):

  ```rb
- class ImageUploader < Shrine
-   plugin :refresh_metadata
-   plugin :processing
+ Shrine.plugin :refresh_metadata # allow re-extracting metadata
+ Shrine.plugin :backgrounding
+ Shrine::Attacher.promote_block { PromoteJob.perform_later(self.class, record, name, file_data) }
+ ```
+ ```rb
+ class PromoteJob < ActiveJob::Base
+   def perform(attacher_class, record, name, file_data)
+     attacher = attacher_class.retrieve(model: record, name: name, file: file_data)
+     attacher.refresh_metadata!
+     attacher.atomic_promote
+   end
+ end
+ ```

-   # this will be called in the background if using backgrounding plugin
-   process(:store) do |io, context|
-     io.refresh_metadata!(context) # extracts metadata and updates `io.metadata`
-     io
+ You can also extract metadata in the background separately from promotion:
+
+ ```rb
+ MetadataJob.perform_later(
+   attacher.class,
+   attacher.record,
+   attacher.name,
+   attacher.file_data,
+ )
+ ```
+ ```rb
+ class MetadataJob < ActiveJob::Base
+   def perform(attacher_class, record, name, file_data)
+     attacher = attacher_class.retrieve(model: record, name: name, file: file_data)
+     attacher.refresh_metadata!
+     attacher.atomic_persist
    end
  end
  ```

- If you have metadata that is cheap to extract in the foreground, but also have
- additional metadata that can be extracted asynchronously, you can combine the
- two approaches. For example, if you're attaching video files, you might want to
- extract MIME type upfront and video-specific metadata in a background job, which
- can be done as follows (provided that `backgrounding` plugin is used):
+ If you have some metadata that you want to extract in the foreground and some
+ that you want to extract in the background, you can use the uploader context:

  ```rb
  class MyUploader < Shrine
-   plugin :determine_mime_type # this will be called in the foreground
-   plugin :restore_cached_data
-   plugin :refresh_metadata
    plugin :add_metadata
-   plugin :processing

-   # this will be called in the background if using backgrounding plugin
-   process(:store) do |io, context|
-     io.refresh_metadata!(context)
-     io
-   end
+   add_metadata do |io, **options|
+     next unless options[:background] # proceed only when `background: true` was specified

-   add_metadata do |io, context|
-     next unless context[:action] == :store # this will be the case during promotion
-
-     Shrine.with_file(io) do |file|
-       # example of metadata extraction
-       movie = FFMPEG::Movie.new(file.path) # uses the streamio-ffmpeg gem
+     # example of metadata extraction
+     movie = Shrine.with_file(io) { |file| FFMPEG::Movie.new(file.path) }

-       { "duration"   => movie.duration,
-         "bitrate"    => movie.bitrate,
-         "resolution" => movie.resolution,
-         "frame_rate" => movie.frame_rate }
-     end
+     { "duration"   => movie.duration,
+       "bitrate"    => movie.bitrate,
+       "resolution" => movie.resolution,
+       "frame_rate" => movie.frame_rate }
+   end
+ end
+ ```
+ ```rb
+ class MetadataJob < ActiveJob::Base
+   def perform(record, name, file_data)
+     attacher = Shrine::Attacher.retrieve(model: record, name: name, file: file_data)
+     attacher.refresh_metadata!(background: true)
+     attacher.atomic_persist
    end
  end
  ```

+ ### Optimizations
+
  If you want to do both metadata extraction and file processing during
  promotion, you can wrap both in an `UploadedFile#open` block to make
  sure the file content is retrieved from the storage only once.

  ```rb
- class MyUploader < Shrine
-   plugin :refresh_metadata
-   plugin :processing
-
-   process(:store) do |io, context|
-     io.open do |io, context|
-       io.refresh_metadata!(context)
+ class PromoteJob < ActiveJob::Base
+   def perform(record, name, file_data)
+     attacher = Shrine::Attacher.retrieve(model: record, name: name, file: file_data)

-       original = io.download # reuses already open uploaded file
-       # ... processing ...
+     attacher.file.open do
+       attacher.refresh_metadata!
+       attacher.create_derivatives
     end
+
+     attacher.atomic_promote
   end
 end
 ```

- If you're dealing with large files, it's recommended to also use the `tempfile`
- plugin to make sure the same copy of the uploaded file is used for metadata
- extraction (`Shrine.with_file`) and processing (`UploadedFile#tempfile`).
+ If you're dealing with large files and have metadata extractors that use
+ `Shrine.with_file`, you might want to use the `tempfile` plugin to make sure
+ the same copy of the uploaded file is reused for both metadata extraction and
+ file processing.

  ```rb
  Shrine.plugin :tempfile # load it globally so that it overrides `Shrine.with_file`
  ```
  ```rb
- class MyUploader < Shrine
-   plugin :refresh_metadata
-   plugin :processing
-
-   process(:store) do |io, context|
-     io.open do |io, context|
-       io.refresh_metadata!(context)
-
-       original = io.tempfile # uses the cached tempfile
-       # ... processing ...
-     end
-   end
+ # ...
+ attacher.file.open do
+   attacher.refresh_metadata!
+   attacher.create_derivatives(attacher.file.tempfile)
  end
+ # ...
  ```

  [`file`]: http://linux.die.net/man/1/file
@@ -345,3 +363,4 @@ end
  [ruby-vips]: https://github.com/libvips/ruby-vips
  [tus server]: https://github.com/janko/tus-ruby-server
  [determine_mime_type]: /doc/plugins/determine_mime_type.md#readme
+ [backgrounding]: /doc/plugins/backgrounding.md#readme