shrine 2.8.0 → 2.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

Files changed (64)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +681 -0
  3. data/README.md +73 -21
  4. data/doc/carrierwave.md +75 -20
  5. data/doc/creating_storages.md +15 -26
  6. data/doc/direct_s3.md +113 -31
  7. data/doc/multiple_files.md +4 -8
  8. data/doc/paperclip.md +98 -31
  9. data/doc/refile.md +4 -6
  10. data/doc/testing.md +24 -21
  11. data/lib/shrine.rb +32 -20
  12. data/lib/shrine/plugins/activerecord.rb +2 -0
  13. data/lib/shrine/plugins/add_metadata.rb +2 -0
  14. data/lib/shrine/plugins/background_helpers.rb +2 -0
  15. data/lib/shrine/plugins/backgrounding.rb +11 -4
  16. data/lib/shrine/plugins/backup.rb +2 -0
  17. data/lib/shrine/plugins/cached_attachment_data.rb +2 -0
  18. data/lib/shrine/plugins/copy.rb +2 -0
  19. data/lib/shrine/plugins/data_uri.rb +20 -12
  20. data/lib/shrine/plugins/default_storage.rb +2 -0
  21. data/lib/shrine/plugins/default_url.rb +2 -0
  22. data/lib/shrine/plugins/default_url_options.rb +2 -0
  23. data/lib/shrine/plugins/delete_promoted.rb +2 -0
  24. data/lib/shrine/plugins/delete_raw.rb +2 -0
  25. data/lib/shrine/plugins/determine_mime_type.rb +18 -2
  26. data/lib/shrine/plugins/direct_upload.rb +6 -6
  27. data/lib/shrine/plugins/download_endpoint.rb +2 -0
  28. data/lib/shrine/plugins/dynamic_storage.rb +2 -0
  29. data/lib/shrine/plugins/hooks.rb +2 -0
  30. data/lib/shrine/plugins/included.rb +2 -0
  31. data/lib/shrine/plugins/infer_extension.rb +131 -0
  32. data/lib/shrine/plugins/keep_files.rb +2 -0
  33. data/lib/shrine/plugins/logging.rb +6 -4
  34. data/lib/shrine/plugins/metadata_attributes.rb +2 -0
  35. data/lib/shrine/plugins/migration_helpers.rb +2 -0
  36. data/lib/shrine/plugins/module_include.rb +2 -0
  37. data/lib/shrine/plugins/moving.rb +2 -0
  38. data/lib/shrine/plugins/multi_delete.rb +4 -0
  39. data/lib/shrine/plugins/parallelize.rb +2 -0
  40. data/lib/shrine/plugins/parsed_json.rb +2 -0
  41. data/lib/shrine/plugins/presign_endpoint.rb +7 -7
  42. data/lib/shrine/plugins/pretty_location.rb +2 -0
  43. data/lib/shrine/plugins/processing.rb +2 -0
  44. data/lib/shrine/plugins/rack_file.rb +2 -0
  45. data/lib/shrine/plugins/rack_response.rb +2 -0
  46. data/lib/shrine/plugins/recache.rb +2 -0
  47. data/lib/shrine/plugins/refresh_metadata.rb +2 -0
  48. data/lib/shrine/plugins/remote_url.rb +12 -1
  49. data/lib/shrine/plugins/remove_attachment.rb +2 -0
  50. data/lib/shrine/plugins/remove_invalid.rb +2 -0
  51. data/lib/shrine/plugins/restore_cached_data.rb +2 -0
  52. data/lib/shrine/plugins/sequel.rb +2 -0
  53. data/lib/shrine/plugins/signature.rb +10 -8
  54. data/lib/shrine/plugins/store_dimensions.rb +5 -3
  55. data/lib/shrine/plugins/upload_endpoint.rb +7 -8
  56. data/lib/shrine/plugins/upload_options.rb +2 -0
  57. data/lib/shrine/plugins/validation_helpers.rb +2 -0
  58. data/lib/shrine/plugins/versions.rb +72 -31
  59. data/lib/shrine/storage/file_system.rb +11 -4
  60. data/lib/shrine/storage/linter.rb +5 -13
  61. data/lib/shrine/storage/s3.rb +16 -13
  62. data/lib/shrine/version.rb +3 -1
  63. data/shrine.gemspec +7 -6
  64. metadata +26 -10
@@ -87,11 +87,10 @@ to add the `multiple` attribute to the file field.
  <input type="file" multiple name="file">
  ```
 
- You can then use a generic JavaScript file upload library like
- [jQuery-File-Upload], [Dropzone] or [FineUploader] to asynchronously upload
- each of the selected files to your app or to an external service. See the
- `upload_endpoint` and `presign_endpoint` plugins, and [Direct Uploads to S3]
- guide for more details.
+ On the client side you can then asynchronously upload each of the selected
+ files to a direct upload endpoint. See documentation for the `upload_endpoint`
+ and `presign_endpoint` plugins, as well as the [Direct Uploads to S3] guide for
+ more details.
 
  After each upload finishes, you can generate a nested hash for the new
  associated record, and write the uploaded file JSON to the attachment field:
@@ -113,7 +112,4 @@ automatically take care of the attachment management.
 
  [`Sequel::Model.nested_attributes`]: http://sequel.jeremyevans.net/rdoc-plugins/classes/Sequel/Plugins/NestedAttributes.html
  [`ActiveRecord::Base.accepts_nested_attributes_for`]: http://api.rubyonrails.org/classes/ActiveRecord/NestedAttributes/ClassMethods.html
- [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
- [Dropzone]: https://github.com/enyo/dropzone
- [FineUploader]: https://github.com/FineUploader/fine-uploader
  [Direct Uploads to S3]: http://shrinerb.com/rdoc/files/doc/direct_s3_md.html
@@ -9,8 +9,7 @@ consists of three parts:
 
  ## Storages
 
- While in Paperclip you configure storage in the model, a Shrine storage is just
- a class which you configure individually:
+ In Paperclip the storage is configured inside the global options:
 
  ```rb
  class Photo < ActiveRecord::Base
@@ -20,24 +19,24 @@ class Photo < ActiveRecord::Base
  bucket: "my-bucket",
  access_key_id: "abc",
  secret_access_key: "xyz",
- },
- s3_host_alias: "http://abc123.cloudfront.net",
+ }
  end
  ```
+
+ In contrast, a Shrine storage is just a class which you configure individually:
+
  ```rb
  Shrine.storages[:store] = Shrine::Storage::S3.new(
  bucket: "my-bucket",
  access_key_id: "abc",
  secret_access_key: "xyz",
  )
-
- Shrine.plugin :default_url_options, store: {host: "http://abc123.cloudfront.net"}
  ```
 
  Paperclip doesn't have a concept of "temporary" storage, so it cannot retain
  uploaded files in case of validation errors, and [direct S3 uploads] cannot be
- implemented in a safe way. Shrine conceptually separates a "temporary" and
- "permanent" storage:
+ implemented in a safe way. Shrine uses separate "temporary" and "permanent"
+ storage for attaching files:
 
  ```rb
  Shrine.storages = {
@@ -285,9 +284,6 @@ can be done by including the below module to all models that have Paperclip
  attachments:
 
  ```rb
- require "fastimage"
- require "mime/types"
-
  module PaperclipShrineSynchronization
  def self.included(model)
  model.before_save do
@@ -328,8 +324,8 @@ module PaperclipShrineSynchronization
  metadata: {
  size: attachment.size,
  filename: attachment.original_filename,
- content_type: attachment.content_type,
- },
+ mime_type: attachment.content_type,
+ }
  }
  end
 
@@ -337,25 +333,10 @@ module PaperclipShrineSynchronization
  # files on the filesystem, make sure to subtract the appropriate part
  # from the path assigned to `:id`.
  def style_to_shrine_data(style)
- attachment = style.attachment
- path = attachment.path(style.name)
- url = attachment.url(style.name)
- file = attachment.instance_variable_get("@queued_for_write")[style.name]
-
- size = file.size if file
- size ||= FastImage.new(url).content_length # OPTIONAL (makes an HTTP request)
- size ||= File.size(path) if File.exist?(path)
- filename = File.basename(path)
- mime_type = MIME::Types.type_for(path).first.to_s.presence
-
  {
  storage: :store,
- id: path,
- metadata: {
- size: size,
- filename: filename,
- mime_type: mime_type,
- }
+ id: style.attachment.path(style.name),
+ metadata: {}
  }
  end
  end
@@ -385,6 +366,19 @@ instead of Paperclip, using equivalent Shrine storages. For help with
  translating the code from Paperclip to Shrine, you can consult the reference
  below.
 
+ You'll notice that Shrine metadata will be absent from the migrated files' data
+ (specifically versions). You can run a script that will fill in any missing
+ metadata defined in your Shrine uploader:
+
+ ```rb
+ Shrine.plugin :refresh_metadata
+
+ Photo.find_each do |photo|
+ attachment = ImageUploader.uploaded_file(photo.image, &:refresh_metadata!)
+ photo.update(image_data: attachment.to_json)
+ end
+ ```
+
  ## Paperclip to Shrine direct mapping
 
  ### `has_attached_file`
@@ -502,9 +496,82 @@ user.avatar.id #=> "users/342/avatar/398543qjfdsf.jpg"
 
  #### `#reprocess!`
 
- Shrine doesn't have an equivalent to this, but the [Regenerating versions]
+ Shrine doesn't have an equivalent to this, but the [Reprocessing versions]
  guide provides some useful tips on how to do this.
 
+ ### `Paperclip::Storage::S3`
+
+ The built-in [`Shrine::Storage::S3`] storage is a direct replacement for
+ `Paperclip::Storage::S3`.
+
+ #### `:s3_credentials`, `:s3_region`, `:bucket`
+
+ The Shrine storage accepts `:access_key_id`, `:secret_access_key`, `:region`,
+ and `:bucket` options in the initializer:
+
+ ```rb
+ Shrine::Storage::S3.new(
+ access_key_id: "...",
+ secret_access_key: "...",
+ region: "...",
+ bucket: "...",
+ )
+ ```
+
+ #### `:s3_headers`
+
+ The object data can be configured via the `:upload_options` hash:
+
+ ```rb
+ Shrine::Storage::S3.new(upload_options: {content_disposition: "attachment"}, **options)
+ ```
+
+ You can use the `upload_options` plugin to set upload options dynamically.
+
+ #### `:s3_permissions`
+
+ The object permissions can be configured with the `:acl` upload option:
+
+ ```rb
+ Shrine::Storage::S3.new(upload_options: {acl: "private"}, **options)
+ ```
+
+ You can use the `upload_options` plugin to set upload options dynamically.
+
+ #### `:s3_metadata`
+
+ The object metadata can be configured with the `:metadata` upload option:
+
+ ```rb
+ Shrine::Storage::S3.new(upload_options: {metadata: {"key" => "value"}}, **options)
+ ```
+
+ You can use the `upload_options` plugin to set upload options dynamically.
+
+ #### `:s3_protocol`, `:s3_host_alias`, `:s3_host_name`
+
+ The `#url` method accepts a `:host` option for specifying a CDN host. You can
+ use the `default_url_options` plugin to set it by default:
+
+ ```rb
+ Shrine.plugin :default_url_options, store: {host: "http://abc123.cloudfront.net"}
+ ```
+
+ #### `:path`
+
+ The `#upload` method accepts the destination location as the second argument.
+
+ ```rb
+ s3 = Shrine::Storage::S3.new(**options)
+ s3.upload(io, "object/destination/path")
+ ```
+
+ #### `:url`
+
+ The Shrine storage has no replacement for the `:url` Paperclip option, and it
+ isn't needed.
+
  [file]: http://linux.die.net/man/1/file
  [Reprocessing versions]: http://shrinerb.com/rdoc/files/doc/regenerating_versions_md.html
  [direct S3 uploads]: http://shrinerb.com/rdoc/files/doc/direct_s3_md.html
+ [`Shrine::Storage::S3`]: http://shrinerb.com/rdoc/classes/Shrine/Storage/S3.html
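The hunk above mentions the `upload_options` plugin several times without showing it. For reference, a minimal sketch of setting S3 upload options dynamically with that plugin; the uploader name and the condition on `context[:version]` are illustrative assumptions, not part of this release's docs:

```rb
class ImageUploader < Shrine
  # per-upload S3 options for the :store storage; the block receives the IO
  # being uploaded and the upload context
  plugin :upload_options, store: ->(io, context) do
    if context[:version] == :thumb
      { acl: "public-read" }   # example policy: thumbnails are public
    else
      { acl: "private" }       # everything else stays private
    end
  end
end
```

The returned hash is merged into the options that `Shrine::Storage::S3#upload` passes to the underlying S3 client.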
@@ -206,9 +206,9 @@ Shrine.presign_endpoint(:cache) # Rack app that generates presigns for specified
  ```
 
  Unlike Refile, Shrine doesn't ship with complete JavaScript which you can just
- include to make it work. Instead, you're expected to use one of the excellent
- JavaScript libraries for generic file uploads like [FineUploader], [Dropzone]
- or [jQuery-File-Upload]. See also the [Direct Uploads to S3] guide.
+ include to make it work. However, [Uppy] is an excellent JavaScript file upload
+ library that integrates wonderfully with Shrine; see the [demo app] for a
+ complete example.
 
  ## Migrating from Refile
 
@@ -478,9 +478,7 @@ Shrine.plugin :remote_url
  [shrine-uploadcare]: https://github.com/janko-m/shrine-uploadcare
  [Attache]: https://github.com/choonkeat/attache
  [image_processing]: https://github.com/janko-m/image_processing
- [FineUploader]: https://github.com/FineUploader/fine-uploader
- [Dropzone]: https://github.com/enyo/dropzone
- [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
+ [Uppy]: https://uppy.io
  [Direct Uploads to S3]: http://shrinerb.com/rdoc/files/doc/direct_s3_md.html
  [demo app]: https://github.com/janko-m/shrine/tree/master/demo
  [Multiple Files]: http://shrinerb.com/rdoc/files/doc/multiple_files_md.html
@@ -63,7 +63,7 @@ Alternatively, if you're using Amazon S3 storage, in tests you can use
 
  ## Test data
 
- If you're creating test data dynamically using libraries like [factory_girl],
+ If you're creating test data dynamically using libraries like [factory_bot],
  you can have the test file assigned dynamically when the record is created:
 
  ```rb
@@ -240,37 +240,40 @@ end
  This also has the benefit of allowing you to test `ImageThumbnailsGenerator` in
  isolation.
 
- ## Direct upload
+ ## Direct S3 uploads
 
- If you've set up direct uploads to Amazon S3 (using the `presign_endpoint`
- plugin), in tests you'll probably want to just use filesystem or memory storage
- to avoid network requests.
+ If you want to do direct uploads to Amazon S3 in production, in development and
+ tests you'll probably want to keep using filesystem storage to avoid making
+ network requests.
 
- The easiest way to do that is to add an `upload_endpoint`, modify it so that it
- behaves like S3, and change `presign_endpoint` response to point to the upload
- endpoint. Here is how one could modify the test helper in a Rails application:
+ The simplest way to do that is to use [Minio]. Minio is an open source object
+ storage server with an Amazon S3 compatible API. If you're on a Mac you can
+ install it with Homebrew:
 
- ```rb
- # test/test_helper.rb
+ ```
+ $ brew install minio
+ $ minio server data/
+ ```
 
- # create and mount a fake S3 upload endpoint
- Shrine.plugin :upload_endpoint
- fake_s3 = Shrine.upload_endpoint(:cache, upload_context: -> (request) {
- { location: request.params["key"].match(/^cache\//).post_match }
- })
- Rails.application.routes.prepend { mount fake_s3 => "/s3" }
+ Then you can open the Minio UI in the browser and create a new bucket. Once
+ you've done that, all that's left to do is point aws-sdk-s3 to your Minio
+ server:
 
- # override presigns to return URLs to the fake S3 upload endpoint
- Shrine.plugin :presign_endpoint, presign: -> (id, options, request) do
- Struct.new(:url, :fields).new("#{request.base_url}/s3", { "key" => "cache/#{id}" })
- end
+ ```
+ Shrine::Storage::S3.new(
+ access_key_id: "MINIO_ACCESS_KEY_ID",
+ secret_access_key: "MINIO_SECRET_ACCESS_KEY",
+ bucket: "MINIO_BUCKET",
+ region: "us-east-1",
+ )
  ```
 
  [DatabaseCleaner]: https://github.com/DatabaseCleaner/database_cleaner
  [shrine-memory]: https://github.com/janko-m/shrine-memory
- [factory_girl]: https://github.com/thoughtbot/factory_girl
+ [factory_bot]: https://github.com/thoughtbot/factory_bot
  [Capybara]: https://github.com/jnicklas/capybara
  [`#attach_file`]: http://www.rubydoc.info/github/jnicklas/capybara/master/Capybara/Node/Actions#attach_file-instance_method
  [Rack::Test]: https://github.com/brynary/rack-test
  [Rack::TestApp]: https://github.com/kwatch/rack-test_app
  [aws-sdk-ruby stubs]: http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/ClientStubs.html
+ [Minio]: https://minio.io
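A hedged aside on the Minio snippet in the hunk above: for aws-sdk-s3 to actually talk to a local Minio server, the client also needs the server's endpoint and path-style addressing. `Shrine::Storage::S3` forwards extra options to the AWS client, so something along these lines is typically used; the localhost URL assumes Minio's default port and is not part of the released docs:

```rb
Shrine::Storage::S3.new(
  access_key_id:     "MINIO_ACCESS_KEY_ID",
  secret_access_key: "MINIO_SECRET_ACCESS_KEY",
  bucket:            "MINIO_BUCKET",
  region:            "us-east-1",
  endpoint:          "http://localhost:9000", # assumed default Minio address
  force_path_style:  true,                    # Minio buckets aren't addressed as subdomains
)
```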
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  require "shrine/version"
 
  require "securerandom"
@@ -154,8 +156,8 @@ class Shrine
  # class Photo
  # include Shrine.attachment(:image) # creates a Shrine::Attachment object
  # end
- def attachment(name, **options)
- self::Attachment.new(name, **options)
+ def attachment(name, *args)
+ self::Attachment.new(name, *args)
  end
  alias [] attachment
 
@@ -251,13 +253,13 @@ class Shrine
  def generate_location(io, context = {})
  extension = ".#{io.extension}" if io.is_a?(UploadedFile) && io.extension
  extension ||= File.extname(extract_filename(io).to_s).downcase
- basename = generate_uid(io)
+ basename = generate_uid(io)
 
- basename + extension.to_s
+ basename + extension
  end
 
  # Extracts filename, size and MIME type from the file, which is later
- # accessible through `UploadedFile#metadata`.
+ # accessible through UploadedFile#metadata.
  def extract_metadata(io, context = {})
  {
  "filename" => extract_filename(io),
@@ -323,8 +325,8 @@ class Shrine
  # and any upload options. The storage might modify the location or
  # metadata that were passed in. The uploaded IO is then closed.
  def copy(io, context)
- location = context[:location]
- metadata = context[:metadata]
+ location = context[:location]
+ metadata = context[:metadata]
  upload_options = context[:upload_options] || {}
 
  storage.upload(io, location, shrine_metadata: metadata, **upload_options)
@@ -396,12 +398,13 @@ class Shrine
  @options = options
 
  module_eval <<-RUBY, __FILE__, __LINE__ + 1
- def #{name}_attacher
+ def #{name}_attacher(options = {})
+ @#{name}_attacher = nil if options.any?
  @#{name}_attacher ||= (
  attachments = self.class.ancestors.grep(Shrine::Attachment)
  attachment = attachments.find { |mod| mod.attachment_name == :#{name} }
  attacher_class = attachment.shrine_class::Attacher
- options = attachment.options
+ options = attachment.options.merge(options)
 
  attacher_class.new(self, :#{name}, options)
  )
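For context, a hedged sketch of what the new options argument on the generated `<name>_attacher` method enables; the `Photo` model and the `:other_store` storage key are illustrative assumptions, not part of this diff:

```rb
photo = Photo.find(1)

photo.image_attacher                                   # memoized attacher built with the default options
attacher = photo.image_attacher(store: :other_store)   # rebuilt with the given options merged in
attacher.store.storage_key                             #=> :other_store
```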
@@ -775,19 +778,23 @@ class Shrine
  end
  alias content_type mime_type
 
- # Opens an IO object of the uploaded file for reading and yields it to
- # the block, closing it after the block finishes. For opening without
- # a block #to_io can be used.
+ # Opens an IO object of the uploaded file for reading, yields it to
+ # the block, and closes it after the block finishes. If opening without
+ # a block, it returns an opened IO object for the uploaded file.
  #
  # uploaded_file.open do |io|
  # puts io.read # prints the content of the file
  # end
  def open(*args)
- @io = storage.open(id, *args)
- yield @io
- ensure
- @io.close if @io
- @io = nil
+ return to_io unless block_given?
+
+ begin
+ @io = storage.open(id, *args)
+ yield @io
+ ensure
+ @io.close if @io
+ @io = nil
+ end
  end
 
  # Calls `#download` on the storage if the storage implements it,
@@ -796,9 +803,14 @@ class Shrine
  if storage.respond_to?(:download)
  storage.download(id, *args)
  else
- tempfile = Tempfile.new(["shrine", ".#{extension}"], binmode: true)
- open(*args) { |io| IO.copy_stream(io, tempfile.path) }
- tempfile.tap(&:open)
+ begin
+ tempfile = Tempfile.new(["shrine", ".#{extension}"], binmode: true)
+ open(*args) { |io| IO.copy_stream(io, tempfile.path) }
+ tempfile.tap(&:open)
+ rescue
+ tempfile.close! if tempfile
+ raise
+ end
  end
  end
 
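For context, a hedged usage sketch of the `UploadedFile` behavior changed by the two hunks above; the `uploaded_file` variable is assumed to be an already-stored `Shrine::UploadedFile`:

```rb
# Block form: the IO is opened, yielded, and closed for you.
uploaded_file.open { |io| puts io.read }

# New in 2.9.0: calling #open without a block returns the opened IO,
# which the caller is then responsible for closing.
io = uploaded_file.open
content = io.read
io.close

# #download still copies the content into a Tempfile; on error the
# Tempfile is now cleaned up before the exception propagates.
tempfile = uploaded_file.download
tempfile.path
```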
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  require "active_record"
 
  class Shrine
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  class Shrine
  module Plugins
  # The `add_metadata` plugin provides a convenient method for extracting and
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  Shrine.deprecation("The background_helpers plugin has been renamed to \"backgrounding\". Loading the plugin through \"background_helpers\" will stop working in Shrine 3.")
  require "shrine/plugins/backgrounding"
  Shrine::Plugins.register_plugin(:background_helpers, Shrine::Plugins::Backgrounding)
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  class Shrine
  module Plugins
  # The `backgrounding` plugin enables you to move promoting and deleting of
@@ -10,13 +12,17 @@ class Shrine
  # jobs, by passing a block.
  #
  # Shrine.plugin :backgrounding
+ #
+ # # makes all uploaders use background jobs
  # Shrine::Attacher.promote { |data| PromoteJob.perform_async(data) }
  # Shrine::Attacher.delete { |data| DeleteJob.perform_async(data) }
  #
  # If you don't want to apply backgrounding for all uploaders, you can
- # declare the hooks only for specific uploaders.
+ # declare the hooks only for specific uploaders (in this case it's still
+ # recommended to keep the plugin loaded globally).
  #
  # class MyUploader < Shrine
+ # # makes this uploader use background jobs
  # Attacher.promote { |data| PromoteJob.perform_async(data) }
  # Attacher.delete { |data| DeleteJob.perform_async(data) }
  # end
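The `PromoteJob`/`DeleteJob` workers referenced in these doc comments are not defined in this diff. For reference, a minimal sketch of what they conventionally look like with this plugin, assuming Sidekiq as the job backend:

```rb
class PromoteJob
  include Sidekiq::Worker

  def perform(data)
    # moves the cached file to permanent storage in the background
    Shrine::Attacher.promote(data)
  end
end

class DeleteJob
  include Sidekiq::Worker

  def perform(data)
    # deletes the replaced or destroyed file in the background
    Shrine::Attacher.delete(data)
  end
end
```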
@@ -176,12 +182,13 @@ class Shrine
  record = load_record(data)
  name = data["name"].to_sym
 
- if data["shrine_class"]
+ if record.respond_to?(:"#{name}_attacher")
+ attacher = record.send(:"#{name}_attacher")
+ elsif data["shrine_class"]
  shrine_class = Object.const_get(data["shrine_class"])
  attacher = shrine_class::Attacher.new(record, name)
  else
- # anonymous uploader class, try to retrieve attacher from record
- attacher = record.send("#{name}_attacher")
+ fail Error, "cannot load anonymous uploader class"
  end
 
  attacher