shrine 2.8.0 → 2.9.0

Files changed (64)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +681 -0
  3. data/README.md +73 -21
  4. data/doc/carrierwave.md +75 -20
  5. data/doc/creating_storages.md +15 -26
  6. data/doc/direct_s3.md +113 -31
  7. data/doc/multiple_files.md +4 -8
  8. data/doc/paperclip.md +98 -31
  9. data/doc/refile.md +4 -6
  10. data/doc/testing.md +24 -21
  11. data/lib/shrine.rb +32 -20
  12. data/lib/shrine/plugins/activerecord.rb +2 -0
  13. data/lib/shrine/plugins/add_metadata.rb +2 -0
  14. data/lib/shrine/plugins/background_helpers.rb +2 -0
  15. data/lib/shrine/plugins/backgrounding.rb +11 -4
  16. data/lib/shrine/plugins/backup.rb +2 -0
  17. data/lib/shrine/plugins/cached_attachment_data.rb +2 -0
  18. data/lib/shrine/plugins/copy.rb +2 -0
  19. data/lib/shrine/plugins/data_uri.rb +20 -12
  20. data/lib/shrine/plugins/default_storage.rb +2 -0
  21. data/lib/shrine/plugins/default_url.rb +2 -0
  22. data/lib/shrine/plugins/default_url_options.rb +2 -0
  23. data/lib/shrine/plugins/delete_promoted.rb +2 -0
  24. data/lib/shrine/plugins/delete_raw.rb +2 -0
  25. data/lib/shrine/plugins/determine_mime_type.rb +18 -2
  26. data/lib/shrine/plugins/direct_upload.rb +6 -6
  27. data/lib/shrine/plugins/download_endpoint.rb +2 -0
  28. data/lib/shrine/plugins/dynamic_storage.rb +2 -0
  29. data/lib/shrine/plugins/hooks.rb +2 -0
  30. data/lib/shrine/plugins/included.rb +2 -0
  31. data/lib/shrine/plugins/infer_extension.rb +131 -0
  32. data/lib/shrine/plugins/keep_files.rb +2 -0
  33. data/lib/shrine/plugins/logging.rb +6 -4
  34. data/lib/shrine/plugins/metadata_attributes.rb +2 -0
  35. data/lib/shrine/plugins/migration_helpers.rb +2 -0
  36. data/lib/shrine/plugins/module_include.rb +2 -0
  37. data/lib/shrine/plugins/moving.rb +2 -0
  38. data/lib/shrine/plugins/multi_delete.rb +4 -0
  39. data/lib/shrine/plugins/parallelize.rb +2 -0
  40. data/lib/shrine/plugins/parsed_json.rb +2 -0
  41. data/lib/shrine/plugins/presign_endpoint.rb +7 -7
  42. data/lib/shrine/plugins/pretty_location.rb +2 -0
  43. data/lib/shrine/plugins/processing.rb +2 -0
  44. data/lib/shrine/plugins/rack_file.rb +2 -0
  45. data/lib/shrine/plugins/rack_response.rb +2 -0
  46. data/lib/shrine/plugins/recache.rb +2 -0
  47. data/lib/shrine/plugins/refresh_metadata.rb +2 -0
  48. data/lib/shrine/plugins/remote_url.rb +12 -1
  49. data/lib/shrine/plugins/remove_attachment.rb +2 -0
  50. data/lib/shrine/plugins/remove_invalid.rb +2 -0
  51. data/lib/shrine/plugins/restore_cached_data.rb +2 -0
  52. data/lib/shrine/plugins/sequel.rb +2 -0
  53. data/lib/shrine/plugins/signature.rb +10 -8
  54. data/lib/shrine/plugins/store_dimensions.rb +5 -3
  55. data/lib/shrine/plugins/upload_endpoint.rb +7 -8
  56. data/lib/shrine/plugins/upload_options.rb +2 -0
  57. data/lib/shrine/plugins/validation_helpers.rb +2 -0
  58. data/lib/shrine/plugins/versions.rb +72 -31
  59. data/lib/shrine/storage/file_system.rb +11 -4
  60. data/lib/shrine/storage/linter.rb +5 -13
  61. data/lib/shrine/storage/s3.rb +16 -13
  62. data/lib/shrine/version.rb +3 -1
  63. data/shrine.gemspec +7 -6
  64. metadata +26 -10
data/README.md CHANGED
@@ -37,8 +37,8 @@ Shrine.plugin :rack_file # for non-Rails apps
  ```

  Next decide how you will name the attachment attribute on your model, and run a
- migration that adds an `<attachment>_data` text column, which Shrine will use
- to store all information about the attachment:
+ migration that adds an `<attachment>_data` text or JSON column, which Shrine
+ will use to store all information about the attachment:

  ```rb
  Sequel.migration do # class AddImageDataToPhotos < ActiveRecord::Migration
@@ -340,20 +340,39 @@ The model attachment attributes and callbacks just delegate the behaviour
  to a `Shrine::Attacher` object.

  ```rb
- attacher = ImageUploader::Attacher.new(photo, :image) # returned by `photo.image_attacher`
+ photo.image_attacher #=> #<Shrine::Attacher>
+ ```
+
+ The `Shrine::Attacher` object can be instantiated and used directly:
+
+ ```rb
+ attacher = ImageUploader::Attacher.new(photo, :image)

  attacher.assign(file) # equivalent to `photo.image = file`
  attacher.get # equivalent to `photo.image`
  attacher.url # equivalent to `photo.image_url`
  ```

- The attacher is what drives attaching files to models, and it functions
+ The attacher is what drives attaching files to model instances, and it functions
  independently from models' attachment interface. This means that you can use it
  as an alternative, in case you prefer not to add additional attributes to the
  model, or prefer explicitness over callbacks. It's also useful when you need
  something more advanced which isn't available through the attachment
  attributes.

+ The `Shrine::Attacher` by default uses `:cache` for temporary and `:store` for
+ permanent storage, but you can specify a different storage:
+
+ ```rb
+ ImageUploader::Attacher.new(photo, :image, cache: :other_cache, store: :other_store)
+
+ # OR
+
+ photo.image_attacher(cache: :other_cache, store: :other_store)
+ photo.image = file # uploads to :other_cache storage
+ photo.save # promotes to :other_store storage
+ ```
+
  Whenever the attacher uploads or deletes files, it sends a `context` hash
  which includes `:record`, `:name`, and `:action` keys, so that you can perform
  processing or generate location differently depending on this information. See
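
To illustrate the hunk above, the `context` hash can drive custom upload
locations. A minimal sketch (the `generate_location` override is Shrine's
documented extension point; the directory scheme below is just an example):

```rb
class ImageUploader < Shrine
  def generate_location(io, context)
    # context[:record] is the model instance, context[:name] the attachment name
    model = context[:record].class.name.downcase if context[:record]
    [model, context[:name], super].compact.join("/")
  end
end
```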
@@ -775,12 +794,11 @@ Rails.application.routes.draw do
  end
  ```

- The above created a `POST /images/upload` endpoint. You can now use a
- client-side file upload library like [FineUploader], [Dropzone] or
- [jQuery-File-Upload] to upload files asynchronously to the `/images/upload`
- endpoint the moment they are selected. Once the file has been uploaded, the
- endpoint will return JSON data of the uploaded file, which the client can then
- write to a hidden attachment field, to be submitted instead of the raw file.
+ The above created a `POST /images/upload` endpoint. You can now use [Uppy] to
+ upload files asynchronously to the `/images/upload` endpoint the moment they
+ are selected. Once the file has been uploaded, the endpoint will return JSON
+ data of the uploaded file, which the client can then write to a hidden
+ attachment field, to be submitted instead of the raw file.

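For illustration, the uploaded file JSON written to the hidden field has the
following shape (the values here are made up, but the `id`/`storage`/`metadata`
structure is Shrine's uploaded file format):

```rb
# response body of `POST /images/upload`
{
  "id"       => "43kewit94.jpg", # storage-specific location
  "storage"  => "cache",
  "metadata" => {
    "size"      => 384393,
    "filename"  => "nature.jpg",
    "mime_type" => "image/jpeg"
  }
}
```
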
  Many popular storage services can accept file uploads directly from the client
  ([Amazon S3], [Google Cloud Storage], [Microsoft Azure Storage] etc), which
@@ -846,14 +864,49 @@ libraries are:

  ## Clearing cache

- From time to time you'll want to clean your temporary storage from old files.
- Amazon S3 provides [a built-in solution][S3 lifecycle], and for FileSystem you
- can run something like this periodically:
+ Shrine doesn't automatically delete files uploaded to temporary storage;
+ instead, you should set up a separate recurring task that will automatically
+ delete old cached files.
+
+ Most Shrine storage objects come with a `#clear!` method, which you can call
+ in a recurring script. For FileSystem and S3 storage it would look like this:

  ```rb
+ # FileSystem storage
  file_system = Shrine.storages[:cache]
  file_system.clear!(older_than: Time.now - 7*24*60*60) # delete files older than 1 week
  ```
+ ```rb
+ # S3 storage
+ s3 = Shrine.storages[:cache]
+ s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 } # delete files older than 1 week
+ ```
+
+ Note that for S3 you can also configure bucket lifecycle rules to do this for
+ you. This can be done either from the [AWS Console][S3 lifecycle console] or
+ via an [API call][S3 lifecycle API]:
+
+ ```rb
+ require "aws-sdk-s3"
+
+ client = Aws::S3::Client.new(
+   access_key_id: "<YOUR KEY>",
+   secret_access_key: "<YOUR SECRET>",
+   region: "<REGION>",
+ )
+
+ client.put_bucket_lifecycle_configuration(
+   bucket: "<YOUR BUCKET>",
+   lifecycle_configuration: {
+     rules: [{
+       expiration: { days: 7 },
+       filter: { prefix: "cache/" },
+       id: "cache-clear",
+       status: "Enabled"
+     }]
+   }
+ )
+ ```

  ## Logging

@@ -903,9 +956,9 @@ generic image servers.
  Shrine has integrations for many commercial on-the-fly processing services,
  including [Cloudinary], [Imgix] and [Uploadcare].

- If you don't want to use a commercial service, [Attache] and [Dragonfly] are
- great open-source image servers. For Attache a Shrine integration is in
- progress, while for Dragonfly it is not needed.
+ If you don't want to use a commercial service, [Dragonfly] is a great
+ open-source image server. See [this blog post][processing post] on how you can
+ integrate Dragonfly with Shrine.

  ## Chunked & Resumable uploads

@@ -960,9 +1013,7 @@ The gem is available as open source under the terms of the [MIT License].
  [Context]: https://github.com/janko-m/shrine#context
  [image_processing]: https://github.com/janko-m/image_processing
  [ffmpeg]: https://github.com/streamio/streamio-ffmpeg
- [FineUploader]: https://github.com/FineUploader/fine-uploader
- [Dropzone]: https://github.com/enyo/dropzone
- [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
+ [Uppy]: https://uppy.io
  [Amazon S3]: https://aws.amazon.com/s3/
  [Google Cloud Storage]: https://cloud.google.com/storage/
  [Microsoft Azure Storage]: https://azure.microsoft.com/en-us/services/storage/
@@ -971,7 +1022,6 @@ The gem is available as open source under the terms of the [MIT License].
  [Cloudinary]: https://github.com/janko-m/shrine-cloudinary
  [Imgix]: https://github.com/janko-m/shrine-imgix
  [Uploadcare]: https://github.com/janko-m/shrine-uploadcare
- [Attache]: https://github.com/choonkeat/attache
  [Dragonfly]: http://markevans.github.io/dragonfly/
  [tus]: http://tus.io
  [tus-ruby-server]: https://github.com/janko-m/tus-ruby-server
@@ -985,4 +1035,6 @@ The gem is available as open source under the terms of the [MIT License].
  [roda_demo]: https://github.com/janko-m/shrine/tree/master/demo
  [rails_demo]: https://github.com/erikdahlstrand/shrine-rails-example
  [backgrounding libraries]: https://github.com/janko-m/shrine/wiki/Backgrounding-libraries
- [S3 lifecycle]: http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html
+ [S3 lifecycle console]: http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html
+ [S3 lifecycle API]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html#put_bucket_lifecycle_configuration-instance_method
+ [processing post]: https://twin.github.io/better-file-uploads-with-shrine-processing/
data/doc/carrierwave.md CHANGED
@@ -239,7 +239,7 @@ Shrine. Let's assume we have a `Photo` model with the "image" attachment. First
  we need to create the `image_data` column for Shrine:

  ```rb
- add_column :photos, :image_data, :text
+ add_column :photos, :image_data, :text # or :json or :jsonb if supported
  ```

  Afterwards we need to make new uploads write to the `image_data` column. This
@@ -247,9 +247,6 @@ can be done by including the below module to all models that have CarrierWave
  attachments:

  ```rb
- require "fastimage"
- require "mime/types"
-
  module CarrierwaveShrineSynchronization
    def self.included(model)
      model.before_save do
@@ -272,6 +269,8 @@ module CarrierwaveShrineSynchronization
      end
    end

+       # Remove the `.to_json` if you're using a JSON column, otherwise the JSON
+       # object will be saved as an escaped string.
        write_attribute(:"#{name}_data", data.to_json)
      else
        write_attribute(:"#{name}_data", nil)
@@ -283,22 +282,13 @@ module CarrierwaveShrineSynchronization
    # If you'll be using `:prefix` on your Shrine storage, make sure to
    # subtract it from the path assigned as `:id`.
    def uploader_to_shrine_data(uploader)
-     path = uploader.store_path(read_attribute(uploader.mounted_as))
-
-     size = uploader.file.size if changes.key?(uploader.mounted_as)
-     size ||= FastImage.new(uploader.url).content_length # OPTIONAL (makes an HTTP request)
-     size ||= File.size(File.join(uploader.root, path)) if File.exist?(path)
-     filename = File.basename(path)
-     mime_type = MIME::Types.type_for(path).first.to_s.presence
+     filename = read_attribute(uploader.mounted_as)
+     path = uploader.store_path(filename)

      {
        storage: :store,
        id: path,
-       metadata: {
-         size: size,
-         filename: filename,
-         mime_type: mime_type,
-       },
+       metadata: { filename: filename }
      }
    end
  end
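
For reference, wiring this module into a model could look like the following
sketch (the `Photo` model and `ImageUploader` names are assumptions, standing
in for your own CarrierWave setup):

```rb
class Photo < ActiveRecord::Base
  mount_uploader :image, ImageUploader     # existing CarrierWave attachment
  include CarrierwaveShrineSynchronization # must come after `mount_uploader`
end

# resave all records so the before_save hook fills in `image_data`
Photo.find_each(&:save!)
```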
@@ -326,6 +316,19 @@ instead of CarrierWave, using equivalent Shrine storages. For help with
  translating the code from CarrierWave to Shrine, you can consult the reference
  below.

+ You'll notice that Shrine metadata will be absent from the migrated files'
+ data. You can run a script that will fill in any missing metadata defined in
+ your Shrine uploader:
+
+ ```rb
+ Shrine.plugin :refresh_metadata
+
+ Photo.find_each do |photo|
+   attachment = ImageUploader.uploaded_file(photo.image, &:refresh_metadata!)
+   photo.update(image_data: attachment.to_json)
+ end
+ ```
+
  ## CarrierWave to Shrine direct mapping

  ### `CarrierWave::Uploader::Base`
@@ -591,10 +594,6 @@ As mentioned before, in Shrine you register storages through `Shrine.storages`,
  and the attachment storages will automatically be `:cache` and `:store`, but
  you can change this with the `default_storage` plugin.

- #### `fog_*`
-
- These options are set on the [shrine-fog] storage.
-
  #### `delete_tmp_file_after_storage`, `remove_previously_stored_file_after_update`

  By default Shrine deletes cached and replaced files, but you can choose to keep
@@ -651,8 +650,64 @@ You can just add conditionals in processing code.
  No equivalent, it depends on your application whether you need the form to be
  multipart or not.

+ ### `CarrierWave::Storage::Fog`
+
+ You can use [`Shrine::Storage::S3`] \(built-in\),
+ [`Shrine::Storage::GoogleCloudStorage`], or generic [`Shrine::Storage::Fog`]
+ storage. The reference will assume you're using S3 storage.
+
+ #### `:fog_credentials`, `:fog_directory`
+
+ The S3 Shrine storage accepts `:access_key_id`, `:secret_access_key`, `:region`,
+ and `:bucket` options in the initializer:
+
+ ```rb
+ Shrine::Storage::S3.new(
+   access_key_id: "...",
+   secret_access_key: "...",
+   region: "...",
+   bucket: "...",
+ )
+ ```
+
+ #### `:fog_attributes`
+
+ The object data can be configured via the `:upload_options` hash:
+
+ ```rb
+ Shrine::Storage::S3.new(upload_options: {content_disposition: "attachment"}, **options)
+ ```
+
+ #### `:fog_public`
+
+ The object permissions can be configured with the `:acl` upload option:
+
+ ```rb
+ Shrine::Storage::S3.new(upload_options: {acl: "private"}, **options)
+ ```
+
+ #### `:fog_authenticated_url_expiration`
+
+ The `#url` method accepts the `:expires_in` option; you can set the default
+ expiration with the `default_url_options` plugin:
+
+ ```rb
+ plugin :default_url_options, store: {expires_in: 600}
+ ```
+
+ #### `:fog_use_ssl_for_aws`, `:fog_aws_accelerate`
+
+ Shrine allows you to override the S3 endpoint:
+
+ ```rb
+ Shrine::Storage::S3.new(endpoint: "https://s3-accelerate.amazonaws.com", **options)
+ ```
+
  [image_processing]: https://github.com/janko-m/image_processing
  [demo app]: https://github.com/janko-m/shrine/tree/master/demo
  [Reprocessing versions]: http://shrinerb.com/rdoc/files/doc/regenerating_versions_md.html
  [shrine-fog]: https://github.com/janko-m/shrine-fog
  [direct uploads]: http://shrinerb.com/rdoc/files/doc/direct_s3_md.html
+ [`Shrine::Storage::S3`]: http://shrinerb.com/rdoc/classes/Shrine/Storage/S3.html
+ [`Shrine::Storage::GoogleCloudStorage`]: https://github.com/renchap/shrine-google_cloud_storage
+ [`Shrine::Storage::Fog`]: https://github.com/janko-m/shrine-fog
data/doc/creating_storages.md CHANGED
@@ -37,9 +37,21 @@ end
  ## Upload

  The job of `Storage#upload` is to upload the given IO object to the storage.
+ It's recommended to use [HTTP.rb] for uploading, as it accepts any IO object
+ that responds to `#read` (not just file objects), and it streams the IO data
+ directly to the socket, making it suitable for large uploads.
+
+ ```rb
+ require "http"
+
+ # streaming raw upload
+ HTTP.post("http://example.com/upload", body: io)
+ # streaming multipart upload
+ HTTP.post("http://example.com/upload", form: { file: HTTP::FormData::File.new(io) })
+ ```
+
  It's good practice to test the storage with a [fake IO] object which responds
- only to required methods. Some HTTP libraries don't support uploading non-file
- IOs, although for [Faraday] and [REST client] you can work around that.
+ only to required methods, as not all received IO objects will be file objects.

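A minimal fake IO for such tests might look like this (a sketch; the delegated
methods are exactly the IO interface Shrine requires):

```rb
require "stringio"
require "forwardable"

class FakeIO
  extend Forwardable

  def initialize(content)
    @io = StringIO.new(content)
  end

  # Shrine's IO abstraction needs only these five methods.
  def_delegators :@io, :read, :rewind, :size, :close, :eof?
end

# storage.upload(FakeIO.new("file content"), "location.txt")
```
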
  If your storage doesn't control which id the uploaded file will have, you
  can modify the `id` variable before returning:
@@ -135,28 +147,6 @@ class Shrine
    end
  end
  ```

- ## Multi delete
-
- If your storage supports deleting multiple files at the same time, you can
- implement an additional method, which will automatically get picked up by the
- `multi_delete` plugin:
-
- ```rb
- class Shrine
-   module Storage
-     class MyStorage
-       # ...
-
-       def multi_delete(ids)
-         # deletes multiple files at once
-       end
-
-       # ...
-     end
-   end
- end
- ```
-
  ## Clearing

  While this method is not used by Shrine, it is good to give users the
@@ -235,6 +225,5 @@ Note that using the linter doesn't mean that you shouldn't write any manual
  tests for your storage. There will likely be some edge cases that won't be
  tested by the linter.

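For reference, running the linter against a storage instance looks like this
(`MyStorage` stands in for your own storage class):

```rb
require "shrine/storage/linter"

storage = Shrine::Storage::MyStorage.new
linter  = Shrine::Storage::Linter.new(storage)
linter.call # raises an error if the storage misbehaves
```
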
+ [HTTP.rb]: https://github.com/httprb/http
  [fake IO]: https://github.com/janko-m/shrine-cloudinary/blob/ca587c580ea0762992a2df33fd590c9a1e534905/test/test_helper.rb#L20-L27
- [REST client]: https://github.com/janko-m/shrine-cloudinary/blob/ca587c580ea0762992a2df33fd590c9a1e534905/lib/shrine/storage/cloudinary.rb#L138-L141
- [Faraday]: https://github.com/janko-m/shrine-uploadcare/blob/2038781ace0f54d82fa06cc04c4c2958919208ad/lib/shrine/storage/uploadcare.rb#L140
data/doc/direct_s3.md CHANGED
@@ -24,7 +24,7 @@ storage service is beneficial for several reasons:
  times out.

  You can start by setting both temporary and permanent storage to S3 with
- different prefixes (or even buckets):
+ different prefixes (or even different buckets):

  ```rb
  # Gemfile
@@ -34,10 +34,10 @@ gem "aws-sdk-s3", "~> 1.2"
  require "shrine/storage/s3"

  s3_options = {
-   access_key_id: "abc",
-   secret_access_key: "123",
-   region: "my-region",
-   bucket: "my-bucket",
+   access_key_id: "<YOUR KEY>",
+   secret_access_key: "<YOUR SECRET>",
+   bucket: "<YOUR BUCKET>",
+   region: "<REGION>",
  }

  Shrine.storages = {
@@ -49,14 +49,36 @@ Shrine.storages = {
  ## Enabling CORS

  In order to be able to upload files directly to your S3 bucket, you need to enable
- CORS. You can do that in the AWS S3 Console by clicking on "Properties >
- Permissions > Add CORS Configuration", and then just follow the Amazon
- documentation on how to write a CORS file.
+ CORS. You can do that from the AWS S3 Console by going to your bucket, clicking
+ on the "Permissions" tab, then on "CORS Configuration", and following the
+ [guide for configuring CORS][CORS guide].

- http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
+ Alternatively you can configure CORS via an [API call][CORS API]:

- Note that due to DNS propagation it may take some time for update of the CORS
- settings to be applied.
+ ```rb
+ require "aws-sdk-s3"
+
+ client = Aws::S3::Client.new(
+   access_key_id: "<YOUR KEY>",
+   secret_access_key: "<YOUR SECRET>",
+   region: "<REGION>",
+ )
+
+ client.put_bucket_cors(
+   bucket: "<YOUR BUCKET>",
+   cors_configuration: {
+     cors_rules: [{
+       allowed_headers: ["Authorization", "Content-Type", "Origin"],
+       allowed_methods: ["GET", "POST"],
+       allowed_origins: ["*"],
+       max_age_seconds: 3000,
+     }]
+   }
+ )
+ ```
+
+ Note that due to DNS propagation it may take some time for the CORS update to
+ be applied.

  ## File hash

@@ -127,11 +149,9 @@ request headers.
  }
  ```

- You can now use a client-side file upload library like [FineUploader],
- [Dropzone] or [jQuery-File-Upload] to upload selected files directly to S3.
- When the user selects a file, the client can make a request to the presign
- endpoint, and use the returned request information to upload the selected file
- directly to S3.
+ On the client side you can then make a request to the presign endpoint as soon
+ as the user selects a file, and use the returned request information to upload
+ the selected file directly to S3. It's recommended to use [Uppy] for this.

  Once the file has been uploaded, you can generate a JSON representation of the
  uploaded file on the client-side, and write it to the hidden attachment field.
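
For illustration (values are hypothetical), the hidden field value follows
Shrine's uploaded file format, with the storage prefix subtracted from the S3
object key and the metadata extracted on the client:

```rb
{
  "id"       => "302858ldg9agjad7f3ls.jpg", # S3 key with the "cache/" prefix removed
  "storage"  => "cache",
  "metadata" => {
    "size"      => 384393,
    "filename"  => "nature.jpg",
    "mime_type" => "image/jpeg"
  }
}
```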
@@ -168,22 +188,30 @@ generating the form we can use `Shrine::Storage::S3#presign`, which returns a
  ```erb
  <%
    presign = Shrine.storages[:cache].presign SecureRandom.hex,
-     success_action_redirect: new_album_url,
-     allow_any: ['utf8', 'authenticity_token']
+     success_action_redirect: new_album_url
  %>

  <form action="<%= presign.url %>" method="post" enctype="multipart/form-data">
-   <input type="file" name="file">
    <% presign.fields.each do |name, value| %>
      <input type="hidden" name="<%= name %>" value="<%= value %>">
    <% end %>
+   <input type="file" name="file">
    <input type="submit" value="Upload">
  </form>
  ```

- Note the additional `success_action_redirect` option which tells S3 where to
- redirect to after the file has been uploaded. We also tell S3 to exclude the
- `utf8` and `authenticity_token` fields that the Rails form builder generates.
+ Note the additional `:success_action_redirect` option which tells S3 where to
+ redirect to after the file has been uploaded. If you're using the Rails form
+ builder to generate this form, you might need to also tell S3 to ignore the
+ additional `utf8` and `authenticity_token` fields that Rails generates:
+
+ ```erb
+ <%
+   presign = Shrine.storages[:cache].presign SecureRandom.hex,
+     allow_any: ["utf8", "authenticity_token"],
+     success_action_redirect: new_album_url
+ %>
+ ```

  Let's assume we specified the redirect URL to be a page which renders the form
  for a new record. S3 will include some information about the upload in the form of
@@ -193,7 +221,7 @@ GET parameters in the URL, out of which we only need the `key` parameter:
  <%
    cached_file = {
      storage: "cache",
-     id: params[:key][/cache\/(.+)/, 1], # we have to remove the prefix part
+     id: params[:key][/cache\/(.+)/, 1], # we subtract the storage prefix
      metadata: {},
    }
  %>
@@ -204,7 +232,29 @@ GET parameters in the URL, out of which we only need the `key` parameter:
  </form>
  ```

- ## Metadata
+ ## Object data
+
+ When the cached S3 object is copied to permanent storage, the destination S3
+ object will by default inherit any object data that was assigned to the cached
+ object via presign parameters. However, S3 will by default also ignore any new
+ object parameters that are given to the copy request.
+
+ Whether object data will be copied or replaced depends on the value of the
+ `:metadata_directive` parameter:
+
+ * `"COPY"` - destination object will inherit source object data and any new data will be ignored (default)
+ * `"REPLACE"` - destination object will not inherit any of the source object data and will accept new data
+
+ You can use the `upload_options` plugin to change the `:metadata_directive`
+ option when S3 objects are copied:
+
+ ```rb
+ plugin :upload_options, store: -> (io, context) do
+   { metadata_directive: "REPLACE" } if io.is_a?(Shrine::UploadedFile)
+ end
+ ```
+
+ ## Shrine metadata

  With direct uploads any metadata has to be extracted on the client-side, since
  the file upload doesn't touch the application, so the Shrine uploader doesn't
@@ -239,9 +289,40 @@ end
  ## Clearing cache

- Since directly uploaded files will stay in your temporary storage, you will
- want to periodically delete the old ones that were already promoted. Luckily,
- Amazon provides [a built-in solution][object lifecycle] for that.
+ Directly uploaded files won't automatically be deleted from your temporary
+ storage, so you'll want to periodically clear them. One way to do that is
+ by setting up a recurring script which calls `Shrine::Storage::S3#clear!`:
+
+ ```rb
+ s3 = Shrine.storages[:cache]
+ s3.clear! { |object| object.last_modified < Time.now - 7*24*60*60 } # delete files older than 1 week
+ ```
+
+ Alternatively you can add a bucket lifecycle rule to do this for you. This can
+ be done either from the [AWS Console][lifecycle console] or via an [API
+ call][lifecycle API]:
+
+ ```rb
+ require "aws-sdk-s3"
+
+ client = Aws::S3::Client.new(
+   access_key_id: "<YOUR KEY>",
+   secret_access_key: "<YOUR SECRET>",
+   region: "<REGION>",
+ )
+
+ client.put_bucket_lifecycle_configuration(
+   bucket: "<YOUR BUCKET>",
+   lifecycle_configuration: {
+     rules: [{
+       expiration: { days: 7 },
+       filter: { prefix: "cache/" },
+       id: "cache-clear",
+       status: "Enabled"
+     }]
+   }
+ )
+ ```

  ## Eventual consistency

@@ -274,8 +355,9 @@ end

  [`Aws::S3::PresignedPost`]: http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Bucket.html#presigned_post-instance_method
  [demo app]: https://github.com/janko-m/shrine/tree/master/demo
- [Dropzone]: https://github.com/enyo/dropzone
- [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
- [FineUploader]: https://github.com/FineUploader/fine-uploader
+ [Uppy]: https://uppy.io
  [Amazon S3 Data Consistency Model]: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyMode
- [object lifecycle]: http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html
+ [CORS guide]: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
+ [CORS API]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html#put_bucket_cors-instance_method
+ [lifecycle console]: http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html
+ [lifecycle API]: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Client.html#put_bucket_lifecycle_configuration-instance_method