shrine 2.6.1 → 2.7.0
Potentially problematic release: this version of shrine has been flagged as potentially problematic.
- checksums.yaml +4 -4
- data/README.md +66 -27
- data/doc/attacher.md +8 -0
- data/doc/carrierwave.md +2 -2
- data/doc/creating_storages.md +3 -2
- data/doc/direct_s3.md +71 -56
- data/doc/multiple_files.md +3 -2
- data/doc/refile.md +18 -15
- data/doc/regenerating_versions.md +1 -1
- data/doc/securing_uploads.md +8 -8
- data/doc/testing.md +27 -21
- data/lib/shrine.rb +35 -24
- data/lib/shrine/plugins/activerecord.rb +22 -3
- data/lib/shrine/plugins/copy.rb +1 -1
- data/lib/shrine/plugins/data_uri.rb +3 -3
- data/lib/shrine/plugins/determine_mime_type.rb +24 -10
- data/lib/shrine/plugins/direct_upload.rb +4 -1
- data/lib/shrine/plugins/download_endpoint.rb +126 -63
- data/lib/shrine/plugins/keep_files.rb +1 -1
- data/lib/shrine/plugins/logging.rb +1 -0
- data/lib/shrine/plugins/metadata_attributes.rb +1 -1
- data/lib/shrine/plugins/presign_endpoint.rb +258 -0
- data/lib/shrine/plugins/rack_file.rb +1 -1
- data/lib/shrine/plugins/rack_response.rb +85 -0
- data/lib/shrine/plugins/remote_url.rb +5 -7
- data/lib/shrine/plugins/sequel.rb +1 -1
- data/lib/shrine/plugins/signature.rb +1 -1
- data/lib/shrine/plugins/upload_endpoint.rb +238 -0
- data/lib/shrine/storage/file_system.rb +3 -2
- data/lib/shrine/storage/s3.rb +62 -54
- data/lib/shrine/version.rb +2 -2
- data/shrine.gemspec +3 -4
- metadata +22 -33
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 2156db4347bf54e372dd3865a117b38e14638419
+  data.tar.gz: 6fc29c5c7837f3355240b85ff313579b35953a62
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 634e5d0bec22737521cef1b2b7eab82871e3be1d13988a171a6686e3f3885d2514531ede958ca757dc9a84c526b049f9d2bb5bccb4119f4c8c090a1272ba88c7
+  data.tar.gz: bfa00d5f2ae6ca28e66cb0e26872cfb127fe00e13b12bf0eddd9965b295a5ea4b7418bf1d74f4e3deb059aa8a66d3262b120f968b369d71f675a3c5b9d265c4f
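The checksums above can be verified locally against a downloaded gem's contents. A minimal Ruby sketch using the standard library's `Digest` (the helper name and payload are illustrative, not part of RubyGems' tooling):

```ruby
require "digest"

# Illustrative helper: compare a blob of data against a hex digest recorded
# in checksums.yaml. Only Digest itself is a real API here.
def checksum_matches?(data, expected_hex, algorithm: Digest::SHA512)
  algorithm.hexdigest(data) == expected_hex
end

data     = "example gem payload"
recorded = Digest::SHA512.hexdigest(data)  # stands in for the checksums.yaml entry

checksum_matches?(data, recorded)        # true for matching content
checksum_matches?(data + "x", recorded)  # false once the content changes
```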
data/README.md
CHANGED
@@ -74,11 +74,17 @@ uploaded file in case of validation errors and [direct uploads].
   <input name="photo[image]" type="file">
 </form>
 
-<!--
+<!-- ActionView::Helpers::FormHelper -->
 <%= form_for @photo do |f| %>
   <%= f.hidden_field :image, value: @photo.cached_image_data %>
   <%= f.file_field :image %>
 <% end %>
+
+<!-- SimpleForm -->
+<%= simple_form_for @photo do |f| %>
+  <%= f.input :image, as: :hidden, input_html: {value: @photo.cached_image_data} %>
+  <%= f.input :image, as: :file %>
+<% end %>
 ```
 
 Note that the file field needs to go *after* the hidden field, so that
@@ -113,7 +119,7 @@ interface. Storages are configured directly and registered under a name in
 
 ```rb
 # Gemfile
-gem "aws-sdk", "~> 2
+gem "aws-sdk-s3", "~> 1.2" # for Amazon S3 storage
 ```
 ```rb
 require "shrine/storage/s3"
@@ -405,7 +411,7 @@ by default Shrine's "mime_type" is **not guaranteed** to hold the actual MIME
 type of the file.
 
 However, if you load the `determine_mime_type` plugin, that will make Shrine
-always extract the MIME type from **file content
+always extract the MIME type from **file content**.
 
 ```rb
 Shrine.plugin :determine_mime_type
@@ -680,6 +686,22 @@ class DocumentUploader < Shrine
 end
 ```
 
+Validations are inherited from superclasses, but you need to call them manually
+when defining more validations:
+
+```ruby
+class ApplicationUploader < Shrine
+  Attacher.validate { validate_max_size 5.megabytes }
+end
+
+class ImageUploader < ApplicationUploader
+  Attacher.validate do
+    super() # empty braces are required
+    validate_mime_type_inclusion %w[image/jpeg image/jpg image/png]
+  end
+end
+```
+
 ## Location
 
 Before Shrine uploads a file, it generates a random location for it. By default
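Why `super()` needs the explicit empty parentheses can be shown with plain Ruby: a block given to a class-level macro like `Attacher.validate` typically becomes a method via `define_method`, and methods defined that way only support `super` with explicit arguments. A standalone sketch (the class and method names below are illustrative, not Shrine's internals):

```ruby
# Plain-Ruby model of stacking validations across subclasses.
class BaseAttacher
  # Turn the validation block into an instance method, like Attacher.validate does.
  def self.validate(&block)
    define_method(:validate_block, &block)
  end

  attr_reader :errors

  def initialize
    @errors = []
  end

  def run_validations
    validate_block if respond_to?(:validate_block)
    errors
  end
end

class ApplicationAttacher < BaseAttacher
  validate { errors << "too large" }
end

class ImageAttacher < ApplicationAttacher
  validate do
    super()  # explicit parens required: bare `super` raises inside define_method blocks
    errors << "wrong type"
  end
end

ImageAttacher.new.run_validations  # => ["too large", "wrong type"]
```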
@@ -721,37 +743,50 @@ uploader.upload(file, location: "some/specific/location.mp4")
 
 ## Direct uploads
 
-
-
-
+While having files uploaded on form submit is simplest to implement, it doesn't
+provide the best user experience, because the user doesn't know how long they
+need to wait for the file to get uploaded.
+
+To improve the user experience, the application can actually start uploading
+the file **asynchronously** already when it has been selected, and provide a
+progress bar. This way the user can estimate when the upload is going to
+finish, and they can continue filling in other fields in the form while the
+file is being uploaded.
+
+Shrine comes with the `upload_endpoint` plugin, which provides a Rack endpoint
+that accepts file uploads and forwards them to specified storage. We want to
+set it up to upload to *temporary* storage, because we're replacing the caching
+step in the default synchronous workflow.
 
 ```rb
-
-gem "roda"
-```
-```rb
-Shrine.plugin :direct_upload
+Shrine.plugin :upload_endpoint
 ```
 ```rb
 Rails.application.routes.draw do
-  mount ImageUploader
+  mount ImageUploader.upload_endpoint(:cache) => "/images/upload"
 end
 ```
 
-The above
+The above created a `POST /images/upload` endpoint. You can now use a
+client-side file upload library like [FineUploader], [Dropzone] or
+[jQuery-File-Upload] to upload files asynchronously to the `/images/upload`
+endpoint the moment they are selected. Once the file has been uploaded, the
+endpoint will return JSON data of the uploaded file, which the client can then
+write to a hidden attachment field, to be submitted instead of the raw file.
 
-
-
+Many popular storage services can accept file uploads directly from the client
+([Amazon S3], [Google Cloud Storage], [Microsoft Azure Storage] etc), which
+means you can avoid uploading files through your app. If you're using one of
+these storage services, you can use the `presign_endpoint` plugin to generate
+URL, fields, and headers that can be used to upload files directly to the
+storage service. The only difference from the `upload_endpoint` workflow is
+that the client has the extra step of fetching the request information before
+uploading the file.
 
-
-
-
-
-like [jQuery-File-Upload], [Dropzone] or [FineUploader].
-
-See the [direct_upload] plugin documentation and [Direct Uploads to S3][direct uploads]
-guide for more details, as well as the [Roda][roda_demo] and
-[Rails][rails_demo] demo apps which implement multiple uploads directly to S3.
+See the [upload_endpoint] and [presign_endpoint] plugin documentations and
+[Direct Uploads to S3][direct uploads] guide for more details, as well as the
+[Roda][roda_demo] and [Rails][rails_demo] demo apps which implement multiple
+uploads directly to S3.
 
 ## Backgrounding
 
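The upload-endpoint flow described in this hunk can be sketched with a bare Rack-compatible lambda: accept the POST, store the body under a generated key in temporary storage, and respond with JSON describing the cached file. Everything below is illustrative only (it is not Shrine's `upload_endpoint` implementation; multipart parsing and real storage are omitted):

```ruby
require "json"
require "stringio"
require "securerandom"

CACHE = {}  # stands in for the temporary ("cache") storage

# A Rack app is any object responding to #call(env) -> [status, headers, body].
upload_endpoint = lambda do |env|
  data = env["rack.input"].read        # raw request body (multipart parsing omitted)
  id   = "#{SecureRandom.hex(8)}.bin"  # illustrative key generation
  CACHE[id] = data
  json = JSON.generate(
    "id"       => id,
    "storage"  => "cache",
    "metadata" => { "size" => data.bytesize }
  )
  [200, { "Content-Type" => "application/json" }, [json]]
end

status, _headers, body = upload_endpoint.call("rack.input" => StringIO.new("file data"))
JSON.parse(body.first)["storage"]  # => "cache"
```

The returned JSON is exactly what the client would write into the hidden attachment field.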
@@ -917,10 +952,14 @@ The gem is available as open source under the terms of the [MIT License].
 [Context]: https://github.com/janko-m/shrine#context
 [image_processing]: https://github.com/janko-m/image_processing
 [ffmpeg]: https://github.com/streamio/streamio-ffmpeg
-[jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
-[Dropzone]: https://github.com/enyo/dropzone
 [FineUploader]: https://github.com/FineUploader/fine-uploader
-[
+[Dropzone]: https://github.com/enyo/dropzone
+[jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
+[Amazon S3]: https://aws.amazon.com/s3/
+[Google Cloud Storage]: https://cloud.google.com/storage/
+[Microsoft Azure Storage]: https://azure.microsoft.com/en-us/services/storage/
+[upload_endpoint]: http://shrinerb.com/rdoc/classes/Shrine/Plugins/UploadEndpoint.html
+[presign_endpoint]: http://shrinerb.com/rdoc/classes/Shrine/Plugins/PresignEndpoint.html
 [Cloudinary]: https://github.com/janko-m/shrine-cloudinary
 [Imgix]: https://github.com/janko-m/shrine-imgix
 [Uploadcare]: https://github.com/janko-m/shrine-uploadcare
data/doc/attacher.md
CHANGED
@@ -40,6 +40,14 @@ also tell it to use different temporary and permanent storage:
 ImageUploader::Attacher.new(photo, :image, cache: :other_cache, store: :other_store)
 ```
 
+Note that you can pass the `:cache` and `:store` options via `Attachment.new` too:
+
+```rb
+class Photo < Sequel::Model
+  include ImageUploader::Attachment.new(:image, cache: :other_cache, store: :other_store)
+end
+```
+
 The attacher will use the `<attachment>_data` attribute for storing information
 about the attachment.
 
data/doc/carrierwave.md
CHANGED
@@ -39,8 +39,8 @@ via [direct uploads]):
 
 ```rb
 Shrine.storages = {
-  cache: Shrine::
-  store: Shrine::
+  cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
+  store: Shrine::Storage::S3.new(prefix: "store", **s3_options),
 }
 ```
 
data/doc/creating_storages.md
CHANGED
@@ -87,11 +87,12 @@ end
 
 If the storage service supports direct uploads, and requires fetching
 additional information from the server, you can implement a `#presign` method,
-which will be used by the `
+which will be used by the `presign_endpoint` plugin. The method should return an
 object which responds to
 
 * `#url` – returns the URL to which the file should be uploaded to
-* `#fields` – returns a
+* `#fields` – returns a `Hash` of request parameters that should be used for the upload
+* `#headers` – returns a `Hash` of request headers that should be used for the upload (optional)
 
 ```rb
 class Shrine
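The `#url`/`#fields`/`#headers` contract described in this hunk can be satisfied with a plain `Struct`. A sketch assuming a hypothetical storage class (`MyStorage` and its values are made up; only the interface comes from the text):

```ruby
# A presign result only needs to respond to #url, #fields and #headers,
# so a Struct is enough for a custom storage.
PresignedRequest = Struct.new(:url, :fields, :headers)

class MyStorage
  def presign(id, **options)
    PresignedRequest.new(
      "https://uploads.example.com",     # where the client should POST the file
      { "key" => id },                   # request parameters to send with the upload
      { "X-Example-Header" => "value" }  # optional request headers
    )
  end
end

request = MyStorage.new.presign("foo")
request.url     # => "https://uploads.example.com"
request.fields  # => {"key"=>"foo"}
```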
data/doc/direct_s3.md
CHANGED
@@ -1,7 +1,8 @@
 # Direct Uploads to S3
 
-Shrine gives you the ability to upload files directly to Amazon S3
-
+Shrine gives you the ability to upload files directly to Amazon S3 (or any
+other storage service that accepts direct uploads). Uploading directly to a
+storage service is beneficial for several reasons:
 
 * Accepting uploads is resource-intensive for the server, and delegating it to
   an external service makes scaling easier.
@@ -18,7 +19,7 @@ beneficial for several use cases:
 changes the location.
 
 * If your request workers have a timeout configured or you're using Heroku,
-  uploading
+  uploading large files to S3 or any external service inside the
   request-response lifecycle might not be able to finish before the request
   times out.
 
@@ -27,7 +28,7 @@ different prefixes (or even buckets):
 
 ```rb
 # Gemfile
-gem "aws-sdk", "~> 2
+gem "aws-sdk-s3", "~> 1.2"
 ```
 ```rb
 require "shrine/storage/s3"
@@ -54,12 +55,12 @@ documentation on how to write a CORS file.
 
 http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
 
-Note that it may take some time for the CORS
-
+Note that due to DNS propagation it may take some time for update of the CORS
+settings to be applied.
 
 ## File hash
 
-After direct S3 uploads we'll need to manually construct Shrine's
+After direct S3 uploads we'll need to manually construct Shrine's JSON
 representation of an uploaded file:
 
 ```rb
@@ -67,9 +68,9 @@ representation of an uploaded file:
   "id": "349234854924394", # required
   "storage": "cache", # required
   "metadata": {
-    "size": 45461, # optional
+    "size": 45461, # optional, but recommended
     "filename": "foo.jpg", # optional
-    "mime_type": "image/jpeg"
+    "mime_type": "image/jpeg" # optional
   }
 }
 ```
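On the client this representation is built in JavaScript, but it mirrors what a few lines of Ruby would do; a sketch with the same illustrative values as above:

```ruby
require "json"

# Manually building Shrine's uploaded file data after a direct upload
# (the id and metadata values are illustrative).
uploaded_file = {
  "id"      => "349234854924394",
  "storage" => "cache",
  "metadata" => {
    "size"      => 45461,
    "filename"  => "foo.jpg",
    "mime_type" => "image/jpeg"
  }
}

hidden_field_value = JSON.generate(uploaded_file)
JSON.parse(hidden_field_value)["storage"]  # => "cache"
```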
@@ -84,47 +85,50 @@ representation of an uploaded file:
 * Single or multiple file uploads
 * Some JavaScript needed
 
-When the user selects
-server, and use this information to
-`
-in our application:
+When the user selects a file in the form, on the client-side we asynchronously
+fetch the presign information from the server, and use this information to
+upload the file to S3. The `presign_endpoint` plugin gives us this presign
+route, so we just need to mount it in our application:
 
 ```rb
-
-gem "roda"
-```
-```rb
-plugin :direct_upload
+Shrine.plugin :presign_endpoint
 ```
 ```rb
 Rails.application.routes.draw do
-  mount
+  mount Shrine.presign_endpoint(:cache) => "/presign"
 end
 ```
 
-
-
-
+The above will create a `GET /presign` route, which returns the S3 URL which
+the file should be uploaded to, along with the required POST parameters and
+request headers.
 
 ```rb
-# GET /
+# GET /presign
 {
-  "url"
-  "fields"
-    "key"
-    "policy"
-    "x-amz-credential"
-    "x-amz-algorithm"
-    "x-amz-date"
-    "x-amz-signature"
-}
+  "url": "https://my-bucket.s3-eu-west-1.amazonaws.com",
+  "fields": {
+    "key": "cache/b7d575850ba61b44c8a9ff889dfdb14d88cdc25f8dd121004c8",
+    "policy": "eyJleHBpcmF0aW9uIjoiMjAxNS0QwMToxMToyOVoiLCJjb25kaXRpb25zIjpbeyJidWNrZXQiOiJzaHJpbmUtdGVzdGluZyJ9LHsia2V5IjoiYjdkNTc1ODUwYmE2MWI0NGU3Y2M4YTliZmY4OGU5ZGZkYjE2NTQ0ZDk4OGNkYzI1ZjhkZDEyMTAwNGM4In0seyJ4LWFtei1jcmVkZW50aWFsIjoiQUtJQUlKRjU1VE1aWlk0NVVUNlEvMjAxNTEwMjQvZXUtd2VzdC0xL3MzL2F3czRfcmVxdWVzdCJ9LHsieC1hbXotYWxnb3JpdGhtIjoiQVdTNC1ITUFDLVNIQTI1NiJ9LHsieC1hbXotZGF0ZSI6IjIwMTUxMDI0VDAwMTEyOVoifV19",
+    "x-amz-credential": "AKIAIJF55TMZYT6Q/20151024/eu-west-1/s3/aws4_request",
+    "x-amz-algorithm": "AWS4-HMAC-SHA256",
+    "x-amz-date": "20151024T001129Z",
+    "x-amz-signature": "c1eb634f83f96b69bd675f535b3ff15ae184b102fcba51e4db5f4959b4ae26f4"
+  },
+  "headers": {}
 }
 ```
 
-
-
-
-
+You can now use a client-side file upload library like [FineUploader],
+[Dropzone] or [jQuery-File-Upload] to upload selected files directly to S3.
+When the user selects a file, the client can make a request to the presign
+endpoint, and use the returned request information to upload the selected file
+directly to S3.
+
+Once the file has been uploaded, you can generate a JSON representation of the
+uploaded file on the client-side, and write it to the hidden attachment field.
+The `id` field needs to be equal to the `key` presign field minus the storage
+`:prefix`.
 
 ```html
 <input type='hidden' name='photo[image]' value='{
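Deriving the `id` from the presigned `key` by stripping the storage `:prefix` takes a couple of lines; a sketch reusing the illustrative key from the presign response above:

```ruby
# The presign response returns the full object key, while Shrine's "id"
# is relative to the storage :prefix, so we strip the prefix segment.
key    = "cache/b7d575850ba61b44c8a9ff889dfdb14d88cdc25f8dd121004c8"
prefix = "cache"

id = key.sub(/\A#{Regexp.escape(prefix)}\//, "")
id  # => "b7d575850ba61b44c8a9ff889dfdb14d88cdc25f8dd121004c8"
```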
@@ -138,8 +142,9 @@ can write to the hidden attachment field:
 }'>
 ```
 
-
-
+This JSON string will now be submitted and assigned to the attachment attribute
+instead of the raw file. See the [demo app] for an example JavaScript
+implementation of multiple direct S3 uploads.
 
 ## Strategy B (static)
 
@@ -147,14 +152,17 @@ S3 uploads.
 * Only for single uploads
 * No JavaScript needed
 
-An alternative to the previous strategy is
-
-
-
-object, which has `#url` and `#fields` methods:
+An alternative to the previous strategy is to generate an S3 upload form on
+page render. The user can then select a file and submit it directly to S3. For
+generating the form we can use `Shrine::Storage::S3#presign`, which returns a
+[`Aws::S3::PresignedPost`] object with `#url` and `#fields` attributes:
 
 ```erb
-<%
+<%
+  presign = Shrine.storages[:cache].presign SecureRandom.hex,
+    success_action_redirect: new_album_url,
+    allow_any: ['utf8', 'authenticity_token']
+%>
 
 <form action="<%= presign.url %>" method="post" enctype="multipart/form-data">
   <input type="file" name="file">
@@ -165,9 +173,13 @@ object, which has `#url` and `#fields` methods:
 </form>
 ```
 
-
-
-
+Note the additional `success_action_redirect` option which tells S3 where to
+redirect to after the file has been uploaded. We also tell S3 to exclude the
+`utf8` and `authenticity_token` fields that the Rails form builder generates.
+
+Let's assume we specified the redirect URL to be a page which renders the form
+for a new record. S3 will include some information about the upload in form of
+GET parameters in the URL, out of which we only need the `key` parameter:
 
 ```erb
 <%
@@ -186,21 +198,24 @@ Shrine's uploaded file representation:
 
 ## Metadata
 
-With direct uploads any metadata has to be extracted on the client, since
-
-
+With direct uploads any metadata has to be extracted on the client-side, since
+the file upload doesn't touch the application, so the Shrine uploader doesn't
+get a chance to extract the metadata. When directly uploaded file is promoted
+to permanent storage, Shrine's default behaviour is to just copy the received
+metadata.
 
 If you want to re-extract metadata on the server before file validation, you
 can load the `restore_cached_data`. That will make Shrine open the S3 file for
-reading,
-received from the client with
+reading, pass it for metadata extraction, and then override the metadata
+received from the client with the extracted ones.
 
 ```rb
 plugin :restore_cached_data
 ```
 
 Note that if you don't need this metadata before file validation, and you would
-like to have it extracted in a background job, you can do
+like to have it extracted in a background job, you can do that with the
+following trick:
 
 ```rb
 class MyUploader < Shrine
@@ -218,8 +233,7 @@ end
 
 Since directly uploaded files will stay in your temporary storage, you will
 want to periodically delete the old ones that were already promoted. Luckily,
-Amazon provides [a built-in solution]
-for that.
+Amazon provides [a built-in solution][object lifecycle] for that.
 
 ## Eventual consistency
 
@@ -250,9 +264,10 @@ Shrine::Attacher.promote do |data|
 end
 ```
 
-[`Aws::S3::PresignedPost`]: http://docs.aws.amazon.com/
+[`Aws::S3::PresignedPost`]: http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Bucket.html#presigned_post-instance_method
 [demo app]: https://github.com/janko-m/shrine/tree/master/demo
 [Dropzone]: https://github.com/enyo/dropzone
 [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
 [FineUploader]: https://github.com/FineUploader/fine-uploader
 [Amazon S3 Data Consistency Model]: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyMode
+[object lifecycle]: http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html
data/doc/multiple_files.md
CHANGED
@@ -89,8 +89,9 @@ to add the `multiple` attribute to the file field.
 
 You can then use a generic JavaScript file upload library like
 [jQuery-File-Upload], [Dropzone] or [FineUploader] to asynchronously upload
-each the selected files to your app or an external service. See the
-`
+each of the selected files to your app or to an external service. See the
+`upload_endpoint` and `presign_endpoint` plugins, and [Direct Uploads to S3]
+guide for more details.
 
 After each upload finishes, you can generate a nested hash for the new
 associated record, and write the uploaded file JSON to the attachment field: