shrine 2.6.1 → 2.7.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 0474d268473ec2255e7710efdc424ea7abcaa129
- data.tar.gz: 75489de0c246f813fd48fd707e0f9e1355de7f80
+ metadata.gz: 2156db4347bf54e372dd3865a117b38e14638419
+ data.tar.gz: 6fc29c5c7837f3355240b85ff313579b35953a62
  SHA512:
- metadata.gz: 65de646144983c25c985aaca273c503bff2391c53f3267cae93d4d535ad398ffffb6833f6b3f6d12fb1e8d2fba04aa80c395389c351c808a36d51ea39fe797e9
- data.tar.gz: d4ebe688c1f33850f0a91df74528b3761aa0e842f9d97aa4b64ab6d9b5bfd3d6f4b1c62f3a32b488b2f2e109cfe009b90c2cc6987c248f9420df77869284fec6
+ metadata.gz: 634e5d0bec22737521cef1b2b7eab82871e3be1d13988a171a6686e3f3885d2514531ede958ca757dc9a84c526b049f9d2bb5bccb4119f4c8c090a1272ba88c7
+ data.tar.gz: bfa00d5f2ae6ca28e66cb0e26872cfb127fe00e13b12bf0eddd9965b295a5ea4b7418bf1d74f4e3deb059aa8a66d3262b120f968b369d71f675a3c5b9d265c4f
data/README.md CHANGED
@@ -74,11 +74,17 @@ uploaded file in case of validation errors and [direct uploads].
  <input name="photo[image]" type="file">
  </form>

- <!-- Rails: -->
+ <!-- ActionView::Helpers::FormHelper -->
  <%= form_for @photo do |f| %>
  <%= f.hidden_field :image, value: @photo.cached_image_data %>
  <%= f.file_field :image %>
  <% end %>
+
+ <!-- SimpleForm -->
+ <%= simple_form_for @photo do |f| %>
+ <%= f.input :image, as: :hidden, input_html: {value: @photo.cached_image_data} %>
+ <%= f.input :image, as: :file %>
+ <% end %>

  ```

  Note that the file field needs to go *after* the hidden field, so that
@@ -113,7 +119,7 @@ interface. Storages are configured directly and registered under a name in

  ```rb
  # Gemfile
- gem "aws-sdk", "~> 2.1" # for Amazon S3 storage
+ gem "aws-sdk-s3", "~> 1.2" # for Amazon S3 storage
  ```
  ```rb
  require "shrine/storage/s3"
@@ -405,7 +411,7 @@ by default Shrine's "mime_type" is **not guaranteed** to hold the actual MIME
  type of the file.

  However, if you load the `determine_mime_type` plugin, that will make Shrine
- always extract the MIME type from **file content** .
+ always extract the MIME type from **file content**.

  ```rb
  Shrine.plugin :determine_mime_type
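
  # For illustration (assuming a JPEG file and a registered :cache storage):
  # with the plugin loaded, the stored MIME type reflects the file's actual
  # content rather than its extension or the submitted Content-Type header.
  uploaded_file = Shrine.new(:cache).upload(File.open("photo.jpg", "rb"))
  uploaded_file.mime_type               #=> "image/jpeg"
  uploaded_file.metadata["mime_type"]   #=> "image/jpeg"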
@@ -680,6 +686,22 @@ class DocumentUploader < Shrine
  end
  ```

+ Validations are inherited from superclasses, but you need to call them manually
+ when defining more validations:
+
+ ```ruby
+ class ApplicationUploader < Shrine
+ Attacher.validate { validate_max_size 5.megabytes }
+ end
+
+ class ImageUploader < ApplicationUploader
+ Attacher.validate do
+ super() # empty braces are required
+ validate_mime_type_inclusion %w[image/jpeg image/jpg image/png]
+ end
+ end
+ ```
+
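
A rough usage sketch of the inherited validations above, assuming a `Photo` model that includes `ImageUploader::Attachment.new(:image)` and a file that violates both rules:

```rb
photo = Photo.new
photo.image = File.open("huge_scan.tiff", "rb") # cached to temporary storage on assignment

photo.valid?          #=> false
photo.errors[:image]  # contains the max-size error from ApplicationUploader
                      # and the MIME type error from ImageUploader
```
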
  ## Location

  Before Shrine uploads a file, it generates a random location for it. By default
@@ -721,37 +743,50 @@ uploader.upload(file, location: "some/specific/location.mp4")

  ## Direct uploads

- Shrine comes with a `direct_upload` plugin that can be used for client-side
- asynchronous uploads to your app or an external service like Amazon S3. It
- provides a [Roda] app which you can mount in your app:
+ While having files uploaded on form submit is simplest to implement, it doesn't
+ provide the best user experience, because the user doesn't know how long they
+ need to wait for the file to get uploaded.
+
+ To improve the user experience, the application can start uploading the file
+ **asynchronously** as soon as it has been selected, and provide a progress
+ bar. This way the user can estimate when the upload is going to finish, and
+ they can continue filling in other fields in the form while the file is being
+ uploaded.
+
+ Shrine comes with the `upload_endpoint` plugin, which provides a Rack endpoint
+ that accepts file uploads and forwards them to the specified storage. We want
+ to set it up to upload to *temporary* storage, because we're replacing the
+ caching step in the default synchronous workflow.

  ```rb
- # Gemfile
- gem "roda"
- ```
- ```rb
- Shrine.plugin :direct_upload
+ Shrine.plugin :upload_endpoint
  ```
  ```rb
  Rails.application.routes.draw do
- mount ImageUploader::UploadEndpoint => "/images"
+ mount ImageUploader.upload_endpoint(:cache) => "/images/upload"
  end
  ```

- The above setup will provide the following endpoints:
+ The above creates a `POST /images/upload` endpoint. You can now use a
+ client-side file upload library like [FineUploader], [Dropzone] or
+ [jQuery-File-Upload] to upload files asynchronously to the `/images/upload`
+ endpoint the moment they are selected. Once the file has been uploaded, the
+ endpoint will return JSON data of the uploaded file, which the client can then
+ write to a hidden attachment field, to be submitted instead of the raw file.

- * `POST /images/cache/upload` - for direct uploads to your app
- * `GET /images/cache/presign` - for direct uploads to external service (e.g. Amazon S3)
+ Many popular storage services can accept file uploads directly from the client
+ ([Amazon S3], [Google Cloud Storage], [Microsoft Azure Storage] etc.), which
+ means you can avoid uploading files through your app. If you're using one of
+ these storage services, you can use the `presign_endpoint` plugin to generate
+ the URL, fields, and headers that can be used to upload files directly to the
+ storage service. The only difference from the `upload_endpoint` workflow is
+ that the client has the extra step of fetching the request information before
+ uploading the file.

- Now when the user selects a file, the client can immediately start uploading
- the file asynchronously using one of these endpoints. The JSON data of the
- uploaded file can then be written to the hidden attachment field, and submitted
- instead of the file. For JavaScript you can use generic file upload libraries
- like [jQuery-File-Upload], [Dropzone] or [FineUploader].
-
- See the [direct_upload] plugin documentation and [Direct Uploads to S3][direct uploads]
- guide for more details, as well as the [Roda][roda_demo] and
- [Rails][rails_demo] demo apps which implement multiple uploads directly to S3.
+ See the [upload_endpoint] and [presign_endpoint] plugin documentation and the
+ [Direct Uploads to S3][direct uploads] guide for more details, as well as the
+ [Roda][roda_demo] and [Rails][rails_demo] demo apps which implement multiple
+ uploads directly to S3.
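
For concreteness, a brief sketch of the contract described above; the response values are illustrative, and `photo` is assumed to be a record with the attachment module included, assigned inside a controller action:

```rb
# A successful POST /images/upload uploads the file to the :cache storage and
# responds with the uploaded file's JSON data, roughly:
#
#   {"id":"43kewit94.jpg","storage":"cache","metadata":{...}}
#
# The client writes that JSON string into the hidden attachment field, so on
# form submit the attachment attribute receives cached file data instead of a
# raw file:
photo.image = params[:photo][:image] # the JSON string written by the client
```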

  ## Backgrounding

@@ -917,10 +952,14 @@ The gem is available as open source under the terms of the [MIT License].
  [Context]: https://github.com/janko-m/shrine#context
  [image_processing]: https://github.com/janko-m/image_processing
  [ffmpeg]: https://github.com/streamio/streamio-ffmpeg
- [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
- [Dropzone]: https://github.com/enyo/dropzone
  [FineUploader]: https://github.com/FineUploader/fine-uploader
- [direct_upload]: http://shrinerb.com/rdoc/classes/Shrine/Plugins/DirectUpload.html
+ [Dropzone]: https://github.com/enyo/dropzone
+ [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
+ [Amazon S3]: https://aws.amazon.com/s3/
+ [Google Cloud Storage]: https://cloud.google.com/storage/
+ [Microsoft Azure Storage]: https://azure.microsoft.com/en-us/services/storage/
+ [upload_endpoint]: http://shrinerb.com/rdoc/classes/Shrine/Plugins/UploadEndpoint.html
+ [presign_endpoint]: http://shrinerb.com/rdoc/classes/Shrine/Plugins/PresignEndpoint.html
  [Cloudinary]: https://github.com/janko-m/shrine-cloudinary
  [Imgix]: https://github.com/janko-m/shrine-imgix
  [Uploadcare]: https://github.com/janko-m/shrine-uploadcare
@@ -40,6 +40,14 @@ also tell it to use different temporary and permanent storage:
  ImageUploader::Attacher.new(photo, :image, cache: :other_cache, store: :other_store)
  ```

+ Note that you can pass the `:cache` and `:store` options via `Attachment.new` too:
+
+ ```rb
+ class Photo < Sequel::Model
+ include ImageUploader::Attachment.new(:image, cache: :other_cache, store: :other_store)
+ end
+ ```
+
  The attacher will use the `<attachment>_data` attribute for storing information
  about the attachment.
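
The `:other_cache` and `:other_store` keys above are just example names; like any storage they would need to be registered in `Shrine.storages`, e.g. (a sketch using the filesystem storage):

```rb
require "shrine/storage/file_system"

Shrine.storages = {
  cache:       Shrine::Storage::FileSystem.new("public", prefix: "uploads/cache"),
  store:       Shrine::Storage::FileSystem.new("public", prefix: "uploads/store"),
  other_cache: Shrine::Storage::FileSystem.new("public", prefix: "other/cache"),
  other_store: Shrine::Storage::FileSystem.new("public", prefix: "other/store"),
}
```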
 
@@ -39,8 +39,8 @@ via [direct uploads]):

  ```rb
  Shrine.storages = {
- cache: Shrine::Storages::S3.new(prefix: "cache", **s3_options),
- store: Shrine::Storages::S3.new(prefix: "store", **s3_options),
+ cache: Shrine::Storage::S3.new(prefix: "cache", **s3_options),
+ store: Shrine::Storage::S3.new(prefix: "store", **s3_options),
  }
  ```
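
For reference, the `s3_options` hash above stands for the S3 storage configuration; with the `aws-sdk-s3` gem it would look roughly like this (bucket name and credential sources are placeholders):

```rb
s3_options = {
  bucket:            "my-bucket", # required
  region:            "eu-west-1",
  access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
  secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
}
```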
 
@@ -87,11 +87,12 @@ end

  If the storage service supports direct uploads, and requires fetching
  additional information from the server, you can implement a `#presign` method,
- which will be used by the `direct_upload` plugin. The method should return an
+ which will be used by the `presign_endpoint` plugin. The method should return an
  object which responds to

  * `#url` – returns the URL to which the file should be uploaded to
- * `#fields` – returns a hash of request parameters for the upload
+ * `#fields` – returns a `Hash` of request parameters that should be used for the upload
+ * `#headers` – returns a `Hash` of request headers that should be used for the upload (optional)

  ```rb
  class Shrine
@@ -1,7 +1,8 @@
  # Direct Uploads to S3

- Shrine gives you the ability to upload files directly to Amazon S3, which is
- beneficial for several use cases:
+ Shrine gives you the ability to upload files directly to Amazon S3 (or any
+ other storage service that accepts direct uploads). Uploading directly to a
+ storage service is beneficial for several reasons:

  * Accepting uploads is resource-intensive for the server, and delegating it to
  an external service makes scaling easier.
@@ -18,7 +19,7 @@ beneficial for several use cases:
  changes the location.

  * If your request workers have a timeout configured or you're using Heroku,
- uploading a large files to S3 or any external service inside the
+ uploading large files to S3 or any external service inside the
  request-response lifecycle might not be able to finish before the request
  times out.

@@ -27,7 +28,7 @@ different prefixes (or even buckets):

  ```rb
  # Gemfile
- gem "aws-sdk", "~> 2.1"
+ gem "aws-sdk-s3", "~> 1.2"
  ```
  ```rb
  require "shrine/storage/s3"
@@ -54,12 +55,12 @@ documentation on how to write a CORS file.

  http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html

- Note that it may take some time for the CORS settings to be applied, due to
- DNS propagation.
+ Note that due to DNS propagation it may take some time for updates to the
+ CORS settings to be applied.

  ## File hash

- After direct S3 uploads we'll need to manually construct Shrine's
+ After direct S3 uploads we'll need to manually construct Shrine's JSON
  representation of an uploaded file:

  ```rb
@@ -67,9 +68,9 @@ representation of an uploaded file:
  "id": "349234854924394", # required
  "storage": "cache", # required
  "metadata": {
- "size": 45461, # optional
+ "size": 45461, # optional, but recommended
  "filename": "foo.jpg", # optional
- "mime_type": "image/jpeg", # optional
+ "mime_type": "image/jpeg" # optional
  }
  }
  ```
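
As a sketch, the same representation could be built server-side in Ruby and assigned to the attachment (assuming a `photo` record with the attachment module included; as noted later in this guide, the `id` is the S3 object key with the storage `:prefix` stripped):

```rb
require "json"

key = "cache/349234854924394" # S3 object key reported after the direct upload

photo.image = {
  id:       key.sub(/\Acache\//, ""), # strip the storage :prefix
  storage:  "cache",
  metadata: { size: 45461, filename: "foo.jpg", mime_type: "image/jpeg" },
}.to_json
```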
@@ -84,47 +85,50 @@ representation of an uploaded file:
  * Single or multiple file uploads
  * Some JavaScript needed

- When the user selects the file, we dynamically fetch the presign from the
- server, and use this information to start uploading the file to S3. The
- `direct_upload` plugin gives us this presign route, so we just need to mount it
- in our application:
+ When the user selects a file in the form, on the client-side we asynchronously
+ fetch the presign information from the server, and use this information to
+ upload the file to S3. The `presign_endpoint` plugin gives us this presign
+ route, so we just need to mount it in our application:

  ```rb
- # Gemfile
- gem "roda"
- ```
- ```rb
- plugin :direct_upload
+ Shrine.plugin :presign_endpoint
  ```
  ```rb
  Rails.application.routes.draw do
- mount ImageUploader::UploadEndpoint => "/images"
+ mount Shrine.presign_endpoint(:cache) => "/presign"
  end
  ```

- This gives your application a `GET /images/cache/presign` route, which
- returns the S3 URL which the file should be uploaded to, along with the
- necessary request parameters:
+ The above will create a `GET /presign` route, which returns the S3 URL which
+ the file should be uploaded to, along with the required POST parameters and
+ request headers.

  ```rb
- # GET /images/cache/presign
+ # GET /presign
  {
- "url" => "https://my-bucket.s3-eu-west-1.amazonaws.com",
- "fields" => {
- "key" => "cache/b7d575850ba61b44c8a9ff889dfdb14d88cdc25f8dd121004c8",
- "policy" => "eyJleHBpcmF0aW9uIjoiMjAxNS0QwMToxMToyOVoiLCJjb25kaXRpb25zIjpbeyJidWNrZXQiOiJzaHJpbmUtdGVzdGluZyJ9LHsia2V5IjoiYjdkNTc1ODUwYmE2MWI0NGU3Y2M4YTliZmY4OGU5ZGZkYjE2NTQ0ZDk4OGNkYzI1ZjhkZDEyMTAwNGM4In0seyJ4LWFtei1jcmVkZW50aWFsIjoiQUtJQUlKRjU1VE1aWlk0NVVUNlEvMjAxNTEwMjQvZXUtd2VzdC0xL3MzL2F3czRfcmVxdWVzdCJ9LHsieC1hbXotYWxnb3JpdGhtIjoiQVdTNC1ITUFDLVNIQTI1NiJ9LHsieC1hbXotZGF0ZSI6IjIwMTUxMDI0VDAwMTEyOVoifV19",
- "x-amz-credential" => "AKIAIJF55TMZYT6Q/20151024/eu-west-1/s3/aws4_request",
- "x-amz-algorithm" => "AWS4-HMAC-SHA256",
- "x-amz-date" => "20151024T001129Z",
- "x-amz-signature" => "c1eb634f83f96b69bd675f535b3ff15ae184b102fcba51e4db5f4959b4ae26f4"
- }
+ "url": "https://my-bucket.s3-eu-west-1.amazonaws.com",
+ "fields": {
+ "key": "cache/b7d575850ba61b44c8a9ff889dfdb14d88cdc25f8dd121004c8",
+ "policy": "eyJleHBpcmF0aW9uIjoiMjAxNS0QwMToxMToyOVoiLCJjb25kaXRpb25zIjpbeyJidWNrZXQiOiJzaHJpbmUtdGVzdGluZyJ9LHsia2V5IjoiYjdkNTc1ODUwYmE2MWI0NGU3Y2M4YTliZmY4OGU5ZGZkYjE2NTQ0ZDk4OGNkYzI1ZjhkZDEyMTAwNGM4In0seyJ4LWFtei1jcmVkZW50aWFsIjoiQUtJQUlKRjU1VE1aWlk0NVVUNlEvMjAxNTEwMjQvZXUtd2VzdC0xL3MzL2F3czRfcmVxdWVzdCJ9LHsieC1hbXotYWxnb3JpdGhtIjoiQVdTNC1ITUFDLVNIQTI1NiJ9LHsieC1hbXotZGF0ZSI6IjIwMTUxMDI0VDAwMTEyOVoifV19",
+ "x-amz-credential": "AKIAIJF55TMZYT6Q/20151024/eu-west-1/s3/aws4_request",
+ "x-amz-algorithm": "AWS4-HMAC-SHA256",
+ "x-amz-date": "20151024T001129Z",
+ "x-amz-signature": "c1eb634f83f96b69bd675f535b3ff15ae184b102fcba51e4db5f4959b4ae26f4"
+ },
+ "headers": {}
  }
  ```

- For uploading to S3 you'll probably want to use a JavaScript file upload
- library like [jQuery-File-Upload], [Dropzone] or [FineUploader]. After the
- upload you should create a JSON representation of the uploaded file, which you
- can write to the hidden attachment field:
+ You can now use a client-side file upload library like [FineUploader],
+ [Dropzone] or [jQuery-File-Upload] to upload selected files directly to S3.
+ When the user selects a file, the client can make a request to the presign
+ endpoint, and use the returned request information to upload the selected file
+ directly to S3.
+
+ Once the file has been uploaded, you can generate a JSON representation of the
+ uploaded file on the client-side, and write it to the hidden attachment field.
+ The `id` field needs to be equal to the `key` presign field minus the storage
+ `:prefix`.

  ```html
  <input type='hidden' name='photo[image]' value='{
@@ -138,8 +142,9 @@ can write to the hidden attachment field:
  }'>
  ```

- See the [demo app] for an example JavaScript implementation of multiple direct
- S3 uploads.
+ This JSON string will now be submitted and assigned to the attachment attribute
+ instead of the raw file. See the [demo app] for an example JavaScript
+ implementation of multiple direct S3 uploads.

  ## Strategy B (static)

@@ -147,14 +152,17 @@ S3 uploads.
  * Only for single uploads
  * No JavaScript needed

- An alternative to the previous strategy is generating a file upload form
- immediately when the page is rendered, and then file upload can be either
- asynchronous, or synchronous with redirection. For generating the form we can
- use `Shrine::Storage::S3#presign`, which returns a [`Aws::S3::PresignedPost`]
- object, which has `#url` and `#fields` methods:
+ An alternative to the previous strategy is to generate an S3 upload form on
+ page render. The user can then select a file and submit it directly to S3. For
+ generating the form we can use `Shrine::Storage::S3#presign`, which returns a
+ [`Aws::S3::PresignedPost`] object with `#url` and `#fields` attributes:

  ```erb
- <% presign = Shrine.storages[:cache].presign(SecureRandom.hex, success_action_redirect: new_album_url) %>
+ <%
+ presign = Shrine.storages[:cache].presign SecureRandom.hex,
+ success_action_redirect: new_album_url,
+ allow_any: ['utf8', 'authenticity_token']
+ %>

  <form action="<%= presign.url %>" method="post" enctype="multipart/form-data">
  <input type="file" name="file">
@@ -165,9 +173,13 @@ object, which has `#url` and `#fields` methods:
  </form>
  ```

- If you're doing synchronous upload with redirection, the redirect URL will
- include the object key in the query parameters, which you can use to generate
- Shrine's uploaded file representation:
+ Note the additional `success_action_redirect` option which tells S3 where to
+ redirect to after the file has been uploaded. We also tell S3 to exclude the
+ `utf8` and `authenticity_token` fields that the Rails form builder generates.
+
+ Let's assume we specified the redirect URL to be a page which renders the form
+ for a new record. S3 will include some information about the upload in the form
+ of GET parameters in the URL, out of which we only need the `key` parameter:

  ```erb
  <%
@@ -186,21 +198,24 @@ Shrine's uploaded file representation:

  ## Metadata

- With direct uploads any metadata has to be extracted on the client, since
- caching the file doesn't touch your application. When the cached file is stored,
- Shrine's default behaviour is to simply copy over cached file's metadata.
+ With direct uploads any metadata has to be extracted on the client-side, since
+ the file upload doesn't touch the application, so the Shrine uploader doesn't
+ get a chance to extract the metadata. When a directly uploaded file is promoted
+ to permanent storage, Shrine's default behaviour is to just copy the received
+ metadata.

  If you want to re-extract metadata on the server before file validation, you
  can load the `restore_cached_data`. That will make Shrine open the S3 file for
- reading, give it for metadata extraction, and then override the metadata
- received from the client with one extracted by Shrine.
+ reading, pass it for metadata extraction, and then override the metadata
+ received from the client with the extracted ones.

  ```rb
  plugin :restore_cached_data
  ```

  Note that if you don't need this metadata before file validation, and you would
- like to have it extracted in a background job, you can do the following trick:
+ like to have it extracted in a background job, you can do that with the
+ following trick:

  ```rb
  class MyUploader < Shrine
@@ -218,8 +233,7 @@ end

  Since directly uploaded files will stay in your temporary storage, you will
  want to periodically delete the old ones that were already promoted. Luckily,
- Amazon provides [a built-in solution](http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html)
- for that.
+ Amazon provides [a built-in solution][object lifecycle] for that.

  ## Eventual consistency

@@ -250,9 +264,10 @@ Shrine::Attacher.promote do |data|
  end
  ```

- [`Aws::S3::PresignedPost`]: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Bucket.html#presigned_post-instance_method
+ [`Aws::S3::PresignedPost`]: http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Bucket.html#presigned_post-instance_method
  [demo app]: https://github.com/janko-m/shrine/tree/master/demo
  [Dropzone]: https://github.com/enyo/dropzone
  [jQuery-File-Upload]: https://github.com/blueimp/jQuery-File-Upload
  [FineUploader]: https://github.com/FineUploader/fine-uploader
  [Amazon S3 Data Consistency Model]: http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyMode
+ [object lifecycle]: http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html
@@ -89,8 +89,9 @@ to add the `multiple` attribute to the file field.

  You can then use a generic JavaScript file upload library like
  [jQuery-File-Upload], [Dropzone] or [FineUploader] to asynchronously upload
- each the selected files to your app or an external service. See the
- `direct_upload` plugin, and [Direct Uploads to S3] guide for more details.
+ each of the selected files to your app or to an external service. See the
+ `upload_endpoint` and `presign_endpoint` plugins, and [Direct Uploads to S3]
+ guide for more details.

  After each upload finishes, you can generate a nested hash for the new
  associated record, and write the uploaded file JSON to the attachment field: