tus-server 2.2.1 → 2.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 316f557990315ae072a266cb92bd5acdf4a5e930f734fc7886cb8c2cc2ec2eb3
- data.tar.gz: a98df5216aa495f2f18bad0f61b2ed7797f714139991f700fdb403b94abd0271
+ metadata.gz: c07ca8df0a0b881580058e119e27a4cf63efbd0854018d00d8f3cd3f7a093b04
+ data.tar.gz: ea0da9293bee4cee32f77ce5383adc389d34d745a025a01d570ff971b21f2a0d
  SHA512:
- metadata.gz: '02662191fa1b2e8df42bbba1a5f32d1b70ae3623cc89736ad4bbaa4c46cbc97364c0c4edd3c5fdfc348611bbca76132e7c71d6c785e953aa6f556f986755680b'
- data.tar.gz: 6ac96ebc6710e231dabaa66faf8fced5fbf80faceb12f4992f3afa2235d4908f40649f00e794e0923a2311f0ee91ed2f4df0e7adf8c19321f1554f451010d2f8
+ metadata.gz: 1284b1b2af5e29ccbd5ae20658da75f0ad8de5d11c2198ed98704f120aa8ebcbe6f9a9815d728d518dd12ea16046725733055a3868b8a2e0685985fedca4fa4f
+ data.tar.gz: 0a9779bc5f55e685329a7cc76ed071ecf4e73b574609da3db298303697da898cd32d4a7da3e41f4e7ef59b273abfd69c79f03ec9aecf94d754ef6ccbbcdb08cf
data/CHANGELOG.md CHANGED
@@ -1,120 +1,152 @@
+ ## 2.3.0 (2019-05-14)
+
+ * Allow uploading files larger than 50 GB to S3 storage by scaling the part size according to upload length (@janko)
+
+ * Allow S3 limits to be overridden via the `:limits` option (@janko)
+
+ * Reject uploads larger than 5 TB in `S3#create_file` (@janko)
+
+ * Remove superfluous `#head_object` operation from `S3#concatenate` (@janko)
+
+ * Validate that partial uploads are finished on the concatenation request (@janko)
+
+ * Unparallelize validating partial uploads for concatenation (@janko)
+
+ * Move total length calculation for concatenation into `Tus::Server` (@janko)
+
+ * Allow `Upload-Concat` and `Upload-Defer-Length` headers in CORS headers (@janko)
+
+ * Always override `:content_disposition` option with the value from `:upload_options` in `S3#create_file` (@janko)
+
+ * Apply `name`/`type` metadata to `:content_type`/`:content_disposition` options in `S3#create_file` (@janko)
+
+ * Use `content_disposition` gem for `:content_disposition` option in `S3#create_file` (@janko)
+
+ * Remove checks whether multipart upload was successfully aborted in S3 storage (@janko)
+
+ * Don't return any `Content-Type` when type is not specified in metadata (@janko)
+
+ * Add `ETag` header to download endpoint to prevent `Rack::ETag` buffering file content (@janko)
+
+ * Take `:prefix` into account in `Tus::Storage::S3#expire_files` (@janko)
+
  ## 2.2.1 (2018-12-19)

- * Use `content_disposition` gem to generate `Content-Disposition` in download endpoint (@janko-m)
+ * Use `content_disposition` gem to generate `Content-Disposition` in download endpoint (@janko)

  ## 2.2.0 (2018-12-02)

- * Add `before_create`, `after_create`, `after_finish`, and `after_terminate` hooks (@janko-m)
+ * Add `before_create`, `after_create`, `after_finish`, and `after_terminate` hooks (@janko)

- * Rename `Tus::Info#concatenation?` to `Tus::Info#final?` (@janko-m)
+ * Rename `Tus::Info#concatenation?` to `Tus::Info#final?` (@janko)

- * Use `Storage#concurrency` for parallelized retrieval of partial uploads in `Upload-Concat` validation (@janko-m)
+ * Use `Storage#concurrency` for parallelized retrieval of partial uploads in `Upload-Concat` validation (@janko)

- * Replace `:thread_count` with `:concurrency` in S3 storage (@janko-m)
+ * Replace `:thread_count` with `:concurrency` in S3 storage (@janko)

- * Validate that sum of partial uploads doesn't exceed `Tus-Max-Size` on concatenation (@janko-m)
+ * Validate that sum of partial uploads doesn't exceed `Tus-Max-Size` on concatenation (@janko)

- * Drop MRI 2.2 support (@janko-m)
+ * Drop MRI 2.2 support (@janko)

- * Accept absolute URLs of partial uploads when creating a final upload (@janko-m)
+ * Accept absolute URLs of partial uploads when creating a final upload (@janko)

  ## 2.1.2 (2018-10-21)

- * Make tus-ruby-server fully work with non-rewindable Rack input (@janko-m)
+ * Make tus-ruby-server fully work with non-rewindable Rack input (@janko)

  ## 2.1.1 (2018-05-26)

- * Rename `:download_url` option to `:redirect_download` (@janko-m)
+ * Rename `:download_url` option to `:redirect_download` (@janko)

  ## 2.1.0 (2018-05-15)

- * Add `:download_url` server option for redirecting to a download URL (@janko-m)
+ * Add `:download_url` server option for redirecting to a download URL (@janko)

- * Allow application servers to serve files stored on disk via the `Rack::Sendfile` middleware (@janko-m)
+ * Allow application servers to serve files stored on disk via the `Rack::Sendfile` middleware (@janko)

- * Reject `Upload-Metadata` which contains key-value pairs separated by spaces (@janko-m)
+ * Reject `Upload-Metadata` which contains key-value pairs separated by spaces (@janko)

- * Don't overwite info file if it already exists in `Tus::Storage::FileSystem` (@janko-m)
+ * Don't overwite info file if it already exists in `Tus::Storage::FileSystem` (@janko)

  ## 2.0.2 (2017-12-24)

- * Handle `name` and `type` metadata for Uppy compatibility (@janko-m)
+ * Handle `name` and `type` metadata for Uppy compatibility (@janko)

  ## 2.0.1 (2017-11-13)

- * Add back support for Roda 2.x (@janko-m)
+ * Add back support for Roda 2.x (@janko)

  ## 2.0.0 (2017-11-13)

- * Upgrade to Roda 3 (@janko-m)
+ * Upgrade to Roda 3 (@janko)

- * Remove deprecated support for aws-sdk 2.x in `Tus::Storage::S3` (@janko-m)
+ * Remove deprecated support for aws-sdk 2.x in `Tus::Storage::S3` (@janko)

- * Drop official support for MRI 2.1 (@janko-m)
+ * Drop official support for MRI 2.1 (@janko)

- * Add generic `Tus::Response` class that storages can use (@janko-m)
+ * Add generic `Tus::Response` class that storages can use (@janko)

- * Remove `Tus::Response#length` (@janko-m)
+ * Remove `Tus::Response#length` (@janko)

- * Remove deprecated Goliath integration (@janko-m)
+ * Remove deprecated Goliath integration (@janko)

- * Return `400 Bad Request` instead of `404 Not Found` when some partial uploads are missing in a concatenation request (@janko-m)
+ * Return `400 Bad Request` instead of `404 Not Found` when some partial uploads are missing in a concatenation request (@janko)

- * Use Rack directly instead of Roda's `streaming` plugin for downloding (@janko-m)
+ * Use Rack directly instead of Roda's `streaming` plugin for downloding (@janko)

  ## 1.2.1 (2017-11-05)

- * Improve communication when handling `aws-sdk 2.x` fallback in `Tus::Storage::S3` (@janko-m)
+ * Improve communication when handling `aws-sdk 2.x` fallback in `Tus::Storage::S3` (@janko)

  ## 1.2.0 (2017-09-18)

- * Deprecate `aws-sdk` 2.x in favour of the new `aws-sdk-s3` gem (@janko-m)
+ * Deprecate `aws-sdk` 2.x in favour of the new `aws-sdk-s3` gem (@janko)

  ## 1.1.3 (2017-09-17)

- * Return `Accept-Ranges: bytes` response header in download endpoint (@janko-m)
+ * Return `Accept-Ranges: bytes` response header in download endpoint (@janko)

  ## 1.1.2 (2017-09-12)

- * Add support for the new `aws-sdk-s3` gem (@janko-m)
+ * Add support for the new `aws-sdk-s3` gem (@janko)

  ## 1.1.1 (2017-07-23)

- * Restore backwards compatibility with MRI 2.1 and MRI 2.2 that was broken in previous release (@janko-m)
+ * Restore backwards compatibility with MRI 2.1 and MRI 2.2 that was broken in previous release (@janko)

  ## 1.1.0 (2017-07-23)

- * Ignore retryable networking errors in `Tus::Storage::S3#patch_file` for resiliency (@janko-m)
+ * Ignore retryable networking errors in `Tus::Storage::S3#patch_file` for resiliency (@janko)

- * Deprecate `Tus::Server::Goliath` in favour of [goliath-rack_proxy](https://github.com/janko-m/goliath-rack_proxy) (@janko-m)
+ * Deprecate `Tus::Server::Goliath` in favour of [goliath-rack_proxy](https://github.com/janko/goliath-rack_proxy) (@janko)

- * Reduce string allocations in MRI 2.3+ with `frozen-string-literal: true` magic comments (@janko-m)
+ * Reduce string allocations in MRI 2.3+ with `frozen-string-literal: true` magic comments (@janko)

  ## 1.0.0 (2017-07-17)

- * Add Goliath integration (@janko-m)
+ * Add Goliath integration (@janko)

- * [BREAKING] Save data in `"#{uid}"` instead of `"#{uid}.file"` in `Tus::Storage::Filesystem` (@janko-m)
+ * [BREAKING] Save data in `"#{uid}"` instead of `"#{uid}.file"` in `Tus::Storage::Filesystem` (@janko)

- * Modify S3 storage to cache chunks into memory instead of disk, which reduces disk IO (@janko-m)
+ * Modify S3 storage to cache chunks into memory instead of disk, which reduces disk IO (@janko)

- * [BREAKING] Require each storage to return the number of bytes uploaded in `#patch_file` (@janko-m)
+ * [BREAKING] Require each storage to return the number of bytes uploaded in `#patch_file` (@janko)

- * Make S3 storage upload all received data from `tus-js-client` that doesn't have max chunk size configured (@janko-m)
+ * Make S3 storage upload all received data from `tus-js-client` that doesn't have max chunk size configured (@janko)

- * Verify that all partial uploads have `Upload-Concat: partial` before concatenation (@janko-m)
+ * Verify that all partial uploads have `Upload-Concat: partial` before concatenation (@janko)

- * Include CORS and tus response headers in 404 responses (@janko-m)
+ * Include CORS and tus response headers in 404 responses (@janko)

- * Improve streaming on dynamic Rack inputs such as `Unicorn::TeeInput` for S3 and Gridfs storage (@janko-m)
+ * Improve streaming on dynamic Rack inputs such as `Unicorn::TeeInput` for S3 and Gridfs storage (@janko)

- * Terminate HTTP connection to S3 when response is closed (@janko-m)
+ * Terminate HTTP connection to S3 when response is closed (@janko)

- * Allow `Transfer-Encoding: chunked` to be used, meaning `Content-Length` can be blank (@janko-m)
+ * Allow `Transfer-Encoding: chunked` to be used, meaning `Content-Length` can be blank (@janko)

- * Remove newlines from the base64-encoded CRC32 signature (@janko-m)
+ * Remove newlines from the base64-encoded CRC32 signature (@janko)

- * Lazily require `digest`, `zlib`, and `base64` standard libraries (@janko-m)
+ * Lazily require `digest`, `zlib`, and `base64` standard libraries (@janko)

  ## 0.10.2 (2017-04-19)

@@ -122,94 +154,94 @@

  ## 0.10.1 (2017-04-13)

- * Fix download endpoint returning incorrect response body in some cases in development (@janko-m)
+ * Fix download endpoint returning incorrect response body in some cases in development (@janko)

- * Remove `concatenation-unfinished` from list of supported extensions (@janko-m)
+ * Remove `concatenation-unfinished` from list of supported extensions (@janko)

  ## 0.10.0 (2017-03-27)

- * Fix invalid `Content-Disposition` header in GET requests to due mutation of `Tus::Server.opts[:disposition]` (@janko-m)
+ * Fix invalid `Content-Disposition` header in GET requests to due mutation of `Tus::Server.opts[:disposition]` (@janko)

- * Make `Response` object from `Tus::Server::S3` also respond to `#close` (@janko-m)
+ * Make `Response` object from `Tus::Server::S3` also respond to `#close` (@janko)

- * Don't return `Content-Type` header when there is no content returned (@janko-m)
+ * Don't return `Content-Type` header when there is no content returned (@janko)

- * Return `Content-Type: text/plain` when returning errors (@janko-m)
+ * Return `Content-Type: text/plain` when returning errors (@janko)

- * Return `Content-Type: application/octet-stream` by default in the GET endpoint (@janko-m)
+ * Return `Content-Type: application/octet-stream` by default in the GET endpoint (@janko)

- * Make UNIX permissions configurable via `:permissions` and `:directory_permissions` in `Tus::Storage::Filesystem` (@janko-m)
+ * Make UNIX permissions configurable via `:permissions` and `:directory_permissions` in `Tus::Storage::Filesystem` (@janko)

- * Apply UNIX permissions `0644` for files and `0777` for directories in `Tus::Storage::Filesystem` (@janko-m)
+ * Apply UNIX permissions `0644` for files and `0777` for directories in `Tus::Storage::Filesystem` (@janko)

- * Fix `creation-defer-length` feature not working with unlimited upload size (@janko-m)
+ * Fix `creation-defer-length` feature not working with unlimited upload size (@janko)

- * Make the filesize of accepted uploads unlimited by default (@janko-m)
+ * Make the filesize of accepted uploads unlimited by default (@janko)

- * Modify tus server to call `Storage#finalize_file` when the last chunk was uploaded (@janko-m)
+ * Modify tus server to call `Storage#finalize_file` when the last chunk was uploaded (@janko)

- * Don't require length of uploaded chunks to be a multiple of `:chunkSize` in `Tus::Storage::Gridfs` (@janko-m)
+ * Don't require length of uploaded chunks to be a multiple of `:chunkSize` in `Tus::Storage::Gridfs` (@janko)

- * Don't infer `:chunkSize` from first uploaded chunk in `Tus::Storage::Gridfs` (@janko-m)
+ * Don't infer `:chunkSize` from first uploaded chunk in `Tus::Storage::Gridfs` (@janko)

- * Add `#length` to `Response` objects returned from `Storage#get_file` (@janko-m)
+ * Add `#length` to `Response` objects returned from `Storage#get_file` (@janko)

  ## 0.9.1 (2017-03-24)

- * Fix `Tus::Storage::S3` not properly supporting the concatenation feature (@janko-m)
+ * Fix `Tus::Storage::S3` not properly supporting the concatenation feature (@janko)

  ## 0.9.0 (2017-03-24)

- * Add Amazon S3 storage under `Tus::Storage::S3` (@janko-m)
+ * Add Amazon S3 storage under `Tus::Storage::S3` (@janko)

- * Make the checksum feature actually work by generating the checksum correctly (@janko-m)
+ * Make the checksum feature actually work by generating the checksum correctly (@janko)

- * Make `Content-Disposition` header on the GET endpoint configurable (@janko-m)
+ * Make `Content-Disposition` header on the GET endpoint configurable (@janko)

- * Change `Content-Disposition` header on the GET endpoint from "attachment" to "inline" (@janko-m)
+ * Change `Content-Disposition` header on the GET endpoint from "attachment" to "inline" (@janko)

- * Delegate concatenation logic to individual storages, allowing the storages to implement it much more efficiently (@janko-m)
+ * Delegate concatenation logic to individual storages, allowing the storages to implement it much more efficiently (@janko)

- * Allow storages to save additional information in the info hash (@janko-m)
+ * Allow storages to save additional information in the info hash (@janko)

- * Don't automatically delete expired files, instead require the developer to call `Storage#expire_files` in a recurring task (@janko-m)
+ * Don't automatically delete expired files, instead require the developer to call `Storage#expire_files` in a recurring task (@janko)

- * Delegate expiration logic to the individual storages, allowing the storages to implement it much more efficiently (@janko-m)
+ * Delegate expiration logic to the individual storages, allowing the storages to implement it much more efficiently (@janko)

- * Modify storages to raise `Tus::NotFound` when file wasn't found (@janko-m)
+ * Modify storages to raise `Tus::NotFound` when file wasn't found (@janko)

- * Add `Tus::Error` which storages can use (@janko-m)
+ * Add `Tus::Error` which storages can use (@janko)

- * In `Tus::Storage::Gridfs` require that each uploaded chunk except the last one can be distributed into even Mongo chunks (@janko-m)
+ * In `Tus::Storage::Gridfs` require that each uploaded chunk except the last one can be distributed into even Mongo chunks (@janko)

- * Return `403 Forbidden` in the GET endpoint when attempting to download an unfinished upload (@janko-m)
+ * Return `403 Forbidden` in the GET endpoint when attempting to download an unfinished upload (@janko)

- * Allow client to send `Upload-Length` on any PATCH request when `Upload-Defer-Length` is used (@janko-m)
+ * Allow client to send `Upload-Length` on any PATCH request when `Upload-Defer-Length` is used (@janko)

- * Support `Range` requests in the GET endpoint (@janko-m)
+ * Support `Range` requests in the GET endpoint (@janko)

- * Stream file content in the GET endpoint directly from the storage (@janko-m)
+ * Stream file content in the GET endpoint directly from the storage (@janko)

- * Update `:length`, `:uploadDate` and `:contentType` Mongo fields on each PATCH request (@janko-m)
+ * Update `:length`, `:uploadDate` and `:contentType` Mongo fields on each PATCH request (@janko)

- * Insert all sub-chunks in a single Mongo operation in `Tus::Storage::Gridfs` (@janko-m)
+ * Insert all sub-chunks in a single Mongo operation in `Tus::Storage::Gridfs` (@janko)

- * Infer Mongo chunk size from the size of the first uploaded chunk (@janko-m)
+ * Infer Mongo chunk size from the size of the first uploaded chunk (@janko)

- * Add `:chunk_size` option to `Tus::Storage::Gridfs` (@janko-m)
+ * Add `:chunk_size` option to `Tus::Storage::Gridfs` (@janko)

- * Avoid reading the whole request body into memory by doing streaming uploads (@janko-m)
+ * Avoid reading the whole request body into memory by doing streaming uploads (@janko)

  ## 0.2.0 (2016-11-23)

- * Refresh `Upload-Expires` for the file after each PATCH request (@janko-m)
+ * Refresh `Upload-Expires` for the file after each PATCH request (@janko)

  ## 0.1.1 (2016-11-21)

- * Support Rack 1.x in addition to Rack 2.x (@janko-m)
+ * Support Rack 1.x in addition to Rack 2.x (@janko)

- * Don't return 404 when deleting a non-existing file (@janko-m)
+ * Don't return 404 when deleting a non-existing file (@janko)

- * Return 204 for OPTIONS requests even when the file is missing (@janko-m)
+ * Return 204 for OPTIONS requests even when the file is missing (@janko)

- * Make sure that none of the "empty status codes" return content (@janko-m)
+ * Make sure that none of the "empty status codes" return content (@janko)
data/README.md CHANGED
@@ -206,57 +206,90 @@ Tus::Server.opts[:storage] = Tus::Storage::S3.new(
  )
  ```

- One thing to note is that S3's multipart API requires each chunk except the
- last to be **5MB or larger**, so that is the minimum chunk size that you can
- specify on your tus client if you want to use the S3 storage.
+ If you want to files to be stored in a certain subdirectory, you can specify
+ a `:prefix` in the storage configuration.

- If you'll be retrieving uploaded files through the tus server app, it's
- recommended to set `Tus::Server.opts[:redirect_download]` to `true`. This will
- avoid tus server downloading and serving the file from S3, and instead have the
- download endpoint redirect to the direct S3 object URL.
+ ```rb
+ Tus::Storage::S3.new(prefix: "tus", **options)
+ ```
+
+ You can also specify additional options that will be fowarded to
+ [`Aws::S3::Client#create_multipart_upload`] using `:upload_options`.

  ```rb
- Tus::Server.opts[:redirect_download] = true
+ Tus::Storage::S3.new(upload_options: { acl: "public-read" }, **options)
  ```

- You can customize how the S3 object URL is being generated by passing a block
- to `:redirect_download`, which will then be evaluated in the context of the
- `Tus::Server` instance (which allows accessing the `request` object). See
- [`Aws::S3::Object#get`] for the list of options that
- `Tus::Storage::S3#file_url` accepts.
+ All other options will be forwarded to [`Aws::S3::Client#initialize`]:

  ```rb
- Tus::Server.opts[:redirect_download] = -> (uid, info, **options) do
- storage.file_url(uid, info, expires_in: 10, **options) # link expires after 10 seconds
- end
+ Tus::Storage::S3.new(
+ use_accelerate_endpoint: true,
+ logger: Logger.new(STDOUT),
+ retry_limit: 5,
+ http_open_timeout: 10,
+ # ...
+ )
  ```

- If you want to files to be stored in a certain subdirectory, you can specify
- a `:prefix` in the storage configuration.
+ If you're using [concatenation], you can specify the concurrency in which S3
+ storage will copy partial uploads to the final upload (defaults to `10`):

  ```rb
- Tus::Storage::S3.new(prefix: "tus", **options)
+ Tus::Storage::S3.new(concurrency: { concatenation: 20 }, **options)
  ```

- You can also specify additional options that will be fowarded to
- [`Aws::S3::Client#create_multipart_upload`] using `:upload_options`.
+ #### Limits
+
+ Be aware that the AWS S3 Multipart Upload API has the following limits:
+
+ | Item | Specification |
+ | ---- | ------------- |
+ | Part size | 5 MB to 5 GB, last part can be < 5 MB |
+ | Maximum number of parts per upload | 10,000 |
+ | Maximum object size | 5 TB |
+
+ This means that if you're chunking uploads in your tus client, the chunk size
+ needs to be **5 MB or larger**. Furthermore, if you're allowing your users to
+ upload files larger than 50 GB, the minimum chunk size will need to be
+ higher (`ceil(max_length / max_multipart_parts)`). Note that chunking is
+ optional if you're running on Falcon, but it's mandatory if you're using Puma
+ or another web server.
+
+ `Tus::Storage::S3` is relying on the above limits for determining the multipart
+ part size. If you're using a different S3-compatible service which has different
+ limits, you should pass them in when initializing the storage:

  ```rb
- Tus::Storage::S3.new(upload_options: { content_disposition: "attachment" }, **options)
+ Tus::Storage::S3.new(limits: {
+ min_part_size: 5 * 1024 * 1024,
+ max_part_size: 5 * 1024 * 1024 * 1024,
+ max_multipart_parts: 10_000,
+ max_object_size: 5 * 1024 * 1024 * 1024,
+ }, **options)
  ```

- All other options will be forwarded to [`Aws::S3::Client#initialize`], so you
- can for example change the `:endpoint` to use S3's accelerate host:
+ #### Serving files
+
+ If you'll be retrieving uploaded files through the tus server app, it's
+ recommended to set `Tus::Server.opts[:redirect_download]` to `true`. This will
+ avoid tus server downloading and serving the file from S3, and instead have the
+ download endpoint redirect to the direct S3 object URL.

  ```rb
- Tus::Storage::S3.new(endpoint: "https://s3-accelerate.amazonaws.com", **options)
+ Tus::Server.opts[:redirect_download] = true
  ```

- If you're using [concatenation], you can specify the concurrency in which S3
- storage will copy partial uploads to the final upload (defaults to `10`):
+ You can customize how the S3 object URL is being generated by passing a block
+ to `:redirect_download`, which will then be evaluated in the context of the
+ `Tus::Server` instance (which allows accessing the `request` object). See
+ [`Aws::S3::Object#get`] for the list of options that
+ `Tus::Storage::S3#file_url` accepts.

  ```rb
- Tus::Storage::S3.new(concurrency: { concatenation: 20 }, **options)
+ Tus::Server.opts[:redirect_download] = -> (uid, info, **options) do
+ storage.file_url(uid, info, expires_in: 10, **options) # link expires after 10 seconds
+ end
  ```

  ### Google Cloud Storage, Microsoft Azure Blob Storage
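The arithmetic behind the 50 GB figure and the part-size scaling mentioned in the README changes above, as an illustrative Ruby sketch (the helper name `sufficient_chunk_size` and the 200 GB figure are made-up examples, not part of the gem):

```rb
# With the minimum 5 MB part size, an S3 multipart upload tops out at
# 10,000 parts * 5 MB, which is roughly 50 GB.
MIN_PART_SIZE       = 5 * 1024 * 1024 # 5 MB
MAX_MULTIPART_PARTS = 10_000

MIN_PART_SIZE * MAX_MULTIPART_PARTS #=> 52_428_800_000 bytes (~50 GB)

# For larger uploads the part (chunk) size has to grow so the part count stays
# at or below 10,000 -- essentially the calculation `calculate_part_size` does
# in the S3 storage changes further down in this diff.
def sufficient_chunk_size(upload_length)
  [Rational(upload_length, MAX_MULTIPART_PARTS).ceil, MIN_PART_SIZE].max
end

sufficient_chunk_size(200 * 1024**3) #=> 21_474_837 bytes (~20.5 MB) per part for a 200 GB upload
```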
@@ -40,13 +40,21 @@ module Tus
  end

  def offset
- Integer(@hash["Upload-Offset"])
+ Integer(@hash["Upload-Offset"]) if @hash["Upload-Offset"]
  end

  def metadata
  parse_metadata(@hash["Upload-Metadata"])
  end

+ def name
+ metadata["name"] || metadata["filename"]
+ end
+
+ def type
+ metadata["type"] || metadata["content_type"]
+ end
+
  def expires
  Time.parse(@hash["Upload-Expires"])
  end
@@ -94,9 +94,9 @@ module Tus
  before_create(uid, info)

  if info.final?
- validate_partial_uploads!(info.partial_uploads)
+ length = validate_partial_uploads!(info.partial_uploads)

- length = storage.concatenate(uid, info.partial_uploads, info.to_h)
+ storage.concatenate(uid, info.partial_uploads, info.to_h)
  info["Upload-Length"] = length.to_s
  info["Upload-Offset"] = length.to_s
  else
@@ -185,26 +185,19 @@ module Tus
  r.get do
  validate_upload_finished!(info)

- metadata = info.metadata
- name = metadata["name"] || metadata["filename"]
- type = metadata["type"] || metadata["content_type"]
-
- content_disposition = ContentDisposition.(disposition: opts[:disposition], filename: name)
- content_type = type || "application/octet-stream"
-
  if redirect_download
  redirect_url = instance_exec(uid, info.to_h,
- content_type: content_type,
- content_disposition: content_disposition,
+ content_type: info.type,
+ content_disposition: ContentDisposition.(disposition: opts[:disposition], filename: info.name),
  &redirect_download)

  r.redirect redirect_url
  else
  range = handle_range_request!(info.length)

- response.headers["Content-Length"] = range.size.to_s
- response.headers["Content-Disposition"] = content_disposition
- response.headers["Content-Type"] = content_type
+ response.headers["Content-Disposition"] = ContentDisposition.(disposition: opts[:disposition], filename: info.name)
+ response.headers["Content-Type"] = info.type if info.type
+ response.headers["ETag"] = %(W/"#{uid}")

  body = storage.get_file(uid, info.to_h, range: range)

@@ -310,50 +303,34 @@
  end
  end

- # Validates that each partial upload exists and is marked as one.
+ # Validates that each partial upload exists and is marked as one, and at the
+ # same time calculates the sum of part lengths.
  def validate_partial_uploads!(part_uids)
- input = Queue.new
- part_uids.each { |part_uid| input << part_uid }
- input.close
-
- results = Queue.new
+ length = 0

- thread_count = storage.concurrency[:concatenation] if storage.respond_to?(:concurrency)
- thread_count ||= 10
-
- threads = thread_count.times.map do
- Thread.new do
- begin
- loop do
- part_uid = input.pop or break
- part_info = storage.read_info(part_uid)
- results << Tus::Info.new(part_info)
- end
- nil
- rescue => error
- input.clear
- error
- end
+ part_uids.each do |part_uid|
+ begin
+ part_info = storage.read_info(part_uid)
+ rescue Tus::NotFound
+ error!(400, "Partial upload not found")
  end
- end

- errors = threads.map(&:value).compact
+ part_info = Tus::Info.new(part_info)

- if errors.any? { |error| error.is_a?(Tus::NotFound) }
- error!(400, "One or more partial uploads were not found")
- elsif errors.any?
- fail errors.first
- end
+ error!(400, "Upload is not partial") unless part_info.partial?

- part_infos = Array.new(results.size) { results.pop } # convert Queue into an Array
+ unless part_info.length == part_info.offset
+ error!(400, "Partial upload is not finished")
+ end

- unless part_infos.all?(&:partial?)
- error!(400, "One or more uploads were not partial")
+ length += part_info.length
  end

- if max_size && part_infos.map(&:length).inject(0, :+) > max_size
- error!(400, "The sum of partial upload lengths exceed Tus-Max-Size")
+ if max_size && length > max_size
+ error!(400, "The sum of partial upload lengths exceeds Tus-Max-Size")
  end
+
+ length
  end

  def validate_upload_checksum!(input)
@@ -392,9 +369,24 @@ module Tus
  response.headers["Content-Range"] = "bytes #{range.begin}-#{range.end}/#{length}"
  end

+ response.headers["Content-Length"] = range.size.to_s
+
  range
  end

+ def redirect_download
+ value = opts[:redirect_download]
+
+ if opts[:download_url]
+ value ||= opts[:download_url]
+ warn "[TUS-RUBY-SERVER DEPRECATION] The :download_url option has been renamed to :redirect_download."
+ end
+
+ value = storage.method(:file_url) if value == true
+
+ value
+ end
+
  def handle_cors!
  origin = request.headers["Origin"]

@@ -404,10 +396,10 @@ module Tus

  if request.options?
  response.headers["Access-Control-Allow-Methods"] = "POST, GET, HEAD, PATCH, DELETE, OPTIONS"
- response.headers["Access-Control-Allow-Headers"] = "Origin, X-Requested-With, Content-Type, Upload-Length, Upload-Offset, Tus-Resumable, Upload-Metadata"
+ response.headers["Access-Control-Allow-Headers"] = "Origin, X-Requested-With, Content-Type, Upload-Length, Upload-Offset, Tus-Resumable, Upload-Metadata, Upload-Defer-Length, Upload-Concat"
  response.headers["Access-Control-Max-Age"] = "86400"
  else
- response.headers["Access-Control-Expose-Headers"] = "Upload-Offset, Location, Upload-Length, Tus-Version, Tus-Resumable, Tus-Max-Size, Tus-Extension, Upload-Metadata"
+ response.headers["Access-Control-Expose-Headers"] = "Upload-Offset, Location, Upload-Length, Tus-Version, Tus-Resumable, Tus-Max-Size, Tus-Extension, Upload-Metadata, Upload-Defer-Length, Upload-Concat"
  end
  end

@@ -429,19 +421,6 @@ module Tus
  request.halt
  end

- def redirect_download
- value = opts[:redirect_download]
-
- if opts[:download_url]
- value ||= opts[:download_url]
- warn "[TUS-RUBY-SERVER DEPRECATION] The :download_url option has been renamed to :redirect_download."
- end
-
- value = storage.method(:file_url) if value == true
-
- value
- end
-
  def storage
  opts[:storage] || Tus::Storage::Filesystem.new("data")
  end
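For reference, the `ContentDisposition` calls used in the download endpoint above come from the `content_disposition` gem and build an RFC 6266 header value with both a plain `filename` and a UTF-8 `filename*` parameter. A minimal sketch (the filename is an arbitrary example; the exact output string may vary between gem versions):

```rb
require "content_disposition"

# The form used by the download endpoint, with "inline" as the disposition:
ContentDisposition.(disposition: "inline", filename: "report.pdf")
#=> roughly: inline; filename="report.pdf"; filename*=UTF-8''report.pdf

# The shorthand used in Tus::Storage::S3#create_file:
ContentDisposition.inline("report.pdf")
```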
@@ -54,9 +54,6 @@ module Tus

  # Delete parts after concatenation.
  delete(part_uids)
-
- # Tus server requires us to return the size of the concatenated file.
- file_path(uid).size
  end

  # Appends data to the specified upload in a streaming fashion, and
@@ -53,7 +53,7 @@ module Tus
  validate_parts!(grid_infos, part_uids)

  length = grid_infos.map { |doc| doc[:length] }.reduce(0, :+)
- content_type = Tus::Info.new(info).metadata["content_type"]
+ content_type = Tus::Info.new(info).type
  grid_file = create_grid_file(
  filename: uid,
  content_type: content_type
@@ -75,9 +75,6 @@ module Tus

  # Delete the parts after concatenation.
  files_collection.delete_many(filename: {"$in" => part_uids})
-
- # Tus server requires us to return the size of the concatenated file.
- length
  end

  # Appends data to the specified upload in a streaming fashion, and
@@ -3,6 +3,7 @@
  gem "aws-sdk-s3", "~> 1.2"

  require "aws-sdk-s3"
+ require "content_disposition"

  require "tus/info"
  require "tus/response"
@@ -14,12 +15,16 @@ require "cgi"
  module Tus
  module Storage
  class S3
- MIN_PART_SIZE = 5 * 1024 * 1024 # 5MB is the minimum part size for S3 multipart uploads
+ # AWS S3 multipart upload limits
+ MIN_PART_SIZE = 5 * 1024 * 1024
+ MAX_PART_SIZE = 5 * 1024 * 1024 * 1024
+ MAX_MULTIPART_PARTS = 10_000
+ MAX_OBJECT_SIZE = 5 * 1024 * 1024 * 1024 * 1024

- attr_reader :client, :bucket, :prefix, :upload_options, :concurrency
+ attr_reader :client, :bucket, :prefix, :upload_options, :limits, :concurrency

  # Initializes an aws-sdk-s3 client with the given credentials.
- def initialize(bucket:, prefix: nil, upload_options: {}, concurrency: {}, thread_count: nil, **client_options)
+ def initialize(bucket:, prefix: nil, upload_options: {}, limits: {}, concurrency: {}, thread_count: nil, **client_options)
  fail ArgumentError, "the :bucket option was nil" unless bucket

  if thread_count
@@ -33,6 +38,7 @@ module Tus
  @bucket = resource.bucket(bucket)
  @prefix = prefix
  @upload_options = upload_options
+ @limits = limits
  @concurrency = concurrency
  end

@@ -41,18 +47,15 @@ module Tus
  def create_file(uid, info = {})
  tus_info = Tus::Info.new(info)

- options = upload_options.dup
- options[:content_type] = tus_info.metadata["content_type"]
-
- if filename = tus_info.metadata["filename"]
- # Aws-sdk-s3 doesn't sign non-ASCII characters correctly, and browsers
- # will automatically URI-decode filenames.
- filename = CGI.escape(filename).gsub("+", " ")
-
- options[:content_disposition] ||= "inline"
- options[:content_disposition] += "; filename=\"#{filename}\""
+ if tus_info.length && tus_info.length > max_object_size
+ fail Tus::Error, "upload length exceeds maximum S3 object size"
  end

+ options = {}
+ options[:content_type] = tus_info.type if tus_info.type
+ options[:content_disposition] = ContentDisposition.inline(tus_info.name) if tus_info.name
+ options.merge!(upload_options)
+
  multipart_upload = object(uid).initiate_multipart_upload(options)

  info["multipart_id"] = multipart_upload.id
@@ -81,12 +84,8 @@ module Tus
  finalize_file(uid, info)

  delete(part_uids.flat_map { |part_uid| [object(part_uid), object("#{part_uid}.info")] })
-
- # Tus server requires us to return the size of the concatenated file.
- object = client.head_object(bucket: bucket.name, key: object(uid).key)
- object.content_length
  rescue => error
- abort_multipart_upload(multipart_upload) if multipart_upload
+ multipart_upload.abort if multipart_upload
  raise error
  end

@@ -109,21 +108,22 @@ module Tus
  part_offset = info["multipart_parts"].count
  bytes_uploaded = 0

- jobs = []
- chunk = input.read(MIN_PART_SIZE)
+ part_size = calculate_part_size(tus_info.length)
+
+ chunk = input.read(part_size)

  while chunk
- next_chunk = input.read(MIN_PART_SIZE)
+ next_chunk = input.read(part_size)

  # merge next chunk into previous if it's smaller than minimum chunk size
- if next_chunk && next_chunk.bytesize < MIN_PART_SIZE
+ if next_chunk && next_chunk.bytesize < part_size
  chunk << next_chunk
  next_chunk.clear
  next_chunk = nil
  end

- # abort if chunk is smaller than 5MB and is not the last chunk
- if chunk.bytesize < MIN_PART_SIZE
+ # abort if chunk is smaller than part size and is not the last chunk
+ if chunk.bytesize < part_size
  break if (tus_info.length && tus_info.offset) &&
  chunk.bytesize + tus_info.offset < tus_info.length
  end
@@ -197,7 +197,7 @@ module Tus
  def delete_file(uid, info = {})
  if info["multipart_id"]
  multipart_upload = object(uid).multipart_upload(info["multipart_id"])
- abort_multipart_upload(multipart_upload)
+ multipart_upload.abort

  delete [object("#{uid}.info")]
  else
@@ -209,21 +209,17 @@ module Tus
  # multipart uploads still in progress, it checks the upload date of the
  # last multipart part.
  def expire_files(expiration_date)
- old_objects = bucket.objects.select do |object|
- object.last_modified <= expiration_date
- end
-
- delete(old_objects)
-
- bucket.multipart_uploads.each do |multipart_upload|
- # no need to check multipart uploads initiated before expiration date
- next if multipart_upload.initiated > expiration_date
-
- most_recent_part = multipart_upload.parts.sort_by(&:last_modified).last
- if most_recent_part.nil? || most_recent_part.last_modified <= expiration_date
- abort_multipart_upload(multipart_upload)
- end
- end
+ delete bucket.objects(prefix: @prefix)
+ .select { |object| object.last_modified <= expiration_date }
+
+ bucket.multipart_uploads
+ .select { |multipart_upload| multipart_upload.key.start_with?(prefix.to_s) }
+ .select { |multipart_upload| multipart_upload.initiated <= expiration_date }
+ .select { |multipart_upload|
+ last_modified = multipart_upload.parts.map(&:last_modified).max
+ last_modified.nil? || last_modified <= expiration_date
+ }
+ .each(&:abort)
  end

  private
@@ -240,6 +236,23 @@ module Tus
  { "part_number" => part_number, "etag" => response.etag }
  end

+ # Calculates minimum multipart part size required to upload the whole
+ # file, taking into account AWS S3 multipart limits on part size and
+ # number of parts.
+ def calculate_part_size(length)
+ return min_part_size if length.nil?
+ return length if length <= min_part_size
+ return min_part_size if length <= min_part_size * max_multipart_parts
+
+ part_size = Rational(length, max_multipart_parts).ceil
+
+ if part_size > max_part_size
+ fail Tus::Error, "chunk size for upload exceeds maximum part size"
+ end
+
+ part_size
+ end
+
  def delete(objects)
  # S3 can delete maximum of 1000 objects in a single request
  objects.each_slice(1000) do |objects_batch|
@@ -248,18 +261,6 @@ module Tus
  end
  end

- # In order to ensure the multipart upload was successfully aborted,
- # we need to check whether all parts have been deleted, and retry
- # the abort if the list is nonempty.
- def abort_multipart_upload(multipart_upload)
- loop do
- multipart_upload.abort
- break unless multipart_upload.parts.any?
- end
- rescue Aws::S3::Errors::NoSuchUpload
- # multipart upload was successfully aborted or doesn't exist
- end
-
  # Creates multipart parts for the specified multipart upload by copying
  # given objects into them. It uses a queue and a fixed-size thread pool
  # which consumes that queue.
@@ -325,6 +326,11 @@ module Tus
  def object(key)
  bucket.object([*prefix, key].join("/"))
  end
+
+ def min_part_size; limits.fetch(:min_part_size, MIN_PART_SIZE); end
+ def max_part_size; limits.fetch(:max_part_size, MAX_PART_SIZE); end
+ def max_multipart_parts; limits.fetch(:max_multipart_parts, MAX_MULTIPART_PARTS); end
+ def max_object_size; limits.fetch(:max_object_size, MAX_OBJECT_SIZE); end
  end
  end
  end
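Since `expire_files` now scopes itself to the configured `:prefix`, a recurring cleanup job only touches that storage's own objects and stale multipart uploads. A usage sketch (bucket name, prefix, and the 7-day window are arbitrary examples; as the changelog notes, expiration is not triggered automatically, the application has to call it):

```rb
require "tus/storage/s3"

storage = Tus::Storage::S3.new(bucket: "my-tus-uploads", prefix: "tus")

# Run from cron or a background job: deletes objects under the "tus" prefix
# that haven't been modified in a week and aborts their stale multipart uploads.
storage.expire_files(Time.now - 7 * 24 * 60 * 60)
```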
@@ -1,12 +1,12 @@
  Gem::Specification.new do |gem|
  gem.name = "tus-server"
- gem.version = "2.2.1"
+ gem.version = "2.3.0"

  gem.required_ruby_version = ">= 2.3"

  gem.summary = "Ruby server implementation of tus.io, the open protocol for resumable file uploads."

- gem.homepage = "https://github.com/janko-m/tus-ruby-server"
+ gem.homepage = "https://github.com/janko/tus-ruby-server"
  gem.authors = ["Janko Marohnić"]
  gem.email = ["janko.marohnic@gmail.com"]
  gem.license = "MIT"
@@ -21,6 +21,6 @@ Gem::Specification.new do |gem|
  gem.add_development_dependency "minitest", "~> 5.8"
  gem.add_development_dependency "rack-test_app"
  gem.add_development_dependency "cucumber", "~> 3.1"
- gem.add_development_dependency "mongo"
  gem.add_development_dependency "aws-sdk-s3", "~> 1.2"
+ gem.add_development_dependency "aws-sdk-core", "~> 3.23"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: tus-server
  version: !ruby/object:Gem::Version
- version: 2.2.1
+ version: 2.3.0
  platform: ruby
  authors:
  - Janko Marohnić
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2018-12-19 00:00:00.000000000 Z
+ date: 2019-05-14 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: roda
@@ -101,33 +101,33 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '3.1'
  - !ruby/object:Gem::Dependency
- name: mongo
+ name: aws-sdk-s3
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - ">="
+ - - "~>"
  - !ruby/object:Gem::Version
- version: '0'
+ version: '1.2'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - ">="
+ - - "~>"
  - !ruby/object:Gem::Version
- version: '0'
+ version: '1.2'
  - !ruby/object:Gem::Dependency
- name: aws-sdk-s3
+ name: aws-sdk-core
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '1.2'
+ version: '3.23'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '1.2'
+ version: '3.23'
  description:
  email:
  - janko.marohnic@gmail.com
@@ -150,7 +150,7 @@ files:
  - lib/tus/storage/gridfs.rb
  - lib/tus/storage/s3.rb
  - tus-server.gemspec
- homepage: https://github.com/janko-m/tus-ruby-server
+ homepage: https://github.com/janko/tus-ruby-server
  licenses:
  - MIT
  metadata: {}
@@ -169,8 +169,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubyforge_project:
- rubygems_version: 2.7.6
+ rubygems_version: 3.0.3
  signing_key:
  specification_version: 4
  summary: Ruby server implementation of tus.io, the open protocol for resumable file