zip_kit 6.3.2 → 6.3.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/rbi/zip_kit.rbs ADDED
@@ -0,0 +1,2000 @@
+ module ZipKit
+ VERSION: untyped
+
+ class Railtie < Rails::Railtie
+ end
+
+ # A ZIP archive contains a flat list of entries. These entries can implicitly
+ # create directories when the archive is expanded. For example, an entry with
+ # the filename of "some folder/file.docx" will make the unarchiving application
+ # create a directory called "some folder" automatically, and then deposit the
+ # file "file.docx" in that directory. These "implicit" directories can be
+ # arbitrarily nested, and create a tree structure of directories. That structure,
+ # however, is implicit, as the archive contains a flat list.
+ #
+ # This creates opportunities for conflicts. For example, imagine the following
+ # structure:
+ #
+ # * `something/` - specifies an empty directory with the name "something"
+ # * `something` - specifies a file, creates a conflict
+ #
+ # This can be prevented with filename uniqueness checks. It does get funkier however
+ # as the rabbit hole goes down:
+ #
+ # * `dir/subdir/another_subdir/yet_another_subdir/file.bin` - declares a file and directories
+ # * `dir/subdir/another_subdir/yet_another_subdir` - declares a file at one of the levels, creates a conflict
+ #
+ # The results of this ZIP structure aren't very easy to predict as they depend on the
+ # application that opens the archive. For example, BOMArchiveHelper on macOS will expand files
+ # as they are declared in the ZIP, but once a conflict occurs it will fail with "error -21". It
+ # is not very transparent to the user why unarchiving fails, and it can only be reliably
+ # prevented when the archive gets created.
+ #
+ # Unfortunately that conflicts with another "magical" feature of ZipKit which automatically
+ # "fixes" duplicate filenames - filenames (paths) which have already been added to the archive.
+ # This fix is performed by appending (1), then (2) and so forth to the filename so that the
+ # conflict is avoided. This is not possible to apply to directories, because when one of the
+ # path components is reused in multiple filenames it means those entities should end up in
+ # the same directory (subdirectory) once the archive is opened.
+ #
+ # The `PathSet` keeps track of entries as they get added using 2 Sets (cheap presence checks),
+ # one for directories and one for files. It will raise a `Conflict` exception if there are
+ # files clobbering one another, or in case files collide with directories.
+ class PathSet
+ def initialize: () -> void
+
+ # Adds a directory path to the set of known paths, including
+ # all the directories that contain it. So, calling
+ # add_directory_path("dir/dir2/dir3")
+ # will add "dir", "dir/dir2", "dir/dir2/dir3".
+ #
+ # _@param_ `path` — the path to the directory to add
+ def add_directory_path: (String path) -> void
+
+ # Adds a file path to the set of known paths, including
+ # all the directories that contain it. Once a file has been added,
+ # it is no longer possible to add a directory having the same path
+ # as this would cause a conflict.
+ #
+ # The operation also adds all the containing directories for the file, so
+ # add_file_path("dir/dir2/file.doc")
+ # will add "dir" and "dir/dir2" as directories.
+ #
+ # _@param_ `file_path` — the path to the file to add
+ def add_file_path: (String file_path) -> void
+
+ # Tells whether a specific full path is already known to the PathSet.
+ # Can be a path for a directory or for a file.
+ #
+ # _@param_ `path_in_archive` — the path to check for inclusion
+ def include?: (String path_in_archive) -> bool
+
+ # Clears the contained sets
+ def clear: () -> void
+
+ # sord omit - no YARD type given for "path_in_archive", using untyped
+ # Adds the directory or file path to the path set
+ def add_directory_or_file_path: (untyped path_in_archive) -> void
+
+ # sord omit - no YARD type given for "path", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def non_empty_path_components: (untyped path) -> untyped
+
+ # sord omit - no YARD type given for "path", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def path_and_ancestors: (untyped path) -> untyped
+
+ class Conflict < StandardError
+ end
+
+ class FileClobbersDirectory < ZipKit::PathSet::Conflict
+ end
+
+ class DirectoryClobbersFile < ZipKit::PathSet::Conflict
+ end
+ end
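+
+ # A minimal usage sketch of the checks described above (the paths are
+ # illustrative; normally the Streamer drives the PathSet for you):
+ #
+ # set = ZipKit::PathSet.new
+ # set.add_file_path("docs/readme.txt") # also registers "docs" as a directory
+ # set.include?("docs") # => true
+ # set.add_directory_path("docs/readme.txt") # raises a PathSet::Conflict subclass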
+
+ # Is used to write ZIP archives without having to read them back or to overwrite
+ # data. It outputs into any object that supports `<<` or `write`, namely:
+ #
+ # * `Array` - will contain binary strings
+ # * `File` - data will be written to it as it gets generated
+ # * `IO` (`Socket`, `StringIO`) - data gets written into it
+ # * `String` - in binary encoding and unfrozen - also makes a decent output target
+ #
+ # or anything else that responds to `#<<` or `#write`.
+ #
+ # You can also combine output through the `Streamer` with direct output to the destination,
+ # all while preserving the correct offsets in the ZIP file structures. This allows usage
+ # of `sendfile()` or socket `splice()` calls for "through" proxying.
+ #
+ # If you want to avoid data descriptors - or write data bypassing the Streamer -
+ # you need to know the CRC32 (as a uint) and the filesize upfront,
+ # before the writing of the entry body starts.
+ #
+ # ## Using the Streamer with runtime compression
+ #
+ # You can use the Streamer with data descriptors (the CRC32 and the sizes will be
+ # written after the file data). This allows non-rewinding on-the-fly compression.
+ # The streamer will pick the optimum compression method ("stored" or "deflated")
+ # depending on the nature of the byte stream you send into it (by using a small buffer).
+ # If you are compressing large files, the Deflater object that the Streamer controls
+ # will be regularly flushed to prevent memory inflation.
+ #
+ # ZipKit::Streamer.open(file_socket_or_string) do |zip|
+ # zip.write_file('mov.mp4') do |sink|
+ # File.open('mov.mp4', 'rb'){|source| IO.copy_stream(source, sink) }
+ # end
+ # zip.write_file('long-novel.txt') do |sink|
+ # File.open('novel.txt', 'rb'){|source| IO.copy_stream(source, sink) }
+ # end
+ # end
+ #
+ # The central directory will be written automatically at the end of the `open` block.
+ #
+ # ## Using the Streamer with entries of known size and having a known CRC32 checksum
+ #
+ # Streamer allows "IO splicing" - in this mode it will only control the metadata output,
+ # but you can write the data to the socket/file outside of the Streamer. For example, when
+ # using the sendfile gem:
+ #
+ # ZipKit::Streamer.open(socket) do |zip|
+ # zip.add_stored_entry(filename: "myfile1.bin", size: 9090821, crc32: 12485)
+ # socket.sendfile(tempfile1)
+ # zip.simulate_write(tempfile1.size)
+ #
+ # zip.add_stored_entry(filename: "myfile2.bin", size: 458678, crc32: 89568)
+ # socket.sendfile(tempfile2)
+ # zip.simulate_write(tempfile2.size)
+ # end
+ #
+ # Note that you need to use `simulate_write` in this case. This needs to happen since Streamer
+ # writes absolute offsets into the ZIP (local file header offsets and the like),
+ # and it relies on the output object to tell it how many bytes have been written
+ # so far. When using `sendfile` the Ruby write methods get bypassed entirely, and the
+ # offsets in the IO will not be updated - which will result in an invalid ZIP.
+ #
+ # ## On-the-fly deflate - using the Streamer with async/suspended writes and data descriptors
+ #
+ # If you are unable to use the block versions of `write_deflated_file` and `write_stored_file`
+ # there is an option to use a separate writer object. It gets returned from `write_deflated_file`
+ # and `write_stored_file` if you do not provide them with a block, and will accept data writes.
+ # Do note that you _must_ call `#close` on that object yourself:
+ #
+ # ZipKit::Streamer.open(socket) do |zip|
+ # w = zip.write_stored_file('mov.mp4')
+ # IO.copy_stream(source_io, w)
+ # w.close
+ # end
+ #
+ # The central directory will be written automatically at the end of the `open` block. If you need
+ # to manage the Streamer manually, or defer the central directory write until appropriate, use
+ # the constructor instead and call `Streamer#close`:
+ #
+ # zip = ZipKit::Streamer.new(out_io)
+ # .....
+ # zip.close
+ #
+ # Calling {Streamer#close} **will not** call `#close` on the underlying IO object.
+ class Streamer
+ include ZipKit::WriteShovel
+ STORED: untyped
+ DEFLATED: untyped
+ EntryBodySizeMismatch: untyped
+ InvalidOutput: untyped
+ Overflow: untyped
+ UnknownMode: untyped
+ OffsetOutOfSync: untyped
+
+ # sord omit - no YARD return type given, using untyped
+ # Creates a new Streamer on top of the given IO-ish object and yields it. Once the given block
+ # returns, the Streamer will have its `close` method called, which will write out the central
+ # directory of the archive to the output.
+ #
+ # _@param_ `stream` — the destination IO for the ZIP (should respond to `tell` and `<<`)
+ #
+ # _@param_ `kwargs_for_new` — keyword arguments for #initialize
+ def self.open: (IO stream, **::Hash[untyped, untyped] kwargs_for_new) -> untyped
+
+ # sord duck - #<< looks like a duck type, replacing with untyped
+ # Creates a new Streamer on top of the given IO-ish object.
+ #
+ # _@param_ `writable` — the destination IO for the ZIP. Anything that responds to `<<` can be used.
+ #
+ # _@param_ `writer` — the object to be used as the writer. Defaults to an instance of ZipKit::ZipWriter, normally you won't need to override it
+ #
+ # _@param_ `auto_rename_duplicate_filenames` — whether duplicate filenames, when encountered, should be suffixed with (1), (2) etc. Default value is `false` - if duplicate names are used an exception will be raised
+ def initialize: (untyped writable, ?writer: ZipKit::ZipWriter, ?auto_rename_duplicate_filenames: bool) -> void
+
+ # Writes a part of a zip entry body (actual binary data of the entry) into the output stream.
+ #
+ # _@param_ `binary_data` — a String in binary encoding
+ #
+ # _@return_ — self
+ def <<: (String binary_data) -> untyped
+
+ # Advances the internal IO pointer to keep the offsets of the ZIP file in
+ # check. Use this if you are going to use accelerated writes to the socket
+ # (like the `sendfile()` call) after writing the headers, or if you
+ # just need to figure out the size of the archive.
+ #
+ # _@param_ `num_bytes` — how many bytes are going to be written bypassing the Streamer
+ #
+ # _@return_ — position in the output stream / ZIP archive
+ def simulate_write: (Integer num_bytes) -> Integer
+
+ # Writes out the local header for an entry (file in the ZIP) that is using
+ # the deflated storage model (is compressed). Once this method is called,
+ # the `<<` method has to be called to write the actual contents of the body.
+ #
+ # Note that the deflated body that is going to be written into the output
+ # has to be _precompressed_ (pre-deflated) before writing it into the
+ # Streamer, because otherwise it is impossible to know its size upfront.
+ #
+ # _@param_ `filename` — the name of the file in the entry
+ #
+ # _@param_ `modification_time` — the modification time of the file in the archive
+ #
+ # _@param_ `compressed_size` — the size of the compressed entry that is going to be written into the archive
+ #
+ # _@param_ `uncompressed_size` — the size of the entry when uncompressed, in bytes
+ #
+ # _@param_ `crc32` — the CRC32 checksum of the entry when uncompressed
+ #
+ # _@param_ `use_data_descriptor` — whether the entry body will be followed by a data descriptor
+ #
+ # _@param_ `unix_permissions` — which UNIX permissions to set, normally the default should be used
+ #
+ # _@return_ — the offset the output IO is at after writing the entry header
+ def add_deflated_entry: (
+ filename: String,
+ ?modification_time: Time,
+ ?compressed_size: Integer,
+ ?uncompressed_size: Integer,
+ ?crc32: Integer,
+ ?unix_permissions: Integer?,
+ ?use_data_descriptor: bool
+ ) -> Integer
+
+ # Writes out the local header for an entry (file in the ZIP) that is using
+ # the stored storage model (is stored as-is).
+ # Once this method is called, the `<<` method has to be called one or more
+ # times to write the actual contents of the body.
+ #
+ # _@param_ `filename` — the name of the file in the entry
+ #
+ # _@param_ `modification_time` — the modification time of the file in the archive
+ #
+ # _@param_ `size` — the size of the file when uncompressed, in bytes
+ #
+ # _@param_ `crc32` — the CRC32 checksum of the entry when uncompressed
+ #
+ # _@param_ `use_data_descriptor` — whether the entry body will be followed by a data descriptor
+ #
+ # _@param_ `unix_permissions` — which UNIX permissions to set, normally the default should be used
+ #
+ # _@return_ — the offset the output IO is at after writing the entry header
+ def add_stored_entry: (
+ filename: String,
+ ?modification_time: Time,
+ ?size: Integer,
+ ?crc32: Integer,
+ ?unix_permissions: Integer?,
+ ?use_data_descriptor: bool
+ ) -> Integer
+
+ # Adds an empty directory to the archive with a size of 0 and permissions of 755.
+ #
+ # _@param_ `dirname` — the name of the directory in the archive
+ #
+ # _@param_ `modification_time` — the modification time of the directory in the archive
+ #
+ # _@param_ `unix_permissions` — which UNIX permissions to set, normally the default should be used
+ #
+ # _@return_ — the offset the output IO is at after writing the entry header
+ def add_empty_directory: (dirname: String, ?modification_time: Time, ?unix_permissions: Integer?) -> Integer
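+
+ # A minimal sketch for the method above (the directory name is illustrative):
+ #
+ # ZipKit::Streamer.open(out_io) do |zip|
+ # zip.add_empty_directory(dirname: "empty-folder")
+ # end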
+
+ # Opens the stream for a file stored in the archive, and yields a writer
+ # for that file to the block.
+ # The writer will buffer a small amount of data and see whether compression is
+ # effective for the data being output. If compression turns out to work well -
+ # for instance, if the output is mostly text - it is going to create a deflated
+ # file inside the zip. If the compression benefits are negligible, it will
+ # create a stored file inside the zip. It will delegate either to `write_deflated_file`
+ # or to `write_stored_file`.
+ #
+ # Using a block, the write will be terminated with a data descriptor outright.
+ #
+ # zip.write_file("foo.txt") do |sink|
+ # IO.copy_stream(source_file, sink)
+ # end
+ #
+ # If deferred writes are desired (for example - to integrate with an API that
+ # does not support blocks, or to work with non-blocking environments) the method
+ # has to be called without a block. In that case it returns the sink instead,
+ # permitting writes to it in a deferred fashion. When `close` is called on
+ # the sink, any remaining compression output will be flushed and the data
+ # descriptor is going to be written.
+ #
+ # Note that even though it does not have to happen within the same call stack,
+ # call sequencing still must be observed. It is therefore not possible to do
+ # this:
+ #
+ # writer_for_file1 = zip.write_file("somefile.jpg")
+ # writer_for_file2 = zip.write_file("another.tif")
+ # writer_for_file1 << data
+ # writer_for_file2 << data
+ #
+ # because it is likely to result in an invalid ZIP file structure later on.
+ # So using this facility in async scenarios is certainly possible, but care
+ # and attention are recommended.
+ #
+ # _@param_ `filename` — the name of the file in the archive
+ #
+ # _@param_ `modification_time` — the modification time of the file in the archive
+ #
+ # _@param_ `unix_permissions` — which UNIX permissions to set, normally the default should be used
+ #
+ # _@return_ — without a block - the Writable sink which has to be closed manually
+ def write_file: (String filename, ?modification_time: Time, ?unix_permissions: Integer?) ?{ (ZipKit::Streamer::Writable sink) -> void } -> ZipKit::Streamer::Writable
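+
+ # A minimal sketch of the deferred (block-less) mode described above; the
+ # chunk variables are illustrative:
+ #
+ # writer = zip.write_file("report.csv")
+ # writer << first_chunk
+ # writer << second_chunk
+ # writer.close # flushes the output and writes the data descriptor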
+
+ # Opens the stream for a stored file in the archive, and yields a writer
+ # for that file to the block.
+ # Once the write completes, a data descriptor will be written with the
+ # actual compressed/uncompressed sizes and the CRC32 checksum.
+ #
+ # Using a block, the write will be terminated with a data descriptor outright.
+ #
+ # zip.write_stored_file("foo.txt") do |sink|
+ # IO.copy_stream(source_file, sink)
+ # end
+ #
+ # If deferred writes are desired (for example - to integrate with an API that
+ # does not support blocks, or to work with non-blocking environments) the method
+ # has to be called without a block. In that case it returns the sink instead,
+ # permitting writes to it in a deferred fashion. When `close` is called on
+ # the sink, any remaining output will be flushed and the data
+ # descriptor is going to be written.
+ #
+ # Note that even though it does not have to happen within the same call stack,
+ # call sequencing still must be observed. It is therefore not possible to do
+ # this:
+ #
+ # writer_for_file1 = zip.write_stored_file("somefile.jpg")
+ # writer_for_file2 = zip.write_stored_file("another.tif")
+ # writer_for_file1 << data
+ # writer_for_file2 << data
+ #
+ # because it is likely to result in an invalid ZIP file structure later on.
+ # So using this facility in async scenarios is certainly possible, but care
+ # and attention are recommended.
+ #
+ # If an exception is raised inside the block that is passed to the method, a `rollback!` call
+ # will be performed automatically and the entry just written will be omitted from the ZIP
+ # central directory. This can be useful if you want to rescue the exception and reattempt
+ # adding the ZIP file. Note that you will need to call `write_stored_file` again to start a
+ # new file - you can't keep writing to the one that failed.
+ #
+ # _@param_ `filename` — the name of the file in the archive
+ #
+ # _@param_ `modification_time` — the modification time of the file in the archive
+ #
+ # _@param_ `unix_permissions` — which UNIX permissions to set, normally the default should be used
+ #
+ # _@return_ — without a block - the Writable sink which has to be closed manually
+ def write_stored_file: (String filename, ?modification_time: Time, ?unix_permissions: Integer?) ?{ (ZipKit::Streamer::Writable sink) -> void } -> ZipKit::Streamer::Writable
+
+ # Opens the stream for a deflated file in the archive, and yields a writer
+ # for that file to the block. Once the write completes, a data descriptor
+ # will be written with the actual compressed/uncompressed sizes and the
+ # CRC32 checksum.
+ #
+ # Using a block, the write will be terminated with a data descriptor outright.
+ #
+ # zip.write_deflated_file("foo.txt") do |sink|
+ # IO.copy_stream(source_file, sink)
+ # end
+ #
+ # If deferred writes are desired (for example - to integrate with an API that
+ # does not support blocks, or to work with non-blocking environments) the method
+ # has to be called without a block. In that case it returns the sink instead,
+ # permitting writes to it in a deferred fashion. When `close` is called on
+ # the sink, any remaining compression output will be flushed and the data
+ # descriptor is going to be written.
+ #
+ # Note that even though it does not have to happen within the same call stack,
+ # call sequencing still must be observed. It is therefore not possible to do
+ # this:
+ #
+ # writer_for_file1 = zip.write_deflated_file("somefile.jpg")
+ # writer_for_file2 = zip.write_deflated_file("another.tif")
+ # writer_for_file1 << data
+ # writer_for_file2 << data
+ # writer_for_file1.close
+ # writer_for_file2.close
+ #
+ # because it is likely to result in an invalid ZIP file structure later on.
+ # So using this facility in async scenarios is certainly possible, but care
+ # and attention are recommended.
+ #
+ # If an exception is raised inside the block that is passed to the method, a `rollback!` call
+ # will be performed automatically and the entry just written will be omitted from the ZIP
+ # central directory. This can be useful if you want to rescue the exception and reattempt
+ # adding the ZIP file. Note that you will need to call `write_deflated_file` again to start a
+ # new file - you can't keep writing to the one that failed.
+ #
+ # _@param_ `filename` — the name of the file in the archive
+ #
+ # _@param_ `modification_time` — the modification time of the file in the archive
+ #
+ # _@param_ `unix_permissions` — which UNIX permissions to set, normally the default should be used
+ #
+ # _@return_ — without a block - the Writable sink which has to be closed manually
+ def write_deflated_file: (String filename, ?modification_time: Time, ?unix_permissions: Integer?) ?{ (ZipKit::Streamer::Writable sink) -> void } -> ZipKit::Streamer::Writable
+
+ # Closes the archive. Writes the central directory, and switches the writer into
+ # a state where it can no longer be written to.
+ #
+ # Once this method is called, the `Streamer` should be discarded (the ZIP archive is complete).
+ #
+ # _@return_ — the offset the output IO is at after closing the archive
+ def close: () -> Integer
+
+ # Sets up the ZipWriter with wrappers if necessary. The method is called once, when the Streamer
+ # gets instantiated - the Writer then gets reused. This method is primarily there so that you
+ # can override it.
+ #
+ # _@return_ — the writer to perform writes with
+ def create_writer: () -> ZipKit::ZipWriter
+
+ # Updates the last entry written with the CRC32 checksum and compressed/uncompressed
+ # sizes. For stored entries, `compressed_size` and `uncompressed_size` are the same.
+ # After updating the entry, it will immediately write the data descriptor bytes
+ # to the output.
+ #
+ # _@param_ `crc32` — the CRC32 checksum of the entry when uncompressed
+ #
+ # _@param_ `compressed_size` — the size of the compressed segment within the ZIP
+ #
+ # _@param_ `uncompressed_size` — the size of the entry once uncompressed
+ #
+ # _@return_ — the offset the output IO is at after writing the data descriptor
+ def update_last_entry_and_write_data_descriptor: (crc32: Integer, compressed_size: Integer, uncompressed_size: Integer) -> Integer
+
+ # Removes the buffered local entry for the last file written. This can be used when rescuing from
+ # exceptions, to prevent the file that failed mid-write from being written out into the
+ # ZIP central directory. This is useful when, for example, you encounter errors retrieving the file
+ # that you want to place inside the ZIP from a remote storage location and some network exception
+ # gets raised. `write_deflated_file` and `write_stored_file` will roll back for you automatically.
+ # Of course it is not possible to remove the failed entry from the ZIP file entirely, as the data
+ # is likely already on the wire. However, excluding the entry from the central directory of the ZIP
+ # file will allow better-behaved ZIP unarchivers to extract the entries which did store correctly,
+ # provided they read the ZIP from the central directory and not straight-ahead.
+ # Rolling back does not perform any writes.
+ #
+ # `rollback!` gets called for you if an exception is raised inside the block of `write_file`,
+ # `write_deflated_file` and `write_stored_file`.
+ #
+ # _@return_ — position in the output stream / ZIP archive
+ #
+ # ```ruby
+ # zip.add_stored_entry(filename: "data.bin", size: 4.megabytes, crc32: the_crc)
+ # begin
+ # while (chunk = remote.read(65 * 2048))
+ # zip << chunk
+ # end
+ # rescue Timeout::Error
+ # zip.rollback!
+ # # and proceed to the next file
+ # end
+ # ```
+ def rollback!: () -> Integer
+
+ # sord omit - no YARD type given for "writable", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def yield_or_return_writable: (untyped writable) -> untyped
+
+ # sord omit - no YARD return type given, using untyped
+ def verify_offsets!: () -> untyped
+
+ # sord omit - no YARD type given for "filename:", using untyped
+ # sord omit - no YARD type given for "modification_time:", using untyped
+ # sord omit - no YARD type given for "crc32:", using untyped
+ # sord omit - no YARD type given for "storage_mode:", using untyped
+ # sord omit - no YARD type given for "compressed_size:", using untyped
+ # sord omit - no YARD type given for "uncompressed_size:", using untyped
+ # sord omit - no YARD type given for "use_data_descriptor:", using untyped
+ # sord omit - no YARD type given for "unix_permissions:", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def add_file_and_write_local_header: (
+ filename: untyped,
+ modification_time: untyped,
+ crc32: untyped,
+ storage_mode: untyped,
+ compressed_size: untyped,
+ uncompressed_size: untyped,
+ use_data_descriptor: untyped,
+ unix_permissions: untyped
+ ) -> untyped
+
+ # sord omit - no YARD type given for "filename", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def remove_backslash: (untyped filename) -> untyped
+
+ # Writes the given data to the output stream. Allows the object to be used as
+ # a target for `IO.copy_stream(from, to)`
+ #
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
+ #
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
+ def write: (String bytes) -> Integer
+
+ # Is used internally by Streamer to keep track of entries in the archive during writing.
+ # Normally you will not have to use this class directly
+ class Entry < Struct
+ def initialize: () -> void
+
+ # sord omit - no YARD return type given, using untyped
+ def total_bytes_used: () -> untyped
+
+ # sord omit - no YARD return type given, using untyped
+ # Set the general purpose flags for the entry. The one we care about is the EFS
+ # bit (bit 11), which should be set if the filename is UTF-8. If it is, we need to set the
+ # bit so that the unarchiving application knows that the filename in the archive is UTF-8
+ # encoded, and not some DOS default. For ASCII entries it does not matter.
+ # Additionally, we care about bit 3 which toggles the use of the postfix data descriptor.
+ def gp_flags: () -> untyped
+
+ def filler?: () -> bool
+
+ # Returns the value of attribute filename
+ attr_accessor filename: Object
+
+ # Returns the value of attribute crc32
+ attr_accessor crc32: Object
+
+ # Returns the value of attribute compressed_size
+ attr_accessor compressed_size: Object
+
+ # Returns the value of attribute uncompressed_size
+ attr_accessor uncompressed_size: Object
+
+ # Returns the value of attribute storage_mode
+ attr_accessor storage_mode: Object
+
+ # Returns the value of attribute mtime
+ attr_accessor mtime: Object
+
+ # Returns the value of attribute use_data_descriptor
+ attr_accessor use_data_descriptor: Object
+
+ # Returns the value of attribute local_header_offset
+ attr_accessor local_header_offset: Object
+
+ # Returns the value of attribute bytes_used_for_local_header
+ attr_accessor bytes_used_for_local_header: Object
+
+ # Returns the value of attribute bytes_used_for_data_descriptor
+ attr_accessor bytes_used_for_data_descriptor: Object
+
+ # Returns the value of attribute unix_permissions
+ attr_accessor unix_permissions: Object
+ end
+
+ # Is used internally by Streamer to keep track of entries in the archive during writing.
+ # Normally you will not have to use this class directly
+ class Filler < Struct
+ def filler?: () -> bool
+
+ # Returns the value of attribute total_bytes_used
+ attr_accessor total_bytes_used: Object
+ end
+
+ # Gets yielded from the writing methods of the Streamer
+ # and accepts the data being written into the ZIP for deflate
+ # or stored modes. Can be used as a destination for `IO.copy_stream`
+ #
+ # IO.copy_stream(File.open('source.bin', 'rb'), writable)
+ class Writable
+ include ZipKit::WriteShovel
+
+ # sord omit - no YARD type given for "streamer", using untyped
+ # sord omit - no YARD type given for "writer", using untyped
+ # Initializes a new Writable with the object it delegates the writes to.
+ # Normally you would not need to use this method directly
+ def initialize: (untyped streamer, untyped writer) -> void
+
+ # Writes the given data to the output stream
+ #
+ # _@param_ `string` — the string to write (part of the uncompressed file)
+ def <<: (String string) -> self
+
+ # sord omit - no YARD return type given, using untyped
+ # Flushes the writer and recovers the CRC32/size values. It then calls
+ # `update_last_entry_and_write_data_descriptor` on the given Streamer.
+ def close: () -> untyped
+
+ # sord omit - no YARD return type given, using untyped
+ def release_resources_on_failure!: () -> untyped
+
+ # Writes the given data to the output stream. Allows the object to be used as
+ # a target for `IO.copy_stream(from, to)`
+ #
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
+ #
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
+ def write: (String bytes) -> Integer
+ end
+
+ # Will be used to pick whether to store a file in the `stored` or
+ # `deflated` mode, by compressing the first N bytes of the file and
+ # comparing the stored and deflated data sizes. If deflate produces
+ # a sizable compression gain for this data, it will create a deflated
+ # file inside the ZIP archive. If the file doesn't compress well, it
+ # will use the "stored" mode for the entry. About 128KB of the
+ # file will be buffered to pick the appropriate storage mode. The
+ # Heuristic will call either `write_stored_file` or `write_deflated_file`
+ # on the Streamer passed into it once it knows which compression
+ # method should be applied
+ class Heuristic < ZipKit::Streamer::Writable
+ include ZipKit::ZlibCleanup
+ BYTES_WRITTEN_THRESHOLD: untyped
+ MINIMUM_VIABLE_COMPRESSION: untyped
+
+ # sord omit - no YARD type given for "streamer", using untyped
+ # sord omit - no YARD type given for "filename", using untyped
+ # sord omit - no YARD type given for "**write_file_options", using untyped
+ def initialize: (untyped streamer, untyped filename, **untyped write_file_options) -> void
+
+ # sord infer - argument name in single @param inferred as "bytes"
+ def <<: (String bytes) -> self
+
+ # sord omit - no YARD return type given, using untyped
+ def close: () -> untyped
+
+ # sord omit - no YARD return type given, using untyped
+ def release_resources_on_failure!: () -> untyped
+
+ # sord omit - no YARD return type given, using untyped
+ def decide: () -> untyped
+
+ # sord warn - "Zlib::Deflater?" does not appear to be a type
+ # This method is used to flush and close the native zlib handles
+ # should an archiving routine encounter an error. This is necessary,
+ # since otherwise unclosed deflaters may hang around in memory
+ # indefinitely, creating leaks.
+ #
+ # _@param_ `deflater` — the deflater to safely finish and close
+ #
+ # _@return_ — void
+ def safely_dispose_of_incomplete_deflater: (SORD_ERROR_ZlibDeflater deflater) -> untyped
+ end
+
+ # Sends writes to the given `io`, and also registers all the data passing
+ # through it in a CRC32 checksum calculator. Is made to be completely
+ # interchangeable with the DeflatedWriter in terms of interface.
+ class StoredWriter
+ include ZipKit::WriteShovel
+ CRC32_BUFFER_SIZE: untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ def initialize: (untyped io) -> void
+
+ # Writes the given data to the contained IO object.
+ #
+ # _@param_ `data` — data to be written
+ #
+ # _@return_ — self
+ def <<: (String data) -> untyped
+
+ # Returns the amount of data written and the CRC32 checksum. The return value
+ # can be directly used as the argument to {Streamer#update_last_entry_and_write_data_descriptor}
+ #
+ # _@return_ — a hash of `{crc32, compressed_size, uncompressed_size}`
+ def finish: () -> ::Hash[untyped, untyped]
+
+ # sord omit - no YARD return type given, using untyped
+ def release_resources_on_failure!: () -> untyped
+
+ # Writes the given data to the output stream. Allows the object to be used as
+ # a target for `IO.copy_stream(from, to)`
+ #
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
+ #
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
+ def write: (String bytes) -> Integer
+ end
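+
+ # A minimal sketch of how a writer's `finish` result feeds the data
+ # descriptor (assumes `out` is the destination IO and `data` is a binary String):
+ #
+ # writer = ZipKit::Streamer::StoredWriter.new(out)
+ # writer << data
+ # streamer.update_last_entry_and_write_data_descriptor(**writer.finish)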
+
+ # Sends writes to the given `io` compressed using a `Zlib::Deflate`. Also
+ # registers data passing through it in a CRC32 checksum calculator. Is made to be completely
+ # interchangeable with the StoredWriter in terms of interface.
+ class DeflatedWriter
+ include ZipKit::WriteShovel
+ include ZipKit::ZlibCleanup
+ CRC32_BUFFER_SIZE: untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ def initialize: (untyped io) -> void
+
+ # Writes the given data into the deflater, and flushes the deflater
+ # after having written more than FLUSH_EVERY_N_BYTES bytes of data
+ #
+ # _@param_ `data` — data to be written
+ #
+ # _@return_ — self
+ def <<: (String data) -> untyped
+
+ # Returns the amount of data received for writing, the amount of
+ # compressed data written and the CRC32 checksum. The return value
+ # can be directly used as the argument to {Streamer#update_last_entry_and_write_data_descriptor}
+ #
+ # _@return_ — a hash of `{crc32, compressed_size, uncompressed_size}`
+ def finish: () -> ::Hash[untyped, untyped]
+
+ # sord omit - no YARD return type given, using untyped
+ def release_resources_on_failure!: () -> untyped
+
+ # sord warn - "Zlib::Deflater?" does not appear to be a type
+ # This method is used to flush and close the native zlib handles
+ # should an archiving routine encounter an error. This is necessary,
+ # since otherwise unclosed deflaters may hang around in memory
+ # indefinitely, creating leaks.
+ #
+ # _@param_ `deflater` — the deflater to safely finish and close
+ #
+ # _@return_ — void
+ def safely_dispose_of_incomplete_deflater: (SORD_ERROR_ZlibDeflater deflater) -> untyped
+
+ # Writes the given data to the output stream. Allows the object to be used as
+ # a target for `IO.copy_stream(from, to)`
+ #
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
+ #
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
+ def write: (String bytes) -> Integer
+ end
+ end
+
+ # An object that fakes just-enough of an IO to be dangerous
+ # - or, more precisely, to be useful as a source for the FileReader
+ # central directory parser. Effectively we substitute an IO object
+ # for an object that fetches parts of the remote file over HTTP using `Range:`
+ # headers. The `RemoteIO` acts as an adapter between an object that performs the
+ # actual fetches over HTTP and an object that expects a handful of IO methods to be
+ # available.
+ class RemoteIO
+ # sord warn - URI wasn't able to be resolved to a constant in this project
+ # _@param_ `url` — the HTTP/HTTPS URL of the object to be retrieved
+ def initialize: ((String | URI) url) -> void
+
+ # sord omit - no YARD return type given, using untyped
+ # Emulates IO#seek
+ #
+ # _@param_ `offset` — absolute offset in the remote resource to seek to
+ #
+ # _@param_ `mode` — The seek mode (only SEEK_SET is supported)
+ def seek: (Integer offset, ?Integer mode) -> untyped
+
+ # Emulates IO#size.
+ #
+ # _@return_ — the size of the remote resource
+ def size: () -> Integer
+
+ # Emulates IO#read, but requires the number of bytes to read.
+ # The read will be limited to the
+ # size of the remote resource relative to the current offset in the IO,
+ # so if you are at offset 0 in the IO of size 10, doing a `read(20)`
+ # will only return you 10 bytes of result, and not raise any exceptions.
+ #
+ # _@param_ `n_bytes` — how many bytes to read, or `nil` to read all the way to the end
+ #
+ # _@return_ — the read bytes
+ def read: (?Integer? n_bytes) -> String
+
+ # Returns the current pointer position within the IO
+ def tell: () -> Integer
+
+ # Only used internally when reading the remote ZIP.
+ #
+ # _@param_ `range` — the HTTP range of data to fetch from remote
+ #
+ # _@return_ — the response body of the ranged request
+ def request_range: (::Range[untyped] range) -> String
+
+ # For working with S3 it is a better idea to perform a GET request for one byte, since doing a HEAD
+ # request needs a different permission - and standard GET presigned URLs are not allowed to perform it
+ #
+ # _@return_ — the size of the remote resource, parsed either from Content-Length or Content-Range header
+ def request_object_size: () -> Integer
+
+ # sord omit - no YARD type given for "a", using untyped
+ # sord omit - no YARD type given for "b", using untyped
+ # sord omit - no YARD type given for "c", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def clamp: (untyped a, untyped b, untyped c) -> untyped
+ end
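+
+ # A minimal sketch pairing RemoteIO with the FileReader documented further
+ # below (the URL is hypothetical):
+ #
+ # remote = ZipKit::RemoteIO.new("https://example.com/archive.zip")
+ # entries = ZipKit::FileReader.read_zip_structure(io: remote)
+ # entries.map(&:filename)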
+
+ # A low-level ZIP file data writer. You can use it to write out various headers and central directory elements
+ # separately. The class handles the actual encoding of the data according to the ZIP format APPNOTE document.
+ #
+ # The primary reason the writer is a separate object is because it is kept stateless. That is, all the data that
+ # is needed for writing a piece of the ZIP (say, the EOCD record, or a data descriptor) can be written
+ # without depending on data available elsewhere. This makes the writer very easy to test, since each of
+ # its methods outputs something that only depends on the method's arguments. For example, we use this
+ # to test writing Zip64 files which, when tested in a streaming fashion, would need tricky IO stubs
+ # to wind IO objects back and forth by large offsets. Instead, we can just write out the EOCD record
+ # with given offsets as arguments.
+ #
+ # Since some methods need a lot of data about the entity being written, everything is passed via
+ # keyword arguments - this way it is much less likely that you can make a mistake writing something.
+ #
+ # Another reason for having a separate Writer is that most ZIP libraries attach the methods for
+ # writing out the file headers to some sort of Entry object, which represents a file within the ZIP.
+ # However, when you are diagnosing issues with the ZIP files you produce, you actually want to have
+ # as much as possible of the code responsible for writing the actual encoded bytes available to you on
+ # one screen. Altering or checking that code then becomes much, much easier. The methods doing the
+ # writing are also intentionally left very verbose - so that you can follow what is happening at
+ # all times.
+ #
+ # All methods of the writer accept anything that responds to `<<` as the `io` argument - you can use
+ # that to output to String objects, or to output to Arrays that you can later join together.
+ class ZipWriter
+ FOUR_BYTE_MAX_UINT: untyped
+ TWO_BYTE_MAX_UINT: untyped
+ ZIP_KIT_COMMENT: untyped
+ VERSION_MADE_BY: untyped
+ VERSION_NEEDED_TO_EXTRACT: untyped
+ VERSION_NEEDED_TO_EXTRACT_ZIP64: untyped
+ DEFAULT_FILE_UNIX_PERMISSIONS: untyped
+ DEFAULT_DIRECTORY_UNIX_PERMISSIONS: untyped
+ FILE_TYPE_FILE: untyped
+ FILE_TYPE_DIRECTORY: untyped
+ MADE_BY_SIGNATURE: untyped
+ C_UINT4: untyped
+ C_UINT2: untyped
+ C_UINT8: untyped
+ C_CHAR: untyped
+ C_INT4: untyped
+
+ # sord duck - #<< looks like a duck type, replacing with untyped
+ # Writes the local file header that precedes the actual file _data_.
+ #
+ # _@param_ `io` — the buffer to write the local file header to
+ #
+ # _@param_ `filename` — the name of the file in the archive
+ #
+ # _@param_ `compressed_size` — The size of the compressed (or stored) data - how much space it uses in the ZIP
+ #
+ # _@param_ `uncompressed_size` — The size of the file once extracted
+ #
+ # _@param_ `crc32` — The CRC32 checksum of the file
+ #
+ # _@param_ `mtime` — the modification time to be recorded in the ZIP
+ #
+ # _@param_ `gp_flags` — bit-packed general purpose flags
+ #
+ # _@param_ `storage_mode` — 8 for deflated, 0 for stored...
+ def write_local_file_header: (
+ io: untyped,
+ filename: String,
+ compressed_size: Integer,
+ uncompressed_size: Integer,
+ crc32: Integer,
+ gp_flags: Integer,
+ mtime: Time,
+ storage_mode: Integer
+ ) -> void
+
+ # sord duck - #<< looks like a duck type, replacing with untyped
+ # sord omit - no YARD type given for "local_file_header_location:", using untyped
+ # sord omit - no YARD type given for "storage_mode:", using untyped
+ # Writes the file header for the central directory, for a particular file in the archive. When writing out this data,
+ # ensure that the CRC32 and both sizes (compressed/uncompressed) are correct for the entry in question.
+ #
+ # _@param_ `io` — the buffer to write the central directory file header to
+ #
+ # _@param_ `filename` — the name of the file in the archive
+ #
+ # _@param_ `compressed_size` — The size of the compressed (or stored) data - how much space it uses in the ZIP
+ #
+ # _@param_ `uncompressed_size` — The size of the file once extracted
+ #
+ # _@param_ `crc32` — The CRC32 checksum of the file
+ #
+ # _@param_ `mtime` — the modification time to be recorded in the ZIP
+ #
+ # _@param_ `gp_flags` — bit-packed general purpose flags
+ #
+ # _@param_ `unix_permissions` — the permissions for the file, or nil for the default to be used
+ def write_central_directory_file_header: (
+ io: untyped,
+ local_file_header_location: untyped,
+ gp_flags: Integer,
+ storage_mode: untyped,
+ compressed_size: Integer,
+ uncompressed_size: Integer,
+ mtime: Time,
+ crc32: Integer,
+ filename: String,
+ ?unix_permissions: Integer?
+ ) -> void
+
+ # sord duck - #<< looks like a duck type, replacing with untyped
+ # Writes the data descriptor following the file data for a file whose local file header
+ # was written with general-purpose flag bit 3 set. If one of the sizes exceeds the Zip64 threshold,
+ # the data descriptor will have the sizes written out as 8-byte values instead of 4-byte values.
+ #
+ # _@param_ `io` — the buffer to write the data descriptor to
+ #
+ # _@param_ `crc32` — The CRC32 checksum of the file
+ #
+ # _@param_ `compressed_size` — The size of the compressed (or stored) data - how much space it uses in the ZIP
+ #
+ # _@param_ `uncompressed_size` — The size of the file once extracted
+ def write_data_descriptor: (
+ io: untyped,
+ compressed_size: Integer,
+ uncompressed_size: Integer,
+ crc32: Integer
+ ) -> void
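+
+ # A minimal sketch of using the stateless writer against a String buffer
+ # (the checksum and sizes are made-up values):
+ #
+ # buf = "".b
+ # ZipKit::ZipWriter.new.write_data_descriptor(
+ # io: buf, crc32: 12345, compressed_size: 512, uncompressed_size: 1024)
+ # buf # => the encoded data descriptor bytes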
+
+ # sord duck - #<< looks like a duck type, replacing with untyped
+ # Writes the "end of central directory record" (including the Zip64 salient bits if necessary)
+ #
+ # _@param_ `io` — the buffer to write the central directory to.
+ #
+ # _@param_ `start_of_central_directory_location` — byte offset of the start of the central directory from the beginning of the ZIP file
+ #
+ # _@param_ `central_directory_size` — the size of the central directory (only file headers) in bytes
+ #
+ # _@param_ `num_files_in_archive` — How many files the archive contains
+ #
+ # _@param_ `comment` — the comment for the archive (defaults to ZIP_KIT_COMMENT)
+ def write_end_of_central_directory: (
+ io: untyped,
+ start_of_central_directory_location: Integer,
+ central_directory_size: Integer,
+ num_files_in_archive: Integer,
+ ?comment: String
+ ) -> void
+
+ # Writes the Zip64 extra field for the local file header. Will be used by `write_local_file_header` when any sizes given to it warrant that.
+ #
+ # _@param_ `compressed_size` — The size of the compressed (or stored) data - how much space it uses in the ZIP
+ #
+ # _@param_ `uncompressed_size` — The size of the file once extracted
+ def zip_64_extra_for_local_file_header: (compressed_size: Integer, uncompressed_size: Integer) -> String
+
+ # sord omit - no YARD type given for "mtime", using untyped
+ # sord omit - no YARD return type given, using untyped
+ # Writes the extended timestamp information field for local headers.
+ #
+ # The spec defines 2 different formats - the one for the local file header
+ # can also accommodate the atime and ctime, whereas the one for the central
+ # directory can only take the mtime - and refers the reader to the local
+ # header extra to obtain the remaining times
+ def timestamp_extra_for_local_file_header: (untyped mtime) -> untyped
+
+ # Writes the Zip64 extra field for the central directory header. It differs from the extra used in the local file header because it
+ # also contains the location of the local file header in the ZIP as an 8-byte int.
+ #
+ # _@param_ `compressed_size` — The size of the compressed (or stored) data - how much space it uses in the ZIP
+ #
+ # _@param_ `uncompressed_size` — The size of the file once extracted
+ #
+ # _@param_ `local_file_header_location` — Byte offset of the start of the local file header from the beginning of the ZIP archive
+ def zip_64_extra_for_central_directory_file_header: (compressed_size: Integer, uncompressed_size: Integer, local_file_header_location: Integer) -> String
+
+ # sord omit - no YARD type given for "t", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def to_binary_dos_time: (untyped t) -> untyped
+
+ # sord omit - no YARD type given for "t", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def to_binary_dos_date: (untyped t) -> untyped
+
+ # sord omit - no YARD type given for "values_to_packspecs", using untyped
+ # sord omit - no YARD return type given, using untyped
+ # Unzips a given array of tuples of "numeric value, pack specifier" and then packs all the odd
+ # values using specifiers from all the even values. It is harder to explain than to show:
+ #
+ # pack_array([1, 'V', 2, 'v', 148, 'v']) #=> "\x01\x00\x00\x00\x02\x00\x94\x00"
+ #
+ # will do the following two transforms:
+ #
+ # [1, 'V', 2, 'v', 148, 'v'] -> [1,2,148], ['V','v','v'] -> [1,2,148].pack('Vvv') -> "\x01\x00\x00\x00\x02\x00\x94\x00"
+ #
+ # This might seem like a "clever optimisation" but the issue is that `pack` needs an array allocated per call, and
+ # we output very verbosely - value-by-value. This might be quite a few array allocs. Using something like this
+ # helps us save on those array allocations.
+ def pack_array: (untyped values_to_packspecs) -> untyped
+
+ # sord omit - no YARD type given for "unix_permissions_int", using untyped
+ # sord omit - no YARD type given for "file_type_int", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def generate_external_attrs: (untyped unix_permissions_int, untyped file_type_int) -> untyped
+ end
+
+ # Acts as a converter between callers which send data to the `#<<` method (such as all the ZipKit
+ # writer methods, which push onto anything), and a given block. Every time `#<<` gets called on the BlockWrite,
+ # the block given to the constructor will be called with the same argument. ZipKit uses this object
+ # when integrating with Rack and in the OutputEnumerator. Normally you wouldn't need to use it manually but
+ # you always can. BlockWrite will also ensure the binary string encoding is forced onto any string
+ # that passes through it.
+ #
+ # For example, you can create a Rack response body like so:
+ #
+ # class MyRackResponse
+ # def each
+ # writer = ZipKit::BlockWrite.new {|chunk| yield(chunk) }
+ # writer << "Hello" << "world" << "!"
+ # end
+ # end
+ # [200, {}, MyRackResponse.new]
+ class BlockWrite
+ include ZipKit::WriteShovel
+
+ # Creates a new BlockWrite.
+ #
+ # _@param_ `block` — The block that will be called when this object receives the `<<` message
+ def initialize: () ?{ (String bytes) -> void } -> void
+
+ # Sends a string through to the block stored in the BlockWrite.
+ #
+ # _@param_ `buf` — the string to write. Note that a zero-length String will not be forwarded to the block, as it has special meaning when used with chunked encoding (it indicates the end of the stream).
+ def <<: (String buf) -> ZipKit::BlockWrite
+
+ # Writes the given data to the output stream. Allows the object to be used as
+ # a target for `IO.copy_stream(from, to)`
+ #
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
+ #
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
+ def write: (String bytes) -> Integer
+ end
+
+ # A very barebones ZIP file reader. Is made for maximum interoperability, but at the same
+ # time we attempt to keep it somewhat concise.
+ #
+ # ## REALLY CRAZY IMPORTANT STUFF: SECURITY IMPLICATIONS
+ #
+ # Please **BEWARE** - using this is a security risk if you are reading files that have been
+ # supplied by users. This implementation has _not_ been formally verified for correctness. As
+ # ZIP files contain relative offsets in lots of places it might be possible for a maliciously
+ # crafted ZIP file to put the decode procedure in an endless loop, make it attempt huge reads
+ # from the input file and so on. Additionally, the reader module for deflated data has
+ # no support for ZIP bomb protection. So either limit the `FileReader` usage to the files you
+ # trust, or triple-check all the inputs upfront. Patches to make this reader more secure
+ # are welcome of course.
+ #
+ # ## Usage
+ #
+ # File.open('zipfile.zip', 'rb') do |f|
+ # entries = ZipKit::FileReader.read_zip_structure(io: f)
+ # entries.each do |e|
+ # File.open(e.filename, 'wb') do |extracted_file|
+ # ex = e.extractor_from(f)
+ # extracted_file << ex.extract(1024 * 1024) until ex.eof?
+ # end
+ # end
+ # end
+ #
+ # ## Supported features
+ #
+ # * Deflate and stored storage modes
+ # * Zip64 (extra fields and offsets)
+ # * Data descriptors
+ #
+ # ## Unsupported features
+ #
+ # * Archives split over multiple disks/files
+ # * Any ZIP encryption
+ # * EFS language flag and InfoZIP filename extra field
+ # * CRC32 checksums are _not_ verified
+ #
+ # ## Mode of operation
+ #
+ # By default, `FileReader` _ignores_ the data in local file headers (as it is
+ # often unreliable). It reads the ZIP file "from the tail", finds the
+ # end-of-central-directory signatures, then reads the central directory entries,
+ # reconstitutes the entries with their filenames, attributes and so on, and
+ # sets these entries up with the absolute _offsets_ into the source file/IO object.
+ # These offsets can then be used to extract the actual compressed data of
+ # the files and to expand it.
+ #
+ # ## Recovering damaged or incomplete ZIP files
+ #
+ # If the ZIP file you are trying to read does not contain the central directory
+ # records, `read_zip_structure` will not work, since it starts the read process
+ # from the EOCD marker at the end of the central directory and then crawls
+ # "back" in the IO to figure out the rest. You can explicitly apply a fallback
+ # for reading the archive "straight ahead" instead using `read_zip_straight_ahead`
+ # - the method will instead scan your IO from the very start, skipping over
+ # the actual entry data. This is less efficient than central directory parsing since
+ # it involves a much larger number of reads (1 read from the IO per entry in the ZIP).
+ class FileReader
+ ReadError: untyped
+ UnsupportedFeature: untyped
+ InvalidStructure: untyped
+ LocalHeaderPending: untyped
+ MissingEOCD: untyped
+ C_UINT4: untyped
+ C_UINT2: untyped
+ C_UINT8: untyped
+ MAX_END_OF_CENTRAL_DIRECTORY_RECORD_SIZE: untyped
+ MAX_LOCAL_HEADER_SIZE: untyped
+ SIZE_OF_USABLE_EOCD_RECORD: untyped
1126
+
1127
+ # sord duck - #tell looks like a duck type, replacing with untyped
1128
+ # sord duck - #seek looks like a duck type, replacing with untyped
1129
+ # sord duck - #read looks like a duck type with an equivalent RBS interface, replacing with _Reader
1130
+ # sord duck - #size looks like a duck type, replacing with untyped
1131
+ # Parse an IO handle to a ZIP archive into an array of Entry objects.
1132
+ #
1133
+ # _@param_ `io` — an IO-ish object
1134
+ #
1135
+ # _@param_ `read_local_headers` — whether the local headers must be read upfront. When reading a locally available ZIP file this option will not have much use since the small reads from the file handle are not going to be that important. However, if you are using remote reads to decipher a ZIP file located on an HTTP server, the operation _must_ perform an HTTP request for _each entry in the ZIP file_ to determine where the actual file data starts. This, for a ZIP archive of 1000 files, will incur 1000 extra HTTP requests - which you might not want to perform upfront, or - at least - not want to perform _at once_. When the option is set to `false`, you will be getting instances of `LazyEntry` instead of `Entry`. Those objects will raise an exception when you attempt to access their compressed data offset in the ZIP (since the reads have not been performed yet). As a rule, this option can be left in it's default setting (`true`) unless you want to _only_ read the central directory, or you need to limit the number of HTTP requests.
1136
+ #
1137
+ # _@return_ — an array of entries within the ZIP being parsed
1138
+ def read_zip_structure: (io: (untyped | _Reader), ?read_local_headers: bool) -> ::Array[ZipEntry]
1139
+
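+ # A minimal usage sketch for the above (illustrative only, not part of the
+ # generated signatures; the file name "archive.zip" is an assumption):
+ #
+ #   require "zip_kit"
+ #
+ #   entries = File.open("archive.zip", "rb") do |file|
+ #     ZipKit::FileReader.read_zip_structure(io: file)
+ #   end
+ #   entries.each do |entry|
+ #     puts "#{entry.filename} (#{entry.uncompressed_size} bytes)"
+ #   end
+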
+ # sord duck - #tell looks like a duck type, replacing with untyped
+ # sord duck - #read looks like a duck type with an equivalent RBS interface, replacing with _Reader
+ # sord duck - #seek looks like a duck type, replacing with untyped
+ # sord omit - no YARD return type given, using untyped
+ # Sometimes you might encounter truncated ZIP files, which do not contain
+ # any central directory whatsoever - or where the central directory is
+ # truncated. In that case, employing the technique of reading the ZIP
+ # "from the end" is impossible, and the only recourse is reading each
+ # local file header in succession. If the entries in such a ZIP use data
+ # descriptors, you would need to scan after the entry until you encounter
+ # the data descriptor signature - and that might be unreliable at best.
+ # Therefore, this reading technique does not support data descriptors.
+ # It can however recover the entries you still can read if these entries
+ # contain all the necessary information about the contained file.
+ #
+ # _@param_ `io` — the IO-ish object to read the local file headers from
+ #
+ # _@return_ — an array of entries (`Array<ZipEntry>`) that could be recovered before hitting EOF
+ def read_zip_straight_ahead: (io: (untyped | _Reader)) -> untyped
+
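+ # A hedged sketch of recovering a truncated archive (illustrative; assumes a
+ # damaged file "truncated.zip" whose entries do not use data descriptors):
+ #
+ #   require "zip_kit"
+ #
+ #   recovered_entries = File.open("truncated.zip", "rb") do |file|
+ #     ZipKit::FileReader.read_zip_straight_ahead(io: file)
+ #   end
+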
+ # sord duck - #read looks like a duck type with an equivalent RBS interface, replacing with _Reader
+ # Parse the local header entry and get the offset in the IO at which the
+ # actual compressed data of the file starts within the ZIP.
+ # The method will eager-read the entire local header for the file
+ # (the maximum size the local header may use), starting at the given offset,
+ # and will then compute its size. That size plus the local header offset
+ # given will be the compressed data offset of the entry (read starting at
+ # this offset to get the data).
+ #
+ # _@param_ `io` — an IO-ish object the ZIP file can be read from
+ #
+ # _@return_ — the parsed local header entry and the compressed data offset
+ def read_local_file_header: (io: _Reader) -> ::Array[(ZipEntry | Integer)]
+
+ # sord duck - #seek looks like a duck type, replacing with untyped
+ # sord duck - #read looks like a duck type with an equivalent RBS interface, replacing with _Reader
+ # sord omit - no YARD return type given, using untyped
+ # Get the offset in the IO at which the actual compressed data of the file
+ # starts within the ZIP. The method will eager-read the entire local header
+ # for the file (the maximum size the local header may use), starting at the
+ # given offset, and will then compute its size. That size plus the local
+ # header offset given will be the compressed data offset of the entry
+ # (read starting at this offset to get the data).
+ #
+ # _@param_ `io` — an IO-ish object the ZIP file can be read from
+ #
+ # _@param_ `local_file_header_offset` — absolute offset (0-based) where the local file header is supposed to begin
+ #
+ # _@return_ — absolute offset (0-based, an Integer) of where the compressed data begins for this file within the ZIP
+ def get_compressed_data_offset: (io: (untyped | _Reader), local_file_header_offset: Integer) -> untyped
+
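+ # A sketch of delayed header reading using the two methods above (illustrative;
+ # the method and attribute names are taken from this file, the IO is assumed
+ # to be seekable):
+ #
+ #   reader = ZipKit::FileReader.new
+ #   entries = reader.read_zip_structure(io: file, read_local_headers: false)
+ #   entry = entries.first
+ #   entry.compressed_data_offset = reader.get_compressed_data_offset(io: file,
+ #     local_file_header_offset: entry.local_file_header_offset)
+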
+ # Parse an IO handle to a ZIP archive into an array of Entry objects, reading from the end
+ # of the IO object.
+ #
+ # _@param_ `options` — any options the instance method of the same name accepts
+ #
+ # _@return_ — an array of entries within the ZIP being parsed
+ #
+ # _@see_ `#read_zip_structure`
+ def self.read_zip_structure: (**::Hash[untyped, untyped] options) -> ::Array[ZipEntry]
+
+ # Parse an IO handle to a ZIP archive into an array of Entry objects, reading from the start of
+ # the file and parsing local file headers one-by-one
+ #
+ # _@param_ `options` — any options the instance method of the same name accepts
+ #
+ # _@return_ — an array of entries within the ZIP being parsed
+ #
+ # _@see_ `#read_zip_straight_ahead`
+ def self.read_zip_straight_ahead: (**::Hash[untyped, untyped] options) -> ::Array[ZipEntry]
+
+ # sord omit - no YARD type given for "entries", using untyped
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def read_local_headers: (untyped entries, untyped io) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def skip_ahead_2: (untyped io) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def skip_ahead_4: (untyped io) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def skip_ahead_8: (untyped io) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD type given for "absolute_pos", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def seek: (untyped io, untyped absolute_pos) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD type given for "signature_magic_number", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def assert_signature: (untyped io, untyped signature_magic_number) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD type given for "n", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def skip_ahead_n: (untyped io, untyped n) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD type given for "n_bytes", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def read_n: (untyped io, untyped n_bytes) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def read_2b: (untyped io) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def read_4b: (untyped io) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def read_8b: (untyped io) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def read_cdir_entry: (untyped io) -> untyped
+
+ # sord omit - no YARD type given for "file_io", using untyped
+ # sord omit - no YARD type given for "zip_file_size", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def get_eocd_offset: (untyped file_io, untyped zip_file_size) -> untyped
+
+ # sord omit - no YARD type given for "of_substring", using untyped
+ # sord omit - no YARD type given for "in_string", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def all_indices_of_substr_in_str: (untyped of_substring, untyped in_string) -> untyped
+
+ # sord omit - no YARD type given for "in_str", using untyped
+ # sord omit - no YARD return type given, using untyped
+ # We have to scan the maximum possible number
+ # of bytes that the EOCD can theoretically occupy including the comment after it,
+ # and we have to find a combination of:
+ # [EOCD signature, <some ZIP metadata>, comment byte size, comment of that size]
+ # at the end. To do so, we first find all indices of the signature in the trailer
+ # string, and then check whether the bytestring starting at the signature and
+ # ending at the end of string satisfies that given pattern.
+ def locate_eocd_signature: (untyped in_str) -> untyped
+
+ # sord omit - no YARD type given for "file_io", using untyped
+ # sord omit - no YARD type given for "eocd_offset", using untyped
+ # sord omit - no YARD return type given, using untyped
+ # Find the Zip64 EOCD locator segment offset. Do this by seeking backwards from the
+ # EOCD record in the archive by fixed offsets
+ # get_zip64_eocd_location is too high. [15.17/15]
+ def get_zip64_eocd_location: (untyped file_io, untyped eocd_offset) -> untyped
+
+ # sord omit - no YARD type given for "io", using untyped
+ # sord omit - no YARD type given for "zip64_end_of_cdir_location", using untyped
+ # sord omit - no YARD return type given, using untyped
+ # num_files_and_central_directory_offset_zip64 is too high. [21.12/15]
+ def num_files_and_central_directory_offset_zip64: (untyped io, untyped zip64_end_of_cdir_location) -> untyped
+
+ # sord omit - no YARD type given for "file_io", using untyped
+ # sord omit - no YARD type given for "eocd_offset", using untyped
+ # sord omit - no YARD return type given, using untyped
+ # Start of the central directory offset
+ def num_files_and_central_directory_offset: (untyped file_io, untyped eocd_offset) -> untyped
+
+ # sord omit - no YARD return type given, using untyped
+ # Is provided as a stub to be overridden in a subclass if you need it. Will report
+ # during various stages of reading. The log message is contained in the return value
+ # of `yield` in the method (the log messages are lazy-evaluated).
+ def log: () -> untyped
+
+ # sord omit - no YARD type given for "extra_fields_str", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def parse_out_extra_fields: (untyped extra_fields_str) -> untyped
+
+ # Rubocop: convention: Missing top-level class documentation comment.
+ class StoredReader
+ # sord omit - no YARD type given for "from_io", using untyped
+ # sord omit - no YARD type given for "compressed_data_size", using untyped
+ def initialize: (untyped from_io, untyped compressed_data_size) -> void
+
+ # sord omit - no YARD type given for "n_bytes", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def extract: (?untyped n_bytes) -> untyped
+
+ def eof?: () -> bool
+ end
+
+ # Rubocop: convention: Missing top-level class documentation comment.
+ class InflatingReader
+ # sord omit - no YARD type given for "from_io", using untyped
+ # sord omit - no YARD type given for "compressed_data_size", using untyped
+ def initialize: (untyped from_io, untyped compressed_data_size) -> void
+
+ # sord omit - no YARD type given for "n_bytes", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def extract: (?untyped n_bytes) -> untyped
+
+ def eof?: () -> bool
+ end
+
+ # Represents a file within the ZIP archive being read. This is different from
+ # the Entry object used in Streamer for ZIP writing, since during writing more
+ # data can be kept in memory for immediate use.
+ class ZipEntry
+ # sord omit - no YARD type given for "from_io", using untyped
+ # Returns a reader for the actual compressed data of the entry.
+ #
+ #   reader = entry.extractor_from(source_file)
+ #   outfile << reader.extract(512 * 1024) until reader.eof?
+ #
+ # _@return_ — the reader for the data
+ def extractor_from: (untyped from_io) -> (StoredReader | InflatingReader)
+
+ # _@return_ — at what offset you should start reading
+ # for the compressed data in your original IO object
+ def compressed_data_offset: () -> Integer
+
+ # Tells whether the compressed data offset is already known for this entry
+ def known_offset?: () -> bool
+
+ # Tells whether the entry uses a data descriptor (this is defined
+ # by bit 3 in the GP flags).
+ def uses_data_descriptor?: () -> bool
+
+ # sord infer - inferred type of parameter "offset" as Integer using getter's return type
+ # sord omit - no YARD return type given, using untyped
+ # Sets the offset at which the compressed data for this file starts in the ZIP.
+ # By default, the value will be set by the Reader for you. If you use delayed
+ # reading, you need to set it by using the `get_compressed_data_offset` on the Reader:
+ #
+ #   entry.compressed_data_offset = reader.get_compressed_data_offset(io: file,
+ #     local_file_header_offset: entry.local_file_header_offset)
+ def compressed_data_offset=: (Integer offset) -> untyped
+
+ # _@return_ — bit-packed version signature of the program that made the archive
+ attr_accessor made_by: Integer
+
+ # _@return_ — ZIP version support needed to extract this file
+ attr_accessor version_needed_to_extract: Integer
+
+ # _@return_ — bit-packed general purpose flags
+ attr_accessor gp_flags: Integer
+
+ # _@return_ — Storage mode (0 for stored, 8 for deflate)
+ attr_accessor storage_mode: Integer
+
+ # _@return_ — the bit-packed DOS time
+ attr_accessor dos_time: Integer
+
+ # _@return_ — the bit-packed DOS date
+ attr_accessor dos_date: Integer
+
+ # _@return_ — the CRC32 checksum of this file
+ attr_accessor crc32: Integer
+
+ # _@return_ — size of compressed file data in the ZIP
+ attr_accessor compressed_size: Integer
+
+ # _@return_ — size of the file once uncompressed
+ attr_accessor uncompressed_size: Integer
+
+ # _@return_ — the filename
+ attr_accessor filename: String
+
+ # _@return_ — disk number where this file starts
+ attr_accessor disk_number_start: Integer
+
+ # _@return_ — internal attributes of the file
+ attr_accessor internal_attrs: Integer
+
+ # _@return_ — external attributes of the file
+ attr_accessor external_attrs: Integer
+
+ # _@return_ — at what offset the local file header starts
+ # in your original IO object
+ attr_accessor local_file_header_offset: Integer
+
+ # _@return_ — the file comment
+ attr_accessor comment: String
+ end
+ end
+
+ # Used when you need to supply a destination IO for some
+ # write operations, but want to discard the data (like when
+ # estimating the size of a ZIP)
+ module NullWriter
+ # _@param_ `_` — the data to write
+ def self.<<: (String _) -> self
+ end
+
+ # Allows reading the central directory of a remote ZIP file without
+ # downloading the entire file. The central directory provides the
+ # offsets at which the actual file contents are located. You can then
+ # use the `Range:` HTTP headers to download those entries separately.
+ #
+ # Please read the security warning in `FileReader` _VERY CAREFULLY_
+ # before you use this module.
+ module RemoteUncap
+ # _@param_ `uri` — the HTTP(S) URL to read the ZIP footer from
+ #
+ # _@param_ `reader_class` — which class to use for reading
+ #
+ # _@param_ `options_for_zip_reader` — any additional options to give to {ZipKit::FileReader} when reading files within the remote archive
+ #
+ # _@return_ — metadata about the files within the remote archive
+ def self.files_within_zip_at: (String uri, ?reader_class: Class, **::Hash[untyped, untyped] options_for_zip_reader) -> ::Array[ZipKit::FileReader::ZipEntry]
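+
+ # A hedged sketch (illustrative; the URL is an assumption, and - as noted
+ # above - reading may issue one `Range` HTTP request per read):
+ #
+ #   entries = ZipKit::RemoteUncap.files_within_zip_at("https://example.com/big.zip")
+ #   entries.each { |entry| puts entry.filename }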
+ end
+
+ # A simple stateful class for keeping track of a CRC32 value through multiple writes
+ class StreamCRC32
+ include ZipKit::WriteShovel
+ STRINGS_HAVE_CAPACITY_SUPPORT: untyped
+ CRC_BUF_SIZE: untyped
+
+ # Compute a CRC32 value from an IO object. The object should respond to `read` and `eof?`
+ #
+ # _@param_ `io` — the IO to read the data from
+ #
+ # _@return_ — the computed CRC32 value
+ def self.from_io: (IO io) -> Integer
+
+ # Creates a new streaming CRC32 calculator
+ def initialize: () -> void
+
+ # Append data to the CRC32. Updates the contained CRC32 value in place.
+ #
+ # _@param_ `blob` — the string to compute the CRC32 from
+ def <<: (String blob) -> self
+
+ # Returns the CRC32 value computed so far
+ #
+ # _@return_ — the updated CRC32 value for all the blobs so far
+ def to_i: () -> Integer
+
+ # Appends a known CRC32 value to the current one, and combines the
+ # contained CRC32 value in-place.
+ #
+ # _@param_ `crc32` — the CRC32 value to append
+ #
+ # _@param_ `blob_size` — the size of the data the `crc32` is computed from
+ #
+ # _@return_ — the updated CRC32 value for all the blobs so far
+ def append: (Integer crc32, Integer blob_size) -> Integer
+
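+ # A small usage sketch (illustrative only):
+ #
+ #   require "zlib"
+ #
+ #   crc = ZipKit::StreamCRC32.new
+ #   crc << "first chunk"
+ #   crc << "second chunk"
+ #   crc.to_i # => CRC32 of both chunks combined
+ #
+ #   # Combining with a precomputed value, as `append` above describes:
+ #   crc.append(Zlib.crc32("tail"), "tail".bytesize)
+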
+ # Writes the given data to the output stream. Allows the object to be used as
+ # a target for `IO.copy_stream(from, to)`
+ #
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
+ #
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
+ def write: (String bytes) -> Integer
+ end
+
+ # Some operations (such as CRC32) benefit when they are performed
+ # on larger chunks of data. In certain use cases, it is possible that
+ # the consumer of ZipKit is going to be writing small chunks
+ # in rapid succession, so CRC32 is going to have to perform a lot of
+ # CRC32 combine operations - and this adds up. Since the CRC32 value
+ # is usually not needed until the complete output has completed,
+ # we can buffer at least some amount of data before computing CRC32 over it.
+ # We also use this buffer for output via Rack, where some amount of buffering
+ # helps reduce the number of syscalls made by the webserver. ZipKit performs
+ # lots of very small writes, and some degree of speedup (about 20%) can be achieved
+ # with a buffer of a few KB.
+ #
+ # Note that there is no guarantee that the write buffer is going to flush at or above
+ # the given `buffer_size`, because for writes which exceed the buffer size it will
+ # first `flush` and then write through the oversized chunk, without buffering it. This
+ # helps conserve memory.
+ #
+ # Note also that the WriteBuffer assumes that the object it `<<`-writes into is going
+ # to **consume** in some way the string that it passes in. After the `<<` method returns,
+ # the WriteBuffer will be cleared, and it passes the same String reference on every call
+ # to `<<`. Therefore, if you need to retain the output of the WriteBuffer in, say, an Array,
+ # you might need to `.dup` the `String` it gives you.
+ class WriteBuffer
+ # sord duck - #<< looks like a duck type, replacing with untyped
+ # Creates a new WriteBuffer buffering writes into a given writable object
+ #
+ # _@param_ `writable` — An object that responds to `#<<` with a String as argument
+ #
+ # _@param_ `buffer_size` — How many bytes to buffer
+ def initialize: (untyped writable, Integer buffer_size) -> void
+
+ # Appends the given data to the write buffer, and flushes the buffer into the
+ # writable if the buffer size exceeds the `buffer_size` given at initialization
+ #
+ # _@param_ `string` — data to be written
+ #
+ # _@return_ — self
+ def <<: (String string) -> untyped
+
+ # Explicitly flushes the buffer if it contains anything
+ #
+ # _@return_ — self
+ def flush: () -> untyped
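+
+ # A usage sketch (illustrative; the 64 KiB buffer size is an assumption):
+ #
+ #   out = StringIO.new
+ #   buffer = ZipKit::WriteBuffer.new(out, 64 * 1024)
+ #   buffer << "small write" # stays in the buffer until it fills up
+ #   buffer.flush # force out anything still buffered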
+ end
+
+ # A lot of objects in ZipKit accept bytes that may be sent
+ # to the `<<` operator (the "shovel" operator). This is in the tradition
+ # of the late Jim Weirich and his Builder gem. In [this presentation](https://youtu.be/1BVFlvRPZVM?t=2403)
+ # he justifies this design very eloquently. In ZipKit we follow this example.
+ # However, there are a number of methods in Ruby - including the standard library -
+ # which expect your object to implement the `write` method instead. Since the `write`
+ # method can be expressed in terms of the `<<` method, why not allow all ZipKit
+ # "IO-ish" things to also respond to `write`? This is what this module does.
+ # Jim would be proud. We miss you, Jim.
+ module WriteShovel
+ # Writes the given data to the output stream. Allows the object to be used as
+ # a target for `IO.copy_stream(from, to)`
+ #
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
+ #
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
+ def write: (String bytes) -> Integer
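+
+ # For example (a sketch; any object including this module should work as the
+ # copy destination, StreamCRC32 is used here since it includes WriteShovel):
+ #
+ #   crc = ZipKit::StreamCRC32.new
+ #   File.open("data.bin", "rb") do |f|
+ #     IO.copy_stream(f, crc) # works because WriteShovel provides #write
+ #   end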
+ end
+
+ module ZlibCleanup
+ # sord warn - "Zlib::Deflater?" does not appear to be a type
+ # This method is used to flush and close the native zlib handles
+ # should an archiving routine encounter an error. This is necessary,
+ # since otherwise unclosed deflaters may hang around in memory
+ # indefinitely, creating leaks.
+ #
+ # _@param_ `deflater` — the deflater to safely finish and close
+ #
+ # _@return_ — void
+ def safely_dispose_of_incomplete_deflater: (SORD_ERROR_ZlibDeflater deflater) -> untyped
+ end
+
+ # Permits Deflate compression in independent blocks. The workflow is as follows:
+ #
+ # * Run every block to compress through deflate_chunk, remove the header,
+ # footer and adler32 from the result
+ # * Write out the compressed block bodies (the ones deflate_chunk returns)
+ # to your output, in sequence
+ # * Write out the footer (\x03\x00)
+ #
+ # The resulting stream is guaranteed to be handled properly by all zip
+ # unarchiving tools, including the BOMArchiveHelper/ArchiveUtility on OSX.
+ #
+ # You could also build a compressor for Rubyzip using this module quite easily,
+ # even though this is outside the scope of the library.
+ #
+ # When you deflate the chunks separately, you need to write the end marker
+ # yourself (using `write_terminator`).
+ # If you just want to deflate a large IO's contents, use
+ # `deflate_in_blocks_and_terminate` to have the end marker written out for you.
+ #
+ # Basic usage to compress a file in parts:
+ #
+ #   source_file = File.open('12_gigs.bin', 'rb')
+ #   compressed = Tempfile.new
+ #   # Will not compress everything in memory, but do it per chunk to spare
+ #   # memory. `compressed` will be written to at the end of each chunk.
+ #   ZipKit::BlockDeflate.deflate_in_blocks_and_terminate(source_file, compressed)
+ #
+ # You can also do the same to parts that you will later concatenate together
+ # elsewhere, in that case you need to skip the end marker:
+ #
+ #   compressed = Tempfile.new
+ #   ZipKit::BlockDeflate.deflate_in_blocks(File.open('part1.bin', 'rb'), compressed)
+ #   ZipKit::BlockDeflate.deflate_in_blocks(File.open('part2.bin', 'rb'), compressed)
+ #   ZipKit::BlockDeflate.deflate_in_blocks(File.open('partN.bin', 'rb'), compressed)
+ #   ZipKit::BlockDeflate.write_terminator(compressed)
+ #
+ # You can also elect to just compress strings in memory (to splice them later):
+ #
+ #   compressed_string = ZipKit::BlockDeflate.deflate_chunk(big_string)
+ class BlockDeflate
+ DEFAULT_BLOCKSIZE: untyped
+ END_MARKER: untyped
+ VALID_COMPRESSIONS: untyped
+
+ # Write the end marker (\x03\x00) to the given IO.
+ #
+ # `output_io` can also be a {ZipKit::Streamer} to expedite ops.
+ #
+ # _@param_ `output_io` — the stream to write to (should respond to `:<<`)
+ #
+ # _@return_ — number of bytes written to `output_io`
+ def self.write_terminator: (IO output_io) -> Integer
+
+ # Compress a given binary string and flush the deflate stream at byte boundary.
+ # The returned string can be spliced into another deflate stream.
+ #
+ # _@param_ `bytes` — Bytes to compress
+ #
+ # _@param_ `level` — Zlib compression level (defaults to `Zlib::DEFAULT_COMPRESSION`)
+ #
+ # _@return_ — compressed bytes
+ def self.deflate_chunk: (String bytes, ?level: Integer) -> String
+
+ # Compress the contents of input_io into output_io, in blocks
+ # of block_size. Aligns the parts so that they can be concatenated later.
+ # Writes the deflate end marker (\x03\x00) into `output_io` as the final step, so
+ # the contents of `output_io` can be spliced verbatim into a ZIP archive.
+ #
+ # Once the write completes, no more parts for concatenation should be written to
+ # the same stream.
+ #
+ # `output_io` can also be a {ZipKit::Streamer} to expedite ops.
+ #
+ # _@param_ `input_io` — the stream to read from (should respond to `:read`)
+ #
+ # _@param_ `output_io` — the stream to write to (should respond to `:<<`)
+ #
+ # _@param_ `level` — Zlib compression level (defaults to `Zlib::DEFAULT_COMPRESSION`)
+ #
+ # _@param_ `block_size` — The block size to use (defaults to `DEFAULT_BLOCKSIZE`)
+ #
+ # _@return_ — number of bytes written to `output_io`
+ def self.deflate_in_blocks_and_terminate: (
+ IO input_io,
+ IO output_io,
+ ?level: Integer,
+ ?block_size: Integer
+ ) -> Integer
+
+ # Compress the contents of input_io into output_io, in blocks
+ # of block_size. Aligns the parts so that they can be concatenated later.
+ # Will not write the deflate end marker (\x03\x00), so more parts can be written
+ # later and successfully read back, provided the end marker will be written at the end.
+
+ # `output_io` can also be a {ZipKit::Streamer} to expedite ops.
+ #
+ # _@param_ `input_io` — the stream to read from (should respond to `:read`)
+ #
+ # _@param_ `output_io` — the stream to write to (should respond to `:<<`)
+ #
+ # _@param_ `level` — Zlib compression level (defaults to `Zlib::DEFAULT_COMPRESSION`)
+ #
+ # _@param_ `block_size` — The block size to use (defaults to `DEFAULT_BLOCKSIZE`)
+ #
+ # _@return_ — number of bytes written to `output_io`
+ def self.deflate_in_blocks: (
+ IO input_io,
+ IO output_io,
+ ?level: Integer,
+ ?block_size: Integer
+ ) -> Integer
+ end
+
+ # Helps to estimate archive sizes
+ class SizeEstimator
+ # Creates a new estimator with a Streamer object. Normally you should use
+ # `estimate` instead and not use this method directly.
+ #
+ # _@param_ `streamer`
+ def initialize: (ZipKit::Streamer streamer) -> void
+
+ # Performs the estimate using fake archiving. It needs to know the sizes of the
+ # entries upfront. Usage:
+ #
+ #   expected_zip_size = SizeEstimator.estimate do |estimator|
+ #     estimator.add_stored_entry(filename: "file.doc", size: 898291)
+ #     estimator.add_deflated_entry(filename: "family.tif",
+ #       uncompressed_size: 89281911, compressed_size: 121908)
+ #   end
+ #
+ # _@param_ `kwargs_for_streamer_new` — Any options to pass to Streamer, see {Streamer#initialize}
+ #
+ # _@return_ — the size of the resulting archive, in bytes
+ def self.estimate: (**untyped kwargs_for_streamer_new) ?{ (SizeEstimator estimator) -> void } -> Integer
+
+ # Add a fake entry to the archive, to see how big it is going to be in the end.
+ #
+ # _@param_ `filename` — the name of the file (filenames are variable-width in the ZIP)
+ #
+ # _@param_ `size` — size of the uncompressed entry
+ #
+ # _@param_ `use_data_descriptor` — whether there is going to be a data descriptor written after the entry body, to specify size. You must enable this if you are going to be using {Streamer#write_stored_file} as otherwise your estimated size is not going to be accurate
+ #
+ # _@return_ — self
+ def add_stored_entry: (filename: String, size: Integer, ?use_data_descriptor: bool) -> untyped
+
+ # Add a fake entry to the archive, to see how big it is going to be in the end.
+ #
+ # _@param_ `filename` — the name of the file (filenames are variable-width in the ZIP)
+ #
+ # _@param_ `uncompressed_size` — size of the uncompressed entry
+ #
+ # _@param_ `compressed_size` — size of the compressed entry
+ #
+ # _@param_ `use_data_descriptor` — whether there is going to be a data descriptor written after the entry body, to specify size. You must enable this if you are going to be using {Streamer#write_deflated_file} as otherwise your estimated size is not going to be accurate
+ #
+ # _@return_ — self
+ def add_deflated_entry: (
+ filename: String,
+ uncompressed_size: Integer,
+ compressed_size: Integer,
+ ?use_data_descriptor: bool
+ ) -> untyped
+
+ # Add an empty directory to the archive.
+ #
+ # _@param_ `dirname` — the name of the directory
+ #
+ # _@return_ — self
+ def add_empty_directory_entry: (dirname: String) -> untyped
+ end
+
+ # A tiny wrapper over any object that supports :<<.
+ # Adds :tell and :advance_position_by. This is needed for write destinations
+ # which do not respond to `#pos` or `#tell`. A lot of ZIP archive format parts
+ # include "offsets in archive" - a byte offset from the start of file. Keeping
+ # track of this value is what this object will do. It also allows "advancing"
+ # this value if data gets written using a bypass (such as `IO#sendfile`)
+ class WriteAndTell
+ include ZipKit::WriteShovel
+
+ # sord omit - no YARD type given for "io", using untyped
+ def initialize: (untyped io) -> void
+
+ # sord omit - no YARD type given for "bytes", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def <<: (untyped bytes) -> untyped
+
+ # sord omit - no YARD type given for "num_bytes", using untyped
+ # sord omit - no YARD return type given, using untyped
+ def advance_position_by: (untyped num_bytes) -> untyped
+
+ # sord omit - no YARD return type given, using untyped
+ def tell: () -> untyped
+
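+ # A small sketch of the position tracking described above (illustrative only):
+ #
+ #   out = ZipKit::WriteAndTell.new(StringIO.new)
+ #   out << "abc"
+ #   out.tell # => 3
+ #   out.advance_position_by(512) # e.g. after an out-of-band bypass write
+ #   out.tell # => 515
+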
+ # Writes the given data to the output stream. Allows the object to be used as
+ # a target for `IO.copy_stream(from, to)`
+ #
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
+ #
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
+ def write: (String bytes) -> Integer
+ end
+
+ # Should be included into a Rails controller for easy ZIP output from any action.
+ module RailsStreaming
+ # Opens a {ZipKit::Streamer} and yields it to the caller. The output of the streamer
+ # will be sent through to the HTTP response body as it gets produced.
+ #
+ # Note that there is an important difference in how this method works, depending on whether
+ # you use it in a controller which includes `ActionController::Live` vs. one that does not.
+ # With a standard `ActionController` this method will assign a response body, but streaming
+ # will begin when your action method returns. With `ActionController::Live` the streaming
+ # will begin immediately, before the method returns. In all other aspects the method should
+ # stream correctly in both types of controllers.
+ #
+ # If you encounter buffering (streaming does not start for a very long time) you probably
+ # have a piece of Rack middleware in your stack which buffers. Known offenders are `Rack::ContentLength`,
+ # `Rack::MiniProfiler` and `Rack::ETag`. ZipKit will try to work around these but it is not
+ # always possible. If you encounter buffering, examine your middleware stack and try to suss
+ # out whether any middleware might be buffering. You can also try setting `use_chunked_transfer_encoding`
+ # to `true` - this is not recommended but sometimes necessary, for example to bypass `Rack::ContentLength`.
+ #
+ # _@param_ `filename` — name of the file for the Content-Disposition header
+ #
+ # _@param_ `type` — the content type (MIME type) of the archive being output
+ #
+ # _@param_ `use_chunked_transfer_encoding` — whether to forcibly encode output as chunked. Normally you should not need this.
+ #
+ # _@param_ `output_enumerator_options` — options that will be passed to the OutputEnumerator - these include options for the Streamer. See {ZipKit::OutputEnumerator#initialize} for the full list of options.
+ #
+ # _@return_ — always returns true
+ def zip_kit_stream: (
+ ?filename: String,
+ ?_type: String,
+ ?use_chunked_transfer_encoding: bool,
+ **::Hash[untyped, untyped] output_enumerator_options
+ ) ?{ (ZipKit::Streamer zip) -> void } -> bool
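+
+ # A hedged controller sketch (illustrative; the controller and file names are
+ # assumptions, `write_file` is the Streamer method for writing entries):
+ #
+ #   class ReportsController < ApplicationController
+ #     include ZipKit::RailsStreaming
+ #
+ #     def download
+ #       zip_kit_stream(filename: "report.zip") do |zip|
+ #         zip.write_file("report.txt") { |sink| sink << "Hello!" }
+ #       end
+ #     end
+ #   end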
+ end
+
+ # The output enumerator makes it possible to "pull" from a ZipKit streamer
+ # object instead of having it "push" writes to you. It will "stash" the block which
+ # writes the ZIP archive through the streamer, and when you call `each` on the Enumerator
+ # it will yield you the bytes the block writes. Since it is an enumerator you can
+ # use `next` to take chunks written by the ZipKit streamer one by one. It can be very
+ # convenient when you need to segment your ZIP output into bigger chunks for, say,
+ # uploading them to a cloud storage provider such as S3.
+ #
+ # Another use of the `OutputEnumerator` is as a Rack response body - since a Rack
+ # response body object must support `#each` yielding successive binary strings.
+ # Which is exactly what `OutputEnumerator` does.
+ #
+ # The enumerator can provide you some more conveniences for HTTP output - correct streaming
+ # headers and a body with chunked transfer encoding.
+ #
+ #   iterable_zip_body = ZipKit::OutputEnumerator.new do |streamer|
+ #     streamer.write_file('big.csv') do |sink|
+ #       CSV(sink) do |csv_writer|
+ #         csv_writer << Person.column_names
+ #         Person.all.find_each do |person|
+ #           csv_writer << person.attributes.values
+ #         end
+ #       end
+ #     end
+ #   end
+ #
+ # You can grab the headers one usually needs for streaming from `#streaming_http_headers`:
+ #
+ #   [200, iterable_zip_body.streaming_http_headers, iterable_zip_body]
+ #
+ # to bypass things like `Rack::ETag` and the nginx buffering.
+ class OutputEnumerator
+ DEFAULT_WRITE_BUFFER_SIZE: untyped
+
+ # Creates a new OutputEnumerator enumerator. The enumerator can be read from using `each`,
+ # and the creation of the ZIP is in lockstep with the caller calling `each` on the returned
+ # output enumerator object. This can be used when the calling program wants to stream the
+ # output of the ZIP archive and throttle that output, or split it into chunks, or use it
+ # as a generator.
+ #
+ # For example:
+ #
+ #   # The block given to {output_enum} won't be executed immediately - rather it
+ #   # will only start to execute when the caller starts to read from the output
+ #   # by calling `each`
+ #   body = ::ZipKit::OutputEnumerator.new(writer: CustomWriter) do |streamer|
+ #     streamer.add_stored_entry(filename: 'large.tif', size: 1289894, crc32: 198210)
+ #     streamer << large_file.read(1024 * 1024) until large_file.eof?
+ #     ...
+ #   end
+ #
+ #   body.each do |bin_string|
+ #     # Send the output somewhere, buffer it in a file etc.
+ #     # The block passed into `initialize` will only start executing once `#each`
+ #     # is called
+ #     ...
+ #   end
+ #
+ # _@param_ `streamer_options` — options for Streamer, see {ZipKit::Streamer.new}
+ #
+ # _@param_ `write_buffer_size` — By default all ZipKit writes are unbuffered. For output to sockets it is beneficial to bulkify those writes so that they are roughly sized to a socket buffer chunk. This object will bulkify writes for you in this way (so `each` will yield not on every call to `<<` from the Streamer but at block size boundaries or greater). Set the parameter to 0 for unbuffered writes.
+ #
+ # _@param_ `blk` — a block that will receive the Streamer object when executing. The block will not be executed immediately but only once `each` is called on the OutputEnumerator
+ def initialize: (?write_buffer_size: Integer, **::Hash[untyped, untyped] streamer_options) -> void
+
+ # sord omit - no YARD return type given, using untyped
+ # Executes the block given to the constructor with a {ZipKit::Streamer}
+ # and passes each written chunk to the block given to the method. This allows one
+ # to "take" output of the ZIP piecewise. If called without a block will return an Enumerator
+ # that you can pull data from using `next`.
+ #
+ # **NOTE** Because the `WriteBuffer` inside this object can reuse the buffer, it is important
+ # that the `String` that is yielded gets consumed eagerly (written byte-by-byte somewhere, or `#dup`-ed),
+ # since the write buffer will clear it after your block returns. If you expand this Enumerator
+ # eagerly into an Array you might notice that a lot of the segments of your ZIP output are
+ # empty - this means that you need to duplicate them.
+ def each: () -> untyped
+
+ # Returns a Hash of HTTP response headers you are likely to need to have your response stream correctly.
+ # This is on the {ZipKit::OutputEnumerator} class since those headers are common, independent of the
+ # particular response body getting served. You might want to override the headers with your particular
+ # ones - for example, specific content types are needed for files which are, technically, ZIP files
+ # but are of a file format built "on top" of ZIPs - such as ODTs, [pkpass files](https://developer.apple.com/documentation/walletpasses/building_a_pass)
+ # and ePubs.
+ #
+ # More value, however, is in the "technical" headers this method will provide. It will take the following steps to make sure streaming works correctly.
+ #
+ # * `Last-Modified` will be set to "now" so that the response is considered "fresh" by `Rack::ETag`. This is done so that `Rack::ETag` won't try to
+ # calculate a lax ETag value and thus won't start buffering your response out of nowhere
+ # * `Content-Encoding` will be set to `identity`. This is so that proxies or the Rack middleware that applies compression to the response (like gzip)
+ # is not going to try to compress your response. It also tells the receiving browsers (or downstream proxies) that they should not attempt to
+ # open or uncompress the response before saving it or passing it onwards.
+ # * `X-Accel-Buffering` will be set to `no` - this tells both nginx and the Google Cloud load balancer that the response should not be buffered
+ #
+ # These header values are known to get as close as possible to guaranteeing streaming on most environments where Ruby web applications may be hosted.
+ def self.streaming_http_headers: () -> ::Hash[untyped, untyped]
+
+ # Returns a Hash of HTTP response headers for this particular response. This used to contain "Content-Length" for
+ # presized responses, but is now effectively a no-op.
+ #
+ # _@see_ `[ZipKit::OutputEnumerator.streaming_http_headers]`
+ def streaming_http_headers: () -> ::Hash[untyped, untyped]
+
+ # Returns a tuple of `headers, body` - headers are a `Hash` and the body is
+ # an object that can be used as a Rack response body. This method used to accept arguments
+ # but will now just ignore them.
+ def to_headers_and_rack_response_body: () -> ::Array[untyped]
+ end
+
+ # A body wrapper that emits chunked responses, creating a valid
+ # "Transfer-Encoding: chunked" HTTP response body. This is copied from Rack::Chunked::Body,
+ # because Rack is not going to include that class after version 3.x.
+ # Rails has a substitute class for this inside ActionController::Streaming,
+ # but that module is a private constant in the Rails codebase, and is thus
+ # considered "private" from the Rails standpoint. It is not that much code to
+ # carry, so we copy it into our code.
+ class RackChunkedBody
+ TERM: untyped
+ TAIL: untyped
+
+ # sord duck - #each looks like a duck type with an equivalent RBS interface, replacing with _Each[untyped]
+ # _@param_ `body` — the enumerable that yields bytes, usually an `OutputEnumerator`
+ def initialize: (_Each[untyped] body) -> void
+
+ # sord omit - no YARD return type given, using untyped
+ # For each string yielded by the response body, yield
+ # the element in chunked encoding - and finish off with a terminator
+ def each: () -> untyped
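+
+ # A sketch of serving a ZIP with explicit chunked encoding (illustrative;
+ # normally `zip_kit_stream` or the plain OutputEnumerator is preferable):
+ #
+ #   enum = ZipKit::OutputEnumerator.new do |zip|
+ #     zip.write_file("a.txt") { |sink| sink << "A" }
+ #   end
+ #   headers = enum.streaming_http_headers.merge("Transfer-Encoding" => "chunked")
+ #   [200, headers, ZipKit::RackChunkedBody.new(enum)]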
+ end
+
+ module UniquifyFilename
+ # sord duck - #include? looks like a duck type, replacing with untyped
+ # Makes a given filename unique by appending a (n) suffix
+ # just before the filename extension. So "file.txt" gets
+ # transformed into "file (1).txt". The transformation is applied
+ # repeatedly as long as the generated filename is present
+ # in the `while_included_in` object
+ #
+ # _@param_ `path` — the path to make unique
+ #
+ # _@param_ `while_included_in` — an object that stores the list of already used paths
+ #
+ # _@return_ — the path as is, or with the suffix required to make it unique
+ def self.call: (String path, untyped while_included_in) -> String
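+
+ # For example (a sketch; a Set is one object that responds to `include?`):
+ #
+ #   require "set"
+ #
+ #   taken = Set.new(["file.txt"])
+ #   ZipKit::UniquifyFilename.call("file.txt", taken) # => "file (1).txt"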
+ end
+
+ # Contains a file handle which can be closed once the response finishes sending.
+ # It supports `to_path` so that `Rack::Sendfile` can intercept it.
+ # This class is deprecated and is going to be removed in zip_kit 7.x
+ # @api deprecated
+ class RackTempfileBody
+ TEMPFILE_NAME_PREFIX: untyped
+
+ # sord omit - no YARD type given for "env", using untyped
+ # sord duck - #each looks like a duck type with an equivalent RBS interface, replacing with _Each[untyped]
+ # _@param_ `body` — the enumerable that yields bytes, usually an `OutputEnumerator`. The `body` will be read in full immediately and closed.
+ def initialize: (untyped env, _Each[untyped] body) -> void
+
+ # Returns the size of the contained `Tempfile` so that a correct
+ # Content-Length header can be set
+ def size: () -> Integer
+
+ # Returns the path to the `Tempfile`, so that Rack::Sendfile can send this response
+ # using the downstream webserver
+ def to_path: () -> String
+
+ # Stream the file's contents if `Rack::Sendfile` isn't present.
+ def each: () -> void
+
+ # sord omit - no YARD return type given, using untyped
+ def flush: () -> untyped
+
+ # sord omit - no YARD type given for :tempfile, using untyped
+ attr_reader tempfile: untyped
+ end
+ end