zip_kit 6.2.0 → 6.2.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 14382b872a41cb63ba80b664d0b42135b8105aabbf42f2813716d3b98c1f4ff5
- data.tar.gz: c78ff42650fba09aa01854a91cf491f51ab307413c01b8cd7c0f0ccb0d8954cb
+ metadata.gz: f1d33b58f4501d3ddbae7abcab3957fde0549abf734eae72ca1a7ce45601f479
+ data.tar.gz: e9126924e6fe75329237ba551a1a65218676c7d2b3757f4ad91e73eb0bce154e
  SHA512:
- metadata.gz: 41a91eda762ca8668fe2696746367ade01b9029f03056f8d9da93b6dfb1f811d4eaec7b1159015287128db1ef94382a2776bdac86872cbe642a087c46154b450
- data.tar.gz: 676b8fd3e58f255087731cc209249bbba6e9ab8f87269cb182fc3ed62664d0c1a4ae14a51415fb4c9fc5f8674182a795d8f5103a57fb2b5a5ba28441948fa66e
+ metadata.gz: 011e57f856ebe7f625b0bfa5eeb4a240c6c38f2b07ff0434b7e89805516b2d47b6d4230ac404203d00562586909c72d7ea225a7238d47af70fb99ed97b3d50bc
+ data.tar.gz: b68fbaae2e57314c47e7971aeef2150341ba80308c7dee1536718250f5cac01b4b387fe3f73cde5f84fb1ffcd93500343ec451d0fccf9f19b2c0c4a58e74aa2f
data/CHANGELOG.md CHANGED
@@ -1,3 +1,8 @@
+ ## 6.2.1
+
+ * Make `RailsStreaming` compatible with `ActionController::Live` (previously the response would hang)
+ * Make `BlockWrite` respond to `write` in addition to `<<`
+
  ## 6.2.0
 
  * Remove forced `Transfer-Encoding: chunked` and the chunking body wrapper. It is actually a good idea to trust the app webserver to apply the transfer encoding as is appropriate. For the case when "you really have to", add a bypass in `RailsStreaming#zip_kit_stream` for forcing the chunking manually.
@@ -149,7 +154,7 @@
  ## 4.4.2
 
  * Add 2.4 to Travis rubies
- * Fix a severe performance degradation in Streamer with large file counts (https://github.com/WeTransfer/zip_kit/pull/14)
+ * Fix a severe performance degradation in Streamer with large file counts (https://github.com/WeTransfer/zip_tricks/pull/14)
 
  ## 4.4.1
 
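The `BlockWrite` changelog entry above maps onto the `WriteShovel` mixin that appears further down in this diff: `write` simply delegates to `<<` and returns the byte count, which is the contract `IO.copy_stream` expects of a destination. A minimal self-contained sketch of that pattern follows; the class and module names here are illustrative stand-ins, not the actual zip_kit constants:

```ruby
# Sketch of the WriteShovel pattern: an object that natively accepts bytes
# via `<<` also gains a `write` method returning the number of bytes written,
# which makes it usable as a target for IO.copy_stream.
module WriteShovelSketch
  def write(bytes)
    self << bytes
    bytes.bytesize
  end
end

# BlockWrite-like adapter: forwards every chunk to a block, skipping
# zero-length strings (which carry special meaning with chunked encoding -
# they indicate the end of the stream).
class BlockWriteSketch
  include WriteShovelSketch

  def initialize(&block)
    @block = block
  end

  def <<(buf)
    return self if buf.nil? || buf.bytesize.zero?
    @block.call(buf.b)
    self
  end
end

chunks = []
sink = BlockWriteSketch.new { |bytes| chunks << bytes }
sink << "hello"
written = sink.write(" world")
# chunks now holds ["hello", " world"], written is 6
```

With this in place the same object works both as a `<<` sink and as an `IO.copy_stream` destination, which is what the 6.2.1 fix enables for the real `BlockWrite`.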
data/CONTRIBUTING.md CHANGED
@@ -106,11 +106,11 @@ project:
 
  ```bash
  # Clone your fork of the repo into the current directory
- git clone git@github.com:WeTransfer/zip_kit.git
+ git clone git@github.com:julik/zip_kit.git
  # Navigate to the newly cloned directory
  cd zip_kit
  # Assign the original repo to a remote called "upstream"
- git remote add upstream git@github.com:WeTransfer/zip_kit.git
+ git remote add upstream git@github.com:julik/zip_kit.git
  ```
 
  2. If you cloned a while ago, get the latest changes from upstream:
data/README.md CHANGED
@@ -59,7 +59,7 @@ via HTTP.
  and the ZIP output will run in the same thread as your main request. Your testing flows (be it minitest or
  RSpec) should work normally with controller actions returning ZIPs.
 
- ## Writing into other streaming destinations and through streaming wrappers
+ ## Writing into streaming destinations
 
  Any object that accepts bytes via either `<<` or `write` methods can be a write destination. For example, here
  is how to upload a sizeable ZIP to S3 - the SDK will happily chop your upload into multipart upload parts:
@@ -69,23 +69,23 @@ bucket = Aws::S3::Bucket.new("mybucket")
  obj = bucket.object("big.zip")
  obj.upload_stream do |write_stream|
    ZipKit::Streamer.open(write_stream) do |zip|
-     zip.write_file("large.csv") do |sink|
-       CSV(sink) do |csv|
-         csv << ["Line", "Item"]
-         20_000.times do |n|
-           csv << [n, "Item number #{n}"]
-         end
+     zip.write_file("file.csv") do |sink|
+       File.open("large.csv", "rb") do |file_input|
+         IO.copy_stream(file_input, sink)
        end
      end
    end
  end
  ```
 
+ ## Writing through streaming wrappers
+
  Any object that writes using either `<<` or `write` can write into a `sink`. For example, you can do streaming
- output with [builder](https://github.com/jimweirich/builder#project-builder)
+ output with [builder](https://github.com/jimweirich/builder#project-builder) which calls `<<` on its `target`
+ every time a complete write call is done:
 
  ```ruby
- zip.write_file('report1.csv') do |sink|
+ zip.write_file('employees.xml') do |sink|
    builder = Builder::XmlMarkup.new(target: sink, indent: 2)
    builder.people do
      Person.all.find_each do |person|
@@ -95,8 +95,18 @@ zip.write_file('report1.csv') do |sink|
    end
  end
  ```
 
- and this output will be compressed and output into the ZIP file on the fly. zip_kit composes with any
- Ruby code that streams its output into a destination.
+ The output will be compressed and output into the ZIP file on the fly. Same for CSV:
+
+ ```ruby
+ zip.write_file('line_items.csv') do |sink|
+   CSV(sink) do |csv|
+     csv << ["Line", "Item"]
+     20_000.times do |n|
+       csv << [n, "Item number #{n}"]
+     end
+   end
+ end
+ ```
 
  ## Create a ZIP file without size estimation, compress on-the-fly during writes
 
@@ -122,12 +132,10 @@ since you do not know how large the compressed data segments are going to be.
  ## Send a ZIP from a Rack response
 
  zip_kit provides an `OutputEnumerator` object which will yield the binary chunks piece
- by piece, and apply some amount of buffering as well. Note that you might want to wrap
- it with a chunked transfer encoder - the `to_rack_response_headers_and_body` method will do
- that for you. Return the headers and the body to your webserver and you will have your ZIP streamed!
- The block that you give to the `OutputEnumerator` receive the {ZipKit::Streamer} object and will only
- start executing once your response body starts getting iterated over - when actually sending
- the response to the client (unless you are using a buffering Rack webserver, such as Webrick).
+ by piece, and apply some amount of buffering as well. Return the headers and the body to your webserver
+ and you will have your ZIP streamed! The block that you give to the `OutputEnumerator` will receive
+ the {ZipKit::Streamer} object and will only start executing once your response body starts getting iterated
+ over - when actually sending the response to the client (unless you are using a buffering Rack webserver, such as Webrick).
 
  ```ruby
  body = ZipKit::OutputEnumerator.new do | zip |
@@ -139,8 +147,7 @@ body = ZipKit::OutputEnumerator.new do | zip |
    end
  end
 
- headers, streaming_body = body.to_rack_response_headers_and_body(env)
- [200, headers, streaming_body]
+ [200, body.streaming_http_headers, body]
  ```
 
  ## Send a ZIP file of known size, with correct headers
@@ -160,8 +167,10 @@ zip_body = ZipKit::OutputEnumerator.new do | zip |
    zip << read_file('myfile2.bin')
  end
 
- headers, streaming_body = body.to_rack_response_headers_and_body(env, content_length: bytesize)
- [200, headers, streaming_body]
+ hh = zip_body.streaming_http_headers
+ hh["Content-Length"] = bytesize.to_s
+
+ [200, hh, zip_body]
  ```
 
  ## Writing ZIP files using the Streamer bypass
@@ -17,9 +17,12 @@
  # end
  # [200, {}, MyRackResponse.new]
  class ZipKit::BlockWrite
+   include ZipKit::WriteShovel
+
    # Creates a new BlockWrite.
    #
    # @param block The block that will be called when this object receives the `<<` message
+   # @yieldparam bytes[String] A string in binary encoding which has just been written into the object
    def initialize(&block)
      @block = block
    end
@@ -36,7 +39,7 @@ class ZipKit::BlockWrite
    # @param buf[String] the string to write. Note that a zero-length String
    # will not be forwarded to the block, as it has special meaning when used
    # with chunked encoding (it indicates the end of the stream).
-   # @return self
+   # @return [ZipKit::BlockWrite]
    def <<(buf)
      # Zero-size output has a special meaning when using chunked encoding
      return if buf.nil? || buf.bytesize.zero?
@@ -60,14 +60,11 @@ class ZipKit::OutputEnumerator
    # ...
    # end
    #
-   # @param kwargs_for_new [Hash] keyword arguments for {Streamer.new}
-   # @return [ZipKit::OutputEnumerator] the enumerator you can read bytestrings of the ZIP from by calling `each`
-   #
    # @param streamer_options[Hash] options for Streamer, see {ZipKit::Streamer.new}
    # @param write_buffer_size[Integer] By default all ZipKit writes are unbuffered. For output to sockets
    # it is beneficial to bulkify those writes so that they are roughly sized to a socket buffer chunk. This
    # object will bulkify writes for you in this way (so `each` will yield not on every call to `<<` from the Streamer
-   # but at block size boundaries or greater). Set it to 0 for unbuffered writes.
+   # but at block size boundaries or greater). Set the parameter to 0 for unbuffered writes.
    # @param blk a block that will receive the Streamer object when executing. The block will not be executed
    # immediately but only once `each` is called on the OutputEnumerator
    def initialize(write_buffer_size: DEFAULT_WRITE_BUFFER_SIZE, **streamer_options, &blk)
@@ -100,9 +97,14 @@ class ZipKit::OutputEnumerator
    end
 
    # Returns a Hash of HTTP response headers you are likely to need to have your response stream correctly.
+   # This is on the {ZipKit::OutputEnumerator} class since those headers are common, independent of the
+   # particular response body getting served. You might want to override the headers with your particular
+   # ones - for example, specific content types are needed for files which are, technically, ZIP files
+   # but are of a file format built "on top" of ZIPs - such as ODTs, [pkpass files](https://developer.apple.com/documentation/walletpasses/building_a_pass)
+   # and ePubs.
    #
    # @return [Hash]
-   def streaming_http_headers
+   def self.streaming_http_headers
      _headers = {
        # We need to ensure Rack::ETag does not suddenly start buffering us, see
        # https://github.com/rack/rack/issues/1619#issuecomment-606315714
@@ -121,6 +123,15 @@ class ZipKit::OutputEnumerator
    }
  end
 
+ # Returns a Hash of HTTP response headers for this particular response. This used to contain "Content-Length" for
+ # presized responses, but is now effectively a no-op.
+ #
+ # @see [ZipKit::OutputEnumerator.streaming_http_headers]
+ # @return [Hash]
+ def streaming_http_headers
+   self.class.streaming_http_headers
+ end
+
  # Returns a tuple of `headers, body` - headers are a `Hash` and the body is
  # an object that can be used as a Rack response body. This method used to accept arguments
  # but will now just ignore them.
@@ -13,10 +13,6 @@ module ZipKit::RailsStreaming
    # @yieldparam [ZipKit::Streamer] the streamer that can be written to
    # @return [ZipKit::OutputEnumerator] The output enumerator assigned to the response body
    def zip_kit_stream(filename: "download.zip", type: "application/zip", use_chunked_transfer_encoding: false, **zip_streamer_options, &zip_streaming_blk)
-     # The output enumerator yields chunks of bytes generated from ZipKit. Instantiating it
-     # first will also validate the Streamer options.
-     output_enum = ZipKit::OutputEnumerator.new(**zip_streamer_options, &zip_streaming_blk)
-
      # We want some common headers for file sending. Rails will also set
      # self.sending_file = true for us when we call send_file_headers!
      send_file_headers!(type: type, filename: filename)
@@ -29,16 +25,39 @@ module ZipKit::RailsStreaming
      logger&.warn { "The downstream HTTP proxy/LB insists on HTTP/1.0 protocol, ZIP response will be buffered." }
    end
 
-   headers = output_enum.streaming_http_headers
+   headers = ZipKit::OutputEnumerator.streaming_http_headers
+   response.headers.merge!(headers)
 
-   # In rare circumstances (such as the app using Rack::ContentLength - which should normally
-   # not be used allow the user to force the use of the chunked encoding
-   if use_chunked_transfer_encoding
-     output_enum = ZipKit::RackChunkedBody.new(output_enum)
-     headers["Transfer-Encoding"] = "chunked"
-   end
+   # The output enumerator yields chunks of bytes generated from the Streamer,
+   # with some buffering
+   output_enum = ZipKit::OutputEnumerator.new(**zip_streamer_options, &zip_streaming_blk)
 
-   response.headers.merge!(headers)
-   self.response_body = output_enum
+   # Time for some branching, which mostly has to do with the 999 flavours of
+   # "how to make both Rails and Rack stream"
+   if self.class.ancestors.include?(ActionController::Live)
+     # If this controller includes Live it will not work correctly with a Rack
+     # response body assignment - we need to write into the Live output stream instead
+     begin
+       output_enum.each { |bytes| response.stream.write(bytes) }
+     ensure
+       response.stream.close
+     end
+   elsif use_chunked_transfer_encoding
+     # Chunked encoding may be forced if, for example, you _need_ to bypass Rack::ContentLength.
+     # Rack::ContentLength is normally not in a Rails middleware stack, but it might get
+     # introduced unintentionally - for example, "rackup" adds the ContentLength middleware for you.
+     # There is a recommendation to leave the chunked encoding to the app server, so that servers
+     # that support HTTP/2 can use native framing and not have to deal with the chunked encoding,
+     # see https://github.com/julik/zip_kit/issues/7
+     # But it is not to be excluded that a user may need to force the chunked encoding to bypass
+     # some especially pesky Rack middleware that just would not cooperate. Those include
+     # Rack::MiniProfiler and the above-mentioned Rack::ContentLength.
+     response.headers["Transfer-Encoding"] = "chunked"
+     self.response_body = ZipKit::RackChunkedBody.new(output_enum)
+   else
+     # Stream using a Rack body assigned to the ActionController response body, without
+     # doing explicit chunked encoding. See above for the reasoning.
+     self.response_body = output_enum
+   end
  end
end
@@ -2,19 +2,19 @@
 
  require "set"
 
- # Is used to write streamed ZIP archives into the provided IO-ish object.
- # The output IO is never going to be rewound or seeked, so the output
- # of this object can be coupled directly to, say, a Rack output. The
- # output can also be a String, Array or anything that responds to `<<`.
+ # Is used to write ZIP archives without having to read them back or to overwrite
+ # data. It outputs into any object that supports `<<` or `write`, namely:
  #
- # Allows for splicing raw files (for "stored" entries without compression)
- # and splicing of deflated files (for "deflated" storage mode).
+ # An `Array`, `File`, `IO`, `Socket` and even `String` all can be output destinations
+ # for the `Streamer`.
  #
- # For stored entries, you need to know the CRC32 (as a uint) and the filesize upfront,
- # before the writing of the entry body starts.
+ # You can also combine output through the `Streamer` with direct output to the destination,
+ # all while preserving the correct offsets in the ZIP file structures. This allows usage
+ # of `sendfile()` or socket `splice()` calls for "through" proxying.
  #
- # Any object that responds to `<<` can be used as the Streamer target - you can use
- # a String, an Array, a Socket or a File, at your leisure.
+ # If you want to avoid data descriptors - or write data bypassing the Streamer -
+ # you need to know the CRC32 (as a uint) and the filesize upfront,
+ # before the writing of the entry body starts.
  #
  # ## Using the Streamer with runtime compression
  #
@@ -34,7 +34,7 @@ require "set"
  # end
  # end
  #
- # The central directory will be written automatically at the end of the block.
+ # The central directory will be written automatically at the end of the `open` block.
  #
  # ## Using the Streamer with entries of known size and having a known CRC32 checksum
  #
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module ZipKit
-   VERSION = "6.2.0"
+   VERSION = "6.2.1"
  end
@@ -13,8 +13,8 @@ module ZipKit::WriteShovel
  # Writes the given data to the output stream. Allows the object to be used as
  # a target for `IO.copy_stream(from, to)`
  #
- # @param d[String] the binary string to write (part of the uncompressed file)
- # @return [Fixnum] the number of bytes written
+ # @param bytes[String] the binary string to write (part of the uncompressed file)
+ # @return [Fixnum] the number of bytes written (will always be the bytesize of `bytes`)
  def write(bytes)
    self << bytes
    bytes.bytesize
data/rbi/zip_kit.rbi CHANGED
@@ -1,6 +1,6 @@
  # typed: strong
  module ZipKit
-   VERSION = T.let("6.2.0", T.untyped)
+   VERSION = T.let("6.2.1", T.untyped)
 
    # A ZIP archive contains a flat list of entries. These entries can implicitly
    # create directories when the archive is expanded. For example, an entry with
@@ -100,19 +100,19 @@ module ZipKit
    end
  end
 
- # Is used to write streamed ZIP archives into the provided IO-ish object.
- # The output IO is never going to be rewound or seeked, so the output
- # of this object can be coupled directly to, say, a Rack output. The
- # output can also be a String, Array or anything that responds to `<<`.
+ # Is used to write ZIP archives without having to read them back or to overwrite
+ # data. It outputs into any object that supports `<<` or `write`, namely:
  #
- # Allows for splicing raw files (for "stored" entries without compression)
- # and splicing of deflated files (for "deflated" storage mode).
+ # An `Array`, `File`, `IO`, `Socket` and even `String` all can be output destinations
+ # for the `Streamer`.
  #
- # For stored entries, you need to know the CRC32 (as a uint) and the filesize upfront,
- # before the writing of the entry body starts.
+ # You can also combine output through the `Streamer` with direct output to the destination,
+ # all while preserving the correct offsets in the ZIP file structures. This allows usage
+ # of `sendfile()` or socket `splice()` calls for "through" proxying.
  #
- # Any object that responds to `<<` can be used as the Streamer target - you can use
- # a String, an Array, a Socket or a File, at your leisure.
+ # If you want to avoid data descriptors - or write data bypassing the Streamer -
+ # you need to know the CRC32 (as a uint) and the filesize upfront,
+ # before the writing of the entry body starts.
  #
  # ## Using the Streamer with runtime compression
  #
@@ -132,7 +132,7 @@ module ZipKit
  # end
  # end
  #
- # The central directory will be written automatically at the end of the block.
+ # The central directory will be written automatically at the end of the `open` block.
  #
  # ## Using the Streamer with entries of known size and having a known CRC32 checksum
  #
@@ -563,13 +563,12 @@ module ZipKit
  sig { params(filename: T.untyped).returns(T.untyped) }
  def remove_backslash(filename); end
 
- # sord infer - argument name in single @param inferred as "bytes"
  # Writes the given data to the output stream. Allows the object to be used as
  # a target for `IO.copy_stream(from, to)`
  #
- # _@param_ `d` — the binary string to write (part of the uncompressed file)
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
  #
- # _@return_ — the number of bytes written
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
  sig { params(bytes: String).returns(Fixnum) }
  def write(bytes); end
 
@@ -678,13 +677,12 @@ module ZipKit
  sig { returns(T.untyped) }
  def close; end
 
- # sord infer - argument name in single @param inferred as "bytes"
  # Writes the given data to the output stream. Allows the object to be used as
  # a target for `IO.copy_stream(from, to)`
  #
- # _@param_ `d` — the binary string to write (part of the uncompressed file)
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
  #
- # _@return_ — the number of bytes written
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
  sig { params(bytes: String).returns(Fixnum) }
  def write(bytes); end
end
@@ -748,13 +746,12 @@ module ZipKit
  sig { returns(T::Hash[T.untyped, T.untyped]) }
  def finish; end
 
- # sord infer - argument name in single @param inferred as "bytes"
  # Writes the given data to the output stream. Allows the object to be used as
  # a target for `IO.copy_stream(from, to)`
  #
- # _@param_ `d` — the binary string to write (part of the uncompressed file)
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
  #
- # _@return_ — the number of bytes written
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
  sig { params(bytes: String).returns(Fixnum) }
  def write(bytes); end
end
@@ -787,13 +784,12 @@ module ZipKit
  sig { returns(T::Hash[T.untyped, T.untyped]) }
  def finish; end
 
- # sord infer - argument name in single @param inferred as "bytes"
  # Writes the given data to the output stream. Allows the object to be used as
  # a target for `IO.copy_stream(from, to)`
  #
- # _@param_ `d` — the binary string to write (part of the uncompressed file)
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
  #
- # _@return_ — the number of bytes written
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
  sig { params(bytes: String).returns(Fixnum) }
  def write(bytes); end
end
@@ -1107,19 +1103,28 @@ end, T.untyped)
  # end
  # [200, {}, MyRackResponse.new]
  class BlockWrite
+   include ZipKit::WriteShovel
+
    # Creates a new BlockWrite.
    #
    # _@param_ `block` — The block that will be called when this object receives the `<<` message
-   sig { params(block: T.untyped).void }
+   sig { params(block: T.proc.params(bytes: String).void).void }
    def initialize(&block); end
 
    # Sends a string through to the block stored in the BlockWrite.
    #
    # _@param_ `buf` — the string to write. Note that a zero-length String will not be forwarded to the block, as it has special meaning when used with chunked encoding (it indicates the end of the stream).
-   #
-   # _@return_ — self
-   sig { params(buf: String).returns(T.untyped) }
+   sig { params(buf: String).returns(ZipKit::BlockWrite) }
    def <<(buf); end
+
+   # Writes the given data to the output stream. Allows the object to be used as
+   # a target for `IO.copy_stream(from, to)`
+   #
+   # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
+   #
+   # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
+   sig { params(bytes: String).returns(Fixnum) }
+   def write(bytes); end
  end
 
  # A very barebones ZIP file reader. Is made for maximum interoperability, but at the same
@@ -1657,13 +1662,12 @@ end, T.untyped)
  sig { params(crc32: Fixnum, blob_size: Fixnum).returns(Fixnum) }
  def append(crc32, blob_size); end
 
- # sord infer - argument name in single @param inferred as "bytes"
  # Writes the given data to the output stream. Allows the object to be used as
  # a target for `IO.copy_stream(from, to)`
  #
- # _@param_ `d` — the binary string to write (part of the uncompressed file)
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
  #
- # _@return_ — the number of bytes written
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
  sig { params(bytes: String).returns(Fixnum) }
  def write(bytes); end
end
@@ -1728,13 +1732,12 @@ end, T.untyped)
  # "IO-ish" things to also respond to `write`? This is what this module does.
  # Jim would be proud. We miss you, Jim.
  module WriteShovel
-   # sord infer - argument name in single @param inferred as "bytes"
    # Writes the given data to the output stream. Allows the object to be used as
    # a target for `IO.copy_stream(from, to)`
    #
-   # _@param_ `d` — the binary string to write (part of the uncompressed file)
+   # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
    #
-   # _@return_ — the number of bytes written
+   # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
    sig { params(bytes: String).returns(Fixnum) }
    def write(bytes); end
  end
@@ -1960,13 +1963,12 @@ end, T.untyped)
  sig { returns(T.untyped) }
  def tell; end
 
- # sord infer - argument name in single @param inferred as "bytes"
  # Writes the given data to the output stream. Allows the object to be used as
  # a target for `IO.copy_stream(from, to)`
  #
- # _@param_ `d` — the binary string to write (part of the uncompressed file)
+ # _@param_ `bytes` — the binary string to write (part of the uncompressed file)
  #
- # _@return_ — the number of bytes written
+ # _@return_ — the number of bytes written (will always be the bytesize of `bytes`)
  sig { params(bytes: String).returns(Fixnum) }
  def write(bytes); end
end
@@ -2056,15 +2058,11 @@ end, T.untyped)
  # ...
  # end
  #
- # _@param_ `kwargs_for_new` — keyword arguments for {Streamer.new}
- #
  # _@param_ `streamer_options` — options for Streamer, see {ZipKit::Streamer.new}
  #
- # _@param_ `write_buffer_size` — By default all ZipKit writes are unbuffered. For output to sockets it is beneficial to bulkify those writes so that they are roughly sized to a socket buffer chunk. This object will bulkify writes for you in this way (so `each` will yield not on every call to `<<` from the Streamer but at block size boundaries or greater). Set it to 0 for unbuffered writes.
+ # _@param_ `write_buffer_size` — By default all ZipKit writes are unbuffered. For output to sockets it is beneficial to bulkify those writes so that they are roughly sized to a socket buffer chunk. This object will bulkify writes for you in this way (so `each` will yield not on every call to `<<` from the Streamer but at block size boundaries or greater). Set the parameter to 0 for unbuffered writes.
  #
  # _@param_ `blk` — a block that will receive the Streamer object when executing. The block will not be executed immediately but only once `each` is called on the OutputEnumerator
- #
- # _@return_ — the enumerator you can read bytestrings of the ZIP from by calling `each`
  sig { params(write_buffer_size: Integer, streamer_options: T::Hash[T.untyped, T.untyped], blk: T.untyped).void }
  def initialize(write_buffer_size: DEFAULT_WRITE_BUFFER_SIZE, **streamer_options, &blk); end
 
@@ -2083,6 +2081,18 @@ end, T.untyped)
  def each; end
 
  # Returns a Hash of HTTP response headers you are likely to need to have your response stream correctly.
+ # This is on the {ZipKit::OutputEnumerator} class since those headers are common, independent of the
+ # particular response body getting served. You might want to override the headers with your particular
+ # ones - for example, specific content types are needed for files which are, technically, ZIP files
+ # but are of a file format built "on top" of ZIPs - such as ODTs, [pkpass files](https://developer.apple.com/documentation/walletpasses/building_a_pass)
+ # and ePubs.
+ sig { returns(T::Hash[T.untyped, T.untyped]) }
+ def self.streaming_http_headers; end
+
+ # Returns a Hash of HTTP response headers for this particular response. This used to contain "Content-Length" for
+ # presized responses, but is now effectively a no-op.
+ #
+ # _@see_ `[ZipKit::OutputEnumerator.streaming_http_headers]`
  sig { returns(T::Hash[T.untyped, T.untyped]) }
  def streaming_http_headers; end
 
data/zip_kit.gemspec CHANGED
@@ -7,7 +7,7 @@ Gem::Specification.new do |spec|
  spec.version = ZipKit::VERSION
  spec.authors = ["Julik Tarkhanov", "Noah Berman", "Dmitry Tymchuk", "David Bosveld", "Felix Bünemann"]
  spec.email = ["me@julik.nl"]
-
+ spec.license = "MIT"
  spec.summary = "Stream out ZIP files from Ruby. Successor to zip_tricks."
  spec.description = "Stream out ZIP files from Ruby. Successor to zip_tricks."
  spec.homepage = "https://github.com/julik/zip_kit"
@@ -23,9 +23,12 @@ Gem::Specification.new do |spec|
  spec.require_paths = ["lib"]
 
  spec.add_development_dependency "bundler"
- spec.add_development_dependency "rubyzip", "~> 1"
 
- spec.add_development_dependency "rack" # For tests where we spin up a server
+ # zip_kit does not use any runtime dependencies (besides zlib). However, for testing
+ # things quite a few things are used - and for a good reason.
+
+ spec.add_development_dependency "rubyzip", "~> 1" # We test our output with _another_ ZIP library, which is the way to go here
+ spec.add_development_dependency "rack" # For tests where we spin up a server. Both for streaming out and for testing reads over HTTP
  spec.add_development_dependency "rake", "~> 12.2"
  spec.add_development_dependency "rspec", "~> 3"
  spec.add_development_dependency "rspec-mocks", "~> 3.10", ">= 3.10.2" # ruby 3 compatibility
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: zip_kit
  version: !ruby/object:Gem::Version
-   version: 6.2.0
+   version: 6.2.1
  platform: ruby
  authors:
  - Julik Tarkhanov
@@ -12,7 +12,7 @@ authors:
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-03-11 00:00:00.000000000 Z
+ date: 2024-03-23 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: bundler
18
18
  name: bundler
@@ -324,7 +324,8 @@ files:
324
324
  - rbi/zip_kit.rbi
325
325
  - zip_kit.gemspec
326
326
  homepage: https://github.com/julik/zip_kit
327
- licenses: []
327
+ licenses:
328
+ - MIT
328
329
  metadata:
329
330
  allowed_push_host: https://rubygems.org
330
331
  post_install_message: