httpx 1.7.2 → 1.7.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (57)
  1. checksums.yaml +4 -4
  2. data/README.md +3 -1
  3. data/doc/release_notes/1_7_3.md +29 -0
  4. data/doc/release_notes/1_7_4.md +42 -0
  5. data/lib/httpx/adapters/datadog.rb +24 -60
  6. data/lib/httpx/adapters/webmock.rb +3 -4
  7. data/lib/httpx/connection/http1.rb +6 -1
  8. data/lib/httpx/connection/http2.rb +43 -30
  9. data/lib/httpx/connection.rb +74 -22
  10. data/lib/httpx/plugins/auth/digest.rb +2 -1
  11. data/lib/httpx/plugins/brotli.rb +33 -5
  12. data/lib/httpx/plugins/cookies/cookie.rb +34 -11
  13. data/lib/httpx/plugins/cookies/jar.rb +93 -18
  14. data/lib/httpx/plugins/cookies.rb +7 -3
  15. data/lib/httpx/plugins/expect.rb +30 -3
  16. data/lib/httpx/plugins/fiber_concurrency.rb +2 -4
  17. data/lib/httpx/plugins/follow_redirects.rb +7 -1
  18. data/lib/httpx/plugins/h2c.rb +1 -1
  19. data/lib/httpx/plugins/proxy/http.rb +15 -8
  20. data/lib/httpx/plugins/proxy.rb +10 -2
  21. data/lib/httpx/plugins/rate_limiter.rb +19 -19
  22. data/lib/httpx/plugins/retries.rb +17 -9
  23. data/lib/httpx/plugins/ssrf_filter.rb +1 -0
  24. data/lib/httpx/plugins/stream_bidi.rb +6 -0
  25. data/lib/httpx/plugins/tracing.rb +137 -0
  26. data/lib/httpx/request.rb +1 -1
  27. data/lib/httpx/resolver/multi.rb +1 -8
  28. data/lib/httpx/resolver/native.rb +1 -1
  29. data/lib/httpx/resolver/resolver.rb +21 -2
  30. data/lib/httpx/resolver/system.rb +3 -1
  31. data/lib/httpx/selector.rb +4 -4
  32. data/lib/httpx/session.rb +11 -6
  33. data/lib/httpx/version.rb +1 -1
  34. data/sig/chainable.rbs +2 -1
  35. data/sig/connection/http1.rbs +2 -0
  36. data/sig/connection/http2.rbs +11 -4
  37. data/sig/connection.rbs +7 -0
  38. data/sig/plugins/brotli.rbs +11 -6
  39. data/sig/plugins/cookies/cookie.rbs +3 -2
  40. data/sig/plugins/cookies/jar.rbs +11 -0
  41. data/sig/plugins/cookies.rbs +2 -0
  42. data/sig/plugins/expect.rbs +17 -2
  43. data/sig/plugins/proxy/socks4.rbs +4 -0
  44. data/sig/plugins/rate_limiter.rbs +2 -2
  45. data/sig/plugins/response_cache.rbs +3 -3
  46. data/sig/plugins/retries.rbs +17 -13
  47. data/sig/plugins/tracing.rbs +41 -0
  48. data/sig/request.rbs +1 -0
  49. data/sig/resolver/native.rbs +2 -0
  50. data/sig/resolver/resolver.rbs +4 -2
  51. data/sig/resolver/system.rbs +0 -2
  52. data/sig/response/body.rbs +1 -1
  53. data/sig/selector.rbs +4 -0
  54. data/sig/session.rbs +2 -0
  55. data/sig/transcoder/gzip.rbs +1 -1
  56. data/sig/transcoder.rbs +0 -2
  57. metadata +9 -3
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: c67e4695d8ef368321f14a3c113daad0e49280413e3da42272644ad84fe6f622
- data.tar.gz: 62bb1ab9d91ca69c9b1fa561051ffb0ed137afb53e0b40f71cfd477a84a78773
+ metadata.gz: 125df778093197c9a1fe8872211cf4085a5a885a40536b58501ce1938f8d28ee
+ data.tar.gz: 5a17b5b02212f65ba3472f4dfda96a017fc1ef9a4fb7a3dce85d7b4ac6a8c6bb
  SHA512:
- metadata.gz: 266576dae6b8ed604228b464281239b760df9682d27e9010b18ac7dbab4fb00f23115e5bcae1a0ecee6b191ee03a0eca758c37f8f0baee0e2ad9df5ca0c7eff6
- data.tar.gz: e355df3c634811d0d8e08bcb10b661a4f486073e6ca808ca3af3585a4d014cd404c8d7a55e38d0d05419f205d9dc81a54fb5a17eab7e1716c38c160297963e0f
+ metadata.gz: b6a4014db970d25a1ac51ea73ac5bd828bf8c0f1012ec778abdf054d9dd6dcfe6cf7fc5f73f915a39d74e4e420329e1293e020d63955601e6f669ae6164a016b
+ data.tar.gz: 96b98469fe6194bbd3427adfd1bfc6032be9c44f2c68ff301a6da20a90741b1979aa194c113a11e1b82cb9a54c3559ef7c35b908d41c10297746e438b557eca3
data/README.md CHANGED
@@ -46,7 +46,9 @@ And that's the simplest one there is. But you can also do:
  HTTPX.post("http://example.com", form: { user: "john", password: "pass" })
 
  http = HTTPX.with(headers: { "x-my-name" => "joe" })
- http.patch("http://example.com/file", body: File.open("path/to/file")) # request body is streamed
+ File.open("path/to/file") do |file|
+ http.patch("http://example.com/file", body: file) # request body is streamed
+ end
  ```
 
  If you want to do some more things with the response, you can get an `HTTPX::Response`:
data/doc/release_notes/1_7_3.md ADDED
@@ -0,0 +1,29 @@
+ # 1.7.3
+
+ ## Improvements
+
+ ### cookies plugin: Jar as CookieStore
+
+ While previously an implementation detail, the cookie jar from a `:cookies` plugin-enabled session can now be manipulated by the end user:
+
+ ```ruby
+ cookies_sess = HTTPX.plugin(:cookies)
+
+ jar = cookies_sess.make_jar
+
+ sess = cookies_sess.with(cookies: jar)
+
+ # perform requests using sess, get/set/delete cookies in jar
+ ```
+
+ The jar API now closely follows the [Web Cookie Store API](https://developer.mozilla.org/en-US/docs/Web/API/CookieStore), by providing the same set of functions.
+
+ Some backwards API compatibility is maintained; however, since this was an internal implementation detail, this effort isn't meant to be thorough.
+
+ ## Bugfixes
+
+ * `http-2`: clear buffered data chunks when receiving a `GOAWAY` stream frame; without this, the client kept sending the corresponding `DATA` frames, despite the peer server having made it known that it wouldn't process them. While this is valid HTTP/2, it could increase the connection window to the point where it would go over the max frame size. This issue was observed during large file uploads, where the first request could fail and make the client renegotiate.
+ * `webmock` adapter: fixed response body length accounting, which was making `response.body.empty?` return true for responses with a payload.
+ * `:rate_limiter` plugin: relies on an internal refactoring to be able to wait for the time suggested by the peer server, instead of potentially relying on custom user logic via its own `:retry_after`.
+ * `:fiber_concurrency`: fix wrong names for native/system resolver overrides.
+ * connection: fix for a race condition when closing the connection, where the state only transitioned to `closed` after checking the connection back in to the pool, potentially corrupting it if another session had meanwhile picked it up and manipulated it.
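For readers unfamiliar with the CookieStore shape the jar now follows, here is a minimal plain-Ruby sketch of that surface. This is NOT httpx's implementation, and the exact httpx method signatures may differ; it only illustrates the `get`/`get_all`/`set`/`delete` shape the release note refers to:

```ruby
# Illustrative sketch of a CookieStore-style jar surface (hypothetical class;
# not httpx's actual Jar implementation).
class SketchJar
  def initialize
    @cookies = {} # name => attributes hash
  end

  # store/overwrite a cookie by name, with arbitrary extra attributes
  def set(name:, value:, **attributes)
    @cookies[name] = { name: name, value: value, **attributes }
  end

  # return the cookie matching the name, or nil
  def get(name)
    @cookies[name]
  end

  # return every stored cookie
  def get_all
    @cookies.values
  end

  # remove the cookie by name
  def delete(name)
    @cookies.delete(name)
  end
end

jar = SketchJar.new
jar.set(name: "session", value: "abc123", domain: "example.com")
jar.get("session") # a hash with name/value/domain keys
```

The real jar would additionally handle domain/path matching and expiry; the point here is only the callable surface.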
data/doc/release_notes/1_7_4.md ADDED
@@ -0,0 +1,42 @@
+ # 1.7.4
+
+ ## Features
+
+ ### Tracing plugin
+
+ A new `:tracing` plugin was introduced. It adds support for a new option, `:tracer`, which accepts an object which responds to the following callbacks:
+
+ * `#enabled?(request)` - should return true or false depending on whether tracing is enabled
+ * `#start(request)` - called when a request is about to be sent
+ * `#finish(request, response)` - called when a response is received
+ * `#reset(request)` - called when a request is being prepared to be resent, in cases where it makes sense (i.e. when a request is retried).
+
+ You can chain several tracers, and callbacks will be relayed to all of them:
+
+ ```ruby
+ HTTPX.plugin(:tracing).with(tracer: telemetry_platform_tracer).with(tracer: telemetry2_platform_tracer)
+ ```
+
+ This was developed to be the foundation on top of which the datadog and OTel integrations will be built.
+
+ ## Improvements
+
+ * try fetching the response immediately after sending the request to the connection; this allows returning from errors much earlier, and more reliably, than doing another round of waits on I/O.
+ * when a connection is reconnected, and it was established the first time that the peer can accept only 1 request at a time, the connection will keep that information and keep sending requests 1 at a time afterwards.
+
+ ## Bugfixes
+
+ * fix regression from introducing connection post-state-transition callbacks, by foregoing disconnect when there's a pending backlog.
+ * transition requests to `:idle` before routing them to a different connection on merge (this could otherwise leave dangling timeout callbacks).
+ * `:brotli` plugin was integrated with the stream writer component, which allows writing compressed payloads in chunks.
+ * `:brotli` plugin integrates with the `brotli` gem v0.8.0, which fixed an issue dealing with large payload responses due to the lack of support for decoding payloads in chunks.
+ * http1 parser: reset before early returning on `Upgrade` responses (it was otherwise left in an invalid "parsing headers" state, which, in the case of a keep-alive connection, would cause the next request to fail parsing).
+ * `datadog` adapter: fixed initialization of the request start time after connections were opened (it was being set to the connection initialization time every time, instead of just on the first request before the connection is established).
+ * parsers: also reroute non-completed in-flight requests back to the connection so they can be retried (previously, only pending requests were).
+ * `:proxy` plugin: do not try disconnecting unnecessarily when resetting may already do so (if conditions apply).
+ * `:proxy` plugin: removed call to nonexistent `#reset!` method.
+ * `:proxy` plugin: also close wrapped sockets.
+ * connection: on force_close, move the connection disconnection logic later, so that, if requests are reenqueued from the parser, disconnection can be halted.
+ * connection: when transitioning to `:idle`, reenqueue requests from the parser before resetting it.
+ * implement `#lazy_resolve` on resolvers, as when they're picked from the selector (instead of from the pool), they may not be wrapped by a Multi proxy.
+ * allow resolvers to transition from `:idle` to `:closed`, and forego disconnecting when the resolver is not able to transition to `:closed` (which protects from a possible fiber scheduler context switch changing the state under the hood).
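To make the tracer contract concrete, here is a hypothetical tracer implementing the four callbacks listed in the release notes. Only the callback names and arities come from the notes; the class, its timing logic, and the `warn` output are illustrative assumptions:

```ruby
# Hypothetical tracer for the :tracing plugin (illustrative only; the class
# and its internals are NOT part of httpx).
class LoggingTracer
  def initialize
    @started = {} # per-request start times, keyed by the request object
  end

  # trace every request unconditionally in this sketch
  def enabled?(_request)
    true
  end

  # called when a request is about to be sent: record a monotonic timestamp
  def start(request)
    @started[request] = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end

  # called when a request is being prepared to be resent (e.g. on retry):
  # drop the stale timestamp so the next attempt is measured fresh
  def reset(request)
    @started.delete(request)
  end

  # called when a response is received: report elapsed time, if measured
  def finish(request, _response)
    started_at = @started.delete(request)
    return unless started_at

    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started_at
    warn format("request finished in %.3fs", elapsed)
  end
end

# Wiring it in, per the release note above (assumes httpx >= 1.7.4):
#   session = HTTPX.plugin(:tracing).with(tracer: LoggingTracer.new)
```

Keeping per-request state in a hash keyed by the request object (rather than a single instance variable) is deliberate, since a session can have several requests in flight through the same tracer.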
data/lib/httpx/adapters/datadog.rb CHANGED
@@ -46,35 +46,25 @@ module Datadog::Tracing
 
  SPAN_REQUEST = "httpx.request"
 
- # initializes tracing on the +request+.
- def call(request)
- return unless configuration(request).enabled
-
- span = nil
-
- # request objects are reused, when already buffered requests get rerouted to a different
- # connection due to connection issues, or when they already got a response, but need to
- # be retried. In such situations, the original span needs to be extended for the former,
- # while a new is required for the latter.
- request.on(:idle) do
- span = nil
- end
- # the span is initialized when the request is buffered in the parser, which is the closest
- # one gets to actually sending the request.
- request.on(:headers) do
- next if span
+ def enabled?(request)
+ configuration(request).enabled
+ end
 
- span = initialize_span(request, now)
- end
+ def start(request)
+ request.datadog_span = initialize_span(request, now)
+ end
+
+ def reset(request)
+ request.datadog_span = nil
+ end
 
- request.on(:response) do |response|
- span = initialize_span(request, request.init_time) if !span && request.init_time
+ def finish(request, response)
+ request.datadog_span ||= initialize_span(request, request.init_time) if request.init_time
 
- finish(response, span)
- end
+ finish_span(response, request.datadog_span)
  end
 
- def finish(response, span)
+ def finish_span(response, span)
  if response.is_a?(::HTTPX::ErrorResponse)
  span.set_error(response.error)
  else
@@ -137,9 +127,9 @@ module Datadog::Tracing
  ) if Datadog.configuration.tracing.respond_to?(:header_tags)
 
  span
- rescue StandardError => e
- Datadog.logger.error("error preparing span for http request: #{e}")
- Datadog.logger.error(e.backtrace)
+ rescue StandardError => e
+ Datadog.logger.error("error preparing span for http request: #{e}")
+ Datadog.logger.error(e.backtrace)
  end
 
  def now
@@ -179,44 +169,18 @@ module Datadog::Tracing
  end
  end
 
- module RequestMethods
- attr_accessor :init_time
-
- # intercepts request initialization to inject the tracing logic.
- def initialize(*)
- super
-
- @init_time = nil
-
- return unless Datadog::Tracing.enabled?
-
- RequestTracer.call(self)
+ class << self
+ def load_dependencies(klass)
+ klass.plugin(:tracing)
  end
 
- def response=(*)
- # init_time should be set when it's send to a connection.
- # However, there are situations where connection initialization fails.
- # Example is the :ssrf_filter plugin, which raises an error on
- # initialize if the host is an IP which matches against the known set.
- # in such cases, we'll just set here right here.
- @init_time ||= ::Datadog::Core::Utils::Time.now.utc
-
- super
+ def extra_options(options)
+ options.merge(tracer: RequestTracer)
  end
  end
 
- module ConnectionMethods
- def initialize(*)
- super
-
- @init_time = ::Datadog::Core::Utils::Time.now.utc
- end
-
- def send(request)
- request.init_time ||= @init_time
-
- super
- end
+ module RequestMethods
+ attr_accessor :datadog_span
  end
  end
 
data/lib/httpx/adapters/webmock.rb CHANGED
@@ -82,6 +82,7 @@ module WebMock
 
  def mock!
  @mocked = true
+ @body.mock!
  end
 
  def mocked?
@@ -90,10 +91,8 @@ module WebMock
  end
 
  module ResponseBodyMethods
- def decode_chunk(chunk)
- return chunk if @response.mocked?
-
- super
+ def mock!
+ @inflaters = nil
  end
  end
 
data/lib/httpx/connection/http1.rb CHANGED
@@ -49,7 +49,12 @@ module HTTPX
  @max_requests = @options.max_requests || MAX_REQUESTS
  @parser.reset!
  @handshake_completed = false
+ reset_requests
+ end
+
+ def reset_requests
  @pending.unshift(*@requests)
+ @requests.clear
  end
 
  def close
@@ -175,6 +180,7 @@ module HTTPX
 
  if @parser.upgrade?
  response << @parser.upgrade_data
+ @parser.reset!
  throw(:called)
  end
 
@@ -280,7 +286,6 @@ module HTTPX
  end
 
  def disable_pipelining
- return if @requests.empty?
  # do not disable pipelining if already set to 1 request at a time
  return if @max_concurrent_requests == 1
 
data/lib/httpx/connection/http2.rb CHANGED
@@ -3,8 +3,6 @@
  require "securerandom"
  require "http/2"
 
- HTTP2::Connection.__send__(:public, :send_buffer) if HTTP2::VERSION < "1.1.1"
-
  module HTTPX
  class Connection::HTTP2
  include Callbacks
@@ -163,6 +161,8 @@ module HTTPX
  @pings.any?
  end
 
+ def reset_requests; end
+
  private
 
  def can_buffer_more_requests?
@@ -215,9 +215,7 @@ module HTTPX
  def handle_stream(stream, request)
  request.on(:refuse, &method(:on_stream_refuse).curry(3)[stream, request])
  stream.on(:close, &method(:on_stream_close).curry(3)[stream, request])
- stream.on(:half_close) do
- log(level: 2) { "#{stream.id}: waiting for response..." }
- end
+ stream.on(:half_close) { on_stream_half_close(stream, request) }
  stream.on(:altsvc, &method(:on_altsvc).curry(2)[request.origin])
  stream.on(:headers, &method(:on_stream_headers).curry(3)[stream, request])
  stream.on(:data, &method(:on_stream_data).curry(3)[stream, request])
@@ -302,7 +300,7 @@ module HTTPX
  end
 
  log(color: :yellow) do
- h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{log_redact_headers(v)}" }.join("\n")
+ h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{k == ":status" ? v : log_redact_headers(v)}" }.join("\n")
  end
  _, status = h.shift
  headers = request.options.headers_class.new(h)
@@ -331,6 +329,16 @@ module HTTPX
  stream.close
  end
 
+ def on_stream_half_close(stream, _request)
+ unless stream.send_buffer.empty?
+ stream.send_buffer.clear
+ stream.data("", end_stream: true)
+ end
+
+ # TODO: omit log line if response already here
+ log(level: 2) { "#{stream.id}: waiting for response..." }
+ end
+
  def on_stream_close(stream, request, error)
  return if error == :stream_closed && !@streams.key?(request)
 
@@ -404,34 +412,39 @@ module HTTPX
 
  def on_frame_sent(frame)
  log(level: 2) { "#{frame[:stream]}: frame was sent!" }
- log(level: 2, color: :blue) do
- payload =
- case frame[:type]
- when :data
- frame.merge(payload: frame[:payload].bytesize)
- when :headers, :ping
- frame.merge(payload: log_redact_headers(frame[:payload]))
- else
- frame
- end
- "#{frame[:stream]}: #{payload}"
- end
+ log(level: 2, color: :blue) { "#{frame[:stream]}: #{frame_with_extra_info(frame)}" }
  end
 
  def on_frame_received(frame)
  log(level: 2) { "#{frame[:stream]}: frame was received!" }
- log(level: 2, color: :magenta) do
- payload =
- case frame[:type]
- when :data
- frame.merge(payload: frame[:payload].bytesize)
- when :headers, :ping
- frame.merge(payload: log_redact_headers(frame[:payload]))
- else
- frame
- end
- "#{frame[:stream]}: #{payload}"
- end
+ log(level: 2, color: :magenta) { "#{frame[:stream]}: #{frame_with_extra_info(frame)}" }
+ end
+
+ def frame_with_extra_info(frame)
+ case frame[:type]
+ when :data
+ frame.merge(payload: frame[:payload].bytesize)
+ when :headers, :ping
+ frame.merge(payload: log_redact_headers(frame[:payload]))
+ when :window_update
+ connection_or_stream = if (id = frame[:stream]).zero?
+ @connection
+ else
+ @streams.each_value.find { |s| s.id == id }
+ end
+ if connection_or_stream
+ frame.merge(
+ local_window: connection_or_stream.local_window,
+ remote_window: connection_or_stream.remote_window,
+ buffered_amount: connection_or_stream.buffered_amount,
+ stream_state: connection_or_stream.state,
+ )
+ else
+ frame
+ end
+ else
+ frame
+ end.merge(connection_state: @connection.state)
  end
 
  def on_altsvc(origin, frame)
data/lib/httpx/connection.rb CHANGED
@@ -45,10 +45,10 @@ module HTTPX
  protected :ssl_session, :sibling
 
  def initialize(uri, options)
- @current_session = @current_selector =
- @parser = @sibling = @coalesced_connection = @altsvc_connection =
- @family = @io = @ssl_session = @timeout =
- @connected_at = @response_received_at = nil
+ @current_session = @current_selector = @max_concurrent_requests =
+ @parser = @sibling = @coalesced_connection = @altsvc_connection =
+ @family = @io = @ssl_session = @timeout =
+ @connected_at = @response_received_at = nil
 
  @exhausted = @cloned = @main_sibling = false
 
@@ -154,6 +154,7 @@ module HTTPX
  end if @io
  end
  connection.purge_pending do |req|
+ req.transition(:idle)
  send(req)
  end
  end
@@ -161,8 +162,9 @@ module HTTPX
  def purge_pending(&block)
  pendings = []
  if @parser
- @inflight -= @parser.pending.size
- pendings << @parser.pending
+ pending = @parser.pending
+ @inflight -= pending.size
+ pendings << pending
  end
  pendings << @pending
  pendings.each do |pending|
@@ -227,6 +229,9 @@ module HTTPX
  consume
  end
  nil
+ rescue IOError => e
+ @write_buffer.clear
+ on_io_error(e)
  rescue StandardError => e
  @write_buffer.clear
  on_error(e)
@@ -256,16 +261,17 @@ module HTTPX
  # bypasses state machine rules while setting the connection in the
  # :closed state.
  def force_close(delete_pending = false)
+ force_purge
+ return unless @state == :closed
+
  if delete_pending
  @pending.clear
  elsif (parser = @parser)
  enqueue_pending_requests_from_parser(parser)
  end
- return if @state == :closed
 
- @state = :closed
- @write_buffer.clear
- purge_after_closed
+ return unless @pending.empty?
+
  disconnect
  emit(:force_closed, delete_pending)
  end
@@ -281,6 +287,15 @@ module HTTPX
  def reset
  return if @state == :closing || @state == :closed
 
+ parser = @parser
+
+ if parser && parser.respond_to?(:max_concurrent_requests)
+ # if connection being reset has at some downgraded the number of concurrent
+ # requests, such as in the case where an attempt to use HTTP/1 pipelining failed,
+ # keep that information around.
+ @max_concurrent_requests = parser.max_concurrent_requests
+ end
+
  transition(:closing)
 
  transition(:closed)
@@ -323,7 +338,10 @@ module HTTPX
  purge_after_closed
  @write_buffer.clear
  transition(:idle)
- @parser = nil if @parser
+ return unless @parser
+
+ enqueue_pending_requests_from_parser(parser)
+ @parser = nil
  end
 
  def used?
@@ -375,6 +393,19 @@ module HTTPX
  current_session.deselect_connection(self, current_selector, @cloned)
  end
 
+ def on_connect_error(e)
+ # connect errors, exit gracefully
+ error = ConnectionError.new(e.message)
+ error.set_backtrace(e.backtrace)
+ handle_connect_error(error) if connecting?
+ force_close
+ end
+
+ def on_io_error(e)
+ on_error(e)
+ force_close(true)
+ end
+
  def on_error(error, request = nil)
  if error.is_a?(OperationTimeoutError)
 
@@ -493,7 +524,7 @@ module HTTPX
  # flush as many bytes as the sockets allow.
  #
  loop do
- # buffer has been drainned, mark and exit the write loop.
+ # buffer has been drained, mark and exit the write loop.
  if @write_buffer.empty?
  # we only mark as drained on the first loop
  write_drained = write_drained.nil? && @inflight.positive?
@@ -578,6 +609,7 @@ module HTTPX
  end
 
  def enqueue_pending_requests_from_parser(parser)
+ parser.reset_requests # move sequential requests back to pending queue.
  parser_pending_requests = parser.pending
 
  return if parser_pending_requests.empty?
@@ -586,11 +618,14 @@ module HTTPX
  # back to the pending list before the parser is reset.
  @inflight -= parser_pending_requests.size
  @pending.unshift(*parser_pending_requests)
+
+ parser.pending.clear
  end
 
  def build_parser(protocol = @io.protocol)
  parser = parser_type(protocol).new(@write_buffer, @options)
  set_parser_callbacks(parser)
+ parser.max_concurrent_requests = @max_concurrent_requests if @max_concurrent_requests && parser.respond_to?(:max_concurrent_requests=)
  parser
  end
 
@@ -627,7 +662,6 @@ module HTTPX
  end
  parser.on(:close) do
  reset
- disconnect
  end
  parser.on(:close_handshake) do
  consume unless @state == :closed
@@ -684,11 +718,7 @@ module HTTPX
  Errno::ENOENT,
  SocketError,
  IOError => e
- # connect errors, exit gracefully
- error = ConnectionError.new(e.message)
- error.set_backtrace(e.backtrace)
- handle_connect_error(error) if connecting?
- force_close
+ on_connect_error(e)
  rescue TLSError, ::HTTP2::Error::ProtocolError, ::HTTP2::Error::HandshakeError => e
  # connect errors, exit gracefully
  handle_error(e)
@@ -719,10 +749,10 @@ module HTTPX
  when :inactive
  return unless @state == :open
 
+ # @type ivar @parser: HTTP1 | HTTP2
+
  # do not deactivate connection in use
  return if @inflight.positive? || @parser.waiting_for_ping?
-
- disconnect
  when :closing
  return unless connecting? || @state == :open
 
@@ -740,8 +770,6 @@ module HTTPX
  return unless @write_buffer.empty?
 
  purge_after_closed
- disconnect if @pending.empty?
-
  when :already_open
  nextstate = :open
  # the first check for given io readiness must still use a timeout.
@@ -758,6 +786,30 @@ module HTTPX
  end
  log(level: 3) { "#{@state} -> #{nextstate}" }
  @state = nextstate
+ # post state change
+ case nextstate
+ when :inactive
+ disconnect
+ when :closed
+ # TODO: should this raise an error instead?
+ return unless @pending.empty?
+
+ disconnect
+ end
+ end
+
+ def force_purge
+ return if @state == :closed
+
+ @state = :closed
+ @write_buffer.clear
+ begin
+ purge_after_closed
+ rescue IOError
+ # may be raised when closing the socket.
+ # due to connection reuse / fiber scheduling, it may
+ # have been reopened, to bail out in that case.
+ end
  end
 
  def close_sibling
data/lib/httpx/plugins/auth/digest.rb CHANGED
@@ -8,7 +8,8 @@ module HTTPX
  module Plugins
  module Authentication
  class Digest
- Error = Class.new(Error)
+ class Error < Error
+ end
 
  def initialize(user, password, hashed: false, **)
  @user = user
data/lib/httpx/plugins/brotli.rb CHANGED
@@ -3,11 +3,33 @@
  module HTTPX
  module Plugins
  module Brotli
+ class Error < HTTPX::Error; end
+
  class Deflater < Transcoder::Deflater
+ def initialize(body)
+ @compressor = ::Brotli::Compressor.new
+ super
+ end
+
  def deflate(chunk)
- return unless chunk
+ return @compressor.process(chunk) << @compressor.flush if chunk
+
+ @compressor.finish
+ end
+ end
+
+ class Inflater
+ def initialize(bytesize)
+ @inflater = ::Brotli::Decompressor.new
+ @bytesize = bytesize
+ end
+
+ def call(chunk)
+ buffer = @inflater.process(chunk)
+ @bytesize -= chunk.bytesize
+ raise Error, "Unexpected end of compressed stream" if @bytesize <= 0 && !@inflater.finished?
 
- ::Brotli.deflate(chunk)
+ buffer
  end
  end
 
@@ -30,19 +52,25 @@ module HTTPX
  module_function
 
  def load_dependencies(*)
+ gem "brotli", ">= 0.8.0"
  require "brotli"
  end
 
  def self.extra_options(options)
- options.merge(supported_compression_formats: %w[br] + options.supported_compression_formats)
+ supported_compression_formats = (%w[br] + options.supported_compression_formats).freeze
+ options.merge(
+ supported_compression_formats: supported_compression_formats,
+ headers: options.headers_class.new(options.headers.merge("accept-encoding" => supported_compression_formats))
+ )
  end
 
  def encode(body)
  Deflater.new(body)
  end
 
- def decode(_response, **)
- ::Brotli.method(:inflate)
+ def decode(response, bytesize: nil)
+ bytesize ||= response.headers.key?("content-length") ? response.headers["content-length"].to_i : Float::INFINITY
+ Inflater.new(bytesize)
  end
  end
  register_plugin :brotli, Brotli