karafka 2.1.8 → 2.1.10

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 042f365fb134a24ae360d678590ce798014751c8a23fb267001c920a42aa5324
- data.tar.gz: ba2950de557a5f6c775577ce392d60ee839184f50b7d9225969c684625c9ecd0
+ metadata.gz: 78db49b9cd20426b92ad1a3417b9bc2e79e9efad7f7d11e42873ffda75b850c2
+ data.tar.gz: 40b2a338c11ad9d6d07fdb0b82f422fe8f64417805db85a9bd111f9bce29039f
  SHA512:
- metadata.gz: 30b5fcd92c348c50482cb84542380ad28b317d47e01efad2d2049cd3ba7872c5f66a7a4fde93d8bfa262d7fcb3745dedbdb89a4fd5e6cdf2288a43606ea2361d
- data.tar.gz: d9c8ba95b2b71f46a3d35e2f5be634473a7aa9f45d9b5924e1019dcb53c7cea6aca8c594f9f4d3bf85a14176c3cb98a7167b4eb6e1f6f2192059d8040440e4a9
+ metadata.gz: a522abfbe63ebde9255032b5c6760016a014b8952f8c3f593c754479921408451ac0aeca632ba32d1fcefac78acd97a110e777949f6e70587d454febf5ad34f5
+ data.tar.gz: d12c97a4829bb0278041ce20e38f21c4894c77a7c9d4b3e4c1c222c502632fe03b0cce3d224f4b8c28ec3743c9c51e41c98a53310761687e92ce587445eae300
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,5 +1,22 @@
  # Karafka framework changelog
 
+ ## 2.1.10 (2023-08-21)
+ - [Enhancement] Introduce `connection.client.rebalance_callback` event for instrumentation of rebalances.
+ - [Refactor] Introduce a low-level commands proxy to handle deviations between how we want to run certain commands and how rdkafka-ruby runs them by design.
+ - [Fix] Do not report lags in the DD listener for cases where the assignment is not workable.
+ - [Fix] Do not report negative lags in the DD listener.
+ - [Fix] Extremely fast shutdown after boot in specs can cause the process not to stop.
+ - [Fix] Disable `allow.auto.create.topics` for admin by default to prevent accidental topic creation on topic metadata lookups.
+ - [Fix] Improve the `query_watermark_offsets` operations by increasing a too-low timeout.
+ - [Fix] Increase `TplBuilder` timeouts to compensate for remote clusters.
+ - [Fix] Always try to unsubscribe short-lived consumers used throughout the system, especially in the admin APIs.
+ - [Fix] Add missing `connection.client.poll.error` error type reference.
+
+ ## 2.1.9 (2023-08-06)
+ - **[Feature]** Introduce the ability to customize the pause strategy on a per-topic basis (Pro).
+ - [Improvement] Disable the extensive message logging in the default `karafka.rb` template.
+ - [Change] Require `waterdrop` `>= 2.6.6` due to the extra `LoggerListener` API.
+
  ## 2.1.8 (2023-07-29)
  - [Improvement] Introduce `Karafka::BaseConsumer#used?` method to indicate that at least one invocation of `#consume` took or will take place. This can be used as a replacement for the indirect `messages.count` check on shutdown and revocation, to ensure that consumption took place or is taking place (in the case of a running LRJ).
  - [Improvement] Make `messages#to_a` return a copy of the underlying array to prevent scenarios where the mutation impacts offset management.
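
The `connection.client.rebalance_callback` event introduced in 2.1.10 can be observed through Karafka's standard monitor API. A minimal wiring sketch for `karafka.rb` (the handler body and log message are illustrative, not part of the release):

```ruby
# The payload keys (:code, :tpl) match what the patched rdkafka bindings pass
# to the instrumentation call further down in this diff
Karafka.monitor.subscribe('connection.client.rebalance_callback') do |event|
  # Illustrative handler: log the librdkafka response code and the affected
  # topic partition list on every rebalance
  Karafka.logger.info "Rebalance: code=#{event[:code]}, tpl=#{event[:tpl]}"
end
```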
data/Gemfile.lock CHANGED
@@ -1,19 +1,19 @@
  PATH
    remote: .
    specs:
-     karafka (2.1.8)
+     karafka (2.1.10)
        karafka-core (>= 2.1.1, < 2.2.0)
        thor (>= 0.20)
-       waterdrop (>= 2.6.2, < 3.0.0)
+       waterdrop (>= 2.6.6, < 3.0.0)
        zeitwerk (~> 2.3)
 
  GEM
    remote: https://rubygems.org/
    specs:
-     activejob (7.0.6)
-       activesupport (= 7.0.6)
+     activejob (7.0.7)
+       activesupport (= 7.0.7)
        globalid (>= 0.3.6)
-     activesupport (7.0.6)
+     activesupport (7.0.7)
        concurrent-ruby (~> 1.0, >= 1.0.2)
        i18n (>= 1.6, < 2)
        minitest (>= 5.1)
@@ -44,10 +44,10 @@ GEM
        roda (~> 3.68, >= 3.68)
        tilt (~> 2.0)
      mini_portile2 (2.8.4)
-     minitest (5.18.1)
+     minitest (5.19.0)
      rack (3.0.8)
      rake (13.0.6)
-     roda (3.70.0)
+     roda (3.71.0)
        rack
      rspec (3.12.0)
        rspec-core (~> 3.12.0)
@@ -72,10 +72,10 @@ GEM
      tilt (2.2.0)
      tzinfo (2.0.6)
        concurrent-ruby (~> 1.0)
-     waterdrop (2.6.5)
+     waterdrop (2.6.6)
        karafka-core (>= 2.1.1, < 3.0.0)
        zeitwerk (~> 2.3)
-     zeitwerk (2.6.8)
+     zeitwerk (2.6.11)
 
  PLATFORMS
    x86_64-linux
data/certs/cert_chain.pem CHANGED
@@ -1,26 +1,26 @@
  -----BEGIN CERTIFICATE-----
  MIIEcDCCAtigAwIBAgIBATANBgkqhkiG9w0BAQsFADA/MRAwDgYDVQQDDAdjb250
  YWN0MRcwFQYKCZImiZPyLGQBGRYHa2FyYWZrYTESMBAGCgmSJomT8ixkARkWAmlv
- MB4XDTIyMDgxOTE3MjEzN1oXDTIzMDgxOTE3MjEzN1owPzEQMA4GA1UEAwwHY29u
+ MB4XDTIzMDgyMTA3MjU1NFoXDTI0MDgyMDA3MjU1NFowPzEQMA4GA1UEAwwHY29u
  dGFjdDEXMBUGCgmSJomT8ixkARkWB2thcmFma2ExEjAQBgoJkiaJk/IsZAEZFgJp
- bzCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAODzeO3L6lxdATzMHKNW
- jFA/GGunoPuylO/BMzy8RiQHh7VIvysAKs0tHhTx3g2D0STDpF+hcQcPELFikiT2
- F+1wOHj/SsrK7VKqfA8+gq04hKc5sQoX2Egf9k3V0YJ3eZ6R/koHkQ8A0TVt0w6F
- ZQckoV4MqnEAx0g/FZN3mnHTlJ3VFLSBqJEIe+S6FZMl92mSv+hTrlUG8VaYxSfN
- lTCvnKk284F6QZq5XIENLRmcDd/3aPBLnLwNnyMyhB+6gK8cUO+CFlDO5tjo/aBA
- rUnl++wGG0JooF1ed0v+evOn9KoMBG6rHewcf79qJbVOscbD8qSAmo+sCXtcFryr
- KRMTB8gNbowJkFRJDEe8tfRy11u1fYzFg/qNO82FJd62rKAw2wN0C29yCeQOPRb1
- Cw9Y4ZwK9VFNEcV9L+3pHTHn2XfuZHtDaG198VweiF6raFO4yiEYccodH/USP0L5
- cbcCFtmu/4HDSxL1ByQXO84A0ybJuk3/+aPUSXe9C9U8fwIDAQABo3cwdTAJBgNV
- HRMEAjAAMAsGA1UdDwQEAwIEsDAdBgNVHQ4EFgQUSlcEakb7gfn/5E2WY6z73BF/
- iZkwHQYDVR0RBBYwFIESY29udGFjdEBrYXJhZmthLmlvMB0GA1UdEgQWMBSBEmNv
- bnRhY3RAa2FyYWZrYS5pbzANBgkqhkiG9w0BAQsFAAOCAYEA1aS+E7RXJ1w9g9mJ
- G0NzFxe64OEuENosNlvYQCbRKGCXAU1qqelYkBQHseRgRKxLICrnypRo9IEobyHa
- vDnJ4r7Tsb34dleqQW2zY/obG+cia3Ym2JsegXWF7dDOzCXJ4FN8MFoT2jHlqLLw
- yrap0YO5zx0GSQ0Dwy8h2n2v2vanMEeCx7iNm3ERgR5WuN5sjzWoz2A/JLEEcK0C
- EnAGKCWAd1fuG8IemDjT1edsd5FyYR4bIX0m+99oDuFZyPiiIbalmyYiSBBp59Yb
- Q0P8zeBi4OfwCZNcxqz0KONmw9JLNv6DgyEAH5xe/4JzhMEgvIRiPj0pHfA7oqQF
- KUNqvD1KlxbEC+bZfE5IZhnqYLdld/Ksqd22FI1RBhiS1Ejfsj99LVIm9cBuZEY2
- Qf04B9ceLUaC4fPVEz10FyobjaFoY4i32xRto3XnrzeAgfEe4swLq8bQsR3w/EF3
- MGU0FeSV2Yj7Xc2x/7BzLK8xQn5l7Yy75iPF+KP3vVmDHnNl
+ bzCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAOuZpyQKEwsTG9plLat7
+ 8bUaNuNBEnouTsNMr6X+XTgvyrAxTuocdsyP1sNCjdS1B8RiiDH1/Nt9qpvlBWon
+ sdJ1SYhaWNVfqiYStTDnCx3PRMmHRdD4KqUWKpN6VpZ1O/Zu+9Mw0COmvXgZuuO9
+ wMSJkXRo6dTCfMedLAIxjMeBIxtoLR2e6Jm6MR8+8WYYVWrO9kSOOt5eKQLBY7aK
+ b/Dc40EcJKPg3Z30Pia1M9ZyRlb6SOj6SKpHRqc7vbVQxjEw6Jjal1lZ49m3YZMd
+ ArMAs9lQZNdSw5/UX6HWWURLowg6k10RnhTUtYyzO9BFev0JFJftHnmuk8vtb+SD
+ 5VPmjFXg2VOcw0B7FtG75Vackk8QKfgVe3nSPhVpew2CSPlbJzH80wChbr19+e3+
+ YGr1tOiaJrL6c+PNmb0F31NXMKpj/r+n15HwlTMRxQrzFcgjBlxf2XFGnPQXHhBm
+ kp1OFnEq4GG9sON4glRldkwzi/f/fGcZmo5fm3d+0ZdNgwIDAQABo3cwdTAJBgNV
+ HRMEAjAAMAsGA1UdDwQEAwIEsDAdBgNVHQ4EFgQUPVH5+dLA80A1kJ2Uz5iGwfOa
+ 1+swHQYDVR0RBBYwFIESY29udGFjdEBrYXJhZmthLmlvMB0GA1UdEgQWMBSBEmNv
+ bnRhY3RAa2FyYWZrYS5pbzANBgkqhkiG9w0BAQsFAAOCAYEAnpa0jcN7JzREHMTQ
+ bfZ+xcvlrzuROMY6A3zIZmQgbnoZZNuX4cMRrT1p1HuwXpxdpHPw7dDjYqWw3+1h
+ 3mXLeMuk7amjQpYoSWU/OIZMhIsARra22UN8qkkUlUj3AwTaChVKN/bPJOM2DzfU
+ kz9vUgLeYYFfQbZqeI6SsM7ltilRV4W8D9yNUQQvOxCFxtLOetJ00fC/E7zMUzbK
+ IBwYFQYsbI6XQzgAIPW6nGSYKgRhkfpmquXSNKZRIQ4V6bFrufa+DzD0bt2ZA3ah
+ fMmJguyb5L2Gf1zpDXzFSPMG7YQFLzwYz1zZZvOU7/UCpQsHpID/YxqDp4+Dgb+Y
+ qma0whX8UG/gXFV2pYWpYOfpatvahwi+A1TwPQsuZwkkhi1OyF1At3RY+hjSXyav
+ AnG1dJU+yL2BK7vaVytLTstJME5mepSZ46qqIJXMuWob/YPDmVaBF39TDSG9e34s
+ msG3BiCqgOgHAnL23+CN3Rt8MsuRfEtoTKpJVcCfoEoNHOkc
  -----END CERTIFICATE-----
@@ -23,6 +23,11 @@ en:
      delaying.delay_format: 'needs to be equal or more than 0 and an integer'
      delaying.active_format: 'needs to be boolean'
 
+     pause_timeout_format: needs to be an integer bigger than 0
+     pause_max_timeout_format: needs to be an integer bigger than 0
+     pause_with_exponential_backoff_format: needs to be either true or false
+     pause_timeout_max_timeout_vs_pause_max_timeout: pause_timeout must be less or equal to pause_max_timeout
+
    config:
      encryption.active_format: 'needs to be either true or false'
      encryption.public_key_invalid: 'is not a valid public RSA key'
data/karafka.gemspec CHANGED
@@ -23,7 +23,7 @@ Gem::Specification.new do |spec|
 
  spec.add_dependency 'karafka-core', '>= 2.1.1', '< 2.2.0'
  spec.add_dependency 'thor', '>= 0.20'
- spec.add_dependency 'waterdrop', '>= 2.6.2', '< 3.0.0'
+ spec.add_dependency 'waterdrop', '>= 2.6.6', '< 3.0.0'
  spec.add_dependency 'zeitwerk', '~> 2.3'
 
  if $PROGRAM_NAME.end_with?('gem')
data/lib/karafka/admin.rb CHANGED
@@ -13,9 +13,6 @@ module Karafka
  # retry after checking that the operation was finished or failed using external factor.
  MAX_WAIT_TIMEOUT = 1
 
- # Max time for a TPL request. We increase it to compensate for remote clusters latency
- TPL_REQUEST_TIMEOUT = 2_000
-
  # How many times should we try. 1 x 60 => 60 seconds wait in total
  MAX_ATTEMPTS = 60
 
@@ -29,11 +26,12 @@ module Karafka
  'fetch.message.max.bytes': 5 * 1_048_576,
  # Do not commit offset automatically, this prevents offset tracking for operations involving
  # a consumer instance
- 'enable.auto.commit': false
+ 'enable.auto.commit': false,
+ # Make sure that topic metadata lookups do not create topics accidentally
+ 'allow.auto.create.topics': false
  }.freeze
 
- private_constant :CONFIG_DEFAULTS, :MAX_WAIT_TIMEOUT, :TPL_REQUEST_TIMEOUT,
-                  :MAX_ATTEMPTS
+ private_constant :CONFIG_DEFAULTS, :MAX_WAIT_TIMEOUT, :MAX_ATTEMPTS
 
  class << self
    # Allows us to read messages from the topic
@@ -184,10 +182,24 @@ module Karafka
  # This API can be used in other pieces of code and allows for low-level consumer usage
  #
  # @param settings [Hash] extra settings to customize consumer
+ #
+ # @note We always ship and yield a proxied consumer because admin API performance is not
+ #   that relevant. That is, there are no high frequency calls that would have to be delegated
  def with_consumer(settings = {})
    consumer = config(:consumer, settings).consumer
-   yield(consumer)
+   proxy = ::Karafka::Connection::Proxy.new(consumer)
+   yield(proxy)
  ensure
+   # Always unsubscribe consumer just to be sure, that no metadata requests are running
+   # when we close the consumer. This in theory should prevent from some race-conditions
+   # that originate from librdkafka
+   begin
+     consumer&.unsubscribe
+   # Ignore any errors and continue to close consumer despite them
+   rescue Rdkafka::RdkafkaError
+     nil
+   end
+
    consumer&.close
  end
 
@@ -261,7 +273,7 @@ module Karafka
    name, partition => offset
  )
 
- real_offsets = consumer.offsets_for_times(tpl, TPL_REQUEST_TIMEOUT)
+ real_offsets = consumer.offsets_for_times(tpl)
  detected_offset = real_offsets.to_h.dig(name, partition)
 
  detected_offset&.offset || raise(Errors::InvalidTimeBasedOffsetError)
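
The `ensure` block added to `with_consumer` above follows a defensive-cleanup pattern: always attempt to unsubscribe, swallow transport errors, and close the consumer no matter what. A self-contained sketch of the same pattern, assuming stub stand-ins (none of the classes below are Karafka APIs):

```ruby
# Stand-in for Rdkafka::RdkafkaError, so the sketch runs without the gems
class FakeRdkafkaError < StandardError; end

# A stub consumer whose #unsubscribe always fails, to show that #close still runs
class StubConsumer
  attr_reader :closed

  def unsubscribe
    raise FakeRdkafkaError, 'broker unreachable'
  end

  def close
    @closed = true
  end
end

# Mirrors the shape of Admin.with_consumer's ensure block: unsubscribe first,
# ignore transport errors, then close regardless of what happened
def with_consumer(consumer)
  yield(consumer)
ensure
  begin
    consumer&.unsubscribe
  rescue FakeRdkafkaError
    nil # ignore and continue to close the consumer despite the error
  end

  consumer&.close
end

consumer = StubConsumer.new
with_consumer(consumer) { |_c| :noop }
puts consumer.closed # => true
```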
data/lib/karafka/app.rb CHANGED
@@ -53,6 +53,12 @@ module Karafka
    RUBY
  end
 
+ # @return [Boolean] true if we should be done in general with processing anything
+ # @note It is a meta status from the status object
+ def done?
+   App.config.internal.status.done?
+ end
+
  # Methods that should be delegated to Karafka module
  %i[
    root
@@ -70,9 +70,9 @@
 
  # Executes the default consumer flow.
  #
- # @return [Boolean] true if there was no exception, otherwise false.
- #
  # @private
+ #
+ # @return [Boolean] true if there was no exception, otherwise false.
  # @note We keep the seek offset tracking, and use it to compensate for async offset flushing
  #   that may not yet kick in when error occurs. That way we pause always on the last processed
  #   message.
@@ -20,9 +20,6 @@ module Karafka
  # How many times should we retry polling in case of a failure
  MAX_POLL_RETRIES = 20
 
- # Max time for a TPL request. We increase it to compensate for remote clusters latency
- TPL_REQUEST_TIMEOUT = 2_000
-
  # 1 minute of max wait for the first rebalance before a forceful attempt
  # This applies only to a case when a short-lived Karafka instance with a client would be
  # closed before first rebalance. Mitigates a librdkafka bug.
@@ -32,8 +29,7 @@ module Karafka
  # potential race conditions and other issues
  SHUTDOWN_MUTEX = Mutex.new
 
- private_constant :MAX_POLL_RETRIES, :SHUTDOWN_MUTEX, :TPL_REQUEST_TIMEOUT,
-                  :COOPERATIVE_STICKY_MAX_WAIT
+ private_constant :MAX_POLL_RETRIES, :SHUTDOWN_MUTEX, :COOPERATIVE_STICKY_MAX_WAIT
 
  # Creates a new consumer instance.
  #
@@ -350,10 +346,12 @@ module Karafka
    message.partition => message.offset
  )
 
+ proxy = Proxy.new(@kafka)
+
  # Now we can overwrite the seek message offset with our resolved offset and we can
  # then seek to the appropriate message
  # We set the timeout to 2_000 to make sure that remote clusters handle this well
- real_offsets = @kafka.offsets_for_times(tpl, TPL_REQUEST_TIMEOUT)
+ real_offsets = proxy.offsets_for_times(tpl)
  detected_partition = real_offsets.to_h.dig(message.topic, message.partition)
 
  # There always needs to be an offset. In case we seek into the future, where there
@@ -387,6 +385,21 @@ module Karafka
    end
  end
 
+ # Unsubscribes from all the subscriptions
+ # @note This is a private API to be used only on shutdown
+ # @note We do not re-raise since this is supposed to be only used on close and can be safely
+ #   ignored. We do however want to instrument on it
+ def unsubscribe
+   @kafka.unsubscribe
+ rescue ::Rdkafka::RdkafkaError => e
+   Karafka.monitor.instrument(
+     'error.occurred',
+     caller: self,
+     error: e,
+     type: 'connection.client.unsubscribe.error'
+   )
+ end
+
  # @param topic [String]
  # @param partition [Integer]
  # @return [Rdkafka::Consumer::TopicPartitionList]
@@ -85,7 +85,7 @@ module Karafka
  # propagate this far.
  def fetch_loop
    # Run the main loop as long as we are not stopping or moving into quiet mode
-   until Karafka::App.stopping? || Karafka::App.quieting? || Karafka::App.quiet?
+   until Karafka::App.done?
      Karafka.monitor.instrument(
        'connection.listener.fetch_loop',
        caller: self,
@@ -192,7 +192,7 @@ module Karafka
  # Resumes processing of partitions that were paused due to an error.
  def resume_paused_partitions
    @coordinators.resume do |topic, partition|
-     @client.resume(topic, partition)
+     @client.resume(topic.name, partition)
    end
  end
 
@@ -14,20 +14,20 @@ module Karafka
 
  # Creates or fetches pause tracker of a given topic partition.
  #
- # @param topic [String] topic name
+ # @param topic [::Karafka::Routing::Topic] topic
  # @param partition [Integer] partition number
  # @return [Karafka::TimeTrackers::Pause] pause tracker instance
  def fetch(topic, partition)
    @pauses[topic][partition] ||= TimeTrackers::Pause.new(
-     timeout: Karafka::App.config.pause_timeout,
-     max_timeout: Karafka::App.config.pause_max_timeout,
-     exponential_backoff: Karafka::App.config.pause_with_exponential_backoff
+     timeout: topic.pause_timeout,
+     max_timeout: topic.pause_max_timeout,
+     exponential_backoff: topic.pause_with_exponential_backoff
    )
  end
 
  # Resumes processing of partitions for which pause time has ended.
  #
- # @yieldparam [String] topic name
+ # @yieldparam [Karafka::Routing::Topic] topic
  # @yieldparam [Integer] partition number
  def resume
    @pauses.each do |topic, partitions|
@@ -0,0 +1,92 @@
+ # frozen_string_literal: true
+
+ module Karafka
+   module Connection
+     # Usually it is ok to use the `Rdkafka::Consumer` directly because we need 1:1 its
+     # functionality. There are however cases where we want to have extra recoveries or other
+     # handling of errors and settings. This is where this module comes in handy.
+     #
+     # We do not want to wrap and delegate all via a proxy object for performance reasons, but we
+     # do still want to be able to alter some functionalities. This wrapper helps us do it when
+     # it would be needed
+     class Proxy < SimpleDelegator
+       # Timeout on the watermark query
+       WATERMARK_REQUEST_TIMEOUT = 5_000
+
+       # Timeout on the TPL request query
+       TPL_REQUEST_TIMEOUT = 5_000
+
+       # How many attempts we want to take for something that would end up with all_brokers_down
+       BROKERS_DOWN_MAX_ATTEMPTS = 3
+
+       # How long should we wait in between all_brokers_down
+       BROKERS_DOWN_BACKOFF_TIME = 1
+
+       private_constant :WATERMARK_REQUEST_TIMEOUT, :BROKERS_DOWN_MAX_ATTEMPTS,
+                        :BROKERS_DOWN_BACKOFF_TIME, :TPL_REQUEST_TIMEOUT
+
+       attr_accessor :wrapped
+
+       alias __getobj__ wrapped
+
+       # @param obj [Rdkafka::Consumer, Proxy] rdkafka consumer or consumer wrapped with proxy
+       def initialize(obj)
+         super
+         # Do not allow for wrapping proxy with a proxy. This will prevent a case where we might
+         # wrap an already wrapped object with another proxy level. Simplifies passing consumers
+         # and makes it safe to wrap without type checking
+         @wrapped = obj.is_a?(self.class) ? obj.wrapped : obj
+       end
+
+       # Proxies the `#query_watermark_offsets` with extra recovery from timeout problems.
+       # We impose our own custom timeout to make sure, that high-latency clusters and overloaded
+       # clusters can handle our requests.
+       #
+       # @param topic [String] topic name
+       # @param partition [Partition]
+       # @return [Array<Integer, Integer>] watermark offsets
+       def query_watermark_offsets(topic, partition)
+         with_brokers_down_retry do
+           @wrapped.query_watermark_offsets(
+             topic,
+             partition,
+             WATERMARK_REQUEST_TIMEOUT
+           )
+         end
+       end
+
+       # Similar to `#query_watermark_offsets`, this method can be sensitive to latency. We handle
+       # this the same way
+       #
+       # @param tpl [Rdkafka::Consumer::TopicPartitionList] tpl to get time offsets
+       # @return [Rdkafka::Consumer::TopicPartitionList] tpl with time offsets
+       def offsets_for_times(tpl)
+         with_brokers_down_retry do
+           @wrapped.offsets_for_times(tpl, TPL_REQUEST_TIMEOUT)
+         end
+       end
+
+       private
+
+       # Runs expected block of code with few retries on all_brokers_down
+       # librdkafka can return `all_brokers_down` for scenarios when broker is overloaded or not
+       # reachable due to latency.
+       def with_brokers_down_retry
+         attempt ||= 0
+         attempt += 1
+
+         yield
+       rescue Rdkafka::RdkafkaError => e
+         raise if e.code != :all_brokers_down
+
+         if attempt <= BROKERS_DOWN_MAX_ATTEMPTS
+           sleep(BROKERS_DOWN_BACKOFF_TIME)
+
+           retry
+         end
+
+         raise
+       end
+     end
+   end
+ end
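
`with_brokers_down_retry` above is a plain retry-with-backoff loop built on Ruby's `retry` keyword. A self-contained sketch of the same mechanics, assuming stand-ins for the error class, timings, and block body (none of these are rdkafka APIs):

```ruby
# Stand-in for Rdkafka::RdkafkaError with a #code, so the sketch runs without the gems
class BrokerError < StandardError
  def code
    :all_brokers_down
  end
end

MAX_ATTEMPTS = 3
BACKOFF_TIME = 0.01 # seconds; the real proxy waits a full second between attempts

# Mirrors Proxy#with_brokers_down_retry: retry only on :all_brokers_down and
# re-raise once the retry budget is exhausted
def with_brokers_down_retry
  attempt ||= 0
  attempt += 1

  yield
rescue BrokerError => e
  raise if e.code != :all_brokers_down

  if attempt <= MAX_ATTEMPTS
    sleep(BACKOFF_TIME)

    # retry re-runs the method body; local variables like attempt are preserved
    retry
  end

  raise
end

calls = 0
result = with_brokers_down_retry do
  calls += 1
  # Fail twice, then succeed, emulating a temporarily overloaded cluster
  raise BrokerError if calls < 3

  :watermarks
end

puts "#{result} after #{calls} calls" # => watermarks after 3 calls
```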
@@ -280,6 +280,9 @@ module Karafka
  when 'connection.client.rebalance_callback.error'
    error "Rebalance callback error occurred: #{error}"
    error details
+ when 'connection.client.unsubscribe.error'
+   error "Client unsubscribe error occurred: #{error}"
+   error details
  else
    # This should never happen. Please contact the maintainers
    raise Errors::UnsupportedCaseError, event
@@ -35,6 +35,10 @@ module Karafka
  connection.listener.fetch_loop
  connection.listener.fetch_loop.received
 
+ connection.client.rebalance_callback
+ connection.client.poll.error
+ connection.client.unsubscribe.error
+
  consumer.consume
  consumer.consumed
  consumer.consuming.pause
@@ -220,6 +220,11 @@ module Karafka
  next if partition_name == '-1'
  # Skip until lag info is available
  next if partition_statistics['consumer_lag'] == -1
+ next if partition_statistics['consumer_lag_stored'] == -1
+
+ # Skip if we do not own the fetch assignment
+ next if partition_statistics['fetch_state'] == 'stopped'
+ next if partition_statistics['fetch_state'] == 'none'
 
  public_send(
    metric.type,
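
The guards added above can be read as a filter over librdkafka's per-partition statistics: drop the internal `-1` partition, partitions without lag data yet, and partitions whose fetch assignment the process does not own. A self-contained sketch with hypothetical stats data shaped like those hashes:

```ruby
# Hypothetical per-partition statistics, keyed by partition name as librdkafka emits them
partitions = {
  '-1' => { 'consumer_lag' => 10, 'consumer_lag_stored' => 10, 'fetch_state' => 'active' },
  '0'  => { 'consumer_lag' => -1, 'consumer_lag_stored' => -1, 'fetch_state' => 'active' },
  '1'  => { 'consumer_lag' => 5,  'consumer_lag_stored' => 5,  'fetch_state' => 'stopped' },
  '2'  => { 'consumer_lag' => 7,  'consumer_lag_stored' => 7,  'fetch_state' => 'active' }
}

# Mirrors the listener's skip conditions: only partitions passing every guard get reported
reportable = partitions.reject do |name, stats|
  name == '-1' ||
    stats['consumer_lag'] == -1 ||
    stats['consumer_lag_stored'] == -1 ||
    %w[stopped none].include?(stats['fetch_state'])
end

puts reportable.keys.inspect # => ["2"]
```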
@@ -1,7 +1,9 @@
  # frozen_string_literal: true
 
  module Karafka
+   # Namespace for third-party libraries patches
    module Patches
+     # Rdkafka patches specific to Karafka
      module Rdkafka
        # Binding patches that slightly change how rdkafka operates in certain places
        module Bindings
@@ -51,11 +53,18 @@ module Karafka
  # @param opaque [Rdkafka::Opaque]
  # @param tpl [Rdkafka::Consumer::TopicPartitionList]
  def trigger_callbacks(code, opaque, tpl)
-   case code
-   when RB::RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS
-     opaque.call_on_partitions_assigned(tpl)
-   when RB::RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS
-     opaque.call_on_partitions_revoked(tpl)
+   Karafka.monitor.instrument(
+     'connection.client.rebalance_callback',
+     caller: self,
+     code: code,
+     tpl: tpl
+   ) do
+     case code
+     when RB::RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS
+       opaque.call_on_partitions_assigned(tpl)
+     when RB::RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS
+       opaque.call_on_partitions_revoked(tpl)
+     end
    end
  rescue StandardError => e
    Karafka.monitor.instrument(
@@ -14,11 +14,6 @@
  module Karafka
    module Pro
      class Iterator
-       # Max time for a TPL request. We increase it to compensate for remote clusters latency
-       TPL_REQUEST_TIMEOUT = 2_000
-
-       private_constant :TPL_REQUEST_TIMEOUT
-
        # Because we have various formats in which we can provide the offsets, before we can
        # subscribe to them, there needs to be a bit of normalization.
        #
@@ -30,7 +25,7 @@ module Karafka
        # @param consumer [::Rdkafka::Consumer] consumer instance needed to talk with Kafka
        # @param expanded_topics [Hash] hash with expanded and normalized topics data
        def initialize(consumer, expanded_topics)
-         @consumer = consumer
+         @consumer = Connection::Proxy.new(consumer)
          @expanded_topics = expanded_topics
          @mapped_topics = Hash.new { |h, k| h[k] = {} }
        end
@@ -144,7 +139,7 @@ module Karafka
        # If there were no time-based, no need to query Kafka
        return if time_tpl.empty?
 
-       real_offsets = @consumer.offsets_for_times(time_tpl, TPL_REQUEST_TIMEOUT)
+       real_offsets = @consumer.offsets_for_times(time_tpl)
 
        real_offsets.to_h.each do |name, results|
          results.each do |result|
@@ -0,0 +1,48 @@
+ # frozen_string_literal: true
+
+ # This Karafka component is a Pro component under a commercial license.
+ # This Karafka component is NOT licensed under LGPL.
+ #
+ # All of the commercial components are present in the lib/karafka/pro directory of this
+ # repository and their usage requires commercial license agreement.
+ #
+ # Karafka has also commercial-friendly license, commercial support and commercial components.
+ #
+ # By sending a pull request to the pro components, you are agreeing to transfer the copyright of
+ # your code to Maciej Mensfeld.
+
+ module Karafka
+   module Pro
+     module Routing
+       module Features
+         class Pausing < Base
+           # Contract to make sure, that the pause settings on a per topic basis are as expected
+           class Contract < Contracts::Base
+             configure do |config|
+               config.error_messages = YAML.safe_load(
+                 File.read(
+                   File.join(Karafka.gem_root, 'config', 'locales', 'pro_errors.yml')
+                 )
+               ).fetch('en').fetch('validations').fetch('topic')
+             end
+
+             required(:pause_timeout) { |val| val.is_a?(Integer) && val.positive? }
+             required(:pause_max_timeout) { |val| val.is_a?(Integer) && val.positive? }
+             required(:pause_with_exponential_backoff) { |val| [true, false].include?(val) }
+
+             virtual do |data, errors|
+               next unless errors.empty?
+
+               pause_timeout = data.fetch(:pause_timeout)
+               pause_max_timeout = data.fetch(:pause_max_timeout)
+
+               next if pause_timeout <= pause_max_timeout
+
+               [[%i[pause_timeout], :max_timeout_vs_pause_max_timeout]]
+             end
+           end
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,44 @@
+ # frozen_string_literal: true
+
+ # This Karafka component is a Pro component under a commercial license.
+ # This Karafka component is NOT licensed under LGPL.
+ #
+ # All of the commercial components are present in the lib/karafka/pro directory of this
+ # repository and their usage requires commercial license agreement.
+ #
+ # Karafka has also commercial-friendly license, commercial support and commercial components.
+ #
+ # By sending a pull request to the pro components, you are agreeing to transfer the copyright of
+ # your code to Maciej Mensfeld.
+
+ module Karafka
+   module Pro
+     module Routing
+       module Features
+         class Pausing < Base
+           # Expansion allowing for a per topic pause strategy definitions
+           module Topic
+             # Allows for per-topic pausing strategy setting
+             #
+             # @param timeout [Integer] how long should we wait upon processing error (milliseconds)
+             # @param max_timeout [Integer] what is the max timeout in case of an exponential
+             #   backoff (milliseconds)
+             # @param with_exponential_backoff [Boolean] should we use exponential backoff
+             #
+             # @note We do not construct here the nested config like we do with other routing
+             #   features, because this feature operates on the OSS layer by injection of values
+             #   and a nested config is not needed.
+             def pause(timeout: nil, max_timeout: nil, with_exponential_backoff: nil)
+               self.pause_timeout = timeout if timeout
+               self.pause_max_timeout = max_timeout if max_timeout
+
+               return unless with_exponential_backoff
+
+               self.pause_with_exponential_backoff = with_exponential_backoff
+             end
+           end
+         end
+       end
+     end
+   end
+ end
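
Taken together with the contract above, the `#pause` routing extension would be used roughly like this in an application's routes (the topic and consumer names are hypothetical; timeouts are in milliseconds, per the diff's docs):

```ruby
class KarafkaApp < Karafka::App
  routes.draw do
    # Hypothetical topic: retry it faster than the app-wide pause defaults
    topic :orders_events do
      consumer OrdersConsumer

      # Per-topic pause strategy introduced in 2.1.9 (Pro)
      pause(
        timeout: 1_000,
        max_timeout: 30_000,
        with_exponential_backoff: true
      )
    end
  end
end
```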
@@ -0,0 +1,25 @@
+ # frozen_string_literal: true
+
+ # This Karafka component is a Pro component under a commercial license.
+ # This Karafka component is NOT licensed under LGPL.
+ #
+ # All of the commercial components are present in the lib/karafka/pro directory of this
+ # repository and their usage requires commercial license agreement.
+ #
+ # Karafka has also commercial-friendly license, commercial support and commercial components.
+ #
+ # By sending a pull request to the pro components, you are agreeing to transfer the copyright of
+ # your code to Maciej Mensfeld.
+
+ module Karafka
+   module Pro
+     module Routing
+       module Features
+         # Feature allowing for a per-route reconfiguration of the pausing strategy
+         # It can be useful when different topics should have different backoff policies
+         class Pausing < Base
+         end
+       end
+     end
+   end
+ end
@@ -17,14 +17,18 @@ module Karafka
    @topics = topics
  end
 
- # @param topic [String] topic name
+ # @param topic_name [String] topic name
  # @param partition [Integer] partition number
- def find_or_create(topic, partition)
-   @coordinators[topic][partition] ||= @coordinator_class.new(
-     @topics.find(topic),
-     partition,
-     @pauses_manager.fetch(topic, partition)
-   )
+ def find_or_create(topic_name, partition)
+   @coordinators[topic_name][partition] ||= begin
+     routing_topic = @topics.find(topic_name)
+
+     @coordinator_class.new(
+       routing_topic,
+       partition,
+       @pauses_manager.fetch(routing_topic, partition)
+     )
+   end
  end
 
  # Resumes processing of partitions for which pause time has ended.
@@ -35,16 +39,16 @@ module Karafka
    @pauses_manager.resume(&block)
  end
 
- # @param topic [String] topic name
+ # @param topic_name [String] topic name
  # @param partition [Integer] partition number
- def revoke(topic, partition)
-   return unless @coordinators[topic].key?(partition)
+ def revoke(topic_name, partition)
+   return unless @coordinators[topic_name].key?(partition)
 
    # The fact that we delete here does not change the fact that the executor still holds the
    # reference to this coordinator. We delete it here, as we will no longer process any
    # new stuff with it and we may need a new coordinator if we regain this partition, but the
    # coordinator may still be in use
-   @coordinators[topic].delete(partition).revoke
+   @coordinators[topic_name].delete(partition).revoke
  end
 
  # Clears coordinators and re-creates the pauses manager
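
`#find_or_create` above relies on an auto-vivifying nested hash plus `||=` to build each coordinator exactly once per topic/partition pair. A self-contained sketch of that memoization pattern (the `Coordinator` struct is a stand-in, not the Karafka class):

```ruby
# A coordinator stand-in that records what it was built with
Coordinator = Struct.new(:topic, :partition)

# Nested auto-vivifying hash, like @coordinators in the manager:
# accessing a missing topic key creates an empty partitions hash for it
coordinators = Hash.new { |h, k| h[k] = {} }

# Mirrors #find_or_create: ||= ensures at most one coordinator per pair
find_or_create = lambda do |topic_name, partition|
  coordinators[topic_name][partition] ||= Coordinator.new(topic_name, partition)
end

first  = find_or_create.call('events', 0)
second = find_or_create.call('events', 0)

puts first.equal?(second) # => true, the exact same object is reused
```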
@@ -18,6 +18,9 @@ module Karafka
    max_wait_time
    initial_offset
    consumer_persistence
+   pause_timeout
+   pause_max_timeout
+   pause_with_exponential_backoff
  ].freeze
 
  private_constant :INHERITABLE_ATTRIBUTES
@@ -62,5 +62,15 @@ module Karafka
        end
      end
    end
+
+   # @return [Boolean] true if we are in any of the statuses that would indicate we should no
+   #   longer process incoming data. It is a meta status built from others and not a separate
+   #   state in the sense of a state machine
+   def done?
+     # Short-track for the most common case not to invoke all others on normal execution
+     return false if running?
+
+     stopping? || stopped? || quieting? || quiet? || terminated?
+   end
  end
 end
@@ -43,7 +43,13 @@ class KarafkaApp < Karafka::App
  # This logger prints the producer development info using the Karafka logger.
  # It is similar to the consumer logger listener but producer oriented.
  Karafka.producer.monitor.subscribe(
-   WaterDrop::Instrumentation::LoggerListener.new(Karafka.logger)
+   WaterDrop::Instrumentation::LoggerListener.new(
+     # Log producer operations using the Karafka logger
+     Karafka.logger,
+     # If you set this to true, logs will contain each message details
+     # Please note, that this can be extensive
+     log_messages: false
+   )
  )
 
  routes.draw do
@@ -3,5 +3,5 @@
 
  # Main module namespace
  module Karafka
    # Current Karafka version
-   VERSION = '2.1.8'
+   VERSION = '2.1.10'
  end
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: karafka
  version: !ruby/object:Gem::Version
- version: 2.1.8
+ version: 2.1.10
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -12,30 +12,30 @@ cert_chain:
  -----BEGIN CERTIFICATE-----
  MIIEcDCCAtigAwIBAgIBATANBgkqhkiG9w0BAQsFADA/MRAwDgYDVQQDDAdjb250
  YWN0MRcwFQYKCZImiZPyLGQBGRYHa2FyYWZrYTESMBAGCgmSJomT8ixkARkWAmlv
- MB4XDTIyMDgxOTE3MjEzN1oXDTIzMDgxOTE3MjEzN1owPzEQMA4GA1UEAwwHY29u
+ MB4XDTIzMDgyMTA3MjU1NFoXDTI0MDgyMDA3MjU1NFowPzEQMA4GA1UEAwwHY29u
  dGFjdDEXMBUGCgmSJomT8ixkARkWB2thcmFma2ExEjAQBgoJkiaJk/IsZAEZFgJp
- bzCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAODzeO3L6lxdATzMHKNW
- jFA/GGunoPuylO/BMzy8RiQHh7VIvysAKs0tHhTx3g2D0STDpF+hcQcPELFikiT2
- F+1wOHj/SsrK7VKqfA8+gq04hKc5sQoX2Egf9k3V0YJ3eZ6R/koHkQ8A0TVt0w6F
- ZQckoV4MqnEAx0g/FZN3mnHTlJ3VFLSBqJEIe+S6FZMl92mSv+hTrlUG8VaYxSfN
- lTCvnKk284F6QZq5XIENLRmcDd/3aPBLnLwNnyMyhB+6gK8cUO+CFlDO5tjo/aBA
- rUnl++wGG0JooF1ed0v+evOn9KoMBG6rHewcf79qJbVOscbD8qSAmo+sCXtcFryr
- KRMTB8gNbowJkFRJDEe8tfRy11u1fYzFg/qNO82FJd62rKAw2wN0C29yCeQOPRb1
- Cw9Y4ZwK9VFNEcV9L+3pHTHn2XfuZHtDaG198VweiF6raFO4yiEYccodH/USP0L5
- cbcCFtmu/4HDSxL1ByQXO84A0ybJuk3/+aPUSXe9C9U8fwIDAQABo3cwdTAJBgNV
- HRMEAjAAMAsGA1UdDwQEAwIEsDAdBgNVHQ4EFgQUSlcEakb7gfn/5E2WY6z73BF/
- iZkwHQYDVR0RBBYwFIESY29udGFjdEBrYXJhZmthLmlvMB0GA1UdEgQWMBSBEmNv
- bnRhY3RAa2FyYWZrYS5pbzANBgkqhkiG9w0BAQsFAAOCAYEA1aS+E7RXJ1w9g9mJ
- G0NzFxe64OEuENosNlvYQCbRKGCXAU1qqelYkBQHseRgRKxLICrnypRo9IEobyHa
- vDnJ4r7Tsb34dleqQW2zY/obG+cia3Ym2JsegXWF7dDOzCXJ4FN8MFoT2jHlqLLw
- yrap0YO5zx0GSQ0Dwy8h2n2v2vanMEeCx7iNm3ERgR5WuN5sjzWoz2A/JLEEcK0C
- EnAGKCWAd1fuG8IemDjT1edsd5FyYR4bIX0m+99oDuFZyPiiIbalmyYiSBBp59Yb
- Q0P8zeBi4OfwCZNcxqz0KONmw9JLNv6DgyEAH5xe/4JzhMEgvIRiPj0pHfA7oqQF
- KUNqvD1KlxbEC+bZfE5IZhnqYLdld/Ksqd22FI1RBhiS1Ejfsj99LVIm9cBuZEY2
- Qf04B9ceLUaC4fPVEz10FyobjaFoY4i32xRto3XnrzeAgfEe4swLq8bQsR3w/EF3
- MGU0FeSV2Yj7Xc2x/7BzLK8xQn5l7Yy75iPF+KP3vVmDHnNl
+ bzCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAOuZpyQKEwsTG9plLat7
+ 8bUaNuNBEnouTsNMr6X+XTgvyrAxTuocdsyP1sNCjdS1B8RiiDH1/Nt9qpvlBWon
+ sdJ1SYhaWNVfqiYStTDnCx3PRMmHRdD4KqUWKpN6VpZ1O/Zu+9Mw0COmvXgZuuO9
+ wMSJkXRo6dTCfMedLAIxjMeBIxtoLR2e6Jm6MR8+8WYYVWrO9kSOOt5eKQLBY7aK
+ b/Dc40EcJKPg3Z30Pia1M9ZyRlb6SOj6SKpHRqc7vbVQxjEw6Jjal1lZ49m3YZMd
+ ArMAs9lQZNdSw5/UX6HWWURLowg6k10RnhTUtYyzO9BFev0JFJftHnmuk8vtb+SD
+ 5VPmjFXg2VOcw0B7FtG75Vackk8QKfgVe3nSPhVpew2CSPlbJzH80wChbr19+e3+
+ YGr1tOiaJrL6c+PNmb0F31NXMKpj/r+n15HwlTMRxQrzFcgjBlxf2XFGnPQXHhBm
+ kp1OFnEq4GG9sON4glRldkwzi/f/fGcZmo5fm3d+0ZdNgwIDAQABo3cwdTAJBgNV
+ HRMEAjAAMAsGA1UdDwQEAwIEsDAdBgNVHQ4EFgQUPVH5+dLA80A1kJ2Uz5iGwfOa
+ 1+swHQYDVR0RBBYwFIESY29udGFjdEBrYXJhZmthLmlvMB0GA1UdEgQWMBSBEmNv
+ bnRhY3RAa2FyYWZrYS5pbzANBgkqhkiG9w0BAQsFAAOCAYEAnpa0jcN7JzREHMTQ
+ bfZ+xcvlrzuROMY6A3zIZmQgbnoZZNuX4cMRrT1p1HuwXpxdpHPw7dDjYqWw3+1h
+ 3mXLeMuk7amjQpYoSWU/OIZMhIsARra22UN8qkkUlUj3AwTaChVKN/bPJOM2DzfU
+ kz9vUgLeYYFfQbZqeI6SsM7ltilRV4W8D9yNUQQvOxCFxtLOetJ00fC/E7zMUzbK
+ IBwYFQYsbI6XQzgAIPW6nGSYKgRhkfpmquXSNKZRIQ4V6bFrufa+DzD0bt2ZA3ah
33
+ fMmJguyb5L2Gf1zpDXzFSPMG7YQFLzwYz1zZZvOU7/UCpQsHpID/YxqDp4+Dgb+Y
34
+ qma0whX8UG/gXFV2pYWpYOfpatvahwi+A1TwPQsuZwkkhi1OyF1At3RY+hjSXyav
35
+ AnG1dJU+yL2BK7vaVytLTstJME5mepSZ46qqIJXMuWob/YPDmVaBF39TDSG9e34s
36
+ msG3BiCqgOgHAnL23+CN3Rt8MsuRfEtoTKpJVcCfoEoNHOkc
37
37
  -----END CERTIFICATE-----
38
- date: 2023-07-29 00:00:00.000000000 Z
38
+ date: 2023-08-21 00:00:00.000000000 Z
39
39
  dependencies:
40
40
  - !ruby/object:Gem::Dependency
41
41
  name: karafka-core
@@ -77,7 +77,7 @@ dependencies:
77
77
  requirements:
78
78
  - - ">="
79
79
  - !ruby/object:Gem::Version
80
- version: 2.6.2
80
+ version: 2.6.6
81
81
  - - "<"
82
82
  - !ruby/object:Gem::Version
83
83
  version: 3.0.0
@@ -87,7 +87,7 @@ dependencies:
87
87
  requirements:
88
88
  - - ">="
89
89
  - !ruby/object:Gem::Version
90
- version: 2.6.2
90
+ version: 2.6.6
91
91
  - - "<"
92
92
  - !ruby/object:Gem::Version
93
93
  version: 3.0.0
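Both bumped requirement blocks above encode the same `>= 2.6.6, < 3.0.0` window. Ruby's stdlib can evaluate such constraints directly, which makes it easy to see why `2.6.2` no longer satisfies the dependency (version numbers here are taken from the diff; this is just an illustrative check, not part of Karafka):

```ruby
require "rubygems" # Gem::Requirement / Gem::Version ship with Ruby

# The bumped constraint window from the gemspec metadata above
requirement = Gem::Requirement.new(">= 2.6.6", "< 3.0.0")

puts requirement.satisfied_by?(Gem::Version.new("2.6.6"))  # => true  (new floor)
puts requirement.satisfied_by?(Gem::Version.new("2.6.2"))  # => false (old floor, now rejected)
puts requirement.satisfied_by?(Gem::Version.new("3.0.0"))  # => false (exclusive upper bound)
```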
@@ -178,6 +178,7 @@ files:
 - lib/karafka/connection/listeners_batch.rb
 - lib/karafka/connection/messages_buffer.rb
 - lib/karafka/connection/pauses_manager.rb
+- lib/karafka/connection/proxy.rb
 - lib/karafka/connection/raw_messages_buffer.rb
 - lib/karafka/connection/rebalance_manager.rb
 - lib/karafka/contracts.rb
@@ -314,6 +315,9 @@ files:
 - lib/karafka/pro/routing/features/long_running_job/config.rb
 - lib/karafka/pro/routing/features/long_running_job/contract.rb
 - lib/karafka/pro/routing/features/long_running_job/topic.rb
+- lib/karafka/pro/routing/features/pausing.rb
+- lib/karafka/pro/routing/features/pausing/contract.rb
+- lib/karafka/pro/routing/features/pausing/topic.rb
 - lib/karafka/pro/routing/features/throttling.rb
 - lib/karafka/pro/routing/features/throttling/config.rb
 - lib/karafka/pro/routing/features/throttling/contract.rb
metadata.gz.sig CHANGED
Binary file