karafka 2.0.18 → 2.0.19

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 17800cf4d60529269b31340d76245ed2035b463f03daceeb482241528eb5d681
- data.tar.gz: 9caed11b4f7221d9c5e9c33607ba965a0fd9c75c7ce037d40e38df119a8255ea
+ metadata.gz: be91c3848b757c6af4c25f905df2b081629532bd29dbcea23ed2ef0af2e4e4a2
+ data.tar.gz: 6823d4335e4b395546642101d6754b97958c86810cbcd12819559acff74bd90d
  SHA512:
- metadata.gz: 5de87acf0d5dbef037c2e980d1354c398fe47bf9a14312f772b346f59bd6f638aa0d4444b9d32984f494cd66dc3cb752f843dc72534ad9ec7df367ecde100dfc
- data.tar.gz: 8577247eeb4eae1d5abc9e74de9c2c95787535662ed8a2a664824871fe1907abb7aa6e14619926cde218cfb06db5496ec0c38dd6fc3fc3ad0efe0b5702fe067e
+ metadata.gz: fce0259ee987e37c01ea037f81ea91b4eb770ea8eabcb9f93c66aa1a1960c903030648b5441945ef28f43a88660d18240e6db61f6885a169d70eb46174543616
+ data.tar.gz: f93985c98daba5965f8f0597da4744d1dae0603f24a6a44f4b337462f66c8b0f08d0c22dd734205a0aebe7c3dbdb73237ff568f7f1ff842b38c05a7f4b5ce463
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,5 +1,12 @@
  # Karafka framework changelog

+ ## 2.0.19 (2022-11-20)
+ - **[Feature]** Provide ability to skip failing messages without dispatching them to an alternative topic (DLQ).
+ - [Improvement] Improve the integration with Ruby on Rails by preventing double-require of components.
+ - [Improvement] Improve stability of the shutdown process upon critical errors.
+ - [Improvement] Improve stability of the integrations spec suite.
+ - [Fix] Fix an issue where upon fast startup of multiple subscription groups from the same consumer group, a ghost queue would be created due to problems in `Concurrent::Hash`.
+
  ## 2.0.18 (2022-11-18)
  - **[Feature]** Support quiet mode via `TSTP` signal. When used, Karafka will finish processing current messages, run `shutdown` jobs, and switch to a quiet mode where no new work is being accepted. At the same time, it will keep the consumer group quiet, and thus no rebalance will be triggered. This can be particularly useful during deployments.
  - [Improvement] Trigger `#revoked` for jobs in case revocation would happen during shutdown when jobs are still running. This should ensure, we get a notion of revocation for Pro LRJ jobs even when revocation happening upon shutdown (#1150).
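The first 2.0.19 entry above covers skipping failing messages without a DLQ dispatch. A minimal routing sketch of how that could be used, assuming hypothetical topic and consumer names (the `dead_letter_queue` DSL and the `topic: false` contract change appear further down in this diff):

```ruby
# Hypothetical names; only the `topic: false` setting reflects the new 2.0.19 behaviour.
class KarafkaApp < Karafka::App
  routes.draw do
    topic :orders_states do
      consumer OrdersStatesConsumer

      # After max_retries failed attempts, skip the message entirely instead of
      # producing it to a dead letter topic.
      dead_letter_queue(max_retries: 3, topic: false)
    end
  end
end
```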
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- karafka (2.0.18)
+ karafka (2.0.19)
  karafka-core (>= 2.0.2, < 3.0.0)
  rdkafka (>= 0.12)
  thor (>= 0.20)
data/karafka.gemspec CHANGED
@@ -34,7 +34,12 @@ Gem::Specification.new do |spec|
  spec.require_paths = %w[lib]

  spec.metadata = {
+ 'funding_uri' => 'https://karafka.io/#become-pro',
+ 'homepage_uri' => 'https://karafka.io',
+ 'changelog_uri' => 'https://github.com/karafka/karafka/blob/master/CHANGELOG.md',
+ 'bug_tracker_uri' => 'https://github.com/karafka/karafka/issues',
  'source_code_uri' => 'https://github.com/karafka/karafka',
+ 'documentation_uri' => 'https://karafka.io/docs',
  'rubygems_mfa_required' => 'true'
  }
  end
@@ -1,7 +1,9 @@
  # frozen_string_literal: true

  begin
- require 'active_job'
+ # Do not load active job if already loaded
+ require 'active_job' unless Object.const_defined?('ActiveJob')
+
  require_relative 'queue_adapters/karafka_adapter'

  module ActiveJob
@@ -241,6 +241,17 @@ module Karafka
  end
  end

+ # Runs a single poll ignoring all the potential errors
+ # This is used as a keep-alive in the shutdown stage and any errors that happen here are
+ # irrelevant from the shutdown process perspective
+ #
+ # This is used only to trigger rebalance callbacks
+ def ping
+ poll(100)
+ rescue Rdkafka::RdkafkaError
+ nil
+ end
+
  private

  # When we cannot store an offset, it means we no longer own the partition
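The new `ping` is the keep-alive primitive for the `wait_pinging` listener changes shown later in this diff. A minimal sketch of the intended pattern, with hypothetical local names (`client`, `work_done?`) standing in for the real listener internals:

```ruby
# Keep the consumer group alive (and rebalance callbacks firing) while waiting for a
# shutdown condition; errors raised while pinging are deliberately swallowed by #ping.
until work_done?
  client.ping
  sleep(0.2)
end
```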
@@ -302,8 +313,8 @@ module Karafka

  @kafka.close
  @buffer.clear
- # @note We do not clear rebalance manager here as we may still have revocation info here
- # that we want to consider valid prior to running another reconnection
+ # @note We do not clear rebalance manager here as we may still have revocation info
+ # here that we want to consider valid prior to running another reconnection
  end
  end
  end
@@ -0,0 +1,47 @@
+ # frozen_string_literal: true
+
+ module Karafka
+ module Connection
+ # This object represents a collective status of execution of a group of listeners running
+ # inside one consumer group but in separate subscription groups.
+ #
+ # There are cases when we do not want to close a given client when others from the same
+ # consumer group are running because it can cause instabilities due to early shutdown of some
+ # of the clients out of the same consumer group.
+ #
+ # We also want to make sure we close one consumer at a time while others can continue polling.
+ #
+ # This prevents a scenario where a rebalance is not acknowledged and we lose the assignment
+ # without having a chance to commit changes.
+ class ConsumerGroupCoordinator
+ # @param group_size [Integer] number of separate subscription groups in a consumer group
+ def initialize(group_size)
+ # We need two locks here:
+ # - first one is to decrement the number of listeners doing work
+ # - second to ensure only one client is being closed at the same time and that others can
+ # wait actively (not locked)
+ @work_mutex = Mutex.new
+ @shutdown_lock = Mutex.new
+ @group_size = group_size
+ @finished = Set.new
+ end
+
+ # @return [Boolean] can we start shutdown on a given listener
+ # @note If true, will also obtain a lock so no-one else will be closing at the same time we do
+ def shutdown?
+ @finished.size == @group_size && @shutdown_lock.try_lock
+ end
+
+ # Unlocks the shutdown lock
+ def unlock
+ @shutdown_lock.unlock if @shutdown_lock.owned?
+ end
+
+ # Marks the given listener as finished
+ # @param listener_id [String]
+ def finish_work(listener_id)
+ @finished << listener_id
+ end
+ end
+ end
+ end
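A minimal sketch of how the coordinator is meant to be driven by each listener in the group; `subscription_groups_count`, `listener_id` and `client` are placeholders here, while the real wiring lives in the listener changes below:

```ruby
coordinator = Karafka::Connection::ConsumerGroupCoordinator.new(subscription_groups_count)

# In each listener thread, once its own work is done:
coordinator.finish_work(listener_id)

# Wait until every listener has finished AND we won the shutdown lock,
# so only one rdkafka client is being closed at any given moment.
sleep(0.2) until coordinator.shutdown?

begin
  client.stop
ensure
  coordinator.unlock # let the next listener close its client
end
```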
@@ -14,20 +14,15 @@ module Karafka
  # @return [String] id of this listener
  attr_reader :id

- # Mutex for things we do not want to do in parallel even in different consumer groups
- MUTEX = Mutex.new
-
- private_constant :MUTEX
-
- # @param consumer_group_status [Karafka::Connection::ConsumerGroupStatus]
+ # @param consumer_group_coordinator [Karafka::Connection::ConsumerGroupCoordinator]
  # @param subscription_group [Karafka::Routing::SubscriptionGroup]
  # @param jobs_queue [Karafka::Processing::JobsQueue] queue where we should push work
  # @return [Karafka::Connection::Listener] listener instance
- def initialize(consumer_group_status, subscription_group, jobs_queue)
+ def initialize(consumer_group_coordinator, subscription_group, jobs_queue)
  proc_config = ::Karafka::App.config.internal.processing

  @id = SecureRandom.uuid
- @consumer_group_status = consumer_group_status
+ @consumer_group_coordinator = consumer_group_coordinator
  @subscription_group = subscription_group
  @jobs_queue = jobs_queue
  @coordinators = Processing::CoordinatorsBuffer.new
@@ -68,17 +63,13 @@ module Karafka
  #
  # @note We wrap it with a mutex exactly because of the above case of forceful shutdown
  def shutdown
- # We want to make sure that we never close two librdkafka clients at the same time. I'm not
- # particularly fond of it's shutdown API being fully thread-safe
- MUTEX.synchronize do
- return if @stopped
-
- @mutex.synchronize do
- @stopped = true
- @executors.clear
- @coordinators.reset
- @client.stop
- end
+ return if @stopped
+
+ @mutex.synchronize do
+ @stopped = true
+ @executors.clear
+ @coordinators.reset
+ @client.stop
  end
  end

@@ -147,9 +138,9 @@ module Karafka
  # What we do care however is the ability to still run revocation jobs in case anything
  # would change in the cluster. We still want to notify the long-running jobs about changes
  # that occurred in the cluster.
- wait_polling(
+ wait_pinging(
  wait_until: -> { @jobs_queue.empty?(@subscription_group.id) },
- after_poll: -> { build_and_schedule_revoke_lost_partitions_jobs }
+ after_ping: -> { build_and_schedule_revoke_lost_partitions_jobs }
  )

  # We do not want to schedule the shutdown jobs prior to finishing all the jobs
@@ -159,19 +150,23 @@ module Karafka
  build_and_schedule_shutdown_jobs

  # Wait until all the shutdown jobs are done
- wait_polling(wait_until: -> { @jobs_queue.empty?(@subscription_group.id) })
+ wait_pinging(wait_until: -> { @jobs_queue.empty?(@subscription_group.id) })

  # Once all the work is done, we need to decrement counter of active subscription groups
  # within this consumer group
- @consumer_group_status.finish
+ @consumer_group_coordinator.finish_work(id)

  # Wait if we're in the quiet mode
- wait_polling(wait_until: -> { !Karafka::App.quieting? })
+ wait_pinging(wait_until: -> { !Karafka::App.quieting? })

  # We need to wait until all the work in the whole consumer group (local to the process)
  # is done. Otherwise we may end up with locks and `Timed out LeaveGroupRequest in flight`
  # warning notifications.
- wait_polling(wait_until: -> { !@consumer_group_status.working? })
+ wait_pinging(wait_until: -> { @consumer_group_coordinator.shutdown? })
+
+ # This extra ping will make sure we've refreshed the rebalance state after other instances
+ # have potentially shut down. This will prevent us from closing with a dangling callback
+ @client.ping

  shutdown

@@ -189,6 +184,8 @@ module Karafka
  restart

  sleep(1) && retry
+ ensure
+ @consumer_group_coordinator.unlock
  end

  # Resumes processing of partitions that were paused due to an error.
@@ -293,15 +290,15 @@ module Karafka
  # can safely discard it. We can however use the rebalance information if needed.
  #
  # @param wait_until [Proc] until this evaluates to true, we will poll data
- # @param after_poll [Proc] code that we want to run after each batch poll (if any)
+ # @param after_ping [Proc] code that we want to run after each ping (if any)
  #
  # @note Performance of this is not relevant (in regards to blocks) because it is used only
  # on shutdown and quiet, hence not in the running mode
- def wait_polling(wait_until:, after_poll: -> {})
+ def wait_pinging(wait_until:, after_ping: -> {})
  until wait_until.call
- @client.batch_poll
-
- after_poll.call
+ @client.ping
+ after_ping.call
+ sleep(0.2)
  end
  end

@@ -10,13 +10,13 @@ module Karafka
  # @return [ListenersBatch]
  def initialize(jobs_queue)
  @batch = App.subscription_groups.flat_map do |_consumer_group, subscription_groups|
- consumer_group_status = Connection::ConsumerGroupStatus.new(
+ consumer_group_coordinator = Connection::ConsumerGroupCoordinator.new(
  subscription_groups.size
  )

  subscription_groups.map do |subscription_group|
  Connection::Listener.new(
- consumer_group_status,
+ consumer_group_coordinator,
  subscription_group,
  jobs_queue
  )
@@ -44,9 +44,6 @@ module Karafka
  # This should never happen. Please open an issue if it does.
  InvalidCoordinatorStateError = Class.new(BaseError)

- # This should never happen. Please open an issue if it does.
- InvalidConsumerGroupStatusError = Class.new(BaseError)
-
  # This should never happen. Please open an issue if it does.
  StrategyNotFoundError = Class.new(BaseError)

@@ -25,11 +25,13 @@ module Karafka

  # Builds up nested concurrent hash for data tracking
  def initialize
- @processing_times = Concurrent::Hash.new do |topics_hash, topic|
- topics_hash[topic] = Concurrent::Hash.new do |partitions_hash, partition|
- # This array does not have to be concurrent because we always access single partition
- # data via instrumentation that operates in a single thread via consumer
- partitions_hash[partition] = []
+ @processing_times = Concurrent::Map.new do |topics_hash, topic|
+ topics_hash.compute_if_absent(topic) do
+ Concurrent::Map.new do |partitions_hash, partition|
+ # This array does not have to be concurrent because we always access single
+ # partition data via instrumentation that operates in a single thread via consumer
+ partitions_hash.compute_if_absent(partition) { [] }
+ end
  end
  end
  end
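Both this change and the jobs queue change below swap default-block writes on `Concurrent::Hash` for `Concurrent::Map#compute_if_absent`, which is the fix behind the "ghost queue" changelog entry. A small self-contained illustration of the pattern (not Karafka code):

```ruby
require 'concurrent'

# compute_if_absent runs the initializer at most once per key, even when many threads
# look up a missing key at the same time, so every reader gets the same object back.
registry = Concurrent::Map.new do |map, key|
  map.compute_if_absent(key) { Queue.new }
end

threads = 10.times.map { Thread.new { registry['group-1'] } }

# All ten threads observe the very same Queue instance
puts threads.map(&:value).uniq.size # => 1
```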
@@ -21,7 +21,7 @@ module Karafka
  #
  # AJ has manual offset management on by default and the offset management is delegated to
  # the AJ consumer. This means, we cannot mark as consumed always. We can only mark as
- # consumed when we skip given job upon errors. In all the other scenarions marking as
+ # consumed when we skip given job upon errors. In all the other scenarios marking as
  # consumed needs to happen in the AJ consumer on a per job basis.
  module AjDlqMom
  include DlqMom
@@ -46,7 +46,7 @@ module Karafka
  else
  coordinator.pause_tracker.reset
  skippable_message = find_skippable_message
- dispatch_to_dlq(skippable_message)
+ dispatch_to_dlq(skippable_message) if dispatch_to_dlq?
  # We can commit the offset here because we know that we skip it "forever" and
  # since AJ consumer commits the offset after each job, we also know that the
  # previous job was successful
@@ -42,7 +42,7 @@ module Karafka
  # We reset the pause to indicate we will now consider it as "ok".
  coordinator.pause_tracker.reset
  skippable_message = find_skippable_message
- dispatch_to_dlq(skippable_message)
+ dispatch_to_dlq(skippable_message) if dispatch_to_dlq?
  mark_as_consumed(skippable_message)
  pause(coordinator.seek_offset)
  end
@@ -59,7 +59,6 @@ module Karafka

  # Moves the broken message into a separate queue defined via the settings
  #
- # @private
  # @param skippable_message [Array<Karafka::Messages::Message>] message we want to
  # dispatch to DLQ
  def dispatch_to_dlq(skippable_message)
@@ -81,6 +80,13 @@ module Karafka
  message: skippable_message
  )
  end
+
+ # @return [Boolean] should we dispatch the message to DLQ or not. When the dispatch topic
+ # is set to false, we will skip the dispatch, effectively ignoring the broken message
+ # without taking any action.
+ def dispatch_to_dlq?
+ topic.dead_letter_queue.topic
+ end
  end
  end
  end
@@ -43,10 +43,9 @@ module Karafka
  else
  coordinator.pause_tracker.reset

- skippable_message = find_skippable_message
-
  unless revoked?
- dispatch_to_dlq(skippable_message)
+ skippable_message = find_skippable_message
+ dispatch_to_dlq(skippable_message) if dispatch_to_dlq?
  mark_as_consumed(skippable_message)
  end

@@ -42,10 +42,12 @@ module Karafka
  else
  coordinator.pause_tracker.reset

- skippable_message = find_skippable_message
-
  unless revoked?
- dispatch_to_dlq(skippable_message)
+ if dispatch_to_dlq?
+ skippable_message = find_skippable_message
+ dispatch_to_dlq(skippable_message)
+ end
+
  seek(coordinator.seek_offset)
  end

@@ -45,8 +45,12 @@ module Karafka
  else
  # We reset the pause to indicate we will now consider it as "ok".
  coordinator.pause_tracker.reset
- skippable_message = find_skippable_message
- dispatch_to_dlq(skippable_message)
+
+ if dispatch_to_dlq?
+ skippable_message = find_skippable_message
+ dispatch_to_dlq(skippable_message)
+ end
+
  pause(coordinator.seek_offset)
  end
  end
@@ -20,8 +20,12 @@ module Karafka
  # scheduled by Ruby hundreds of thousands of times per group.
  # We cannot use a single semaphore as it could potentially block in listeners that should
  # process with their data and also could unlock when a given group needs to remain locked
- @semaphores = Concurrent::Map.new { |h, k| h[k] = Queue.new }
+ @semaphores = Concurrent::Map.new do |h, k|
+ h.compute_if_absent(k) { Queue.new }
+ end
+
  @in_processing = Hash.new { |h, k| h[k] = [] }
+
  @mutex = Mutex.new
  end

@@ -5,7 +5,8 @@
  rails = false

  begin
- require 'rails'
+ # Do not load Rails again if already loaded
+ Object.const_defined?('Rails::Railtie') || require('rails')

  rails = true
  rescue LoadError
@@ -29,6 +29,8 @@ module Karafka

  topic = dead_letter_queue[:topic]

+ # When topic is set to false, it means we just want to skip dispatch on DLQ
+ next if topic == false
  next if topic.is_a?(String) && Contracts::TOPIC_REGEXP.match?(topic)

  [[%i[dead_letter_queue topic], :format]]
@@ -12,7 +12,8 @@ module Karafka
  private_constant :DEFAULT_MAX_RETRIES

  # @param max_retries [Integer] after how many retries should we move data to dlq
- # @param topic [String] where the messages should be moved if failing
+ # @param topic [String, false] where the messages should be moved if failing, or false
+ # if we do not want to move them anywhere and just skip them
  # @return [Config] defined config
  def dead_letter_queue(max_retries: DEFAULT_MAX_RETRIES, topic: nil)
  @dead_letter_queue ||= Config.new(
@@ -84,6 +84,7 @@ module Karafka
  reconnect.backoff.jitter.ms
  reconnect.backoff.max.ms
  reconnect.backoff.ms
+ resolve_cb
  sasl.kerberos.keytab
  sasl.kerberos.kinit.cmd
  sasl.kerberos.min.time.before.relogin
@@ -215,6 +216,7 @@ module Karafka
  reconnect.backoff.ms
  request.required.acks
  request.timeout.ms
+ resolve_cb
  retries
  retry.backoff.ms
  sasl.kerberos.keytab
@@ -3,5 +3,5 @@
  # Main module namespace
  module Karafka
  # Current Karafka version
- VERSION = '2.0.18'
+ VERSION = '2.0.19'
  end
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: karafka
  version: !ruby/object:Gem::Version
- version: 2.0.18
+ version: 2.0.19
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -35,7 +35,7 @@ cert_chain:
  Qf04B9ceLUaC4fPVEz10FyobjaFoY4i32xRto3XnrzeAgfEe4swLq8bQsR3w/EF3
  MGU0FeSV2Yj7Xc2x/7BzLK8xQn5l7Yy75iPF+KP3vVmDHnNl
  -----END CERTIFICATE-----
- date: 2022-11-18 00:00:00.000000000 Z
+ date: 2022-11-20 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: karafka-core
@@ -176,7 +176,7 @@ files:
  - lib/karafka/cli/install.rb
  - lib/karafka/cli/server.rb
  - lib/karafka/connection/client.rb
- - lib/karafka/connection/consumer_group_status.rb
+ - lib/karafka/connection/consumer_group_coordinator.rb
  - lib/karafka/connection/listener.rb
  - lib/karafka/connection/listeners_batch.rb
  - lib/karafka/connection/messages_buffer.rb
@@ -327,7 +327,12 @@ licenses:
  - LGPL-3.0
  - Commercial
  metadata:
+ funding_uri: https://karafka.io/#become-pro
+ homepage_uri: https://karafka.io
+ changelog_uri: https://github.com/karafka/karafka/blob/master/CHANGELOG.md
+ bug_tracker_uri: https://github.com/karafka/karafka/issues
  source_code_uri: https://github.com/karafka/karafka
+ documentation_uri: https://karafka.io/docs
  rubygems_mfa_required: 'true'
  post_install_message:
  rdoc_options: []
metadata.gz.sig CHANGED
Binary file
data/lib/karafka/connection/consumer_group_status.rb DELETED
@@ -1,38 +0,0 @@
- # frozen_string_literal: true
-
- module Karafka
- module Connection
- # This object represents a collective status of execution of group of listeners running inside
- # of one consumer group but potentially in separate subscription groups.
- #
- # There are cases when we do not want to close a given client when others from the same
- # consumer group are running because it can cause instabilities due to early shutdown of some
- # of the clients out of same consumer group.
- #
- # Here we can track it and only shutdown listeners when all work in a group is done.
- class ConsumerGroupStatus
- # @param group_size [Integer] number of separate subscription groups in a consumer group
- def initialize(group_size)
- @mutex = Mutex.new
- @active_size = group_size
- end
-
- # @return [Boolean] Are there any listeners that are still doing any type of work. If not,
- # it means a consumer group is safe to be shutdown fully.
- def working?
- @active_size.positive?
- end
-
- # Decrements number of working listeners in the group by one until there's none
- def finish
- @mutex.synchronize do
- @active_size -= 1
-
- return if @active_size >= 0
-
- raise Errors::InvalidConsumerGroupStatusError, @active_size
- end
- end
- end
- end
- end