karafka 2.0.16 → 2.0.18

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 8cc0ac8b318d7c40e8974e2e26753c68cc5f611bbcd0e4fbea94e3eb275002dd
- data.tar.gz: db638edaecc662ac8a0f7d627f4bcae6e1e1e83500a49c4dfa4a4f32ee8f78c9
+ metadata.gz: 17800cf4d60529269b31340d76245ed2035b463f03daceeb482241528eb5d681
+ data.tar.gz: 9caed11b4f7221d9c5e9c33607ba965a0fd9c75c7ce037d40e38df119a8255ea
  SHA512:
- metadata.gz: 6875e7e152b4150f397f3830c828412e0477409483c81e714525161e5263cff686c0cb9b46a6c11cb5b914dc51087e4bac95d5dcdc0e25c7233825b363ed7e6c
- data.tar.gz: 1f35d410350dc3fbe97462e979f3df832484df1f1f5119e16fe2c28210d01a05e1d3e086c73e1a3b70ba3c6cae9a25dd50e0907ca454b68c9166efcabd22f972
+ metadata.gz: 5de87acf0d5dbef037c2e980d1354c398fe47bf9a14312f772b346f59bd6f638aa0d4444b9d32984f494cd66dc3cb752f843dc72534ad9ec7df367ecde100dfc
+ data.tar.gz: 8577247eeb4eae1d5abc9e74de9c2c95787535662ed8a2a664824871fe1907abb7aa6e14619926cde218cfb06db5496ec0c38dd6fc3fc3ad0efe0b5702fe067e
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,5 +1,35 @@
  # Karafka framework changelog
 
+ ## 2.0.18 (2022-11-18)
+ - **[Feature]** Support quiet mode via `TSTP` signal. When used, Karafka will finish processing current messages, run `shutdown` jobs, and switch to a quiet mode where no new work is being accepted. At the same time, it will keep the consumer group quiet, and thus no rebalance will be triggered. This can be particularly useful during deployments.
+ - [Improvement] Trigger `#revoked` for jobs in case revocation would happen during shutdown when jobs are still running. This should ensure we get a notion of revocation for Pro LRJ jobs even when revocation happens upon shutdown (#1150).
+ - [Improvement] Stabilize the shutdown procedure for consumer groups with many subscription groups that have non-aligned processing cost per batch.
+ - [Improvement] Remove double loading of Karafka via the Rails railtie.
+ - [Fix] Fix invalid class references in YARD docs.
+ - [Fix] Prevent parallel closing of many clients.
+ - [Fix] Fix a case where information about revocation for a combination of LRJ + VP would not be dispatched until all VP work was done.
+
+ ## 2.0.17 (2022-11-10)
+ - [Fix] Fix a few typos around DLQ and Pro DLQ Dispatch original metadata naming.
+ - [Fix] Narrow the components lookup to the appropriate scope (#1114).
+
+ ### Upgrade notes
+
+ 1. Replace `original-*` references from DLQ dispatched metadata with `original_*`:
+
+ ```ruby
+ # DLQ topic consumption
+ def consume
+   messages.each do |broken_message|
+     topic = broken_message.metadata['original_topic'] # was original-topic
+     partition = broken_message.metadata['original_partition'] # was original-partition
+     offset = broken_message.metadata['original_offset'] # was original-offset
+
+     Rails.logger.error "This message is broken: #{topic}/#{partition}/#{offset}"
+   end
+ end
+ ```
+
  ## 2.0.16 (2022-11-09)
  - **[Breaking]** Disable the root `manual_offset_management` setting and require it to be configured per topic. This is part of "topic features" configuration extraction for better code organization.
  - **[Feature]** Introduce **Dead Letter Queue** feature and Pro **Enhanced Dead Letter Queue** feature
@@ -26,7 +56,6 @@
  - [Specs] Split specs into regular and pro to simplify how resources are loaded
  - [Specs] Add specs to ensure that all the Pro components have a proper per-file license (#1099)
 
-
  ### Upgrade notes
 
  1. Remove the `manual_offset_management` setting from the main config if you use it:
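The 2.0.18 quiet mode described above is driven purely by a process signal, so no code changes are needed to use it. As an illustrative sketch only (not taken from the gem's files), a deploy script could quiet a running server before stopping it, assuming the PID is written to a hypothetical `tmp/pids/karafka.pid`:

```ruby
# Hypothetical deploy helper (illustrative; the pid file path is an assumption).
karafka_pid = Integer(File.read('tmp/pids/karafka.pid').strip)

# TSTP switches Karafka into quiet mode: in-flight work finishes, shutdown jobs run,
# no new work is accepted, and the consumer group stays intact (no rebalance).
Process.kill('TSTP', karafka_pid)

# Once the new release is ready to take over, stop the quieted process for real.
Process.kill('QUIT', karafka_pid)
```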
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- karafka (2.0.16)
+ karafka (2.0.18)
  karafka-core (>= 2.0.2, < 3.0.0)
  rdkafka (>= 0.12)
  thor (>= 0.20)
data/README.md CHANGED
@@ -4,6 +4,8 @@
  [![Gem Version](https://badge.fury.io/rb/karafka.svg)](http://badge.fury.io/rb/karafka)
  [![Join the chat at https://slack.karafka.io](https://raw.githubusercontent.com/karafka/misc/master/slack.svg)](https://slack.karafka.io)
 
+ **Note**: Upgrade notes for migration from Karafka `1.4` to Karafka `2.0` can be found [here](https://karafka.io/docs/Upgrades-2.0/).
+
  ## About Karafka
 
  Karafka is a Ruby and Rails multi-threaded efficient Kafka processing framework that:
data/lib/karafka/app.rb CHANGED
@@ -14,11 +14,12 @@ module Karafka
  .builder
  end
 
- # @return [Array<Karafka::Routing::SubscriptionGroup>] active subscription groups
+ # @return [Hash] active subscription groups grouped by their consumer group
  def subscription_groups
  consumer_groups
  .active
- .flat_map(&:subscription_groups)
+ .map { |consumer_group| [consumer_group, consumer_group.subscription_groups] }
+ .to_h
  end
 
  # Just a nicer name for the consumer groups
@@ -17,7 +17,11 @@ module Karafka
  # How many times should we retry polling in case of a failure
  MAX_POLL_RETRIES = 20
 
- private_constant :MAX_POLL_RETRIES
+ # We want to make sure we never close several clients in the same moment to prevent
+ # potential race conditions and other issues
+ SHUTDOWN_MUTEX = Mutex.new
+
+ private_constant :MAX_POLL_RETRIES, :SHUTDOWN_MUTEX
 
  # Creates a new consumer instance.
  #
@@ -281,24 +285,26 @@ module Karafka
 
  # Commits the stored offsets in a sync way and closes the consumer.
  def close
- @mutex.synchronize do
- # Once client is closed, we should not close it again
- # This could only happen in case of a race-condition when forceful shutdown happens
- # and triggers this from a different thread
- return if @closed
-
- @closed = true
-
- internal_commit_offsets(async: false)
-
- # Remove callbacks runners that were registered
- ::Karafka::Instrumentation.statistics_callbacks.delete(@subscription_group.id)
- ::Karafka::Instrumentation.error_callbacks.delete(@subscription_group.id)
-
- @kafka.close
- @buffer.clear
- # @note We do not clear rebalance manager here as we may still have revocation info here
- # that we want to consider valid prior to running another reconnection
+ # Allow only one client to be closed at the same time
+ SHUTDOWN_MUTEX.synchronize do
+ # Make sure that no other operations are happening on this client when we close it
+ @mutex.synchronize do
+ # Once client is closed, we should not close it again
+ # This could only happen in case of a race-condition when forceful shutdown happens
+ # and triggers this from a different thread
+ return if @closed
+
+ @closed = true
+
+ # Remove callbacks runners that were registered
+ ::Karafka::Instrumentation.statistics_callbacks.delete(@subscription_group.id)
+ ::Karafka::Instrumentation.error_callbacks.delete(@subscription_group.id)
+
+ @kafka.close
+ @buffer.clear
+ # @note We do not clear rebalance manager here as we may still have revocation info here
+ # that we want to consider valid prior to running another reconnection
+ end
  end
  end
 
@@ -0,0 +1,38 @@
+ # frozen_string_literal: true
+
+ module Karafka
+ module Connection
+ # This object represents a collective status of execution of a group of listeners running
+ # inside of one consumer group but potentially in separate subscription groups.
+ #
+ # There are cases when we do not want to close a given client when others from the same
+ # consumer group are running because it can cause instabilities due to early shutdown of some
+ # of the clients out of the same consumer group.
+ #
+ # Here we can track it and only shut down listeners when all work in a group is done.
+ class ConsumerGroupStatus
+ # @param group_size [Integer] number of separate subscription groups in a consumer group
+ def initialize(group_size)
+ @mutex = Mutex.new
+ @active_size = group_size
+ end
+
+ # @return [Boolean] Are there any listeners that are still doing any type of work. If not,
+ # it means a consumer group is safe to be shut down fully.
+ def working?
+ @active_size.positive?
+ end
+
+ # Decrements the number of working listeners in the group by one until there are none
+ def finish
+ @mutex.synchronize do
+ @active_size -= 1
+
+ return if @active_size >= 0
+
+ raise Errors::InvalidConsumerGroupStatusError, @active_size
+ end
+ end
+ end
+ end
+ end
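To make the contract of the new class easier to follow, here is a small usage sketch based only on the methods defined in the hunk above (illustrative, not part of the diff):

```ruby
# Two subscription groups inside one consumer group.
status = Karafka::Connection::ConsumerGroupStatus.new(2)

status.working? # => true, both listeners still have work to do
status.finish   # first listener finished all of its work
status.working? # => true, one listener remains
status.finish   # second listener finished
status.working? # => false, the whole consumer group can now be shut down safely
```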
@@ -14,13 +14,20 @@ module Karafka
  # @return [String] id of this listener
  attr_reader :id
 
+ # Mutex for things we do not want to do in parallel even in different consumer groups
+ MUTEX = Mutex.new
+
+ private_constant :MUTEX
+
+ # @param consumer_group_status [Karafka::Connection::ConsumerGroupStatus]
  # @param subscription_group [Karafka::Routing::SubscriptionGroup]
  # @param jobs_queue [Karafka::Processing::JobsQueue] queue where we should push work
  # @return [Karafka::Connection::Listener] listener instance
- def initialize(subscription_group, jobs_queue)
+ def initialize(consumer_group_status, subscription_group, jobs_queue)
  proc_config = ::Karafka::App.config.internal.processing
 
  @id = SecureRandom.uuid
+ @consumer_group_status = consumer_group_status
  @subscription_group = subscription_group
  @jobs_queue = jobs_queue
  @coordinators = Processing::CoordinatorsBuffer.new
@@ -61,13 +68,17 @@ module Karafka
  #
  # @note We wrap it with a mutex exactly because of the above case of forceful shutdown
  def shutdown
- return if @stopped
-
- @mutex.synchronize do
- @stopped = true
- @executors.clear
- @coordinators.reset
- @client.stop
+ # We want to make sure that we never close two librdkafka clients at the same time. I'm not
+ # convinced that its shutdown API is fully thread-safe
+ MUTEX.synchronize do
+ return if @stopped
+
+ @mutex.synchronize do
+ @stopped = true
+ @executors.clear
+ @coordinators.reset
+ @client.stop
+ end
  end
  end
 
@@ -82,7 +93,8 @@ module Karafka
  # Kafka connections / Internet connection issues / Etc. Business logic problems should not
  # propagate this far.
  def fetch_loop
- until Karafka::App.stopping?
+ # Run the main loop as long as we are not stopping or moving into quiet mode
+ until Karafka::App.stopping? || Karafka::App.quieting?
  Karafka.monitor.instrument(
  'connection.listener.fetch_loop',
  caller: self,
@@ -122,7 +134,7 @@ module Karafka
  wait
  end
 
- # If we are stopping we will no longer schedule any jobs despite polling.
+ # If we are stopping we will no longer schedule any regular jobs despite polling.
  # We need to keep polling not to exceed the `max.poll.interval` for long-running
  # non-blocking jobs and we need to allow them to finish. We however do not want to
  # enqueue any new jobs. It's worth keeping in mind that it is the end user responsibility
@@ -131,7 +143,14 @@ module Karafka
  #
  # We do not care about resuming any partitions or lost jobs as we do not plan to do
  # anything with them as we're in the shutdown phase.
- wait_with_poll
+ #
+ # What we do care about, however, is the ability to still run revocation jobs in case
+ # anything would change in the cluster. We still want to notify the long-running jobs about
+ # changes that occurred in the cluster.
+ wait_polling(
+ wait_until: -> { @jobs_queue.empty?(@subscription_group.id) },
+ after_poll: -> { build_and_schedule_revoke_lost_partitions_jobs }
+ )
 
  # We do not want to schedule the shutdown jobs prior to finishing all the jobs
  # (including non-blocking) as there might be a long-running job with a shutdown and then
@@ -139,7 +158,20 @@ module Karafka
  # as it could create a race-condition.
  build_and_schedule_shutdown_jobs
 
- wait_with_poll
+ # Wait until all the shutdown jobs are done
+ wait_polling(wait_until: -> { @jobs_queue.empty?(@subscription_group.id) })
+
+ # Once all the work is done, we need to decrement counter of active subscription groups
+ # within this consumer group
+ @consumer_group_status.finish
+
+ # Wait if we're in the quiet mode
+ wait_polling(wait_until: -> { !Karafka::App.quieting? })
+
+ # We need to wait until all the work in the whole consumer group (local to the process)
+ # is done. Otherwise we may end up with locks and `Timed out LeaveGroupRequest in flight`
+ # warning notifications.
+ wait_polling(wait_until: -> { !@consumer_group_status.working? })
 
  shutdown
 
@@ -256,10 +288,21 @@ module Karafka
  end
 
  # Waits without blocking the polling
- # This should be used only when we no longer plan to use any incoming data and we can safely
- # discard it
- def wait_with_poll
- @client.batch_poll until @jobs_queue.empty?(@subscription_group.id)
+ #
+ # This should be used only when we no longer plan to use any incoming messages data and we
+ # can safely discard it. We can however use the rebalance information if needed.
+ #
+ # @param wait_until [Proc] until this evaluates to true, we will poll data
+ # @param after_poll [Proc] code that we want to run after each batch poll (if any)
+ #
+ # @note Performance of this is not relevant (in regards to blocks) because it is used only
+ # on shutdown and quiet, hence not in the running mode
+ def wait_polling(wait_until:, after_poll: -> {})
+ until wait_until.call
+ @client.batch_poll
+
+ after_poll.call
+ end
  end
 
  # We can stop client without a problem, as it will reinitialize itself when running the
@@ -9,8 +9,18 @@ module Karafka
  # @param jobs_queue [JobsQueue]
  # @return [ListenersBatch]
  def initialize(jobs_queue)
- @batch = App.subscription_groups.map do |subscription_group|
- Connection::Listener.new(subscription_group, jobs_queue)
+ @batch = App.subscription_groups.flat_map do |_consumer_group, subscription_groups|
+ consumer_group_status = Connection::ConsumerGroupStatus.new(
+ subscription_groups.size
+ )
+
+ subscription_groups.map do |subscription_group|
+ Connection::Listener.new(
+ consumer_group_status,
+ subscription_group,
+ jobs_queue
+ )
+ end
  end
  end
 
@@ -18,6 +18,16 @@ module Karafka
  # Stop needs to be blocking to wait for all the things to finalize
  Karafka::Server.stop
  end
+
+ # Quiets Karafka upon any event
+ #
+ # @note This method is not blocking and will not wait for Karafka to fully quiet.
+ # It will trigger the quiet procedure but won't wait.
+ #
+ # @note Please keep in mind you need to `#stop` to actually stop the server anyhow.
+ def quiet
+ Karafka::Server.quiet
+ end
  end
  end
  end
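Judging by the surrounding context, this hunk extends the embedded-mode API; the `Karafka::Embedded` module name below is an assumption, as the file header is not shown in this diff. If that assumption holds, quieting an embedded server might look roughly like this:

```ruby
# Illustrative only; `Karafka::Embedded` is inferred from context, not shown in the hunk.
Karafka::Embedded.quiet # non-blocking: triggers quiet mode, the consumer group stays intact

# A quieted process still has to be stopped explicitly at some point:
Karafka::Embedded.stop
```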
@@ -44,6 +44,9 @@ module Karafka
  # This should never happen. Please open an issue if it does.
  InvalidCoordinatorStateError = Class.new(BaseError)
 
+ # This should never happen. Please open an issue if it does.
+ InvalidConsumerGroupStatusError = Class.new(BaseError)
+
  # This should never happen. Please open an issue if it does.
  StrategyNotFoundError = Class.new(BaseError)
 
@@ -18,7 +18,7 @@ module Karafka
 
  # Logs each messages fetching attempt
  #
- # @param event [Dry::Events::Event] event details including payload
+ # @param event [Karafka::Core::Monitoring::Event] event details including payload
  def on_connection_listener_fetch_loop(event)
  listener = event[:caller]
  debug "[#{listener.id}] Polling messages..."
@@ -26,7 +26,7 @@ module Karafka
 
  # Logs about messages that we've received from Kafka
  #
- # @param event [Dry::Events::Event] event details including payload
+ # @param event [Karafka::Core::Monitoring::Event] event details including payload
  def on_connection_listener_fetch_loop_received(event)
  listener = event[:caller]
  time = event[:time]
@@ -42,7 +42,7 @@ module Karafka
 
  # Prints info about the fact that a given job has started
  #
- # @param event [Dry::Events::Event] event details including payload
+ # @param event [Karafka::Core::Monitoring::Event] event details including payload
  def on_worker_process(event)
  job = event[:job]
  job_type = job.class.to_s.split('::').last
@@ -53,7 +53,7 @@ module Karafka
 
  # Prints info about the fact that a given job has finished
  #
- # @param event [Dry::Events::Event] event details including payload
+ # @param event [Karafka::Core::Monitoring::Event] event details including payload
  def on_worker_processed(event)
  job = event[:job]
  time = event[:time]
@@ -66,7 +66,7 @@ module Karafka
  # Logs info about system signals that Karafka received and prints backtrace for threads in
  # case of ttin
  #
- # @param event [Dry::Events::Event] event details including payload
+ # @param event [Karafka::Core::Monitoring::Event] event details including payload
  def on_process_notice_signal(event)
  info "Received #{event[:signal]} system signal"
 
@@ -89,7 +89,7 @@ module Karafka
 
  # Logs info that we're running Karafka app.
  #
- # @param _event [Dry::Events::Event] event details including payload
+ # @param _event [Karafka::Core::Monitoring::Event] event details including payload
  def on_app_running(_event)
  info "Running in #{RUBY_DESCRIPTION}"
  info "Running Karafka #{Karafka::VERSION} server"
@@ -99,23 +99,28 @@ module Karafka
  info 'See LICENSE and the LGPL-3.0 for licensing details.'
  end
 
+ # @param _event [Karafka::Core::Monitoring::Event] event details including payload
+ def on_app_quieting(_event)
+ info 'Switching to quiet mode. New messages will not be processed.'
+ end
+
  # Logs info that we're going to stop the Karafka server.
  #
- # @param _event [Dry::Events::Event] event details including payload
+ # @param _event [Karafka::Core::Monitoring::Event] event details including payload
  def on_app_stopping(_event)
  info 'Stopping Karafka server'
  end
 
  # Logs info that we stopped the Karafka server.
  #
- # @param _event [Dry::Events::Event] event details including payload
+ # @param _event [Karafka::Core::Monitoring::Event] event details including payload
  def on_app_stopped(_event)
  info 'Stopped Karafka server'
  end
 
  # Logs info when we have dispatched a message to the DLQ
  #
- # @param event [Dry::Events::Event] event details including payload
+ # @param event [Karafka::Core::Monitoring::Event] event details including payload
  def on_dead_letter_queue_dispatched(event)
  message = event[:message]
  offset = message.offset
@@ -123,12 +128,12 @@ module Karafka
  dlq_topic = event[:caller].topic.dead_letter_queue.topic
  partition = message.partition
 
- info "Dispatched message #{offset} from #{topic}/#{partition} to DQL topic: #{dlq_topic}"
+ info "Dispatched message #{offset} from #{topic}/#{partition} to DLQ topic: #{dlq_topic}"
  end
 
  # There are many types of errors that can occur in many places, but we provide a single
  # handler for all of them to simplify error instrumentation.
- # @param event [Dry::Events::Event] event details including payload
+ # @param event [Karafka::Core::Monitoring::Event] event details including payload
  def on_error_occurred(event)
  type = event[:type]
  error = event[:error]
@@ -19,6 +19,7 @@ module Karafka
  EVENTS = %w[
  app.initialized
  app.running
+ app.quieting
  app.stopping
  app.stopped
 
@@ -42,7 +42,7 @@ module Karafka
 
  # Prints info about the fact that a given job has started
  #
- # @param event [Dry::Events::Event] event details including payload
+ # @param event [Karafka::Core::Monitoring::Event] event details including payload
  def on_worker_process(event)
  current_span = client.trace('karafka.consumer')
  push_tags
@@ -60,7 +60,7 @@ module Karafka
 
  # Prints info about the fact that a given job has finished
  #
- # @param event [Dry::Events::Event] event details including payload
+ # @param event [Karafka::Core::Monitoring::Event] event details including payload
  def on_worker_processed(event)
  push_tags
 
@@ -80,7 +80,7 @@ module Karafka
 
  # There are many types of errors that can occur in many places, but we provide a single
  # handler for all of them to simplify error instrumentation.
- # @param event [Dry::Events::Event] event details including payload
+ # @param event [Karafka::Core::Monitoring::Event] event details including payload
  def on_error_occurred(event)
  push_tags
 
@@ -20,10 +20,8 @@ module Karafka
  # @param args [Object] anything the base coordinator accepts
  def initialize(*args)
  super
- @on_enqueued_invoked = false
- @on_started_invoked = false
- @on_finished_invoked = false
- @on_revoked_invoked = false
+
+ @executed = []
  @flow_lock = Mutex.new
  end
 
@@ -34,9 +32,7 @@ module Karafka
  super
 
  @mutex.synchronize do
- @on_enqueued_invoked = false
- @on_started_invoked = false
- @on_finished_invoked = false
+ @executed.clear
  @last_message = messages.last
  end
  end
@@ -50,9 +46,7 @@ module Karafka
  # enqueued
  def on_enqueued
  @flow_lock.synchronize do
- return if @on_enqueued_invoked
-
- @on_enqueued_invoked = true
+ return unless executable?(:on_enqueued)
 
  yield(@last_message)
  end
@@ -61,9 +55,7 @@ module Karafka
  # Runs given code only once per all the coordinated jobs upon starting first of them
  def on_started
  @flow_lock.synchronize do
- return if @on_started_invoked
-
- @on_started_invoked = true
+ return unless executable?(:on_started)
 
  yield(@last_message)
  end
@@ -75,25 +67,36 @@ module Karafka
  def on_finished
  @flow_lock.synchronize do
  return unless finished?
- return if @on_finished_invoked
-
- @on_finished_invoked = true
+ return unless executable?(:on_finished)
 
  yield(@last_message)
  end
  end
 
- # Runs once when a partition is revoked
+ # Runs once after a partition is revoked
  def on_revoked
  @flow_lock.synchronize do
- return unless finished?
- return if @on_revoked_invoked
-
- @on_revoked_invoked = true
+ return unless executable?(:on_revoked)
 
  yield(@last_message)
  end
  end
+
+ private
+
+ # Checks if given action is executable once. If it is and true is returned, this method
+ # will return false next time it is used.
+ #
+ # @param action [Symbol] what action we want to perform
+ # @return [Boolean] true if we can
+ # @note This method needs to run behind a mutex.
+ def executable?(action)
+ return false if @executed.include?(action)
+
+ @executed << action
+
+ true
+ end
  end
  end
  end
@@ -68,9 +68,9 @@ module Karafka
  payload: skippable_message.raw_payload,
  key: skippable_message.partition.to_s,
  headers: skippable_message.headers.merge(
- 'original-topic' => topic.name,
- 'original-partition' => skippable_message.partition.to_s,
- 'original-offset' => skippable_message.offset.to_s
+ 'original_topic' => topic.name,
+ 'original_partition' => skippable_message.partition.to_s,
+ 'original_offset' => skippable_message.offset.to_s
  )
  )
 
@@ -10,6 +10,7 @@ module Karafka
  SIGQUIT
  SIGTERM
  SIGTTIN
+ SIGTSTP
  ].freeze
 
  HANDLED_SIGNALS.each do |signal|
@@ -48,21 +49,23 @@ module Karafka
 
  # Traps a single signal and performs callbacks (if any) or just ignores this signal
  # @param [Symbol] signal type that we want to catch
+ # @note Since we do a lot of threading and queuing, we don't want to handle signals from the
+ # trap context as some things may not work there as expected, that is why we spawn a separate
+ # thread to handle the signal processing
  def trap_signal(signal)
  trap(signal) do
- notice_signal(signal)
- (@callbacks[signal] || []).each(&:call)
+ Thread.new do
+ notice_signal(signal)
+
+ (@callbacks[signal] || []).each(&:call)
+ end
  end
  end
 
  # Informs monitoring about trapped signal
  # @param [Symbol] signal type that we received
- # @note We cannot perform logging from trap context, that's why
- # we have to spin up a new thread to do this
  def notice_signal(signal)
- Thread.new do
- Karafka.monitor.instrument('process.notice_signal', caller: self, signal: signal)
- end
+ Karafka.monitor.instrument('process.notice_signal', caller: self, signal: signal)
  end
  end
  end
@@ -20,7 +20,7 @@ module Karafka
  # scheduled by Ruby hundreds of thousands of times per group.
  # We cannot use a single semaphore as it could potentially block in listeners that should
  # process with their data and also could unlock when a given group needs to remain locked
- @semaphores = Hash.new { |h, k| h[k] = Queue.new }
+ @semaphores = Concurrent::Map.new { |h, k| h[k] = Queue.new }
  @in_processing = Hash.new { |h, k| h[k] = [] }
  @mutex = Mutex.new
  end
@@ -47,9 +47,9 @@ module Karafka
  raise(Errors::JobsQueueSynchronizationError, job.group_id) if group.include?(job)
 
  group << job
- end
 
- @queue << job
+ @queue << job
+ end
  end
 
  # @return [Jobs::Base, nil] waits for a job from the main queue and returns it once available
@@ -105,7 +105,9 @@ module Karafka
  # @return [Boolean] tell us if we have anything in the processing (or for processing) from
  # a given group.
  def empty?(group_id)
- @in_processing[group_id].empty?
+ @mutex.synchronize do
+ @in_processing[group_id].empty?
+ end
  end
 
  # Blocks when there are things in the queue in a given group and waits until all the blocking
@@ -17,9 +17,6 @@ rescue LoadError
  end
 
  if rails
- # Load Karafka
- require 'karafka'
-
  # Load ActiveJob adapter
  require 'active_job/karafka'
 
@@ -13,10 +13,10 @@ module Karafka
  class << self
  # Extends topic and builder with given feature API
  def activate
- Topic.prepend(self::Topic) if const_defined?('Topic')
- Proxy.prepend(self::Builder) if const_defined?('Builder')
- Builder.prepend(self::Builder) if const_defined?('Builder')
- Builder.prepend(Base::Expander.new(self)) if const_defined?('Contract')
+ Topic.prepend(self::Topic) if const_defined?('Topic', false)
+ Proxy.prepend(self::Builder) if const_defined?('Builder', false)
+ Builder.prepend(self::Builder) if const_defined?('Builder', false)
+ Builder.prepend(Base::Expander.new(self)) if const_defined?('Contract', false)
  end
 
  # Loads all the features and activates them
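The only change in this hunk is the second argument to `const_defined?`, which turns off ancestor lookup; that is what "narrow the components lookup to the appropriate scope" (#1114) refers to. A plain-Ruby illustration with hypothetical module names:

```ruby
module Defaults
  Builder = Module.new
end

module MyFeature
  include Defaults
end

MyFeature.const_defined?('Builder')        # => true, found via the included module
MyFeature.const_defined?('Builder', false) # => false, only MyFeature itself is checked
```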
@@ -4,7 +4,7 @@ module Karafka
  module Routing
  module Features
  class DeadLetterQueue < Base
- # DQL topic extensions
+ # DLQ topic extensions
  module Topic
  # After how many retries should we move data to the DLQ
  DEFAULT_MAX_RETRIES = 3
@@ -25,12 +25,10 @@ module Karafka
 
  # Method which runs app
  def run
- # Since we do a lot of threading and queuing, we don't want to stop from the trap context
- # as some things may not work there as expected, that is why we spawn a separate thread to
- # handle the stopping process
- process.on_sigint { Thread.new { stop } }
- process.on_sigquit { Thread.new { stop } }
- process.on_sigterm { Thread.new { stop } }
+ process.on_sigint { stop }
+ process.on_sigquit { stop }
+ process.on_sigterm { stop }
+ process.on_sigtstp { quiet }
  process.supervise
 
  # Start is blocking until stop is called and when we stop, it will wait until
@@ -74,7 +72,8 @@ module Karafka
  # please start a separate thread to do so.
  def stop
  # Initialize the stopping process only if Karafka was running
- return if Karafka::App.stopping? || Karafka::App.stopped?
+ return if Karafka::App.stopping?
+ return if Karafka::App.stopped?
 
  Karafka::App.stop!
 
@@ -125,6 +124,18 @@ module Karafka
  Karafka::App.stopped! if timeout
  end
 
+ # Quiets the Karafka server.
+ # Karafka will stop processing but won't quit the consumer group, so no rebalance will be
+ # triggered until the final shutdown.
+ def quiet
+ # If we are already quieting or in the stop procedures, we should not do it again.
+ return if Karafka::App.quieting?
+ return if Karafka::App.stopping?
+ return if Karafka::App.stopped?
+
+ Karafka::App.quiet!
+ end
+
  private
 
  # @return [Karafka::Process] process wrapper instance used to catch system signal calls
@@ -8,6 +8,7 @@ module Karafka
  initializing: :initialize!,
  initialized: :initialized!,
  running: :run!,
+ quieting: :quiet!,
  stopping: :stop!,
  stopped: :stopped!
  }.freeze
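Each `STATES` entry pairs a state name with the bang method that moves the app into it, and the matching predicates are what the rest of this release checks (for example `Karafka::App.quieting?` in the listener and `Karafka::App.quiet!` in the server). A schematic illustration of the resulting behaviour, not the gem's implementation:

```ruby
# Schematic: quieting: :quiet! yields Karafka::App.quiet! and Karafka::App.quieting?
Karafka::App.quiet!
Karafka::App.quieting? # => true
Karafka::App.stopping? # => false until Karafka::App.stop! is called
```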
@@ -3,5 +3,5 @@
  # Main module namespace
  module Karafka
  # Current Karafka version
- VERSION = '2.0.16'
+ VERSION = '2.0.18'
  end
data/lib/karafka.rb CHANGED
@@ -86,6 +86,9 @@ end
  loader = Zeitwerk::Loader.for_gem
  # Do not load Rails extensions by default, this will be handled by Railtie if they are needed
  loader.ignore(Karafka.gem_root.join('lib/active_job'))
+ # Do not load Railtie here. It will load after everything is ready, so we don't have to load any
+ # Karafka components when we require this railtie. Railtie needs to be loaded last.
+ loader.ignore(Karafka.gem_root.join('lib/karafka/railtie'))
  # Do not load pro components as they will be loaded if needed and allowed
  loader.ignore(Karafka.core_root.join('pro/'))
  # Do not load vendors instrumentation components. Those need to be required manually if needed
@@ -96,3 +99,6 @@ loader.eager_load
  # This will load features but since Pro are not loaded automatically, they will not be visible
  # nor included here
  ::Karafka::Routing::Features::Base.load_all
+
+ # Load railtie after everything else is ready so we know we can rely on it.
+ require 'karafka/railtie'
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: karafka
  version: !ruby/object:Gem::Version
- version: 2.0.16
+ version: 2.0.18
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -35,7 +35,7 @@ cert_chain:
  Qf04B9ceLUaC4fPVEz10FyobjaFoY4i32xRto3XnrzeAgfEe4swLq8bQsR3w/EF3
  MGU0FeSV2Yj7Xc2x/7BzLK8xQn5l7Yy75iPF+KP3vVmDHnNl
  -----END CERTIFICATE-----
- date: 2022-11-09 00:00:00.000000000 Z
+ date: 2022-11-18 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: karafka-core
@@ -176,6 +176,7 @@ files:
  - lib/karafka/cli/install.rb
  - lib/karafka/cli/server.rb
  - lib/karafka/connection/client.rb
+ - lib/karafka/connection/consumer_group_status.rb
  - lib/karafka/connection/listener.rb
  - lib/karafka/connection/listeners_batch.rb
  - lib/karafka/connection/messages_buffer.rb
metadata.gz.sig CHANGED
Binary file