karafka 2.3.2 → 2.3.4

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: cb47082224d857f3029f9bb8e1b04a35e6b8ed2f7ae75bbe52bf1b778ff56226
- data.tar.gz: 53d59fd7e140f5b3e9b89dd3e4af28469bc534074110e2e93fae24c59bf81b88
+ metadata.gz: 5e7307013403e4df36017d242a2fc2a56555376af8f2015474a9602ba655343a
+ data.tar.gz: '08807329903078bf5c8014da5250aa2c96bfd170f28e57e8110c018ed480e8f9'
  SHA512:
- metadata.gz: 00e09a345122ad2facaf8adcbb52fae3ce87374083d9e6785a1f07a74c87e53c6b0b3dd32b82566e846d160569aafc55c614013d8c9f95664612150fb51d07b1
- data.tar.gz: 283e50a6b3b25579419bdc9b947e5ba802e22a1cd6d0097ab8929c5394d3858461bda15c3e70a0d2d466a8705466ef1d5b24e1a8bfe5ac8e356c64790049c7b0
+ metadata.gz: fdd01cf50e55478e2dbd7bdbdac4c42b7f86c0b55507023cce0e14e61697936ea80ab8fb00d9447ec39f2179d5e59aebc2e763e1202323319122391994954d69
+ data.tar.gz: 379eafa7fb52d35ae003583a98ae03d9cf5ff799c1599d1a2b22966d9f4e5a0fe65dd7a21b743bc81ca153dc73313f3869e1d4ac01e7784df87dafabbd98533e
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,5 +1,16 @@
  # Karafka framework changelog

+ ## 2.3.4 (2024-04-11)
+ - [Fix] Seeking a consumer group on a topic level was updating only the most recent partition.
+
+ ## 2.3.3 (2024-02-26)
+ - [Enhancement] Routing-based topics allocation for swarm (Pro).
+ - [Enhancement] Publish the `-1` shutdown reason status for a non-responding node in swarm.
+ - [Enhancement] Allow for using the `distribution` mode for DataDog listener histogram reporting (Aerdayne).
+ - [Change] Change `internal.swarm.node_report_timeout` to 60 seconds from 30 seconds to compensate for long polling.
+ - [Fix] Static membership routing evaluation happens too early in swarm.
+ - [Fix] Close producer in supervisor prior to forking and warmup to prevent invalid memory states.
+
  ## 2.3.2 (2024-02-16)
  - **[Feature]** Provide swarm capabilities to OSS and Pro.
  - **[Feature]** Provide ability to use complex strategies in DLQ (Pro).
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- karafka (2.3.2)
+ karafka (2.3.4)
  karafka-core (>= 2.3.0, < 2.4.0)
  waterdrop (>= 2.6.12, < 3.0.0)
  zeitwerk (~> 2.3)
data/README.md CHANGED
@@ -9,13 +9,13 @@
  Karafka is a Ruby and Rails multi-threaded efficient Kafka processing framework that:

  - Has a built-in [Web UI](https://karafka.io/docs/Web-UI-Features/) providing a convenient way to monitor and manage Karafka-based applications.
- - Supports parallel processing in [multiple threads](https://karafka.io/docs/Concurrency-and-multithreading) (also for a [single topic partition](https://karafka.io/docs/Pro-Virtual-Partitions) work)
+ - Supports parallel processing in [multiple threads](https://karafka.io/docs/Concurrency-and-multithreading) (also for a [single topic partition](https://karafka.io/docs/Pro-Virtual-Partitions) work) and [processes](https://karafka.io/docs/Swarm-Multi-Process).
  - [Automatically integrates](https://karafka.io/docs/Integrating-with-Ruby-on-Rails-and-other-frameworks#integrating-with-ruby-on-rails) with Ruby on Rails
  - Has [ActiveJob backend](https://karafka.io/docs/Active-Job) support (including [ordered jobs](https://karafka.io/docs/Pro-Enhanced-Active-Job#ordered-jobs))
  - Has a seamless [Dead Letter Queue](https://karafka.io/docs/Dead-Letter-Queue/) functionality built-in
  - Supports in-development [code reloading](https://karafka.io/docs/Auto-reload-of-code-changes-in-development)
  - Is powered by [librdkafka](https://github.com/edenhill/librdkafka) (the Apache Kafka C/C++ client library)
- - Has an out-of the box [StatsD/DataDog monitoring](https://karafka.io/docs/Monitoring-and-logging) with a dashboard template.
+ - Has an out-of-the-box [AppSignal](https://karafka.io/docs/Monitoring-and-Logging/#appsignal-metrics-and-error-tracking) and [StatsD/DataDog](https://karafka.io/docs/Monitoring-and-Logging/#datadog-and-statsd-integration) monitoring with dashboard templates.

  ```ruby
  # Define what topics you want to consume with which consumers in karafka.rb
data/config/locales/errors.yml CHANGED
@@ -15,6 +15,9 @@ en:
  pause_with_exponential_backoff_format: needs to be either true or false
  shutdown_timeout_format: needs to be an integer bigger than 0
  max_wait_time_format: needs to be an integer bigger than 0
+ max_wait_time_max_wait_time_vs_swarm_node_report_timeout: >
+ cannot be more than 80% of internal.swarm.node_report_timeout.
+ Decrease max_wait_time or increase node_report_timeout
  kafka_format: needs to be a filled hash
  key_must_be_a_symbol: All keys under the kafka settings scope need to be symbols
  max_timeout_vs_pause_max_timeout: pause_timeout must be less or equal to pause_max_timeout
data/config/locales/pro_errors.yml CHANGED
@@ -58,6 +58,10 @@ en:
  subscription_group_details.multiplexing_boot_format: 'needs to be an integer equal or more than 1'
  subscription_group_details.multiplexing_boot_not_dynamic: 'needs to be equal to max when not in dynamic mode'

+ swarm.active_format: needs to be true
+ swarm.nodes_format: needs to be a range or an array of nodes ids
+ swarm_nodes_with_non_existent_nodes: includes unreachable nodes ids
+
  consumer_group:
  patterns_format: must be an array with hashes
  patterns_missing: needs to be present
data/lib/karafka/admin.rb CHANGED
@@ -183,7 +183,13 @@ module Karafka
  tpl_base.each do |topic, partitions_with_offsets|
  partitions_with_offsets.each do |partition, offset|
  target = offset.is_a?(Time) ? time_tpl : tpl
- target.add_topic_and_partitions_with_offsets(topic, [[partition, offset]])
+ # We reverse and uniq to make sure that potentially duplicated references are removed
+ # in such a way that the newest stays
+ target.to_h[topic] ||= []
+ target.to_h[topic] << Rdkafka::Consumer::Partition.new(partition, offset)
+ target.to_h[topic].reverse!
+ target.to_h[topic].uniq!(&:partition)
+ target.to_h[topic].reverse!
  end
  end

@@ -219,7 +225,11 @@ module Karafka
  end

  # Since now we have proper offsets, we can add this to the final tpl for commit
- tpl.add_topic_and_partitions_with_offsets(name, [[partition, offset]])
+ tpl.to_h[name] ||= []
+ tpl.to_h[name] << Rdkafka::Consumer::Partition.new(partition, offset)
+ tpl.to_h[name].reverse!
+ tpl.to_h[name].uniq!(&:partition)
+ tpl.to_h[name].reverse!
  end
  end
  end
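Why reverse/uniq/reverse rather than a plain `uniq!`? Ruby's `uniq` keeps the first occurrence per key, while this fix needs the newest offset reference per partition to win. A minimal sketch in plain Ruby (values are illustrative):

```ruby
# Illustrative only: mimics the per-topic partition lists built above.
Partition = Struct.new(:partition, :offset)

list = []
list << Partition.new(0, 10)
list << Partition.new(1, 20)
list << Partition.new(0, 15) # newer reference to partition 0

# uniq! keeps first occurrences, so reversing first makes the newest entry
# per partition "first"; the trailing reverse! restores the original order
list.reverse!
list.uniq!(&:partition)
list.reverse!

list.map { |p| [p.partition, p.offset] } #=> [[1, 20], [0, 15]]
```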
data/lib/karafka/connection/client.rb CHANGED
@@ -600,6 +600,13 @@ module Karafka
  # @return [Rdkafka::Consumer]
  def build_consumer
  ::Rdkafka::Config.logger = ::Karafka::App.config.logger
+
+ # We need to refresh the setup of this subscription group in case we started running in a
+ # swarm. The initial configuration for validation comes from the parent node, but it needs
+ # to be altered in case of a static group membership usage for correct mapping of the
+ # group instance id.
+ @subscription_group.refresh
+
  config = ::Rdkafka::Config.new(@subscription_group.kafka)
  config.consumer_rebalance_listener = @rebalance_callback
  # We want to manage the events queue independently from the messages queue. Thanks to that
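To make the comment above concrete: with static group membership, each swarm fork must end up with its own expansion of `group.instance.id`, so the value is rebuilt inside the forked node rather than inherited from the supervisor. A hedged illustration of the kind of setup this affects (broker address and id are placeholders):

```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    config.kafka = {
      'bootstrap.servers': '127.0.0.1:9092',
      # With static membership in a swarm, each forked node needs a distinct
      # expansion of this id; @subscription_group.refresh rebuilds the kafka
      # hash post-fork so the node identifier is included
      'group.instance.id': 'my-app-instance'
    }
  end
end
```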
data/lib/karafka/contracts/config.rb CHANGED
@@ -162,6 +162,28 @@ module Karafka

  [[%i[shutdown_timeout], :shutdown_timeout_vs_max_wait_time]]
  end
+
+ # `internal.swarm.node_report_timeout` should not be close to `max_wait_time` otherwise
+ # there may be a case where node cannot report often enough because it is clogged by waiting
+ # on more data.
+ #
+ # We handle that at a config level to make sure that this is correctly configured.
+ #
+ # We do not validate this in the context of swarm usage (validate only if...) because it is
+ # often that swarm only runs on prod and we do not want to crash it surprisingly.
+ virtual do |data, errors|
+ next unless errors.empty?
+
+ max_wait_time = data.fetch(:max_wait_time)
+ node_report_timeout = data.fetch(:internal)[:swarm][:node_report_timeout] || false
+
+ next unless node_report_timeout
+ # max wait time should be at least 20% smaller than the reporting time to have enough
+ # time for reporting
+ next if max_wait_time < node_report_timeout * 0.8
+
+ [[%i[max_wait_time], :max_wait_time_vs_swarm_node_report_timeout]]
+ end
  end
  end
  end
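For instance, a setup like the sketch below would now fail validation, because `max_wait_time` is not at least 20% below `internal.swarm.node_report_timeout` (values are illustrative):

```ruby
class KarafkaApp < Karafka::App
  setup do |config|
    # node_report_timeout now defaults to 60_000 ms, so max_wait_time must
    # stay below 60_000 * 0.8 = 48_000 ms to pass the new contract
    config.max_wait_time = 50_000 # invalid: 50_000 >= 48_000
    # config.max_wait_time = 10_000 would pass
  end
end
```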
data/lib/karafka/instrumentation/vendors/datadog/metrics_listener.rb CHANGED
@@ -14,7 +14,8 @@ module Karafka
  include ::Karafka::Core::Configurable
  extend Forwardable

- def_delegators :config, :client, :rd_kafka_metrics, :namespace, :default_tags
+ def_delegators :config, :client, :rd_kafka_metrics, :namespace,
+ :default_tags, :distribution_mode

  # Value object for storing a single rdkafka metric publishing details
  RdKafkaMetric = Struct.new(:type, :scope, :name, :key_location)
@@ -53,6 +54,13 @@ module Karafka
  RdKafkaMetric.new(:gauge, :topics, 'consumer.lags_delta', 'consumer_lag_stored_d')
  ].freeze

+ # Whether histogram metrics should be sent as distributions or histograms.
+ # Distribution metrics are aggregated globally and not agent-side,
+ # providing more accurate percentiles whenever consumers are running on multiple hosts.
+ #
+ # Learn more at https://docs.datadoghq.com/metrics/types/?tab=distribution#metric-types
+ setting :distribution_mode, default: :histogram
+
  configure

  # @param block [Proc] configuration block
@@ -169,18 +177,40 @@
  %i[
  count
  gauge
- histogram
  increment
  decrement
  ].each do |metric_type|
- class_eval <<~METHODS, __FILE__, __LINE__ + 1
+ class_eval <<~RUBY, __FILE__, __LINE__ + 1
  def #{metric_type}(key, *args)
  client.#{metric_type}(
  namespaced_metric(key),
  *args
  )
  end
- METHODS
+ RUBY
+ end
+
+ # Selects the histogram mode configured and uses it to report to DD client
+ # @param key [String] non-namespaced key
+ # @param args [Array] extra arguments to pass to the client
+ def histogram(key, *args)
+ case distribution_mode
+ when :histogram
+ client.histogram(
+ namespaced_metric(key),
+ *args
+ )
+ when :distribution
+ client.distribution(
+ namespaced_metric(key),
+ *args
+ )
+ else
+ raise(
+ ::ArgumentError,
+ 'distribution_mode setting value must be either :histogram or :distribution'
+ )
+ end
  end

  # Wraps metric name in listener's namespace
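A sketch of opting into the new mode, following Karafka's documented DataDog listener setup (host, port and the subscription pattern are illustrative):

```ruby
require 'datadog/statsd'
require 'karafka/instrumentation/vendors/datadog/metrics_listener'

listener = ::Karafka::Instrumentation::Vendors::Datadog::MetricsListener.new do |config|
  config.client = Datadog::Statsd.new('localhost', 8125)
  # New in 2.3.3: emit histogram-type metrics as DataDog distributions,
  # which are aggregated server-side across all consumer hosts
  config.distribution_mode = :distribution
end

Karafka.monitor.subscribe(listener)
```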
data/lib/karafka/pro/routing/features/swarm/config.rb ADDED
@@ -0,0 +1,31 @@
+ # frozen_string_literal: true
+
+ # This Karafka component is a Pro component under a commercial license.
+ # This Karafka component is NOT licensed under LGPL.
+ #
+ # All of the commercial components are present in the lib/karafka/pro directory of this
+ # repository and their usage requires commercial license agreement.
+ #
+ # Karafka has also commercial-friendly license, commercial support and commercial components.
+ #
+ # By sending a pull request to the pro components, you are agreeing to transfer the copyright of
+ # your code to Maciej Mensfeld.
+
+ module Karafka
+ module Pro
+ module Routing
+ module Features
+ class Swarm < Base
+ # Swarm feature configuration
+ Config = Struct.new(
+ :active,
+ :nodes,
+ keyword_init: true
+ ) do
+ alias_method :active?, :active
+ end
+ end
+ end
+ end
+ end
+ end
data/lib/karafka/pro/routing/features/swarm/contracts/topic.rb ADDED
@@ -0,0 +1,67 @@
+ # frozen_string_literal: true
+
+ # This Karafka component is a Pro component under a commercial license.
+ # This Karafka component is NOT licensed under LGPL.
+ #
+ # All of the commercial components are present in the lib/karafka/pro directory of this
+ # repository and their usage requires commercial license agreement.
+ #
+ # Karafka has also commercial-friendly license, commercial support and commercial components.
+ #
+ # By sending a pull request to the pro components, you are agreeing to transfer the copyright of
+ # your code to Maciej Mensfeld.
+
+ module Karafka
+ module Pro
+ module Routing
+ module Features
+ class Swarm < Base
+ # Namespace for swarm contracts
+ module Contracts
+ # Contract to validate configuration of the swarm feature
+ class Topic < Karafka::Contracts::Base
+ configure do |config|
+ config.error_messages = YAML.safe_load(
+ File.read(
+ File.join(Karafka.gem_root, 'config', 'locales', 'pro_errors.yml')
+ )
+ ).fetch('en').fetch('validations').fetch('topic')
+ end
+
+ nested(:swarm) do
+ required(:active) { |val| val == true }
+
+ required(:nodes) do |val|
+ val.is_a?(Range) || (
+ val.is_a?(Array) &&
+ val.all? { |id| id.is_a?(Integer) }
+ )
+ end
+ end
+
+ # Make sure that if range is defined it fits number of nodes (except infinity)
+ # As it may be a common error to accidentally define a node that will never be
+ # reached
+ virtual do |data, errors|
+ next unless errors.empty?
+
+ nodes = data[:swarm][:nodes]
+
+ # Defaults
+ next if nodes.first.zero? && nodes.last == Float::INFINITY
+
+ # If our expectation towards which node should run things matches at least one
+ # node, then it's ok
+ next if Karafka::App.config.swarm.nodes.times.any? do |node_id|
+ nodes.include?(node_id)
+ end
+
+ [[%i[swarm_nodes], :with_non_existent_nodes]]
+ end
+ end
+ end
+ end
+ end
+ end
+ end
+ end
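Concretely, the virtual rule above rejects node sets that no running node could ever match. A hypothetical routing excerpt that would fail with `swarm_nodes_with_non_existent_nodes` in a three-process swarm (ids 0..2):

```ruby
topic :events do
  consumer EventsConsumer # hypothetical consumer class
  # With config.swarm.nodes == 3 the valid ids are 0, 1 and 2, so node 5
  # can never match and validation fails at boot
  swarm(nodes: [5])
end
```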
data/lib/karafka/pro/routing/features/swarm/topic.rb ADDED
@@ -0,0 +1,54 @@
+ # frozen_string_literal: true
+
+ # This Karafka component is a Pro component under a commercial license.
+ # This Karafka component is NOT licensed under LGPL.
+ #
+ # All of the commercial components are present in the lib/karafka/pro directory of this
+ # repository and their usage requires commercial license agreement.
+ #
+ # Karafka has also commercial-friendly license, commercial support and commercial components.
+ #
+ # By sending a pull request to the pro components, you are agreeing to transfer the copyright of
+ # your code to Maciej Mensfeld.
+
+ module Karafka
+ module Pro
+ module Routing
+ module Features
+ class Swarm < Base
+ # Topic swarm API extensions
+ module Topic
+ # Allows defining swarm routing topic settings
+ # @param nodes [Range, Array] range of nodes ids or array with nodes ids for which we
+ # should run given topic
+ def swarm(nodes: (0...Karafka::App.config.swarm.nodes))
+ @swarm ||= Config.new(active: true, nodes: nodes)
+ end
+
+ # @return [true] swarm setup is always true. May not be in use but is active
+ def swarm?
+ swarm.active?
+ end
+
+ # @return [Boolean] should this topic be active. In the context of swarm it is only
+ # active when swarm routing setup does not limit nodes on which it should operate
+ def active?
+ node = Karafka::App.config.swarm.node
+
+ return super unless node
+
+ super && swarm.nodes.include?(node.id)
+ end
+
+ # @return [Hash] topic with all its native configuration options plus swarm
+ def to_h
+ super.merge(
+ swarm: swarm.to_h
+ ).freeze
+ end
+ end
+ end
+ end
+ end
+ end
+ end
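Putting the topic API together, a sketch of pinning a topic to a subset of swarm nodes (consumer and topic names are illustrative; omitting `swarm` keeps the default of running on all nodes):

```ruby
class KarafkaApp < Karafka::App
  routes.draw do
    topic :events do
      consumer EventsConsumer
      # Consume this topic only on swarm nodes 0 and 1; on the remaining
      # nodes active? returns false and the topic is simply skipped
      swarm(nodes: 0..1)
    end
  end
end
```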
data/lib/karafka/pro/routing/features/swarm.rb ADDED
@@ -0,0 +1,25 @@
+ # frozen_string_literal: true
+
+ # This Karafka component is a Pro component under a commercial license.
+ # This Karafka component is NOT licensed under LGPL.
+ #
+ # All of the commercial components are present in the lib/karafka/pro directory of this
+ # repository and their usage requires commercial license agreement.
+ #
+ # Karafka has also commercial-friendly license, commercial support and commercial components.
+ #
+ # By sending a pull request to the pro components, you are agreeing to transfer the copyright of
+ # your code to Maciej Mensfeld.
+
+ module Karafka
+ module Pro
+ module Routing
+ module Features
+ # Karafka Pro Swarm extensions to the routing
+ # They allow for more granular work assignment in the swarm
+ class Swarm < Base
+ end
+ end
+ end
+ end
+ end
data/lib/karafka/routing/subscription_group.rb CHANGED
@@ -91,6 +91,19 @@ module Karafka
  id
  end

+ # Refreshes the configuration of this subscription group if needed based on the execution
+ # context.
+ #
+ # Since the initial routing setup happens in the supervisor, it is inherited by the children.
+ # This causes incomplete assignment of `group.instance.id` which is not expanded with proper
+ # node identifier. This refreshes this if needed when in swarm.
+ def refresh
+ return unless node
+ return unless kafka.key?(:'group.instance.id')
+
+ @kafka = build_kafka
+ end
+
  private

  # @return [Hash] kafka settings are a bit special. They are exactly the same for all of the
data/lib/karafka/setup/config.rb CHANGED
@@ -196,7 +196,7 @@ module Karafka
  setting :liveness_listener, default: Swarm::LivenessListener.new
  # How long should we wait for any info from the node before we consider it hanging and
  # stop it
- setting :node_report_timeout, default: 30_000
+ setting :node_report_timeout, default: 60_000
  # How long should we wait before restarting a node. This can prevent us from having a
  # case where for some external reason our spawned process would die immediately and we
  # would immediately try to start it back in an endless loop
data/lib/karafka/swarm/manager.rb CHANGED
@@ -19,6 +19,13 @@ module Karafka
  node_restart_timeout: %i[internal swarm node_restart_timeout]
  )

+ # Status we issue when we decide to shut down an unresponsive node
+ # We use -1 because nodes are expected to report 0+ statuses and we can use negative numbers
+ # for non-node based statuses
+ NOT_RESPONDING_SHUTDOWN_STATUS = -1
+
+ private_constant :NOT_RESPONDING_SHUTDOWN_STATUS
+
  # @return [Array<Node>] All nodes that manager manages
  attr_reader :nodes

@@ -29,10 +36,10 @@

  # Starts all the expected nodes for the first time
  def start
- pidfd = Pidfd.new(::Process.pid)
+ parent_pid = ::Process.pid

  @nodes = Array.new(nodes_count) do |i|
- start_one Node.new(i, pidfd)
+ start_one Node.new(i, parent_pid)
  end
  end

@@ -148,7 +155,12 @@
  return true unless over?(statuses[:control], node_report_timeout)

  # Start the stopping procedure if the node stopped reporting frequently enough
- monitor.instrument('swarm.manager.stopping', caller: self, node: node) do
+ monitor.instrument(
+ 'swarm.manager.stopping',
+ caller: self,
+ node: node,
+ status: NOT_RESPONDING_SHUTDOWN_STATUS
+ ) do
  node.stop
  statuses[:stop] = monotonic_now
  end
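A hedged sketch of consuming the new status from the instrumentation layer (the event name and payload keys come from the diff above; the listener itself is illustrative):

```ruby
Karafka.monitor.subscribe('swarm.manager.stopping') do |event|
  # For nodes stopped due to missed liveness reports, the payload now
  # carries status: -1 (NOT_RESPONDING_SHUTDOWN_STATUS)
  Karafka.logger.warn(
    "Stopping swarm node #{event[:node].id} (status: #{event[:status]})"
  )
end
```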
data/lib/karafka/swarm/node.rb CHANGED
@@ -30,10 +30,10 @@ module Karafka
  # @param id [Integer] number of the fork. Used for uniqueness setup for group client ids and
  # other stuff where we need to know a unique reference of the fork in regards to the rest
  # of them.
- # @param parent_pidfd [Pidfd] parent pidfd for zombie fencing
- def initialize(id, parent_pidfd)
+ # @param parent_pid [Integer] parent pid for zombie fencing
+ def initialize(id, parent_pid)
  @id = id
- @parent_pidfd = parent_pidfd
+ @parent_pidfd = Pidfd.new(parent_pid)
  end

  # Starts a new fork and:
data/lib/karafka/swarm/pidfd.rb CHANGED
@@ -72,17 +72,33 @@
  def alive?
  @pidfd_select ||= [@pidfd_io]

- IO.select(@pidfd_select, nil, nil, 0).nil?
+ if @mutex.owned?
+ return false if @cleaned
+
+ IO.select(@pidfd_select, nil, nil, 0).nil?
+ else
+ @mutex.synchronize do
+ return false if @cleaned
+
+ IO.select(@pidfd_select, nil, nil, 0).nil?
+ end
+ end
  end

  # Cleans the zombie process
  # @note This should run **only** on processes that exited, otherwise will wait
  def cleanup
- return if @cleaned
+ @mutex.synchronize do
+ return if @cleaned

- waitid(P_PIDFD, @pidfd, nil, WEXITED)
+ waitid(P_PIDFD, @pidfd, nil, WEXITED)

- @cleaned = true
+ @pidfd_io.close
+ @pidfd_select = nil
+ @pidfd_io = nil
+ @pidfd = nil
+ @cleaned = true
+ end
  end

  # Sends given signal to the process using its pidfd
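The `owned?` branch in `alive?` exists because Ruby's `Mutex` is not reentrant: synchronizing again from the owning thread raises `ThreadError`. A standalone sketch of the pattern, detached from the pidfd specifics:

```ruby
# Checking owned? lets a method run both on its own (taking the lock) and
# from inside an already-synchronized caller (reusing the held lock).
mutex = Mutex.new

check = lambda do
  if mutex.owned?
    :checked_without_relocking
  else
    mutex.synchronize { :checked_under_lock }
  end
end

check.call                       #=> :checked_under_lock
mutex.synchronize { check.call } #=> :checked_without_relocking
```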
data/lib/karafka/swarm/supervisor.rb CHANGED
@@ -30,14 +30,16 @@ module Karafka

  # Creates needed number of forks, installs signals and starts supervision
  def run
- Karafka::App.warmup
-
- manager.start
-
  # Close producer just in case. While it should not be used, we do not want even a
  # theoretical case since librdkafka is not thread-safe.
+ # We close it prior to forking just to make sure there is no issue with an initialized
+ # producer (it should not be initialized, but just in case)
  Karafka.producer.close

+ Karafka::App.warmup
+
+ manager.start
+
  process.on_sigint { stop }
  process.on_sigquit { stop }
  process.on_sigterm { stop }
@@ -132,8 +134,9 @@ module Karafka
  # Cleanup the process table
  manager.cleanup

- # exit! is not within the instrumentation as it would not trigger due to exit
- Kernel.exit!(forceful_exit_code)
+ # We do not use `exit!` here, unlike in the regular server, because we do not have to worry
+ # about any librdkafka related hanging connections, etc
+ Kernel.exit(forceful_exit_code)
  ensure
  if initialized
  Karafka::App.stopped!
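The switch from `exit!` to `exit` matters because of what still runs afterwards: `exit` raises `SystemExit`, so the `ensure` block above (and any `at_exit` hooks) execute, while `exit!` terminates immediately and skips them. A standalone illustration:

```ruby
at_exit { puts 'at_exit ran' }

begin
  Kernel.exit(1)
ensure
  # printed with exit (SystemExit propagates), skipped entirely with exit!
  puts 'ensure ran'
end
```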
data/lib/karafka/version.rb CHANGED
@@ -3,5 +3,5 @@
  # Main module namespace
  module Karafka
  # Current Karafka version
- VERSION = '2.3.2'
+ VERSION = '2.3.4'
  end
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: karafka
  version: !ruby/object:Gem::Version
- version: 2.3.2
+ version: 2.3.4
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -35,7 +35,7 @@ cert_chain:
  AnG1dJU+yL2BK7vaVytLTstJME5mepSZ46qqIJXMuWob/YPDmVaBF39TDSG9e34s
  msG3BiCqgOgHAnL23+CN3Rt8MsuRfEtoTKpJVcCfoEoNHOkc
  -----END CERTIFICATE-----
- date: 2024-02-16 00:00:00.000000000 Z
+ date: 2024-04-11 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: karafka-core
@@ -381,6 +381,10 @@ files:
  - lib/karafka/pro/routing/features/periodic_job/config.rb
  - lib/karafka/pro/routing/features/periodic_job/contracts/topic.rb
  - lib/karafka/pro/routing/features/periodic_job/topic.rb
+ - lib/karafka/pro/routing/features/swarm.rb
+ - lib/karafka/pro/routing/features/swarm/config.rb
+ - lib/karafka/pro/routing/features/swarm/contracts/topic.rb
+ - lib/karafka/pro/routing/features/swarm/topic.rb
  - lib/karafka/pro/routing/features/throttling.rb
  - lib/karafka/pro/routing/features/throttling/config.rb
  - lib/karafka/pro/routing/features/throttling/contracts/topic.rb
metadata.gz.sig CHANGED
Binary file