karafka 2.0.0.rc4 → 2.0.0.rc5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 526402b906d00f844c5b25925a854bc45f5127dc8c06646f1cd1432f265dfb81
- data.tar.gz: c58f6be491c4c4e82237307d37874100588c97241439788325600bea77d4254f
+ metadata.gz: 52fde84aac9ffef63ecbc1d367e3bfdc7209f6c2904f2a7142f51b1bd0613002
+ data.tar.gz: 4b4e7838e19505469ec20a654a3d7bb9c2588f351f9ea64fdb05e224b476d562
  SHA512:
- metadata.gz: '08922eef9890af84f1b444329061283f7987258200a9e338b892944a6ca061a60abed273e02acf44f91728f0ed219ec494e96c05a1155db2321e941e24c8c328'
- data.tar.gz: 5b3de319f1887af51eeda3ffc5b42a86601a382a55ab30e8cc7abce0526d4471ff274b3627250d47ec9a70de763fd337a7c8eff79add2b9409c935133bbe3610
+ metadata.gz: 0ebdda184eca4c7dcfe39821125ba7a8645336607300395de792a7c8808cde07f54f5382278946fab58af0e95f3a85ab05be9ed6e7b292fc7692922cb7890747
+ data.tar.gz: 416cb01ed1b6dac0681969ce72505d19855a041e7ceb29e369ed9fc80675755ad1f478a20ee2d1f2c0e925036ba6f08adda727f8ecd3f4f07f39f0be347ae4a1
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,8 +1,18 @@
  # Karafka framework changelog

+ ## 2.0.0.rc5 (2022-08-01)
+ - Improve specs stability
+ - Improve forceful shutdown
+ - Add support for debug `TTIN` backtrace printing
+ - Fix a case where logger listener would not intercept `warn` level
+ - Require `rdkafka` >= `0.12`
+ - Replace statistics decorator with the one from `karafka-core`
+
  ## 2.0.0.rc4 (2022-07-28)
  - Remove `dry-monitor`
  - Use `karafka-core`
+ - Improve forceful shutdown resources finalization
+ - Cache consumer client name

  ## 2.0.0.rc3 (2022-07-26)
  - Fix Pro partitioner hash function may not utilize all the threads (#907).
@@ -69,8 +79,8 @@
  - Fix a case where consecutive CTRL+C (non-stop) would case an exception during forced shutdown
  - Add missing `consumer.prepared.error` into `LoggerListener`
  - Delegate partition resuming from the consumers to listeners threads.
- - Add support for Long Running Jobs (LRJ) for ActiveJob [PRO]
- - Add support for Long Running Jobs for consumers [PRO]
+ - Add support for Long-Running Jobs (LRJ) for ActiveJob [PRO]
+ - Add support for Long-Running Jobs for consumers [PRO]
  - Allow `active_job_topic` to accept a block for extra topic related settings
  - Remove no longer needed logger threads
  - Auto-adapt number of processes for integration specs based on the number of CPUs
data/Gemfile.lock CHANGED
@@ -1,11 +1,11 @@
  PATH
  remote: .
  specs:
- karafka (2.0.0.rc4)
- karafka-core (>= 2.0.0, < 3.0.0)
- rdkafka (>= 0.10)
+ karafka (2.0.0.rc5)
+ karafka-core (>= 2.0.2, < 3.0.0)
+ rdkafka (>= 0.12)
  thor (>= 0.20)
- waterdrop (>= 2.4.0, < 3.0.0)
+ waterdrop (>= 2.4.1, < 3.0.0)
  zeitwerk (~> 2.3)

  GEM
@@ -30,7 +30,7 @@ GEM
  activesupport (>= 5.0)
  i18n (1.12.0)
  concurrent-ruby (~> 1.0)
- karafka-core (2.0.0)
+ karafka-core (2.0.2)
  concurrent-ruby (>= 1.1)
  mini_portile2 (2.8.0)
  minitest (5.16.2)
@@ -61,8 +61,8 @@ GEM
  thor (1.2.1)
  tzinfo (2.0.5)
  concurrent-ruby (~> 1.0)
- waterdrop (2.4.0)
- karafka-core (~> 2.0)
+ waterdrop (2.4.1)
+ karafka-core (>= 2.0.2, < 3.0.0)
  rdkafka (>= 0.10)
  zeitwerk (~> 2.3)
  zeitwerk (2.6.0)
data/README.md CHANGED
@@ -4,7 +4,7 @@
  [![Gem Version](https://badge.fury.io/rb/karafka.svg)](http://badge.fury.io/rb/karafka)
  [![Join the chat at https://slack.karafka.io](https://raw.githubusercontent.com/karafka/misc/master/slack.svg)](https://slack.karafka.io)

- **Note**: All of the documentation here refers to Karafka `2.0.0.rc3` or higher. If you are looking for the documentation for Karafka `1.4`, please click [here](https://github.com/karafka/wiki/tree/1.4).
+ **Note**: All of the documentation here refers to Karafka `2.0.0.rc4` or higher. If you are looking for the documentation for Karafka `1.4`, please click [here](https://github.com/karafka/wiki/tree/1.4).

  ## About Karafka

@@ -55,7 +55,7 @@ We also maintain many [integration specs](https://github.com/karafka/karafka/tre
  1. Add and install Karafka:

  ```bash
- bundle add karafka -v 2.0.0.rc3
+ bundle add karafka -v 2.0.0.rc4

  bundle exec karafka install
  ```
data/bin/integrations CHANGED
@@ -37,7 +37,7 @@ class Scenario
  'consumption/worker_critical_error_behaviour.rb' => [0, 2].freeze,
  'shutdown/on_hanging_jobs_and_a_shutdown.rb' => [2].freeze,
  'shutdown/on_hanging_on_shutdown_job_and_a_shutdown.rb' => [2].freeze,
- 'shutdown/on_hanging_poll_and_shutdown.rb' => [2].freeze
+ 'shutdown/on_hanging_listener_and_shutdown.rb' => [2].freeze
  }.freeze

  private_constant :MAX_RUN_TIME, :EXIT_CODES
data/karafka.gemspec CHANGED
@@ -16,10 +16,10 @@ Gem::Specification.new do |spec|
  spec.description = 'Framework used to simplify Apache Kafka based Ruby applications development'
  spec.licenses = ['LGPL-3.0', 'Commercial']

- spec.add_dependency 'karafka-core', '>= 2.0.0', '< 3.0.0'
- spec.add_dependency 'rdkafka', '>= 0.10'
+ spec.add_dependency 'karafka-core', '>= 2.0.2', '< 3.0.0'
+ spec.add_dependency 'rdkafka', '>= 0.12'
  spec.add_dependency 'thor', '>= 0.20'
- spec.add_dependency 'waterdrop', '>= 2.4.0', '< 3.0.0'
+ spec.add_dependency 'waterdrop', '>= 2.4.1', '< 3.0.0'
  spec.add_dependency 'zeitwerk', '~> 2.3'

  spec.required_ruby_version = '>= 2.7.0'
@@ -275,6 +275,11 @@ module Karafka

  # Commits the stored offsets in a sync way and closes the consumer.
  def close
+ # Once client is closed, we should not close it again
+ # This could only happen in case of a race-condition when forceful shutdown happens
+ # and triggers this from a different thread
+ return if @closed
+
  @mutex.synchronize do
  internal_commit_offsets(async: false)

@@ -34,6 +34,8 @@ module Karafka
  # We can do this that way because we always first schedule jobs using messages before we
  # fetch another batch.
  @messages_buffer = MessagesBuffer.new(subscription_group)
+ @mutex = Mutex.new
+ @stopped = false
  end

  # Runs the main listener fetch loop.
  # Runs the main listener fetch loop.
@@ -51,6 +53,25 @@ module Karafka
  fetch_loop
  end

+ # Stops the jobs queue, triggers shutdown on all the executors (sync), commits offsets and
+ # stops kafka client.
+ #
+ # @note This method is not private despite being part of the fetch loop because in case of
+ # a forceful shutdown, it may be invoked from a separate thread
+ #
+ # @note We wrap it with a mutex exactly because of the above case of forceful shutdown
+ def shutdown
+ return if @stopped
+
+ @mutex.synchronize do
+ @stopped = true
+ @executors.clear
+ @coordinators.reset
+ @client.commit_offsets!
+ @client.stop
+ end
+ end
+
  private

  # Fetches the data and adds it to the jobs queue.
@@ -239,13 +260,6 @@ module Karafka
  @client.batch_poll until @jobs_queue.empty?(@subscription_group.id)
  end

- # Stops the jobs queue, triggers shutdown on all the executors (sync), commits offsets and
- # stops kafka client.
- def shutdown
- @client.commit_offsets!
- @client.stop
- end
-
  # We can stop client without a problem, as it will reinitialize itself when running the
  # `#fetch_loop` again. We just need to remember to also reset the runner as it is a long
  # running one, so with a new connection to Kafka, we need to initialize the state of the
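The relocated `#shutdown` above pairs a `@stopped` flag with a mutex so that a forceful shutdown triggered from a second thread cannot run the teardown twice. A minimal, self-contained sketch of that pattern (the class and counter names here are illustrative stand-ins, not Karafka's API):

```ruby
# Sketch of the flag-plus-mutex idempotent shutdown pattern from the diff.
# ListenerLike and @teardowns are hypothetical names used for illustration.
class ListenerLike
  attr_reader :teardowns

  def initialize
    @mutex = Mutex.new
    @stopped = false
    @teardowns = 0
  end

  def shutdown
    # Fast path: skip entirely once a shutdown has already completed
    return if @stopped

    @mutex.synchronize do
      # Re-check under the lock so two racing callers cannot both tear down
      return if @stopped

      @stopped = true
      @teardowns += 1 # stands in for commit_offsets! / client.stop
    end
  end
end
```

Note that the diff's version checks `@stopped` only before taking the mutex; the re-check under the lock shown here additionally closes the window where two threads pass the first check at the same time.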
@@ -16,8 +16,7 @@ module Karafka
  @consumer_group_id = consumer_group_id
  @client_name = client_name
  @monitor = monitor
- # We decorate both Karafka and WaterDrop statistics the same way
- @statistics_decorator = ::WaterDrop::Instrumentation::Callbacks::StatisticsDecorator.new
+ @statistics_decorator = ::Karafka::Core::Monitoring::StatisticsDecorator.new
  end

  # Emits decorated statistics to the monitor
@@ -9,6 +9,7 @@ module Karafka
  USED_LOG_LEVELS = %i[
  debug
  info
+ warn
  error
  fatal
  ].freeze
@@ -60,11 +61,28 @@ module Karafka
  info "[#{job.id}] #{job_type} job for #{consumer} on #{topic} finished in #{time}ms"
  end

- # Logs info about system signals that Karafka received.
+ # Logs info about system signals that Karafka received and prints backtrace for threads in
+ # case of ttin
  #
  # @param event [Dry::Events::Event] event details including payload
  def on_process_notice_signal(event)
  info "Received #{event[:signal]} system signal"
+
+ # We print backtrace only for ttin
+ return unless event[:signal] == :SIGTTIN
+
+ # Inspired by Sidekiq
+ Thread.list.each do |thread|
+ tid = (thread.object_id ^ ::Process.pid).to_s(36)
+
+ warn "Thread TID-#{tid} #{thread['label']}"
+
+ if thread.backtrace
+ warn thread.backtrace.join("\n")
+ else
+ warn '<no backtrace available>'
+ end
+ end
  end

  # Logs info that we're initializing Karafka app.
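The new `SIGTTIN` handling above walks every live thread and prints its backtrace, deriving a short thread id the way Sidekiq does (`object_id ^ pid`, rendered in base 36). A standalone sketch of the same dump, with an injectable IO so it can be exercised outside a signal handler (the `io:` parameter is an addition for testability, not part of Karafka):

```ruby
require 'stringio'

# Sketch of the TTIN-style thread dump from the diff above.
# The io: parameter is a hypothetical addition so output can be captured.
def dump_thread_backtraces(io: $stderr)
  Thread.list.each do |thread|
    # Compact per-thread id, following Sidekiq's convention
    tid = (thread.object_id ^ Process.pid).to_s(36)

    io.puts "Thread TID-#{tid} #{thread['label']}"

    # A thread that has not started running yet (or has died) returns nil here
    if thread.backtrace
      io.puts thread.backtrace.join("\n")
    else
      io.puts '<no backtrace available>'
    end
  end
end
```

In production this would typically be wired to a signal, e.g. `trap('TTIN') { dump_thread_backtraces }`, which is effectively what the listener above does via Karafka's `process.notice_signal` event.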
@@ -12,7 +12,7 @@ module Karafka
  # @note We need this to make sure that we allocate proper dispatched events only to
  # callback listeners that should publish them
  def name
- ::Rdkafka::Bindings.rd_kafka_name(@native_kafka)
+ @name ||= ::Rdkafka::Bindings.rd_kafka_name(@native_kafka)
  end
  end
  end
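The `@name ||=` change above (the "Cache consumer client name" changelog entry) memoizes the result of the `rd_kafka_name` FFI call, so repeated `#name` lookups during instrumentation no longer cross into librdkafka. A sketch of the pattern with a hypothetical stand-in for the expensive call:

```ruby
# Sketch of ||= memoization as used for the client name above.
# ExpensiveLookup is a hypothetical class; it counts calls to show caching.
class ExpensiveLookup
  attr_reader :calls

  def initialize
    @calls = 0
  end

  def name
    # Computed once, then served from @name on every subsequent call
    @name ||= begin
      @calls += 1
      "client-#{@calls}"
    end
  end
end
```

One caveat with `||=`: if the computed value could legitimately be `nil` or `false`, it would be recomputed on every call; an rdkafka client name is always a non-empty string, so the pattern is safe in this case.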
@@ -36,7 +36,7 @@ module Karafka
  end
  end

- # Runs extra logic after consumption that is related to handling long running jobs
+ # Runs extra logic after consumption that is related to handling long-running jobs
  # @note This overwrites the '#on_after_consume' from the base consumer
  def on_after_consume
  coordinator.on_finished do |first_group_message, last_group_message|
@@ -59,7 +59,7 @@ module Karafka
  # Mark as consumed only if manual offset management is not on
  mark_as_consumed(last_message) unless topic.manual_offset_management? || revoked?

- # If this is not a long running job there is nothing for us to do here
+ # If this is not a long-running job there is nothing for us to do here
  return unless topic.long_running_job?

  # Once processing is done, we move to the new offset based on commits
@@ -28,7 +28,7 @@ module Karafka
  virtual_partitioner != nil
  end

- # @return [Boolean] is a given job on a topic a long running one
+ # @return [Boolean] is a given job on a topic a long-running one
  def long_running_job?
  @long_running_job || false
  end
@@ -9,6 +9,7 @@ module Karafka
  SIGINT
  SIGQUIT
  SIGTERM
+ SIGTTIN
  ].freeze

  HANDLED_SIGNALS.each do |signal|
@@ -104,6 +104,9 @@ module Karafka
  # We're done waiting, lets kill them!
  workers.each(&:terminate)
  listeners.each(&:terminate)
+ # We always need to shutdown clients to make sure we do not force the GC to close consumer.
+ # This can cause memory leaks and crashes.
+ listeners.each(&:shutdown)

  Karafka::App.producer.close

@@ -3,5 +3,5 @@
  # Main module namespace
  module Karafka
  # Current Karafka version
- VERSION = '2.0.0.rc4'
+ VERSION = '2.0.0.rc5'
  end
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: karafka
  version: !ruby/object:Gem::Version
- version: 2.0.0.rc4
+ version: 2.0.0.rc5
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -34,7 +34,7 @@ cert_chain:
  R2P11bWoCtr70BsccVrN8jEhzwXngMyI2gVt750Y+dbTu1KgRqZKp/ECe7ZzPzXj
  pIy9vHxTANKYVyI4qj8OrFdEM5BQNu8oQpL0iQ==
  -----END CERTIFICATE-----
- date: 2022-07-28 00:00:00.000000000 Z
+ date: 2022-08-01 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: karafka-core
@@ -42,7 +42,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.0.0
+ version: 2.0.2
  - - "<"
  - !ruby/object:Gem::Version
  version: 3.0.0
@@ -52,7 +52,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.0.0
+ version: 2.0.2
  - - "<"
  - !ruby/object:Gem::Version
  version: 3.0.0
@@ -62,14 +62,14 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: '0.10'
+ version: '0.12'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: '0.10'
+ version: '0.12'
  - !ruby/object:Gem::Dependency
  name: thor
  requirement: !ruby/object:Gem::Requirement
@@ -90,7 +90,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.4.0
+ version: 2.4.1
  - - "<"
  - !ruby/object:Gem::Version
  version: 3.0.0
@@ -100,7 +100,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.4.0
+ version: 2.4.1
  - - "<"
  - !ruby/object:Gem::Version
  version: 3.0.0
metadata.gz.sig CHANGED
Binary file