karafka 2.4.12 → 2.4.14

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: f96e616f91d60d5054e276f52cc2ecde9d4a6f6c8e25b3d61c384936b9239e4f
- data.tar.gz: 8dfad6cc5d0cb4cdcc885b58e129929f67b0fdc4ee7922c21f9728cbdd863f5b
+ metadata.gz: 22f45da117cf90a2ecbec04dbcaf39634b1e1b85e00f521d4880954c65268ecf
+ data.tar.gz: ce5318aaa8f52954a80981662a41ad17ab314e9706a00f2edf3e840604b87b32
  SHA512:
- metadata.gz: bfea8217fb7ba89158b926417a0dd0cab42460a607b3bf25a62a400ab83e510806b966f81bd2ca76144e5aff62c99fb58b35be94ca1ad5c61ca113e760098243
- data.tar.gz: ee66d2c6a11dc6baac3cc836ea5455d6c88838f1db7c0f9af9a92c3f5d3c7b8dd9d1f8c42dac4c9908042da3b192d8579466deecc687126640fc7bc5e0aeafb2
+ metadata.gz: d8ec82f91aea2bdba595fa290feec1ca0b25dbd73cd0eafac6554538d3506c47ad8db42036ed838b199a3016341bf67d6daf4f03dfced3ddcf783850c0dc97bf
+ data.tar.gz: 2b750fce3294dda031f5a97bf5343711dcf52d1dbc00c61e029b602b302a549e9a70bb6ac4e952842e1ad840d7dd7b28115ed72118aad54917497672593de133
checksums.yaml.gz.sig CHANGED
Binary file
@@ -1,51 +1,43 @@
  ---
  name: Bug Report
- about: Report an issue with Karafka you've discovered.
+ about: Report an issue within the Karafka ecosystem you've discovered.
  ---
 
- *Be clear, concise and precise in your description of the problem.
- Open an issue with a descriptive title and a summary in grammatically correct,
- complete sentences.*
+ To make this process smoother for everyone involved, please read the following information before filling out the template.
 
- *Use the template below when reporting bugs. Please, make sure that
- you're running the latest stable Karafka and that the problem you're reporting
- hasn't been reported (and potentially fixed) already.*
+ Scope of the OSS Support
+ ===========
 
- *Before filing the ticket you should replace all text above the horizontal
- rule with your own words.*
+ We do not provide OSS support for outdated versions of Karafka and its components.
 
- --------
+ Please ensure that you are using a version that is still actively supported. We cannot assist with versions that are no longer maintained unless you support us via our Pro offering (https://karafka.io/docs/Pro-Support/).
 
- ## Expected behavior
+ We acknowledge that understanding the specifics of your application and its configuration can be essential for resolving certain issues. However, due to the extensive time and resources such analysis can require, this may fall beyond our Open Source Support scope.
 
- Describe here how you expected Karafka to behave in this particular situation.
+ If Karafka or its components are critical to your infrastructure, we encourage you to consider our Pro Offering.
 
- ## Actual behavior
+ By backing us up, you can gain direct assistance and ensure your use case receives the dedicated attention it deserves.
 
- Describe here what actually happened.
 
- ## Steps to reproduce the problem
+ Important Links to Read
+ ===========
 
- This is extremely important! Providing us with a reliable way to reproduce
- a problem will expedite its solution.
+ Please take a moment to review the following resources before submitting your report:
 
- ## Your setup details
+ - Issue Reporting Guide: https://karafka.io/docs/Support/#issue-reporting-guide
+ - Support Policy: https://karafka.io/docs/Support/
+ - Versions, Lifecycle, and EOL: https://karafka.io/docs/Versions-Lifecycle-and-EOL/
 
- Please provide kafka version and the output of `karafka info` or `bundle exec karafka info` if using Bundler.
 
- Here's an example:
+ Bug Report Details
+ ===========
 
- ```
- $ [bundle exec] karafka info
- Karafka version: 2.2.10 + Pro
- Ruby version: ruby 3.2.2 (2023-03-30 revision e51014f9c0) [x86_64-linux]
- Rdkafka version: 0.13.8
- Consumer groups count: 2
- Subscription groups count: 2
- Workers count: 2
- Application client id: example_app
- Boot file: /app/karafka.rb
- Environment: development
- License: Commercial
- License entity: karafka-ci
- ```
+ Please provide all the details per our Issue Reporting Guide: https://karafka.io/docs/Support/#issue-reporting-guide
+
+ Failing to provide the required details may result in the issue being closed. Please include all necessary information to help us understand and resolve your issue effectively.
+
+
+ Additional Context
+ ===========
+
+ Add any other context about the problem here.
@@ -89,6 +89,13 @@ jobs:
  run: |
  docker compose up -d || (sleep 5 && docker compose up -d)
 
+ # Newer versions of ActiveSupport and Rails do not work with Ruby 3.1 anymore.
+ # While we use newer by default we do want to resolve older and test, thus we remove
+ # Gemfile.lock and let it resolve to the most compatible version possible
+ - name: Remove Gemfile.lock if Ruby 3.1
+   if: matrix.ruby == '3.1'
+   run: rm -f Gemfile.lock
+
  - name: Set up Ruby
    uses: ruby/setup-ruby@v1
    with:
data/.ruby-version CHANGED
@@ -1 +1 @@
- 3.3.5
+ 3.3.6
data/CHANGELOG.md CHANGED
@@ -1,5 +1,22 @@
  # Karafka Framework Changelog
 
+ ## 2.4.14 (2024-11-25)
+ - [Enhancement] Improve low-level critical error reporting.
+ - [Enhancement] Expand Kubernetes Liveness state reporting with critical errors detection.
+ - [Enhancement] Save several string allocations and one array allocation on each job execution when using Datadog instrumentation.
+ - [Enhancement] Support `eofed` jobs in the AppSignal instrumentation.
+ - [Enhancement] Allow running bootfile-less Rails setup Karafka CLI commands where things are configured in the initializers.
+ - [Fix] `Instrumentation::Vendors::Datadog::LoggerListener` treats eof jobs as consume jobs.
+
+ ## 2.4.13 (2024-10-11)
+ - [Enhancement] Make declarative topics return different exit codes on migrable/non-migrable states (0 - no changes, 2 - changes) when used with the `--detailed-exitcode` flag.
+ - [Enhancement] Introduce `config.strict_declarative_topics` that forces declaratives on all non-pattern based topics and DLQ topics.
+ - [Enhancement] Report ignored repartitioning to a lower number of partitions in declarative topics.
+ - [Enhancement] Promote `LivenessListener#healthy?` to a public API.
+ - [Fix] Fix `Karafka::Errors::MissingBootFileError` when debugging in VSCode with ruby-lsp.
+ - [Fix] Require `karafka-core` `>=` `2.4.4` to prevent dependency conflicts.
+ - [Fix] Validate swarm CLI and always parse options from argv (roelbondoc).
 
  ## 2.4.12 (2024-09-17)
  - **[Feature]** Provide Adaptive Iterator feature as a fast alternative to Long-Running Jobs (Pro).
  - [Enhancement] Provide `Consumer#each` as a delegation to messages batch.
data/Gemfile.lock CHANGED
@@ -1,9 +1,9 @@
  PATH
    remote: .
    specs:
- karafka (2.4.12)
+ karafka (2.4.14)
  base64 (~> 0.2)
- karafka-core (>= 2.4.3, < 2.5.0)
+ karafka-core (>= 2.4.4, < 2.5.0)
  karafka-rdkafka (>= 0.17.2)
  waterdrop (>= 2.7.3, < 3.0.0)
  zeitwerk (~> 2.3)
@@ -11,11 +11,12 @@ PATH
  GEM
    remote: https://rubygems.org/
    specs:
- activejob (7.2.1)
- activesupport (= 7.2.1)
+ activejob (8.0.0)
+ activesupport (= 8.0.0)
  globalid (>= 0.3.6)
- activesupport (7.2.1)
+ activesupport (8.0.0)
  base64
+ benchmark (>= 0.3)
  bigdecimal
  concurrent-ruby (~> 1.0, >= 1.3.1)
  connection_pool (>= 2.2.5)
@@ -25,7 +26,9 @@ GEM
  minitest (>= 5.1)
  securerandom (>= 0.3)
  tzinfo (~> 2.0, >= 2.0.5)
+ uri (>= 0.13.1)
  base64 (0.2.0)
+ benchmark (0.3.0)
  bigdecimal (3.1.8)
  byebug (11.1.3)
  concurrent-ruby (1.3.4)
@@ -44,7 +47,7 @@ GEM
  raabro (~> 1.4)
  globalid (1.2.1)
  activesupport (>= 6.1)
- i18n (1.14.5)
+ i18n (1.14.6)
  concurrent-ruby (~> 1.0)
  karafka-core (2.4.4)
  karafka-rdkafka (>= 0.15.0, < 0.18.0)
@@ -64,7 +67,7 @@ GEM
  logger (1.6.1)
  mini_portile2 (2.8.7)
  minitest (5.25.1)
- ostruct (0.6.0)
+ ostruct (0.6.1)
  raabro (1.4.0)
  rack (3.1.7)
  rake (13.2.1)
@@ -93,6 +96,7 @@ GEM
  tilt (2.4.0)
  tzinfo (2.0.6)
  concurrent-ruby (~> 1.0)
+ uri (1.0.0)
  waterdrop (2.8.0)
  karafka-core (>= 2.4.3, < 3.0.0)
  karafka-rdkafka (>= 0.17.5)
data/bin/integrations CHANGED
@@ -45,7 +45,10 @@ class Scenario
  'shutdown/on_hanging_on_shutdown_job_and_a_shutdown_spec.rb' => [2].freeze,
  'shutdown/on_hanging_listener_and_shutdown_spec.rb' => [2].freeze,
  'swarm/forceful_shutdown_of_hanging_spec.rb' => [2].freeze,
- 'instrumentation/post_errors_instrumentation_error_spec.rb' => [1].freeze
+ 'instrumentation/post_errors_instrumentation_error_spec.rb' => [1].freeze,
+ 'cli/declaratives/delete/existing_with_exit_code_spec.rb' => [2].freeze,
+ 'cli/declaratives/create/new_with_exit_code_spec.rb' => [2].freeze,
+ 'cli/declaratives/plan/when_changes_with_detailed_exit_code_spec.rb' => [2].freeze
  }.freeze
 
  private_constant :MAX_RUN_TIME, :EXIT_CODES
@@ -240,9 +243,13 @@ ARGV.each do |filter|
  end
  end
 
- # Remove Rails 7.2 specs from Ruby 3.0 because it requires 3.1
+ # Remove Rails 7.2 specs from Ruby < 3.1 because it requires 3.1
+ # Remove Rails 8.0 specs from Ruby < 3.2 because it requires 3.2
  specs.delete_if do |spec|
- RUBY_VERSION < '3.1' && spec.include?('rails72')
+ next true if RUBY_VERSION < '3.1' && spec.include?('rails72')
+ next true if RUBY_VERSION < '3.2' && spec.include?('rails8')
+
+ false
  end
 
  raise ArgumentError, "No integration specs with filters: #{ARGV.join(', ')}" if specs.empty?
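The guards above compare `RUBY_VERSION` as plain strings, which holds while each segment has a single digit but breaks once one reaches two (e.g. a future `'3.10'`). A small sketch of why `Gem::Version` (shipped with RubyGems) is the safer comparison:

```ruby
# String comparison of version numbers is lexicographic; Gem::Version
# compares segment by segment, numerically.
require 'rubygems'

'3.10' < '3.2'                                     # => true (lexicographic, wrong)
Gem::Version.new('3.10') < Gem::Version.new('3.2') # => false (numeric, right)
```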
@@ -14,6 +14,7 @@ en:
  pause_max_timeout_format: needs to be an integer bigger than 0
  pause_with_exponential_backoff_format: needs to be either true or false
  strict_topics_namespacing_format: needs to be either true or false
+ strict_declarative_topics_format: needs to be either true or false
  shutdown_timeout_format: needs to be an integer bigger than 0
  max_wait_time_format: needs to be an integer bigger than 0
  max_wait_time_max_wait_time_vs_swarm_node_report_timeout: >
@@ -147,6 +148,9 @@ en:
  all topic names within a single consumer group must be unique considering namespacing styles
  disable this validation by setting config.strict_topics_namespacing to false
 
+ routing:
+   without_declarative_definition: lacks explicit declarative topics definition
+
  job_options:
    missing: needs to be present
    dispatch_method_format: needs to be either :produce_async or :produce_sync
data/karafka.gemspec CHANGED
@@ -22,7 +22,7 @@ Gem::Specification.new do |spec|
  DESC
 
  spec.add_dependency 'base64', '~> 0.2'
- spec.add_dependency 'karafka-core', '>= 2.4.3', '< 2.5.0'
+ spec.add_dependency 'karafka-core', '>= 2.4.4', '< 2.5.0'
  spec.add_dependency 'karafka-rdkafka', '>= 0.17.2'
  spec.add_dependency 'waterdrop', '>= 2.7.3', '< 3.0.0'
  spec.add_dependency 'zeitwerk', '~> 2.3'
@@ -41,22 +41,38 @@ module Karafka
  class << self
  # Loads proper environment with what is needed to run the CLI
  def load
+ rails_env_rb = File.join(Dir.pwd, 'config/environment.rb')
+ is_rails = Kernel.const_defined?(:Rails) && File.exist?(rails_env_rb)
+
+ # If the boot file is disabled and this is a Rails app, we assume that the user moved the
+ # Karafka app configuration to initializers or another Rails loading related place.
+ # It is not recommended but some users tend to do this. In such cases we just try to load
+ # the Rails stuff, hoping that it will also load the Karafka stuff
+ if Karafka.boot_file.to_s == 'false' && is_rails
+ require rails_env_rb
+
+ return
+ end
+
  # If there is a boot file, we need to require it as we expect it to contain
  # Karafka app setup, routes, etc
  if File.exist?(::Karafka.boot_file)
- rails_env_rb = File.join(Dir.pwd, 'config/environment.rb')
-
  # Load Rails environment file that starts Rails, so we can reference consumers and
  # other things from `karafka.rb` file. This will work only for Rails, for non-rails
  # a manual setup is needed
- require rails_env_rb if Kernel.const_defined?(:Rails) && File.exist?(rails_env_rb)
-
+ require rails_env_rb if is_rails
  require Karafka.boot_file.to_s
+
+ return
+ end
+
  # However when it is unavailable, we still want to be able to run help command
  # and install command as they don't require configured app itself to run
- elsif %w[-h install].none? { |cmd| cmd == ARGV[0] }
- raise ::Karafka::Errors::MissingBootFileError, ::Karafka.boot_file
- end
+ return if %w[-h install].any? { |cmd| cmd == ARGV[0] }
+
+ # All other commands except help and install do require an existing boot file if it was
+ # declared
+ raise ::Karafka::Errors::MissingBootFileError, ::Karafka.boot_file
  end
 
  # Allows to set options for Thor cli
@@ -96,7 +112,7 @@ module Karafka
  *[names, option[2], option[1]].flatten
  ) { |value| options[option[0]] = value }
  end
- end.parse!
+ end.parse(ARGV)
 
  options
  end
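The switch from `parse!` to `parse(ARGV)` matters because the non-bang `OptionParser#parse` duplicates the given array before parsing, leaving the caller's `ARGV` intact for later commands to re-read. A minimal sketch of the difference, using the `--detailed_exitcode` switch as an illustrative flag:

```ruby
require 'optparse'

options = {}
parser = OptionParser.new do |opts|
  opts.on('--detailed_exitcode') { options[:detailed_exitcode] = true }
end

argv = ['--detailed_exitcode', 'topics', 'plan']
# Non-destructive: parses a copy, returns the non-option arguments
leftover = parser.parse(argv)

options[:detailed_exitcode] # => true
argv.size                   # => 3 — the original array is untouched
leftover                    # => ["topics", "plan"] — switches consumed in the copy
```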
@@ -16,7 +16,10 @@ module Karafka
  return false
  end
 
+ changes = false
+
  unless topics_to_create.empty?
+ changes = true
  puts 'Following topics will be created:'
  puts
 
@@ -33,20 +36,54 @@ module Karafka
  end
 
  unless topics_to_repartition.empty?
- puts 'Following topics will be repartitioned:'
- puts
+ upscale = {}
+ downscale = {}
 
  topics_to_repartition.each do |topic, partitions|
  from = partitions
  to = topic.declaratives.partitions
 
- puts " #{yellow('~')} #{topic.name}:"
- puts " #{yellow('~')} partitions: \"#{red(from)}\" #{grey('=>')} \"#{green(to)}\""
+ if from < to
+ upscale[topic] = partitions
+ else
+ downscale[topic] = partitions
+ end
+ end
+
+ unless upscale.empty?
+ changes = true
+ puts 'Following topics will be repartitioned:'
+ puts
+
+ upscale.each do |topic, partitions|
+ from = partitions
+ to = topic.declaratives.partitions
+ y = yellow('~')
+ puts " #{y} #{topic.name}:"
+ puts " #{y} partitions: \"#{red(from)}\" #{grey('=>')} \"#{green(to)}\""
+ puts
+ end
+ end
+
+ unless downscale.empty?
+ puts(
+ 'Following topics repartitioning will be ignored as downscaling is not supported:'
+ )
  puts
+
+ downscale.each do |topic, partitions|
+ from = partitions
+ to = topic.declaratives.partitions
+
+ puts " #{grey('~')} #{topic.name}:"
+ puts " #{grey('~')} partitions: \"#{grey(from)}\" #{grey('=>')} \"#{grey(to)}\""
+ puts
+ end
  end
  end
 
  unless topics_to_alter.empty?
+ changes = true
  puts 'Following topics will have configuration changes:'
  puts
 
@@ -65,7 +102,7 @@ module Karafka
  end
  end
 
- true
+ changes
  end
 
  private
@@ -10,26 +10,53 @@ module Karafka
  )
 
  desc 'Allows for the topics management'
+
+ option(
+   :detailed_exitcode,
+   'Exits with 0 when no changes, 1 on error and 2 when changes are present or applied',
+   TrueClass,
+   %w[
+     --detailed_exitcode
+   ]
+ )
+
+ # We exit with 0 if no changes happened
+ NO_CHANGES_EXIT_CODE = 0
+
+ # When any changes happened (or could happen) we return 2 because 1 is the default when Ruby
+ # crashes
+ CHANGES_EXIT_CODE = 2
+
+ private_constant :NO_CHANGES_EXIT_CODE, :CHANGES_EXIT_CODE
+
  # @param action [String] action we want to take
  def call(action = 'missing')
- case action
- when 'create'
- Topics::Create.new.call
- when 'delete'
- Topics::Delete.new.call
- when 'reset'
- Topics::Reset.new.call
- when 'repartition'
- Topics::Repartition.new.call
- when 'migrate'
- Topics::Migrate.new.call
- when 'align'
- Topics::Align.new.call
- when 'plan'
- Topics::Plan.new.call
- else
- raise ::ArgumentError, "Invalid topics action: #{action}"
- end
+ detailed_exit_code = options.fetch(:detailed_exitcode, false)
+
+ command = case action
+ when 'create'
+ Topics::Create
+ when 'delete'
+ Topics::Delete
+ when 'reset'
+ Topics::Reset
+ when 'repartition'
+ Topics::Repartition
+ when 'migrate'
+ Topics::Migrate
+ when 'align'
+ Topics::Align
+ when 'plan'
+ Topics::Plan
+ else
+ raise ::ArgumentError, "Invalid topics action: #{action}"
+ end
+
+ changes = command.new.call
+
+ return unless detailed_exit_code
+
+ changes ? exit(CHANGES_EXIT_CODE) : exit(NO_CHANGES_EXIT_CODE)
  end
  end
  end
@@ -41,6 +41,9 @@ module Karafka
  :topic_authorization_failed, # 29
  :group_authorization_failed, # 30
  :cluster_authorization_failed, # 31
+ :illegal_generation,
+ # this will not recover as fencing is permanent
+ :fenced, # -144
  # This can happen for many reasons, including issues with static membership being fenced
  :fatal # -150
  ].freeze
@@ -34,6 +34,7 @@ module Karafka
  required(:max_wait_time) { |val| val.is_a?(Integer) && val.positive? }
  required(:group_id) { |val| val.is_a?(String) && Contracts::TOPIC_REGEXP.match?(val) }
  required(:kafka) { |val| val.is_a?(Hash) && !val.empty? }
+ required(:strict_declarative_topics) { |val| [true, false].include?(val) }
 
  nested(:swarm) do
    required(:nodes) { |val| val.is_a?(Integer) && val.positive? }
@@ -0,0 +1,59 @@
+ # frozen_string_literal: true
+
+ module Karafka
+ module Contracts
+ # Ensures that routing wide rules are obeyed
+ class Routing < Base
+ configure do |config|
+ config.error_messages = YAML.safe_load(
+ File.read(
+ File.join(Karafka.gem_root, 'config', 'locales', 'errors.yml')
+ )
+ ).fetch('en').fetch('validations').fetch('routing')
+ end
+
+ # Ensures that when the declarative topics strict requirement is on, all topics have a
+ # declarative definition (including DLQ topics)
+ # @note It will ignore routing pattern topics because those topics are virtual
+ virtual do |data, errors|
+ next unless errors.empty?
+ # Do not validate declaratives unless required and explicitly enabled
+ next unless Karafka::App.config.strict_declarative_topics
+
+ # Collects declarative topics. Please note that any topic that has a `#topic` reference
+ # will be declarative by default unless explicitly disabled. This however does not apply
+ # to the DLQ definitions
+ dec_topics = Set.new
+ # All topics including the DLQ topics names that are marked as active
+ topics = Set.new
+
+ data.each do |consumer_group|
+ consumer_group[:topics].each do |topic|
+ pat = topic[:patterns]
+ # Ignore pattern topics because they won't exist and should not be declaratively
+ # managed
+ topics << topic[:name] if !pat || !pat[:active]
+
+ dlq = topic[:dead_letter_queue]
+ topics << dlq[:topic] if dlq[:active]
+
+ dec = topic[:declaratives]
+
+ dec_topics << topic[:name] if dec[:active]
+ end
+ end
+
+ missing_dec = topics - dec_topics
+
+ next if missing_dec.empty?
+
+ missing_dec.map do |topic_name|
+ [
+ [:topics, topic_name],
+ :without_declarative_definition
+ ]
+ end
+ end
+ end
+ end
+ end
@@ -48,6 +48,7 @@ module Karafka
  consumer.revoked.error
  consumer.shutdown.error
  consumer.tick.error
+ consumer.eofed.error
  ].freeze
 
  private_constant :USER_CONSUMER_ERROR_TYPES
@@ -107,7 +108,8 @@ module Karafka
  [
  %i[revoke revoked revoked],
  %i[shutting_down shutdown shutdown],
- %i[tick ticked tick]
+ %i[tick ticked tick],
+ %i[eof eofed eofed]
  ].each do |before, after, name|
  class_eval <<~RUBY, __FILE__, __LINE__ + 1
  # Keeps track of user code execution
@@ -35,6 +35,7 @@ module Karafka
  def initialize(&block)
  configure
  setup(&block) if block
+ @job_types_cache = {}
  end
 
  # @param block [Proc] configuration block
@@ -51,7 +52,7 @@ module Karafka
  push_tags
 
  job = event[:job]
- job_type = job.class.to_s.split('::').last
+ job_type = fetch_job_type(job.class)
  consumer = job.executor.topic.consumer
  topic = job.executor.topic.name
@@ -68,8 +69,16 @@ module Karafka
  'revoked'
  when 'Idle'
  'idle'
- else
+ when 'Eofed'
+ 'eofed'
+ when 'EofedNonBlocking'
+ 'eofed'
+ when 'ConsumeNonBlocking'
  'consume'
+ when 'Consume'
+ 'consume'
+ else
+ raise Errors::UnsupportedCaseError, job_type
  end
 
  current_span.resource = "#{consumer}##{action}"
@@ -121,6 +130,8 @@ module Karafka
  error "Consumer on shutdown failed due to an error: #{error}"
  when 'consumer.tick.error'
  error "Consumer tick failed due to an error: #{error}"
+ when 'consumer.eofed.error'
+ error "Consumer eofed failed due to an error: #{error}"
  when 'worker.process.error'
  fatal "Worker processing failed due to an error: #{error}"
  when 'connection.listener.fetch_loop.error'
@@ -169,6 +180,18 @@ module Karafka
 
  Karafka.logger.pop_tags
  end
+
+ private
+
+ # Takes the job class and extracts the job type.
+ # @param job_class [Class] job class
+ # @return [String]
+ # @note It does not have to be thread-safe despite running in multiple threads because
+ # the assignment race condition is irrelevant here since the same value will be
+ # assigned.
+ def fetch_job_type(job_class)
+ @job_types_cache[job_class] ||= job_class.to_s.split('::').last
+ end
  end
  end
  end
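The memoization above is what saves the per-job string allocations mentioned in the changelog: demodulizing a class name (`split('::').last`) allocates fresh strings on every call, so the result is cached per class. A condensed sketch of the same idea; the names here are illustrative, not the Karafka API:

```ruby
# Caches the demodulized name per class. No mutex is needed: concurrent
# writers would all compute and assign the identical value, so the race
# is benign.
class JobTypeCache
  def initialize
    @types = {}
  end

  # @param klass [Class]
  # @return [String] last segment of the class name, e.g. "Consume"
  def fetch(klass)
    @types[klass] ||= klass.to_s.split('::').last
  end
end

module Demo
  module Jobs
    class Consume; end
  end
end

cache = JobTypeCache.new
a = cache.fetch(Demo::Jobs::Consume) # => "Consume"
b = cache.fetch(Demo::Jobs::Consume)
a.equal?(b)                          # => true — no new allocation on the hot path
```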
@@ -82,10 +82,11 @@ module Karafka
  statistics = event[:statistics]
  consumer_group_id = event[:consumer_group_id]
 
- base_tags = default_tags + ["consumer_group:#{consumer_group_id}"]
+ tags = ["consumer_group:#{consumer_group_id}"]
+ tags.concat(default_tags)
 
  rd_kafka_metrics.each do |metric|
- report_metric(metric, statistics, base_tags)
+ report_metric(metric, statistics, tags)
  end
  end
 
@@ -93,13 +94,14 @@ module Karafka
  #
  # @param event [Karafka::Core::Monitoring::Event]
  def on_error_occurred(event)
- extra_tags = ["type:#{event[:type]}"]
+ tags = ["type:#{event[:type]}"]
+ tags.concat(default_tags)
 
  if event.payload[:caller].respond_to?(:messages)
- extra_tags += consumer_tags(event.payload[:caller])
+ tags.concat(consumer_tags(event.payload[:caller]))
  end
 
- count('error_occurred', 1, tags: default_tags + extra_tags)
+ count('error_occurred', 1, tags: tags)
  end
 
  # Reports how many messages we've polled and how much time we spent on it
@@ -111,10 +113,11 @@ module Karafka
 
  consumer_group_id = event[:subscription_group].consumer_group.id
 
- extra_tags = ["consumer_group:#{consumer_group_id}"]
+ tags = ["consumer_group:#{consumer_group_id}"]
+ tags.concat(default_tags)
 
- histogram('listener.polling.time_taken', time_taken, tags: default_tags + extra_tags)
- histogram('listener.polling.messages', messages_count, tags: default_tags + extra_tags)
+ histogram('listener.polling.time_taken', time_taken, tags: tags)
+ histogram('listener.polling.messages', messages_count, tags: tags)
  end
 
  # Here we report majority of things related to processing as we have access to the
@@ -125,7 +128,8 @@ module Karafka
  messages = consumer.messages
  metadata = messages.metadata
 
- tags = default_tags + consumer_tags(consumer)
+ tags = consumer_tags(consumer)
+ tags.concat(default_tags)
 
  count('consumer.messages', messages.count, tags: tags)
  count('consumer.batches', 1, tags: tags)
@@ -146,7 +150,8 @@ module Karafka
  #
  # @param event [Karafka::Core::Monitoring::Event]
  def on_consumer_#{after}(event)
- tags = default_tags + consumer_tags(event.payload[:caller])
+ tags = consumer_tags(event.payload[:caller])
+ tags.concat(default_tags)
 
  count('consumer.#{name}', 1, tags: tags)
  end
@@ -158,9 +163,10 @@ module Karafka
  def on_worker_process(event)
  jq_stats = event[:jobs_queue].statistics
 
- gauge('worker.total_threads', Karafka::App.config.concurrency, tags: default_tags)
- histogram('worker.processing', jq_stats[:busy], tags: default_tags)
- histogram('worker.enqueued_jobs', jq_stats[:enqueued], tags: default_tags)
+ tags = default_tags
+ gauge('worker.total_threads', Karafka::App.config.concurrency, tags: tags)
+ histogram('worker.processing', jq_stats[:busy], tags: tags)
+ histogram('worker.enqueued_jobs', jq_stats[:enqueued], tags: tags)
  end
 
  # We report this metric before and after processing for higher accuracy
@@ -240,11 +246,14 @@ module Karafka
  # node ids
  next if broker_statistics['nodeid'] == -1
 
+ tags = ["broker:#{broker_statistics['nodename']}"]
+ tags.concat(base_tags)
+
  public_send(
  metric.type,
  metric.name,
  broker_statistics.dig(*metric.key_location),
- tags: base_tags + ["broker:#{broker_statistics['nodename']}"]
+ tags: tags
  )
  end
  when :topics
  when :topics
@@ -259,14 +268,14 @@ module Karafka
259
268
  next if partition_statistics['fetch_state'] == 'stopped'
260
269
  next if partition_statistics['fetch_state'] == 'none'
261
270
 
271
+ tags = ["topic:#{topic_name}", "partition:#{partition_name}"]
272
+ tags.concat(base_tags)
273
+
262
274
  public_send(
263
275
  metric.type,
264
276
  metric.name,
265
277
  partition_statistics.dig(*metric.key_location),
266
- tags: base_tags + [
267
- "topic:#{topic_name}",
268
- "partition:#{partition_name}"
269
- ]
278
+ tags: tags
270
279
  )
271
280
  end
272
281
  end
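The recurring refactor in these hunks replaces `default_tags + extra` (which allocates a brand-new array on every instrumentation event) with building one small array and `concat`-ing the defaults into it. A sketch of the difference; `default_tags` here is a stand-in value:

```ruby
default_tags = ['env:production', 'app:demo']

# Array#+ allocates a new array each call:
plus_tags = default_tags + ['type:error']

# Array#concat mutates the event-local array in place, so only the small
# per-event array is allocated; default_tags itself is left untouched:
tags = ['type:error']
tags.concat(default_tags)

plus_tags.sort == tags.sort # => true — same content, one fewer allocation
```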
@@ -29,13 +29,13 @@ module Karafka
  @port = port
  end
 
- private
-
  # @return [Boolean] true if all good, false if we should tell k8s to kill this process
  def healthy?
  raise NotImplementedError, 'Implement in a subclass'
  end
 
+ private
+
  # Responds to a HTTP request with the process liveness status
  def respond
  client = @server.accept
@@ -26,6 +26,19 @@ module Karafka
  #
  # @note Please use `Kubernetes::SwarmLivenessListener` when operating in the swarm mode
  class LivenessListener < BaseListener
+ # When any of those occurs, it means something went wrong in a way that cannot be
+ # recovered. In such cases we should report that the consumer process is not healthy.
+ # - `fenced` - This instance has been fenced by a newer instance and will not do any
+ # processing at all. Fencing most of the time means the group.instance.id has
+ # been reused without properly terminating the previous consumer process first
+ # - `fatal` - any fatal error that halts the processing forever
+ UNRECOVERABLE_RDKAFKA_ERRORS = [
+ :fenced, # -144
+ :fatal # -150
+ ].freeze
+
+ private_constant :UNRECOVERABLE_RDKAFKA_ERRORS
+
  # @param hostname [String, nil] hostname or nil to bind on all
  # @param port [Integer] TCP port on which we want to run our HTTP status server
  # @param consuming_ttl [Integer] time in ms after which we consider consumption hanging.
@@ -40,6 +53,11 @@ module Karafka
  consuming_ttl: 5 * 60 * 1_000,
  polling_ttl: 5 * 60 * 1_000
  )
+ # If this is set to true, it indicates an unrecoverable error like fencing.
+ # While fencing can be partial (for one of the SGs), we still consider this
+ # an undesired state for the whole process because it halts processing in a
+ # non-recoverable manner forever
+ @unrecoverable = false
  @polling_ttl = polling_ttl
  @consuming_ttl = consuming_ttl
  @mutex = Mutex.new
@@ -86,10 +104,19 @@ module Karafka
  RUBY
  end
 
- # @param _event [Karafka::Core::Monitoring::Event]
- def on_error_occurred(_event)
+ # @param event [Karafka::Core::Monitoring::Event]
+ def on_error_occurred(event)
  clear_consumption_tick
  clear_polling_tick
+
+ error = event[:error]
+
+ # We are only interested in the rdkafka errors
+ return unless error.is_a?(Rdkafka::RdkafkaError)
+ # We mark as unrecoverable only on certain errors that will not be fixed by retrying
+ return unless UNRECOVERABLE_RDKAFKA_ERRORS.include?(error.code)
+
+ @unrecoverable = true
  end
 
  # Deregister the polling tracker for given listener
@@ -112,6 +139,18 @@ module Karafka
  clear_polling_tick
  end
 
+ # Did we exceed any of the TTLs or encounter an unrecoverable error
+ # @return [Boolean] true if healthy, false otherwise
+ def healthy?
+ time = monotonic_now
+
+ return false if @unrecoverable
+ return false if @pollings.values.any? { |tick| (time - tick) > @polling_ttl }
+ return false if @consumptions.values.any? { |tick| (time - tick) > @consuming_ttl }
+
+ true
+ end
+
  private
 
  # Wraps the logic with a mutex
@@ -152,17 +191,6 @@ module Karafka
  @consumptions.delete(thread_id)
  end
  end
-
- # Did we exceed any of the ttls
- # @return [String] 204 string if ok, 500 otherwise
- def healthy?
- time = monotonic_now
-
- return false if @pollings.values.any? { |tick| (time - tick) > @polling_ttl }
- return false if @consumptions.values.any? { |tick| (time - tick) > @consuming_ttl }
-
- true
- end
  end
  end
  end
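The health decision above boils down to two conditions: an unrecoverable error was observed, or some tracked tick is older than its TTL. A condensed, standalone sketch of that logic; the class and method names are illustrative, not the Karafka API:

```ruby
# Tracks per-listener ticks and a sticky unrecoverable flag. The process is
# unhealthy once the flag is set, or when any tick exceeds the TTL.
class LivenessState
  def initialize(ttl_ms)
    @ttl_ms = ttl_ms
    @ticks = {}
    @unrecoverable = false
  end

  # Record activity for a given listener id at a monotonic timestamp (ms)
  def tick!(id, now_ms)
    @ticks[id] = now_ms
  end

  # Latch an unrecoverable condition (e.g. fencing); it never clears
  def unrecoverable!
    @unrecoverable = true
  end

  def healthy?(now_ms)
    return false if @unrecoverable

    @ticks.values.none? { |tick| (now_ms - tick) > @ttl_ms }
  end
end

state = LivenessState.new(5_000)
state.tick!(:listener_1, 0)
state.healthy?(1_000)  # => true
state.healthy?(10_000) # => false — the tick exceeded the 5s TTL
```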
@@ -48,6 +48,10 @@ module Karafka
 
  instance_eval(&block)
 
+ # Ensures high-level routing details consistency
+ # Contains checks that require knowledge about all the consumer groups to operate
+ Contracts::Routing.new.validate!(map(&:to_h))
+
  each do |consumer_group|
  # Validate consumer group settings
  Contracts::ConsumerGroup.new.validate!(consumer_group.to_h)
@@ -68,6 +68,11 @@ module Karafka
   setting :strict_topics_namespacing, default: true
   # option [String] default consumer group name for implicit routing
   setting :group_id, default: 'app'
+  # option [Boolean] when set to true, validates as part of the routing validation that
+  # all topics and DLQ topics (even inactive ones) have declarative topics definitions.
+  # Really useful when you want to ensure that all topics in routing are managed via
+  # declaratives.
+  setting :strict_declarative_topics, default: false

   setting :oauth do
     # option [false, #call] Listener for using oauth bearer. This listener will be able to
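Assuming the standard `karafka.rb` boot file layout, the new setting could be enabled like this (a usage sketch, not taken from the diff):

```ruby
# karafka.rb — opt into strict declarative topics validation so that any
# routed topic (including DLQ topics) without a declarative definition
# fails routing validation at boot.
class KarafkaApp < Karafka::App
  setup do |config|
    config.strict_declarative_topics = true
  end
end
```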
@@ -20,7 +20,9 @@ module Karafka
   shutdown_timeout: %i[shutdown_timeout],
   supervision_sleep: %i[internal supervision_sleep],
   forceful_exit_code: %i[internal forceful_exit_code],
-  process: %i[internal process]
+  process: %i[internal process],
+  cli_contract: %i[internal cli contract],
+  activity_manager: %i[internal routing activity_manager]
 )

 # How long extra should we wait on shutdown before forceful termination
@@ -39,6 +41,9 @@ module Karafka

 # Creates needed number of forks, installs signals and starts supervision
 def run
+  # Validate the CLI provided options the same way as we do for the regular server
+  cli_contract.validate!(activity_manager.to_h)
+
   # Close producer just in case. While it should not be used, we do not want even a
   # theoretical case since librdkafka is not thread-safe.
   # We close it prior to forking just to make sure, there is no issue with initialized
@@ -67,7 +72,11 @@ module Karafka
   lock
   control
 end
-# If anything went wrong, signal this and die
+
+# If the cli contract validation failed, reraise immediately and stop the process
+rescue Karafka::Errors::InvalidConfigurationError => e
+  raise e
+# If anything went wrong during supervision, signal this and die
 # Supervisor is meant to be thin and not cause any issues. If you encounter this case
 # please report it as it should be considered critical
 rescue StandardError => e
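The rescue ordering here matters: the specific `InvalidConfigurationError` is re-raised before the broad `StandardError` supervision rescue can swallow it, so bad CLI options abort the process instead of being logged and ignored. A minimal sketch of that pattern with stand-in error classes and a stand-in supervisor method:

```ruby
# Stand-in for Karafka::Errors::InvalidConfigurationError, for illustration only
InvalidConfigurationError = Class.new(StandardError)

# Sketch of a thin supervisor: configuration errors propagate, everything
# else is handled by the broad supervision rescue.
def supervise
  yield
rescue InvalidConfigurationError => e
  # Configuration problems must stop the process immediately
  raise e
rescue StandardError => e
  # Supervision is meant to be thin; signal the failure and keep control
  warn "supervision error: #{e.message}"
  :supervised
end
```

Because Ruby matches `rescue` clauses top to bottom, swapping the two clauses would make the `StandardError` branch capture configuration errors too (it is an ancestor class), defeating the early exit.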
@@ -3,5 +3,5 @@
 # Main module namespace
 module Karafka
   # Current Karafka version
-  VERSION = '2.4.12'
+  VERSION = '2.4.14'
 end
data/lib/karafka.rb CHANGED
@@ -49,17 +49,29 @@ module Karafka
   @monitor ||= App.config.monitor
 end

-# @return [String] root path of this gem
+# @return [Pathname] root path of this gem
 def gem_root
   Pathname.new(File.expand_path('..', __dir__))
 end

-# @return [String] Karafka app root path (user application path)
+# @return [Pathname] Karafka app root path (user application path)
 def root
-  Pathname.new(ENV['KARAFKA_ROOT_DIR'] || File.dirname(ENV['BUNDLE_GEMFILE']))
+  # If the user points to a different root explicitly, use it
+  return Pathname.new(ENV['KARAFKA_ROOT_DIR']) if ENV['KARAFKA_ROOT_DIR']
+
+  # By default we infer the project root from Bundler.
+  # We cannot use the BUNDLE_GEMFILE env directly because it may be altered by tools like
+  # ruby-lsp. Instead we always fall back to the outermost Gemfile. In most cases it
+  # won't matter, but with automatic setup alterations like ruby-lsp, the location
+  # from which the project starts may not match the original Gemfile.
+  Pathname.new(
+    File.dirname(
+      Bundler.with_unbundled_env { Bundler.default_gemfile }
+    )
+  )
 end

-# @return [String] path to Karafka gem root core
+# @return [Pathname] path to Karafka gem root core
 def core_root
   Pathname.new(File.expand_path('karafka', __dir__))
 end
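The new root resolution can be sketched outside of Karafka as follows. `app_root` is a hypothetical stand-in for `Karafka.root`; `Bundler.default_gemfile` and `Bundler.with_unbundled_env` are real Bundler APIs, and the unbundled env block ensures tooling-injected `BUNDLE_GEMFILE` overrides (e.g. from ruby-lsp) do not leak into the lookup:

```ruby
require 'bundler'
require 'pathname'

# Sketch of the root resolution from the diff: an explicit KARAFKA_ROOT_DIR
# env var wins; otherwise fall back to the directory of the Gemfile that
# Bundler resolves outside of any inherited bundler environment.
def app_root
  return Pathname.new(ENV['KARAFKA_ROOT_DIR']) if ENV['KARAFKA_ROOT_DIR']

  Pathname.new(
    File.dirname(
      Bundler.with_unbundled_env { Bundler.default_gemfile }
    )
  )
end
```

Note that without the env override, `Bundler.default_gemfile` raises `Bundler::GemfileNotFound` when no Gemfile exists, so this fallback assumes a bundled project.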
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: karafka
 version: !ruby/object:Gem::Version
-  version: 2.4.12
+  version: 2.4.14
 platform: ruby
 authors:
 - Maciej Mensfeld
@@ -35,7 +35,7 @@ cert_chain:
   i9zWxov0mr44TWegTVeypcWGd/0nxu1+QHVNHJrpqlPBRvwQsUm7fwmRInGpcaB8
   ap8wNYvryYzrzvzUxIVFBVM5PacgkFqRmolCa8I7tdKQN+R1
   -----END CERTIFICATE-----
-date: 2024-09-17 00:00:00.000000000 Z
+date: 2024-11-25 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: base64
@@ -57,7 +57,7 @@ dependencies:
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: 2.4.3
+      version: 2.4.4
   - - "<"
     - !ruby/object:Gem::Version
       version: 2.5.0
@@ -67,7 +67,7 @@ dependencies:
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: 2.4.3
+      version: 2.4.4
   - - "<"
     - !ruby/object:Gem::Version
       version: 2.5.0
@@ -219,6 +219,7 @@ files:
 - lib/karafka/contracts/base.rb
 - lib/karafka/contracts/config.rb
 - lib/karafka/contracts/consumer_group.rb
+- lib/karafka/contracts/routing.rb
 - lib/karafka/contracts/server_cli_options.rb
 - lib/karafka/contracts/topic.rb
 - lib/karafka/deserializers/headers.rb
@@ -619,7 +620,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
   version: '0'
 requirements: []
-rubygems_version: 3.5.16
+rubygems_version: 3.5.22
 signing_key:
 specification_version: 4
 summary: Karafka is Ruby and Rails efficient Kafka processing framework.
metadata.gz.sig CHANGED
Binary file