karafka 2.4.13 → 2.4.14
- checksums.yaml +4 -4
- checksums.yaml.gz.sig +2 -1
- data/.github/ISSUE_TEMPLATE/bug_report.md +26 -34
- data/.github/workflows/ci.yml +7 -0
- data/.ruby-version +1 -1
- data/CHANGELOG.md +9 -1
- data/Gemfile.lock +10 -6
- data/bin/integrations +6 -2
- data/lib/karafka/cli/base.rb +23 -7
- data/lib/karafka/connection/client.rb +3 -0
- data/lib/karafka/instrumentation/vendors/appsignal/metrics_listener.rb +3 -1
- data/lib/karafka/instrumentation/vendors/datadog/logger_listener.rb +25 -2
- data/lib/karafka/instrumentation/vendors/datadog/metrics_listener.rb +27 -18
- data/lib/karafka/instrumentation/vendors/kubernetes/liveness_listener.rb +30 -2
- data/lib/karafka/version.rb +1 -1
- data.tar.gz.sig +0 -0
- metadata +3 -3
- metadata.gz.sig +0 -0
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 22f45da117cf90a2ecbec04dbcaf39634b1e1b85e00f521d4880954c65268ecf
+  data.tar.gz: ce5318aaa8f52954a80981662a41ad17ab314e9706a00f2edf3e840604b87b32
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: d8ec82f91aea2bdba595fa290feec1ca0b25dbd73cd0eafac6554538d3506c47ad8db42036ed838b199a3016341bf67d6daf4f03dfced3ddcf783850c0dc97bf
+  data.tar.gz: 2b750fce3294dda031f5a97bf5343711dcf52d1dbc00c61e029b602b302a549e9a70bb6ac4e952842e1ad840d7dd7b28115ed72118aad54917497672593de133
checksums.yaml.gz.sig
CHANGED
Binary file
data/.github/ISSUE_TEMPLATE/bug_report.md
CHANGED
@@ -1,51 +1,43 @@
 ---
 name: Bug Report
-about: Report an issue
+about: Report an issue within the Karafka ecosystem you've discovered.
 ---
 
-
-Open an issue with a descriptive title and a summary in grammatically correct,
-complete sentences.*
+To make this process smoother for everyone involved, please read the following information before filling out the template.
 
-
-
-hasn't been reported (and potentially fixed) already.*
+Scope of the OSS Support
+===========
 
-
-rule with your own words.*
+We do not provide OSS support for outdated versions of Karafka and its components.
 
-
+Please ensure that you are using a version that is still actively supported. We cannot assist with any no longer maintained versions unless you support us with our Pro offering (https://karafka.io/docs/Pro-Support/).
 
-
+We acknowledge that understanding the specifics of your application and its configuration can be essential for resolving certain issues. However, due to the extensive time and resources such analysis can require, this may fall beyond our Open Source Support scope.
 
-
+If Karafka or its components are critical to your infrastructure, we encourage you to consider our Pro Offering.
 
-
+By backing us up, you can gain direct assistance and ensure your use case receives the dedicated attention it deserves.
 
-Describe here what actually happened.
 
-
+Important Links to Read
+===========
 
-
-a problem will expedite its solution.
+Please take a moment to review the following resources before submitting your report:
 
-
+- Issue Reporting Guide: https://karafka.io/docs/Support/#issue-reporting-guide
+- Support Policy: https://karafka.io/docs/Support/
+- Versions, Lifecycle, and EOL: https://karafka.io/docs/Versions-Lifecycle-and-EOL/
 
-Please provide kafka version and the output of `karafka info` or `bundle exec karafka info` if using Bundler.
 
-
+Bug Report Details
+===========
 
-
-
-
-
-
-
-
-
-
-Boot file: /app/karafka.rb
-Environment: development
-License: Commercial
-License entity: karafka-ci
-```
+Please provide all the details per our Issue Reporting Guide: https://karafka.io/docs/Support/#issue-reporting-guide
+
+Failing to provide the required details may result in the issue being closed. Please include all necessary information to help us understand and resolve your issue effectively.
+
+
+Additional Context
+===========
+
+Add any other context about the problem here.
data/.github/workflows/ci.yml
CHANGED
@@ -89,6 +89,13 @@ jobs:
         run: |
           docker compose up -d || (sleep 5 && docker compose up -d)
 
+      # Newer versions of ActiveSupport and Rails do not work with Ruby 3.1 anymore.
+      # While we use newer by default we do want to resolve older and test, thus we remove
+      # Gemfile.lock and let it resolve to the most compatible version possible
+      - name: Remove Gemfile.lock if Ruby 3.1
+        if: matrix.ruby == '3.1'
+        run: rm -f Gemfile.lock
+
       - name: Set up Ruby
         uses: ruby/setup-ruby@v1
         with:
data/.ruby-version
CHANGED
@@ -1 +1 @@
-3.3.
+3.3.6
data/CHANGELOG.md
CHANGED
@@ -1,7 +1,15 @@
 # Karafka Framework Changelog
 
+## 2.4.14 (2024-11-25)
+- [Enhancement] Improve low-level critical error reporting.
+- [Enhancement] Expand Kubernetes Liveness state reporting with critical errors detection.
+- [Enhancement] Save several string allocations and one array allocation on each job execution when using Datadog instrumentation.
+- [Enhancement] Support `eofed` jobs in the AppSignal instrumentation.
+- [Enhancement] Allow running bootfile-less Rails setup Karafka CLI commands where stuff is configured in the initializers.
+- [Fix] `Instrumentation::Vendors::Datadog::LoggerListener` treats eof jobs as consume jobs.
+
 ## 2.4.13 (2024-10-11)
-- [Enhancement] Make declarative topics return different exit codes on migrable/non-migrable states (0 - no changes, 2 - changes).
+- [Enhancement] Make declarative topics return different exit codes on migrable/non-migrable states (0 - no changes, 2 - changes) when used with `--detailed-exitcode` flag.
 - [Enhancement] Introduce `config.strict_declarative_topics` that should force declaratives on all non-pattern based topics and DLQ topics
 - [Enhancement] Report ignored repartitioning to lower number of partitions in declarative topics.
 - [Enhancement] Promote the `LivenessListener#healty?` to a public API.
data/Gemfile.lock
CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    karafka (2.4.
+    karafka (2.4.14)
       base64 (~> 0.2)
       karafka-core (>= 2.4.4, < 2.5.0)
       karafka-rdkafka (>= 0.17.2)
@@ -11,11 +11,12 @@ PATH
 GEM
   remote: https://rubygems.org/
   specs:
-    activejob (
-      activesupport (=
+    activejob (8.0.0)
+      activesupport (= 8.0.0)
       globalid (>= 0.3.6)
-    activesupport (
+    activesupport (8.0.0)
       base64
+      benchmark (>= 0.3)
       bigdecimal
       concurrent-ruby (~> 1.0, >= 1.3.1)
       connection_pool (>= 2.2.5)
@@ -25,7 +26,9 @@ GEM
       minitest (>= 5.1)
       securerandom (>= 0.3)
       tzinfo (~> 2.0, >= 2.0.5)
+      uri (>= 0.13.1)
     base64 (0.2.0)
+    benchmark (0.3.0)
     bigdecimal (3.1.8)
     byebug (11.1.3)
     concurrent-ruby (1.3.4)
@@ -44,7 +47,7 @@ GEM
       raabro (~> 1.4)
     globalid (1.2.1)
       activesupport (>= 6.1)
-    i18n (1.14.
+    i18n (1.14.6)
       concurrent-ruby (~> 1.0)
     karafka-core (2.4.4)
       karafka-rdkafka (>= 0.15.0, < 0.18.0)
@@ -64,7 +67,7 @@ GEM
     logger (1.6.1)
     mini_portile2 (2.8.7)
     minitest (5.25.1)
-    ostruct (0.6.
+    ostruct (0.6.1)
     raabro (1.4.0)
     rack (3.1.7)
     rake (13.2.1)
@@ -93,6 +96,7 @@ GEM
     tilt (2.4.0)
     tzinfo (2.0.6)
       concurrent-ruby (~> 1.0)
+    uri (1.0.0)
     waterdrop (2.8.0)
       karafka-core (>= 2.4.3, < 3.0.0)
       karafka-rdkafka (>= 0.17.5)
data/bin/integrations
CHANGED
@@ -243,9 +243,13 @@ ARGV.each do |filter|
   end
 end
 
-# Remove Rails 7.2 specs from Ruby 3.
+# Remove Rails 7.2 specs from Ruby < 3.1 because it requires 3.1
+# Remove Rails 8.0 specs from Ruby < 3.2 because it requires 3.2
 specs.delete_if do |spec|
-  RUBY_VERSION < '3.1' && spec.include?('rails72')
+  next true if RUBY_VERSION < '3.1' && spec.include?('rails72')
+  next true if RUBY_VERSION < '3.2' && spec.include?('rails8')
+
+  false
 end
 
 raise ArgumentError, "No integration specs with filters: #{ARGV.join(', ')}" if specs.empty?
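The `delete_if` block above returns a verdict per spec: `next true` drops a spec as soon as one version guard matches, and the trailing `false` keeps everything else. A standalone sketch of the same pattern; the version string and spec file names are illustrative, and like the script it relies on plain lexicographic string comparison of version strings:

```ruby
# Hypothetical stand-in for RUBY_VERSION; comparison below is lexicographic,
# the same shortcut the integrations script uses
ruby_version = '3.1.4'

specs = %w[
  integrations/rails71_spec.rb
  integrations/rails72_spec.rb
  integrations/rails8_spec.rb
]

# Drop specs whose Rails version requires a newer Ruby than we are running
specs.delete_if do |spec|
  next true if ruby_version < '3.1' && spec.include?('rails72')
  next true if ruby_version < '3.2' && spec.include?('rails8')

  false
end

puts specs.inspect
```

On a 3.1.x Ruby this keeps the rails71 and rails72 specs and drops only rails8.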
data/lib/karafka/cli/base.rb
CHANGED
@@ -41,22 +41,38 @@ module Karafka
     class << self
       # Loads proper environment with what is needed to run the CLI
       def load
+        rails_env_rb = File.join(Dir.pwd, 'config/environment.rb')
+        is_rails = Kernel.const_defined?(:Rails) && File.exist?(rails_env_rb)
+
+        # If the boot file is disabled and this is a Rails app, we assume that user moved the
+        # karafka app configuration to initializers or other Rails loading related place.
+        # It is not recommended but some users tend to do this. In such cases we just try to load
+        # the Rails stuff hoping that it will also load Karafka stuff
+        if Karafka.boot_file.to_s == 'false' && is_rails
+          require rails_env_rb
+
+          return
+        end
+
         # If there is a boot file, we need to require it as we expect it to contain
         # Karafka app setup, routes, etc
         if File.exist?(::Karafka.boot_file)
-          rails_env_rb = File.join(Dir.pwd, 'config/environment.rb')
-
           # Load Rails environment file that starts Rails, so we can reference consumers and
           # other things from `karafka.rb` file. This will work only for Rails, for non-rails
           # a manual setup is needed
-          require rails_env_rb if
-
+          require rails_env_rb if is_rails
           require Karafka.boot_file.to_s
+
+          return
+        end
+
         # However when it is unavailable, we still want to be able to run help command
         # and install command as they don't require configured app itself to run
-
-
-
+        return if %w[-h install].any? { |cmd| cmd == ARGV[0] }
+
+        # All other commands except help and install do require an existing boot file if it was
+        # declared
         raise ::Karafka::Errors::MissingBootFileError, ::Karafka.boot_file
       end
 
       # Allows to set options for Thor cli
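The branching above reduces to a small decision table: load only Rails' `config/environment.rb` when the boot file is explicitly disabled in a Rails app, load the boot file (requiring Rails first when present) when it exists, and otherwise allow only help/install before raising. A side-effect-free sketch of that decision; the method name and symbolic outcomes are invented for illustration, with booleans standing in for the real `File.exist?`/`const_defined?` checks:

```ruby
# Pure-function sketch of the CLI load decision, so it can run without a Rails app
def load_strategy(boot_file_value:, boot_file_exists:, is_rails:, argv0: nil)
  # Boot file disabled in a Rails app: assume Karafka config lives in initializers
  return :rails_environment_only if boot_file_value == 'false' && is_rails
  # Regular path: require the boot file (and Rails environment first when present)
  return :boot_file if boot_file_exists
  # Help and install work without a configured app
  return :no_boot_needed if %w[-h install].include?(argv0)

  :missing_boot_file_error
end

puts load_strategy(boot_file_value: 'false', boot_file_exists: false, is_rails: true)
puts load_strategy(boot_file_value: 'karafka.rb', boot_file_exists: true, is_rails: false)
puts load_strategy(boot_file_value: 'karafka.rb', boot_file_exists: false, is_rails: false, argv0: 'install')
```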
data/lib/karafka/connection/client.rb
CHANGED
@@ -41,6 +41,9 @@ module Karafka
       :topic_authorization_failed, # 29
       :group_authorization_failed, # 30
       :cluster_authorization_failed, # 31
+      :illegal_generation,
+      # this will not recover as fencing is permanent
+      :fenced, # -144
       # This can happen for many reasons, including issues with static membership being fenced
       :fatal # -150
     ].freeze
data/lib/karafka/instrumentation/vendors/appsignal/metrics_listener.rb
CHANGED
@@ -48,6 +48,7 @@ module Karafka
       consumer.revoked.error
       consumer.shutdown.error
       consumer.tick.error
+      consumer.eofed.error
     ].freeze
 
     private_constant :USER_CONSUMER_ERROR_TYPES
@@ -107,7 +108,8 @@ module Karafka
     [
       %i[revoke revoked revoked],
       %i[shutting_down shutdown shutdown],
-      %i[tick ticked tick]
+      %i[tick ticked tick],
+      %i[eof eofed eofed]
     ].each do |before, after, name|
       class_eval <<~RUBY, __FILE__, __LINE__ + 1
         # Keeps track of user code execution
data/lib/karafka/instrumentation/vendors/datadog/logger_listener.rb
CHANGED
@@ -35,6 +35,7 @@ module Karafka
       def initialize(&block)
         configure
         setup(&block) if block
+        @job_types_cache = {}
       end
 
       # @param block [Proc] configuration block
@@ -51,7 +52,7 @@ module Karafka
         push_tags
 
         job = event[:job]
-        job_type = job.class
+        job_type = fetch_job_type(job.class)
         consumer = job.executor.topic.consumer
         topic = job.executor.topic.name
 
@@ -68,8 +69,16 @@ module Karafka
           'revoked'
         when 'Idle'
           'idle'
-
+        when 'Eofed'
+          'eofed'
+        when 'EofedNonBlocking'
+          'eofed'
         when 'ConsumeNonBlocking'
           'consume'
+        when 'Consume'
+          'consume'
+        else
+          raise Errors::UnsupportedCaseError, job_type
         end
 
         current_span.resource = "#{consumer}##{action}"
@@ -121,6 +130,8 @@ module Karafka
           error "Consumer on shutdown failed due to an error: #{error}"
         when 'consumer.tick.error'
           error "Consumer tick failed due to an error: #{error}"
+        when 'consumer.eofed.error'
+          error "Consumer eofed failed due to an error: #{error}"
         when 'worker.process.error'
           fatal "Worker processing failed due to an error: #{error}"
         when 'connection.listener.fetch_loop.error'
@@ -169,6 +180,18 @@ module Karafka
 
         Karafka.logger.pop_tags
       end
+
+      private
+
+      # Takes the job class and extracts the job type.
+      # @param job_class [Class] job class
+      # @return [String]
+      # @note It does not have to be thread-safe despite running in multiple threads because
+      #   the assignment race condition is irrelevant here since the same value will be
+      #   assigned.
+      def fetch_job_type(job_class)
+        @job_types_cache[job_class] ||= job_class.to_s.split('::').last
+      end
     end
   end
 end
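The `fetch_job_type` cache above exists because `Class#to_s.split('::').last` allocates fresh strings on every job, while a per-class hash lookup returns the same string object each time. A minimal standalone sketch of the idea; the module and class names are invented stand-ins for Karafka's job classes:

```ruby
# Example job classes standing in for Karafka's processing job classes
module Jobs
  class Consume; end
  class Eofed; end
end

# Memoize the demodulized class name per class, as the listener does
job_types_cache = {}

fetch_job_type = lambda do |job_class|
  job_types_cache[job_class] ||= job_class.to_s.split('::').last
end

first  = fetch_job_type.call(Jobs::Consume)
second = fetch_job_type.call(Jobs::Consume)

puts first                 # "Consume"
puts first.equal?(second)  # the cached call returns the very same object
```

Because the cached value for a given class is always identical, the noted race on first assignment is harmless: concurrent threads would just write the same string.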
data/lib/karafka/instrumentation/vendors/datadog/metrics_listener.rb
CHANGED
@@ -82,10 +82,11 @@ module Karafka
       statistics = event[:statistics]
       consumer_group_id = event[:consumer_group_id]
 
-
+      tags = ["consumer_group:#{consumer_group_id}"]
+      tags.concat(default_tags)
 
       rd_kafka_metrics.each do |metric|
-        report_metric(metric, statistics,
+        report_metric(metric, statistics, tags)
       end
     end
 
@@ -93,13 +94,14 @@ module Karafka
     #
     # @param event [Karafka::Core::Monitoring::Event]
     def on_error_occurred(event)
-
+      tags = ["type:#{event[:type]}"]
+      tags.concat(default_tags)
 
       if event.payload[:caller].respond_to?(:messages)
-
+        tags.concat(consumer_tags(event.payload[:caller]))
       end
 
-      count('error_occurred', 1, tags:
+      count('error_occurred', 1, tags: tags)
     end
 
     # Reports how many messages we've polled and how much time did we spend on it
@@ -111,10 +113,11 @@ module Karafka
 
       consumer_group_id = event[:subscription_group].consumer_group.id
 
-
+      tags = ["consumer_group:#{consumer_group_id}"]
+      tags.concat(default_tags)
 
-      histogram('listener.polling.time_taken', time_taken, tags:
-      histogram('listener.polling.messages', messages_count, tags:
+      histogram('listener.polling.time_taken', time_taken, tags: tags)
+      histogram('listener.polling.messages', messages_count, tags: tags)
     end
 
     # Here we report majority of things related to processing as we have access to the
@@ -125,7 +128,8 @@ module Karafka
       messages = consumer.messages
       metadata = messages.metadata
 
-      tags =
+      tags = consumer_tags(consumer)
+      tags.concat(default_tags)
 
       count('consumer.messages', messages.count, tags: tags)
       count('consumer.batches', 1, tags: tags)
@@ -146,7 +150,8 @@ module Karafka
       #
       # @param event [Karafka::Core::Monitoring::Event]
       def on_consumer_#{after}(event)
-        tags =
+        tags = consumer_tags(event.payload[:caller])
+        tags.concat(default_tags)
 
         count('consumer.#{name}', 1, tags: tags)
       end
@@ -158,9 +163,10 @@ module Karafka
     def on_worker_process(event)
       jq_stats = event[:jobs_queue].statistics
 
-
-
-      histogram('worker.
+      tags = default_tags
+      gauge('worker.total_threads', Karafka::App.config.concurrency, tags: tags)
+      histogram('worker.processing', jq_stats[:busy], tags: tags)
+      histogram('worker.enqueued_jobs', jq_stats[:enqueued], tags: tags)
     end
 
     # We report this metric before and after processing for higher accuracy
@@ -240,11 +246,14 @@ module Karafka
       # node ids
       next if broker_statistics['nodeid'] == -1
 
+      tags = ["broker:#{broker_statistics['nodename']}"]
+      tags.concat(base_tags)
+
       public_send(
         metric.type,
         metric.name,
         broker_statistics.dig(*metric.key_location),
-        tags:
+        tags: tags
       )
     end
   when :topics
@@ -259,14 +268,14 @@ module Karafka
       next if partition_statistics['fetch_state'] == 'stopped'
       next if partition_statistics['fetch_state'] == 'none'
 
+      tags = ["topic:#{topic_name}", "partition:#{partition_name}"]
+      tags.concat(base_tags)
+
       public_send(
         metric.type,
         metric.name,
         partition_statistics.dig(*metric.key_location),
-        tags:
-          "topic:#{topic_name}",
-          "partition:#{partition_name}"
-        ]
+        tags: tags
       )
     end
   end
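The recurring change above is one allocation-saving pattern applied across all the handlers: build a single mutable array holding the event-specific tag(s), then `concat` the shared default tags onto it in place, rather than constructing a new combined array literal at each call site. A small sketch; the tag values and method name are illustrative:

```ruby
# Shared tags configured once, e.g. on the listener instance
default_tags = ['env:production'].freeze

# Event-specific tags first, then the shared ones appended in place;
# Array#concat mutates and returns the receiver, so no extra array is built
def tags_for(consumer_group_id, default_tags)
  tags = ["consumer_group:#{consumer_group_id}"]
  tags.concat(default_tags)
end

puts tags_for('example_group', default_tags).inspect
```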
data/lib/karafka/instrumentation/vendors/kubernetes/liveness_listener.rb
CHANGED
@@ -26,6 +26,19 @@ module Karafka
   #
   # @note Please use `Kubernetes::SwarmLivenessListener` when operating in the swarm mode
   class LivenessListener < BaseListener
+    # When any of those occurs, it means something went wrong in a way that cannot be
+    # recovered. In such cases we should report that the consumer process is not healthy.
+    # - `fenced` - This instance has been fenced by a newer instance and will not do any
+    #   processing at all never. Fencing most of the time means the instance.group.id has
+    #   been reused without properly terminating the previous consumer process first
+    # - `fatal` - any fatal error that halts the processing forever
+    UNRECOVERABLE_RDKAFKA_ERRORS = [
+      :fenced, # -144
+      :fatal # -150
+    ].freeze
+
+    private_constant :UNRECOVERABLE_RDKAFKA_ERRORS
+
     # @param hostname [String, nil] hostname or nil to bind on all
     # @param port [Integer] TCP port on which we want to run our HTTP status server
     # @param consuming_ttl [Integer] time in ms after which we consider consumption hanging.
@@ -40,6 +53,11 @@ module Karafka
       consuming_ttl: 5 * 60 * 1_000,
       polling_ttl: 5 * 60 * 1_000
     )
+      # If this is set to true, it indicates unrecoverable error like fencing
+      # While fencing can be partial (for one of the SGs), we still should consider this
+      # as an undesired state for the whole process because it halts processing in a
+      # non-recoverable manner forever
+      @unrecoverable = false
       @polling_ttl = polling_ttl
       @consuming_ttl = consuming_ttl
       @mutex = Mutex.new
@@ -86,10 +104,19 @@ module Karafka
       RUBY
     end
 
-    # @param
-    def on_error_occurred(
+    # @param event [Karafka::Core::Monitoring::Event]
+    def on_error_occurred(event)
       clear_consumption_tick
       clear_polling_tick
+
+      error = event[:error]
+
+      # We are only interested in the rdkafka errors
+      return unless error.is_a?(Rdkafka::RdkafkaError)
+      # We mark as unrecoverable only on certain errors that will not be fixed by retrying
+      return unless UNRECOVERABLE_RDKAFKA_ERRORS.include?(error.code)
+
+      @unrecoverable = true
     end
 
     # Deregister the polling tracker for given listener
@@ -117,6 +144,7 @@ module Karafka
     def healthy?
       time = monotonic_now
 
+      return false if @unrecoverable
       return false if @pollings.values.any? { |tick| (time - tick) > @polling_ttl }
       return false if @consumptions.values.any? { |tick| (time - tick) > @consuming_ttl }
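Putting the pieces above together: `healthy?` is a cheap boolean gate a Kubernetes probe can poll, and the new `@unrecoverable` flag makes fencing/fatal errors sticky, so once one is seen the probe fails permanently regardless of fresh polling ticks. A simplified, self-contained model of that logic; the class and its API are invented for illustration and are not the listener's real internals:

```ruby
# Toy model of the liveness decision: polling TTLs plus a sticky unrecoverable flag
class MiniLiveness
  UNRECOVERABLE_RDKAFKA_ERRORS = %i[fenced fatal].freeze

  def initialize(polling_ttl_ms: 5 * 60 * 1_000)
    @polling_ttl = polling_ttl_ms
    @pollings = {}
    @unrecoverable = false
  end

  # Record a successful poll for a given listener
  def tick(listener_id)
    @pollings[listener_id] = now
  end

  # Mirror of on_error_occurred: only fencing/fatal codes flip the flag
  def report_error(code)
    @unrecoverable = true if UNRECOVERABLE_RDKAFKA_ERRORS.include?(code)
  end

  def healthy?
    return false if @unrecoverable

    time = now
    @pollings.values.none? { |tick| (time - tick) > @polling_ttl }
  end

  private

  def now
    Process.clock_gettime(Process::CLOCK_MONOTONIC) * 1_000
  end
end

probe = MiniLiveness.new
probe.tick(:listener_1)
puts probe.healthy?           # true - recent poll, no fatal errors
probe.report_error(:fenced)
puts probe.healthy?           # false - fencing is permanent
```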
data/lib/karafka/version.rb
CHANGED
data.tar.gz.sig
CHANGED
Binary file
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: karafka
 version: !ruby/object:Gem::Version
-  version: 2.4.
+  version: 2.4.14
 platform: ruby
 authors:
 - Maciej Mensfeld
@@ -35,7 +35,7 @@ cert_chain:
   i9zWxov0mr44TWegTVeypcWGd/0nxu1+QHVNHJrpqlPBRvwQsUm7fwmRInGpcaB8
   ap8wNYvryYzrzvzUxIVFBVM5PacgkFqRmolCa8I7tdKQN+R1
   -----END CERTIFICATE-----
-date: 2024-
+date: 2024-11-25 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: base64
@@ -620,7 +620,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.5.
+rubygems_version: 3.5.22
 signing_key:
 specification_version: 4
 summary: Karafka is Ruby and Rails efficient Kafka processing framework.
metadata.gz.sig
CHANGED
Binary file