waterdrop 2.8.0 → 2.8.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 96435533f730edae9dcf7bed5426641a33715e87b55c804168fa6a02465f1431
- data.tar.gz: 3737e88ec3ffaa407b1cdaa523cf1d24a25aa85c4d90989a249381dbfb7a80c2
+ metadata.gz: f1dd1ecba0be29bdf637575f0fe8eaea654900f7466d5840c62fbcdeb5000492
+ data.tar.gz: 663b683a670ec1df3700dd3440c80435b376c493bcdb7d024eedcc9a9c7b0de9
  SHA512:
- metadata.gz: 1b83e8505ca1aa9be5c137c06af1b98e10bb8f39af93e77aefa42302bf8dc2bd95f1442eb2569d5f032e80ac0374b8013e774d1f36494d23e20af0766094e2b3
- data.tar.gz: 198ea59a4f7ca6a63f9816e7026c97152aaed5e6154142c47e6e96c16696a2a3d4c60cd56bf72b407b092682e9d15c08f1adfa552eb5bdc17a308f77f7424dfc
+ metadata.gz: 8ff3c1d142dd9d855c7a26223ceefa973764dfde23b0ec8530999e87d52b7827b7180b30dd1f9f545a65eb7e1b00a8ddf753baa4fa365b93ec997450491ddc9b
+ data.tar.gz: 0fee34d4935ee757fef55ae9205570894b20894b839a1e6abd80fa918eb7d733603a0c791b3d81c73fd913699f07a1ddf1d3af9921722713accdb8bc90a17255
checksums.yaml.gz.sig CHANGED
Binary file
data/.github/ISSUE_TEMPLATE/bug_report.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ name: Bug Report
+ about: Report an issue within the Karafka ecosystem you've discovered.
+ ---
+
+ To make this process smoother for everyone involved, please read the following information before filling out the template.
+
+ Scope of the OSS Support
+ ===========
+
+ We do not provide OSS support for outdated versions of Karafka and its components.
+
+ Please ensure that you are using a version that is still actively supported. We cannot assist with any no longer maintained versions unless you support us with our Pro offering (https://karafka.io/docs/Pro-Support/).
+
+ We acknowledge that understanding the specifics of your application and its configuration can be essential for resolving certain issues. However, due to the extensive time and resources such analysis can require, this may fall beyond our Open Source Support scope.
+
+ If Karafka or its components are critical to your infrastructure, we encourage you to consider our Pro Offering.
+
+ By backing us up, you can gain direct assistance and ensure your use case receives the dedicated attention it deserves.
+
+
+ Important Links to Read
+ ===========
+
+ Please take a moment to review the following resources before submitting your report:
+
+ - Issue Reporting Guide: https://karafka.io/docs/Support/#issue-reporting-guide
+ - Support Policy: https://karafka.io/docs/Support/
+ - Versions, Lifecycle, and EOL: https://karafka.io/docs/Versions-Lifecycle-and-EOL/
+
+
+ Bug Report Details
+ ===========
+
+ Please provide all the details per our Issue Reporting Guide: https://karafka.io/docs/Support/#issue-reporting-guide
+
+ Failing to provide the required details may result in the issue being closed. Please include all necessary information to help us understand and resolve your issue effectively.
+
+
+ Additional Context
+ ===========
+
+ Add any other context about the problem here.
data/.github/ISSUE_TEMPLATE/feature_request.md ADDED
@@ -0,0 +1,20 @@
+ ---
+ name: Feature Request
+ about: Suggest new WaterDrop features or improvements to existing features.
+ ---
+
+ ## Is your feature request related to a problem? Please describe.
+
+ A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+
+ ## Describe the solution you'd like
+
+ A clear and concise description of what you want to happen.
+
+ ## Describe alternatives you've considered
+
+ A clear and concise description of any alternative solutions or features you've considered.
+
+ ## Additional context
+
+ Add any other context or screenshots about the feature request here.
data/.github/workflows/ci.yml CHANGED
@@ -18,12 +18,12 @@ jobs:
  fail-fast: false
  matrix:
  ruby:
- - '3.4.0-preview1'
+ - '3.4'
  - '3.3'
  - '3.2'
  - '3.1'
  include:
- - ruby: '3.3'
+ - ruby: '3.4'
  coverage: 'true'
  steps:
  - uses: actions/checkout@v4
@@ -63,6 +63,9 @@ jobs:
  GITHUB_COVERAGE: ${{matrix.coverage}}
  run: bundle exec rspec

+ - name: Check Kafka logs for unexpected warnings
+ run: bin/verify_kafka_warnings
+
  diffend:
  runs-on: ubuntu-latest
  strategy:
@@ -74,7 +77,7 @@ jobs:
  - name: Set up Ruby
  uses: ruby/setup-ruby@v1
  with:
- ruby-version: 3.3
+ ruby-version: 3.4
  - name: Install latest bundler
  run: gem install bundler --no-document
  - name: Install Diffend plugin
data/.ruby-version CHANGED
@@ -1 +1 @@
- 3.3.5
+ 3.4.1
data/CHANGELOG.md CHANGED
@@ -1,5 +1,14 @@
  # WaterDrop changelog

+ ## 2.8.2 (2025-02-13)
+ - [Feature] Allow for tagging of producer instances similar to how consumers can be tagged.
+ - [Refactor] Ensure all test topics in the test suite start with "it-" prefix.
+
+ ## 2.8.1 (2024-12-26)
+ - [Enhancement] Raise `WaterDrop::ProducerNotTransactionalError` when attempting to use transactions on a non-transactional producer.
+ - [Fix] Disallow closing a producer from within a transaction.
+ - [Fix] WaterDrop should prevent opening a transaction using a closed producer.
+
  ## 2.8.0 (2024-09-16)

  This release contains **BREAKING** changes. Make sure to read and apply upgrade notes.
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- waterdrop (2.8.0)
+ waterdrop (2.8.2)
  karafka-core (>= 2.4.3, < 3.0.0)
  karafka-rdkafka (>= 0.17.5)
  zeitwerk (~> 2.3)
@@ -9,8 +9,9 @@ PATH
  GEM
  remote: https://rubygems.org/
  specs:
- activesupport (7.2.1)
+ activesupport (7.2.2.1)
  base64
+ benchmark (>= 0.3)
  bigdecimal
  concurrent-ruby (~> 1.0, >= 1.3.1)
  connection_pool (>= 2.2.5)
@@ -21,17 +22,18 @@ GEM
  securerandom (>= 0.3)
  tzinfo (~> 2.0, >= 2.0.5)
  base64 (0.2.0)
- bigdecimal (3.1.8)
+ benchmark (0.4.0)
+ bigdecimal (3.1.9)
  byebug (11.1.3)
- concurrent-ruby (1.3.4)
+ concurrent-ruby (1.3.5)
  connection_pool (2.4.1)
  diff-lcs (1.5.1)
  docile (1.4.1)
  drb (2.2.1)
- factory_bot (6.5.0)
- activesupport (>= 5.0.0)
+ factory_bot (6.5.1)
+ activesupport (>= 6.1.0)
  ffi (1.17.0)
- i18n (1.14.5)
+ i18n (1.14.7)
  concurrent-ruby (~> 1.0)
  karafka-core (2.4.4)
  karafka-rdkafka (>= 0.15.0, < 0.18.0)
@@ -39,10 +41,10 @@ GEM
  ffi (~> 1.15)
  mini_portile2 (~> 2.6)
  rake (> 12)
- logger (1.6.1)
+ logger (1.6.5)
  mini_portile2 (2.8.7)
- minitest (5.25.1)
- ostruct (0.6.0)
+ minitest (5.25.4)
+ ostruct (0.6.1)
  rake (13.2.1)
  rspec (3.13.0)
  rspec-core (~> 3.13.0)
@@ -57,7 +59,7 @@ GEM
  diff-lcs (>= 1.2.0, < 2.0)
  rspec-support (~> 3.13.0)
  rspec-support (3.13.1)
- securerandom (0.3.1)
+ securerandom (0.3.2)
  simplecov (0.22.0)
  docile (~> 1.1)
  simplecov-html (~> 0.11)
data/bin/verify_kafka_warnings ADDED
@@ -0,0 +1,31 @@
+ #!/bin/bash
+
+ # Checks Kafka logs for unsupported warning patterns
+ # Only specified warnings are allowed, all others should trigger failure
+
+ allowed_patterns=(
+ "Performing controller activation"
+ "registered with feature metadata.version"
+ )
+
+ # Get all warnings
+ warnings=$(docker logs --since=0 kafka | grep WARN)
+ exit_code=0
+
+ while IFS= read -r line; do
+ allowed=0
+ for pattern in "${allowed_patterns[@]}"; do
+ if echo "$line" | grep -q "$pattern"; then
+ allowed=1
+ break
+ fi
+ done
+
+ if [ $allowed -eq 0 ]; then
+ echo "Unexpected warning found:"
+ echo "$line"
+ exit_code=1
+ fi
+ done <<< "$warnings"
+
+ exit $exit_code
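The allow-list loop above can be exercised locally without Docker by feeding it hypothetical sample log lines in place of the `docker logs kafka | grep WARN` output (the sample `WARN` texts below are made up for illustration; the allowed patterns mirror the script's):

```shell
#!/bin/bash

# Same allow-list as bin/verify_kafka_warnings
allowed_patterns=(
  "Performing controller activation"
  "registered with feature metadata.version"
)

# Hypothetical stand-in for live `docker logs kafka | grep WARN` output
sample_warnings='WARN Performing controller activation for broker 1
WARN Broker 1 registered with feature metadata.version
WARN Unexpected disk latency detected'

exit_code=0
while IFS= read -r line; do
  allowed=0
  for pattern in "${allowed_patterns[@]}"; do
    # grep -q: succeed silently if the line matches an allowed pattern
    if echo "$line" | grep -q "$pattern"; then
      allowed=1
      break
    fi
  done

  if [ $allowed -eq 0 ]; then
    echo "Unexpected warning found: $line"
    exit_code=1
  fi
done <<< "$sample_warnings"

echo "exit_code=$exit_code"
```

With this input only the third line trips the check, so the script reports it and ends with `exit_code=1`, which is what fails the CI step.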
data/docker-compose.yml CHANGED
@@ -1,9 +1,7 @@
- version: '2'
-
  services:
  kafka:
  container_name: kafka
- image: confluentinc/cp-kafka:7.7.1
+ image: confluentinc/cp-kafka:7.8.1

  ports:
  - 9092:9092
data/lib/waterdrop/errors.rb CHANGED
@@ -25,12 +25,18 @@ module WaterDrop
  # Raised when there was an attempt to use a closed producer
  ProducerClosedError = Class.new(BaseError)

+ # Raised if you attempt to close the producer from within a transaction. This is not allowed.
+ ProducerTransactionalCloseAttemptError = Class.new(BaseError)
+
  # Raised when we want to send a message that is invalid (impossible topic, etc)
  MessageInvalidError = Class.new(BaseError)

  # Raised when we want to commit transactional offset and the input is invalid
  TransactionalOffsetInvalidError = Class.new(BaseError)

+ # Raised when transaction attempt happens on a non-transactional producer
+ ProducerNotTransactionalError = Class.new(BaseError)
+
  # Raised when we've got an unexpected status. This should never happen. If it does, please
  # contact us as it is an error.
  StatusInvalidError = Class.new(BaseError)
data/lib/waterdrop/producer/transactions.rb CHANGED
@@ -55,11 +55,20 @@ module WaterDrop
  #
  # handler.wait
  def transaction
+ unless transactional?
+ raise(
+ Errors::ProducerNotTransactionalError,
+ "#{id} is not transactional"
+ )
+ end
+
  # This will safely allow us to support one operation transactions so a transactional
  # producer can work without the transactional block if needed
  return yield if @transaction_mutex.owned?

  @transaction_mutex.synchronize do
+ ensure_active!
+
  transactional_instrument(:finished) do
  with_transactional_error_handling(:begin) do
  transactional_instrument(:started) { client.begin_transaction }
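The guard added above is what backs the 2.8.1 changelog entry about `WaterDrop::ProducerNotTransactionalError`: calling `#transaction` on a producer that was never configured as transactional now fails fast instead of reaching librdkafka. A minimal self-contained sketch of the same pattern (`MiniProducer` and `NotTransactionalError` are illustrative stand-ins, not the real WaterDrop classes):

```ruby
# Stand-in error for WaterDrop::Errors::ProducerNotTransactionalError
class NotTransactionalError < StandardError; end

# Simplified producer illustrating the fail-fast transaction guard
class MiniProducer
  def initialize(id:, transactional:)
    @id = id
    @transactional = transactional
    @transaction_mutex = Mutex.new
  end

  def transactional?
    @transactional
  end

  def transaction(&block)
    # The new guard: refuse transactions on non-transactional producers
    raise(NotTransactionalError, "#{@id} is not transactional") unless transactional?

    # One-operation transaction support: re-entrant calls just yield
    return yield if @transaction_mutex.owned?

    @transaction_mutex.synchronize(&block)
  end
end

plain = MiniProducer.new(id: 'my-producer', transactional: false)
begin
  plain.transaction { :never_reached }
rescue NotTransactionalError => e
  puts e.message # => "my-producer is not transactional"
end
```

The `ensure_active!` call inside the synchronize block (visible in the diff) additionally covers the other 2.8.1 fix: a closed producer can no longer open a transaction.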
data/lib/waterdrop/producer.rb CHANGED
@@ -9,6 +9,7 @@ module WaterDrop
  include Buffer
  include Transactions
  include ::Karafka::Core::Helpers::Time
+ include ::Karafka::Core::Taggable

  # Local storage for given thread waterdrop client references for variants
  ::Fiber.send(:attr_accessor, :waterdrop_clients)
@@ -85,6 +86,7 @@ module WaterDrop

  # Don't allow to obtain a client reference for a producer that was not configured
  raise Errors::ProducerNotConfiguredError, id if @status.initial?
+ raise Errors::ProducerClosedError, id if @status.closed?

  @connecting_mutex.synchronize do
  return @client if @client && @pid == Process.pid
@@ -180,65 +182,75 @@ module WaterDrop
  # @param force [Boolean] should we force closing even with outstanding messages after the
  # max wait timeout
  def close(force: false)
- @operating_mutex.synchronize do
- return unless @status.active?
-
- @monitor.instrument(
- 'producer.closed',
- producer_id: id
- ) do
- @status.closing!
- @monitor.instrument('producer.closing', producer_id: id)
-
- # No need for auto-gc if everything got closed by us
- # This should be used only in case a producer was not closed properly and forgotten
- ObjectSpace.undefine_finalizer(id)
-
- # We save this thread id because we need to bypass the activity verification on the
- # producer for final flush of buffers.
- @closing_thread_id = Thread.current.object_id
-
- # Wait until all the outgoing operations are done. Only when no one is using the
- # underlying client running operations we can close
- sleep(0.001) until @operations_in_progress.value.zero?
-
- # Flush has its own buffer mutex but even if it is blocked, flushing can still happen
- # as we close the client after the flushing (even if blocked by the mutex)
- flush(true)
-
- # We should not close the client in several threads the same time
- # It is safe to run it several times but not exactly the same moment
- # We also mark it as closed only if it was connected, if not, it would trigger a new
- # connection that anyhow would be immediately closed
- if @client
- # Why do we trigger it early instead of just having `#close` do it?
- # The linger.ms time will be ignored for the duration of the call,
- # queued messages will be sent to the broker as soon as possible.
- begin
- @client.flush(current_variant.max_wait_timeout) unless @client.closed?
- # We can safely ignore timeouts here because any left outstanding requests
- # will anyhow force wait on close if not forced.
- # If forced, we will purge the queue and just close
- rescue ::Rdkafka::RdkafkaError, Rdkafka::AbstractHandle::WaitTimeoutError
- nil
- ensure
- # Purge fully the local queue in case of a forceful shutdown just to be sure, that
- # there are no dangling messages. In case flush was successful, there should be
- # none but we do it just in case it timed out
- purge if force
- end
+ # If we already own the transactional mutex, it means we are inside of a transaction and
+ # it should not be allowed to close the producer in such a case.
+ if @transaction_mutex.locked? && @transaction_mutex.owned?
+ raise Errors::ProducerTransactionalCloseAttemptError, id
+ end

- @client.close
+ # The transactional mutex here can be used even when no transactions are in use
+ # It prevents us from closing a mutex during transactions and is irrelevant in other cases
+ @transaction_mutex.synchronize do
+ @operating_mutex.synchronize do
+ return unless @status.active?

- @client = nil
- end
+ @monitor.instrument(
+ 'producer.closed',
+ producer_id: id
+ ) do
+ @status.closing!
+ @monitor.instrument('producer.closing', producer_id: id)
+
+ # No need for auto-gc if everything got closed by us
+ # This should be used only in case a producer was not closed properly and forgotten
+ ObjectSpace.undefine_finalizer(id)
+
+ # We save this thread id because we need to bypass the activity verification on the
+ # producer for final flush of buffers.
+ @closing_thread_id = Thread.current.object_id
+
+ # Wait until all the outgoing operations are done. Only when no one is using the
+ # underlying client running operations we can close
+ sleep(0.001) until @operations_in_progress.value.zero?
+
+ # Flush has its own buffer mutex but even if it is blocked, flushing can still happen
+ # as we close the client after the flushing (even if blocked by the mutex)
+ flush(true)
+
+ # We should not close the client in several threads the same time
+ # It is safe to run it several times but not exactly the same moment
+ # We also mark it as closed only if it was connected, if not, it would trigger a new
+ # connection that anyhow would be immediately closed
+ if @client
+ # Why do we trigger it early instead of just having `#close` do it?
+ # The linger.ms time will be ignored for the duration of the call,
+ # queued messages will be sent to the broker as soon as possible.
+ begin
+ @client.flush(current_variant.max_wait_timeout) unless @client.closed?
+ # We can safely ignore timeouts here because any left outstanding requests
+ # will anyhow force wait on close if not forced.
+ # If forced, we will purge the queue and just close
+ rescue ::Rdkafka::RdkafkaError, Rdkafka::AbstractHandle::WaitTimeoutError
+ nil
+ ensure
+ # Purge fully the local queue in case of a forceful shutdown just to be sure, that
+ # there are no dangling messages. In case flush was successful, there should be
+ # none but we do it just in case it timed out
+ purge if force
+ end
+
+ @client.close
+
+ @client = nil
+ end

- # Remove callbacks runners that were registered
- ::Karafka::Core::Instrumentation.statistics_callbacks.delete(@id)
- ::Karafka::Core::Instrumentation.error_callbacks.delete(@id)
- ::Karafka::Core::Instrumentation.oauthbearer_token_refresh_callbacks.delete(@id)
+ # Remove callbacks runners that were registered
+ ::Karafka::Core::Instrumentation.statistics_callbacks.delete(@id)
+ ::Karafka::Core::Instrumentation.error_callbacks.delete(@id)
+ ::Karafka::Core::Instrumentation.oauthbearer_token_refresh_callbacks.delete(@id)

- @status.closed!
+ @status.closed!
+ end
  end
  end
  end
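The reworked `#close` uses two `Mutex` facts: `#owned?` reports whether the *current* thread holds the lock, and `#synchronize` from another thread blocks until the holder releases it. So closing from inside your own transaction raises, while closing from elsewhere simply waits for the transaction to finish. A simplified, self-contained sketch (`ClosableProducer` and its error are illustrative stand-ins, not the real WaterDrop classes):

```ruby
# Stand-in for WaterDrop::Errors::ProducerTransactionalCloseAttemptError
class TransactionalCloseAttemptError < StandardError; end

# Simplified producer showing the close-during-transaction guard
class ClosableProducer
  def initialize
    @transaction_mutex = Mutex.new
    @closed = false
  end

  def transaction(&block)
    @transaction_mutex.synchronize(&block)
  end

  def close
    # Current thread holds the transaction lock: closing here is a programmer error
    if @transaction_mutex.locked? && @transaction_mutex.owned?
      raise TransactionalCloseAttemptError
    end

    # Otherwise closing waits for any in-flight transaction before proceeding
    @transaction_mutex.synchronize { @closed = true }
  end

  def closed?
    @closed
  end
end

producer = ClosableProducer.new
producer.transaction do
  producer.close # raises, as the diff's guard dictates
rescue TransactionalCloseAttemptError
  puts 'cannot close inside own transaction'
end
producer.close
puts producer.closed? # => true
```

This mirrors why the real code wraps the whole close body in `@transaction_mutex.synchronize` even for non-transactional producers: for them the mutex is never contended, so the extra synchronization is effectively free.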
data/lib/waterdrop/version.rb CHANGED
@@ -3,5 +3,5 @@
  # WaterDrop library
  module WaterDrop
  # Current WaterDrop version
- VERSION = '2.8.0'
+ VERSION = '2.8.2'
  end
data/waterdrop.gemspec CHANGED
@@ -28,7 +28,7 @@ Gem::Specification.new do |spec|
  spec.cert_chain = %w[certs/cert.pem]
  spec.files = `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(spec)/}) }
- spec.executables = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
+ spec.executables = []
  spec.require_paths = %w[lib]

  spec.metadata = {
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,11 +1,10 @@
  --- !ruby/object:Gem::Specification
  name: waterdrop
  version: !ruby/object:Gem::Version
- version: 2.8.0
+ version: 2.8.2
  platform: ruby
  authors:
  - Maciej Mensfeld
- autorequire:
  bindir: bin
  cert_chain:
  - |
@@ -35,7 +34,7 @@ cert_chain:
  i9zWxov0mr44TWegTVeypcWGd/0nxu1+QHVNHJrpqlPBRvwQsUm7fwmRInGpcaB8
  ap8wNYvryYzrzvzUxIVFBVM5PacgkFqRmolCa8I7tdKQN+R1
  -----END CERTIFICATE-----
- date: 2024-09-16 00:00:00.000000000 Z
+ date: 2025-02-13 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: karafka-core
@@ -95,6 +94,8 @@ files:
  - ".coditsu/ci.yml"
  - ".diffend.yml"
  - ".github/FUNDING.yml"
+ - ".github/ISSUE_TEMPLATE/bug_report.md"
+ - ".github/ISSUE_TEMPLATE/feature_request.md"
  - ".github/workflows/ci.yml"
  - ".gitignore"
  - ".rspec"
@@ -105,6 +106,7 @@ files:
  - Gemfile.lock
  - LICENSE
  - README.md
+ - bin/verify_kafka_warnings
  - certs/cert.pem
  - config/locales/errors.yml
  - docker-compose.yml
@@ -154,7 +156,6 @@ metadata:
  source_code_uri: https://github.com/karafka/waterdrop
  documentation_uri: https://karafka.io/docs/#waterdrop
  rubygems_mfa_required: 'true'
- post_install_message:
  rdoc_options: []
  require_paths:
  - lib
@@ -169,8 +170,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.5.16
- signing_key:
+ rubygems_version: 3.6.2
  specification_version: 4
  summary: Kafka messaging made easy!
  test_files: []
metadata.gz.sig CHANGED
Binary file