waterdrop 2.4.7 → 2.4.9

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 8aa78b8b5f2d8534689cb9fe3db46d579610ce3b4767cef46b16d8fb1e19d48e
-  data.tar.gz: 75ff38cc56317bc74047fe9ba53ee1f42b098299f694c1ad5767f80ed4c2cf7e
+  metadata.gz: e2e707af4409ace682680ca2131b89221ee04552b856a0855e663a209e09e0fb
+  data.tar.gz: fed47f90aa9397754ec6819fcc720d18ad3cace184a053ca6dbbec1b86480172
 SHA512:
-  metadata.gz: 3e5589c065d8db716a277bb78e985b85fbe94c2d064eff1cae780f9b4fa6f64ddd95ef0591c99de2dcefedd5582c80ad83442f1cd53091b27349fb193bcd98f6
-  data.tar.gz: 9af7835c4419dd10af2cde93b42c7797e1811f3f049fe7ae4a50a8ccad50fc6f86a8703777c60410d10c1615386eaee2acbeacf073dd640ba24af24ed534326a
+  metadata.gz: 8e2fa6d1abbaa0cae44a8a8fd5e17ef64a851e48c0433d74ca259a52747b92aa94beeabf2ccbc8c3df3efec67050f04b22cafb346f8f69d279effe86590a7079
+  data.tar.gz: 5eefb7ba3ae2c3f1ce6253fd5206762c4fd94698719968f42a3e7ddd5cc18a90459ed2eddc29c01452fd9c211cd1e57cb71f66338b428eaf71e0b3ddb0eb45c2
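
The checksum changes above only reflect the rebuilt 2.4.9 artifacts. If you want to confirm that a downloaded copy matches what was published, a minimal sketch is shown below; it assumes you have already unpacked the gem archive (a `.gem` file is a plain tar, e.g. `tar -xf waterdrop-2.4.9.gem`) so that `metadata.gz` and `data.tar.gz` sit in the current directory.

```ruby
require 'digest'

# SHA256 digests published for 2.4.9 in checksums.yaml above
EXPECTED_SHA256 = {
  'metadata.gz' => 'e2e707af4409ace682680ca2131b89221ee04552b856a0855e663a209e09e0fb',
  'data.tar.gz' => 'fed47f90aa9397754ec6819fcc720d18ad3cace184a053ca6dbbec1b86480172'
}.freeze

EXPECTED_SHA256.each do |file, expected|
  # Digest::SHA256.file streams the file, so large archives are fine
  actual = Digest::SHA256.file(file).hexdigest
  puts "#{file}: #{actual == expected ? 'OK' : 'MISMATCH'}"
end
```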
checksums.yaml.gz.sig CHANGED
Binary file
@@ -16,11 +16,12 @@ jobs:
       fail-fast: false
       matrix:
         ruby:
+          - '3.2'
           - '3.1'
           - '3.0'
           - '2.7'
         include:
-          - ruby: '3.1'
+          - ruby: '3.2'
             coverage: 'true'
     steps:
       - uses: actions/checkout@v3
@@ -62,7 +63,7 @@ jobs:
       - name: Set up Ruby
         uses: ruby/setup-ruby@v1
         with:
-          ruby-version: 3.1
+          ruby-version: 3.2
       - name: Install latest bundler
         run: gem install bundler --no-document
       - name: Install Diffend plugin
data/.ruby-version CHANGED
@@ -1 +1 @@
-3.1.3
+3.2.0
data/CHANGELOG.md CHANGED
@@ -1,5 +1,13 @@
 # WaterDrop changelog
 
+## 2.4.9 (2023-01-11)
+- Remove empty debug logging out of `LoggerListener`.
+- Do not lock Ruby version in Karafka in favour of `karafka-core`.
+- Make sure `karafka-core` version is at least `2.0.9` to make sure we run `karafka-rdkafka`.
+
+## 2.4.8 (2023-01-07)
+- Use monotonic time from Karafka core.
+
 ## 2.4.7 (2022-12-18)
 - Add support to customizable middlewares that can modify message hash prior to validation and dispatch.
 - Fix a case where upon not-available leader, metadata request would not be retried
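
The 2.4.7 entry above mentions customizable middlewares that can rewrite the message hash before validation and dispatch. As a quick illustration, a minimal sketch follows; it assumes the `producer.middleware.append` API documented in the WaterDrop README, and the JSON-encoding step is purely an example, not anything this release prescribes.

```ruby
require 'json'
require 'waterdrop'

producer = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
end

producer.middleware.append(
  lambda do |message|
    # Serialize the payload before the message is validated and dispatched
    message[:payload] = message[:payload].to_json
    message
  end
)

producer.produce_sync(topic: 'events', payload: { id: 1 })
```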
data/Gemfile.lock CHANGED
@@ -1,8 +1,8 @@
 PATH
   remote: .
   specs:
-    waterdrop (2.4.7)
-      karafka-core (>= 2.0.7, < 3.0.0)
+    waterdrop (2.4.9)
+      karafka-core (>= 2.0.9, < 3.0.0)
       zeitwerk (~> 2.3)
 
 GEM
@@ -22,30 +22,30 @@ GEM
     ffi (1.15.5)
     i18n (1.12.0)
       concurrent-ruby (~> 1.0)
-    karafka-core (2.0.7)
+    karafka-core (2.0.9)
       concurrent-ruby (>= 1.1)
-      rdkafka (>= 0.12)
-    mini_portile2 (2.8.0)
-    minitest (5.16.3)
-    rake (13.0.6)
-    rdkafka (0.12.0)
+      karafka-rdkafka (>= 0.12)
+    karafka-rdkafka (0.12.0)
       ffi (~> 1.15)
       mini_portile2 (~> 2.6)
       rake (> 12)
+    mini_portile2 (2.8.1)
+    minitest (5.17.0)
+    rake (13.0.6)
     rspec (3.12.0)
       rspec-core (~> 3.12.0)
       rspec-expectations (~> 3.12.0)
       rspec-mocks (~> 3.12.0)
     rspec-core (3.12.0)
       rspec-support (~> 3.12.0)
-    rspec-expectations (3.12.0)
+    rspec-expectations (3.12.2)
       diff-lcs (>= 1.2.0, < 2.0)
       rspec-support (~> 3.12.0)
-    rspec-mocks (3.12.0)
+    rspec-mocks (3.12.2)
       diff-lcs (>= 1.2.0, < 2.0)
       rspec-support (~> 3.12.0)
     rspec-support (3.12.0)
-    simplecov (0.21.2)
+    simplecov (0.22.0)
       docile (~> 1.1)
       simplecov-html (~> 0.11)
       simplecov_json_formatter (~> 0.1)
@@ -56,7 +56,6 @@ GEM
     zeitwerk (2.6.6)
 
 PLATFORMS
-  arm64-darwin
   x86_64-linux
 
 DEPENDENCIES
@@ -67,4 +66,4 @@ DEPENDENCIES
   waterdrop!
 
 BUNDLED WITH
-   2.3.26
+   2.4.2
data/README.md CHANGED
@@ -33,6 +33,7 @@ It:
 - [Instrumentation](#instrumentation)
   * [Usage statistics](#usage-statistics)
   * [Error notifications](#error-notifications)
+  * [Acknowledgment notifications](#acknowledgment-notifications)
   * [Datadog and StatsD integration](#datadog-and-statsd-integration)
   * [Forking and potential memory problems](#forking-and-potential-memory-problems)
 - [Middleware](#middleware)
@@ -352,6 +353,65 @@ producer.close
 
 Note: The metrics returned may not be completely consistent between brokers, toppars and totals, due to the internal asynchronous nature of librdkafka. E.g., the top level tx total may be less than the sum of the broker tx values which it represents.
 
+### Error notifications
+
+WaterDrop allows you to listen to all errors that occur while producing messages and in its internal background threads. Things like reconnecting to Kafka upon network errors and others unrelated to publishing messages are all available under `error.occurred` notification key. You can subscribe to this event to ensure your setup is healthy and without any problems that would otherwise go unnoticed as long as messages are delivered.
+
+```ruby
+producer = WaterDrop::Producer.new do |config|
+  # Note invalid connection port...
+  config.kafka = { 'bootstrap.servers': 'localhost:9090' }
+end
+
+producer.monitor.subscribe('error.occurred') do |event|
+  error = event[:error]
+
+  p "WaterDrop error occurred: #{error}"
+end
+
+# Run this code without Kafka cluster
+loop do
+  producer.produce_async(topic: 'events', payload: 'data')
+
+  sleep(1)
+end
+
+# After you stop your Kafka cluster, you will see a lot of those:
+#
+# WaterDrop error occurred: Local: Broker transport failure (transport)
+#
+# WaterDrop error occurred: Local: Broker transport failure (transport)
+```
+
+### Acknowledgment notifications
+
+WaterDrop allows you to listen to Kafka messages' acknowledgment events. This will enable you to monitor deliveries of messages from WaterDrop even when using asynchronous dispatch methods.
+
+That way, you can make sure, that dispatched messages are acknowledged by Kafka.
+
+```ruby
+producer = WaterDrop::Producer.new do |config|
+  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
+end
+
+producer.monitor.subscribe('message.acknowledged') do |event|
+  producer_id = event[:producer_id]
+  offset = event[:offset]
+
+  p "WaterDrop [#{producer_id}] delivered message with offset: #{offset}"
+end
+
+loop do
+  producer.produce_async(topic: 'events', payload: 'data')
+
+  sleep(1)
+end
+
+# WaterDrop [dd8236fff672] delivered message with offset: 32
+# WaterDrop [dd8236fff672] delivered message with offset: 33
+# WaterDrop [dd8236fff672] delivered message with offset: 34
+```
+
 ### Datadog and StatsD integration
 
 WaterDrop comes with (optional) full Datadog and StatsD integration that you can use. To use it:
@@ -385,36 +445,6 @@ You can also find [here](https://github.com/karafka/waterdrop/blob/master/lib/wa
 
 ![Example WaterDrop DD dashboard](https://raw.githubusercontent.com/karafka/misc/master/printscreens/waterdrop_dd_dashboard_example.png)
 
-### Error notifications
-
-WaterDrop allows you to listen to all errors that occur while producing messages and in its internal background threads. Things like reconnecting to Kafka upon network errors and others unrelated to publishing messages are all available under `error.occurred` notification key. You can subscribe to this event to ensure your setup is healthy and without any problems that would otherwise go unnoticed as long as messages are delivered.
-
-```ruby
-producer = WaterDrop::Producer.new do |config|
-  # Note invalid connection port...
-  config.kafka = { 'bootstrap.servers': 'localhost:9090' }
-end
-
-producer.monitor.subscribe('error.occurred') do |event|
-  error = event[:error]
-
-  p "WaterDrop error occurred: #{error}"
-end
-
-# Run this code without Kafka cluster
-loop do
-  producer.produce_async(topic: 'events', payload: 'data')
-
-  sleep(1)
-end
-
-# After you stop your Kafka cluster, you will see a lot of those:
-#
-# WaterDrop error occurred: Local: Broker transport failure (transport)
-#
-# WaterDrop error occurred: Local: Broker transport failure (transport)
-```
-
 ### Forking and potential memory problems
 
 If you work with forked processes, make sure you **don't** use the producer before the fork. You can easily configure the producer and then fork and use it.
@@ -7,7 +7,7 @@ module WaterDrop
     configure do |config|
       config.error_messages = YAML.safe_load(
         File.read(
-          File.join(WaterDrop.gem_root, 'config', 'errors.yml')
+          File.join(WaterDrop.gem_root, 'config', 'locales', 'errors.yml')
         )
       ).fetch('en').fetch('validations').fetch('config')
     end
@@ -8,7 +8,7 @@ module WaterDrop
       configure do |config|
         config.error_messages = YAML.safe_load(
           File.read(
-            File.join(WaterDrop.gem_root, 'config', 'errors.yml')
+            File.join(WaterDrop.gem_root, 'config', 'locales', 'errors.yml')
           )
         ).fetch('en').fetch('validations').fetch('message')
       end
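
Both hunks above point the contracts at `config/locales/errors.yml` and then walk into it with `.fetch('en').fetch('validations').fetch('config')` / `.fetch('message')`. The nesting that implies is sketched below; the concrete message keys are made up for illustration, and only the `en → validations → config|message` layout follows from the code.

```ruby
require 'yaml'

# Hypothetical stand-in for config/locales/errors.yml; only the nesting
# (en -> validations -> config / message) is implied by the fetch chains above.
errors_yml = <<~YAML
  en:
    validations:
      config:
        missing: needs to be present
      message:
        missing: needs to be present
YAML

translations = YAML.safe_load(errors_yml)

config_errors  = translations.fetch('en').fetch('validations').fetch('config')
message_errors = translations.fetch('en').fetch('validations').fetch('message')

p config_errors  # => {"missing"=>"needs to be present"}
p message_errors # => {"missing"=>"needs to be present"}
```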
@@ -82,7 +82,6 @@ module WaterDrop
       # @param event [Dry::Events::Event] event that happened with the details
       def on_producer_closed(event)
         info event, 'Closing producer'
-        debug event, ''
       end
 
       # @param event [Dry::Events::Event] event that happened with the error details
@@ -91,7 +90,6 @@ module WaterDrop
         type = event[:type]
 
         error(event, "Error occurred: #{error} - #{type}")
-        debug(event, '')
       end
 
       private
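
For context, the listener trimmed above is the one applications attach to a producer's monitor to get log lines for lifecycle and error events. A minimal sketch, assuming the `WaterDrop::Instrumentation::LoggerListener.new(logger)` constructor and listener-object subscription shown in the WaterDrop README:

```ruby
require 'logger'
require 'waterdrop'

producer = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
end

# Attach the logger listener; after this change it logs via info/error only,
# without the empty debug entries removed in the hunks above.
producer.monitor.subscribe(
  WaterDrop::Instrumentation::LoggerListener.new(Logger.new($stdout))
)
```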
@@ -7,6 +7,8 @@ module WaterDrop
     module Rdkafka
       # Rdkafka::Producer patches
       module Producer
+        include ::Karafka::Core::Helpers::Time
+
         # Cache partitions count for 30 seconds
         PARTITIONS_COUNT_TTL = 30_000
 
@@ -20,7 +22,7 @@ module WaterDrop
           topic_metadata = ::Rdkafka::Metadata.new(inner_kafka, topic).topics&.first
 
           cache[topic] = [
-            now,
+            monotonic_now,
             topic_metadata ? topic_metadata[:partition_count] : nil
           ]
         end
@@ -46,7 +48,7 @@ module WaterDrop
           closed_producer_check(__method__)
 
           @_partitions_count_cache.delete_if do |_, cached|
-            now - cached.first > PARTITIONS_COUNT_TTL
+            monotonic_now - cached.first > PARTITIONS_COUNT_TTL
           end
 
           @_partitions_count_cache[topic].last
@@ -68,11 +70,6 @@ module WaterDrop
 
           @_inner_kafka
         end
-
-        # @return [Float] current clock time
-        def now
-          ::Process.clock_gettime(::Process::CLOCK_MONOTONIC) * 1_000
-        end
       end
     end
   end
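
The patch above drops the producer's private `now` helper and mixes in `::Karafka::Core::Helpers::Time`, calling `monotonic_now` instead. The sketch below is a stand-alone illustration of the same idea: `MonotonicTime` mirrors the removed method, and `PartitionsCountCache` is a hypothetical class showing how a monotonic millisecond clock drives the 30-second TTL. Neither is WaterDrop or karafka-core code; the real helper is only assumed to behave like the method that was removed.

```ruby
# Illustration only: a monotonic millisecond clock (same behaviour as the
# removed `now` method) driving a TTL cache like the partitions-count cache
# in the patch. MonotonicTime stands in for ::Karafka::Core::Helpers::Time.
module MonotonicTime
  # Monotonic clocks never jump backwards, unlike wall-clock time,
  # so TTL comparisons stay correct across NTP adjustments.
  def monotonic_now
    ::Process.clock_gettime(::Process::CLOCK_MONOTONIC) * 1_000
  end
end

class PartitionsCountCache
  include MonotonicTime

  TTL = 30_000 # milliseconds, matching PARTITIONS_COUNT_TTL

  def initialize
    @cache = {}
  end

  def store(topic, count)
    @cache[topic] = [monotonic_now, count]
  end

  def fetch(topic)
    # Expire stale entries the same way the patched producer does
    @cache.delete_if { |_, cached| monotonic_now - cached.first > TTL }
    @cache.dig(topic, 1)
  end
end

cache = PartitionsCountCache.new
cache.store('events', 12)
p cache.fetch('events') # => 12 (until 30 seconds have passed)
```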
@@ -3,5 +3,5 @@
 # WaterDrop library
 module WaterDrop
   # Current WaterDrop version
-  VERSION = '2.4.7'
+  VERSION = '2.4.9'
 end
data/waterdrop.gemspec CHANGED
@@ -16,11 +16,9 @@ Gem::Specification.new do |spec|
   spec.description = spec.summary
   spec.license = 'MIT'
 
-  spec.add_dependency 'karafka-core', '>= 2.0.7', '< 3.0.0'
+  spec.add_dependency 'karafka-core', '>= 2.0.9', '< 3.0.0'
   spec.add_dependency 'zeitwerk', '~> 2.3'
 
-  spec.required_ruby_version = '>= 2.7'
-
   if $PROGRAM_NAME.end_with?('gem')
     spec.signing_key = File.expand_path('~/.ssh/gem-private_key.pem')
   end
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: waterdrop
 version: !ruby/object:Gem::Version
-  version: 2.4.7
+  version: 2.4.9
 platform: ruby
 authors:
 - Maciej Mensfeld
@@ -35,7 +35,7 @@ cert_chain:
   Qf04B9ceLUaC4fPVEz10FyobjaFoY4i32xRto3XnrzeAgfEe4swLq8bQsR3w/EF3
   MGU0FeSV2Yj7Xc2x/7BzLK8xQn5l7Yy75iPF+KP3vVmDHnNl
   -----END CERTIFICATE-----
-date: 2022-12-18 00:00:00.000000000 Z
+date: 2023-01-11 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: karafka-core
@@ -43,7 +43,7 @@ dependencies:
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
-        version: 2.0.7
+        version: 2.0.9
     - - "<"
       - !ruby/object:Gem::Version
         version: 3.0.0
@@ -53,7 +53,7 @@ dependencies:
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
-        version: 2.0.7
+        version: 2.0.9
    - - "<"
      - !ruby/object:Gem::Version
        version: 3.0.0
@@ -92,7 +92,7 @@ files:
 - MIT-LICENSE
 - README.md
 - certs/cert_chain.pem
-- config/errors.yml
+- config/locales/errors.yml
 - docker-compose.yml
 - lib/waterdrop.rb
 - lib/waterdrop/config.rb
@@ -135,14 +135,14 @@ required_ruby_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: '2.7'
+      version: '0'
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
       version: '0'
 requirements: []
-rubygems_version: 3.3.26
+rubygems_version: 3.3.4
 signing_key:
 specification_version: 4
 summary: Kafka messaging made easy!
metadata.gz.sig CHANGED
Binary file