waterdrop 2.3.3 → 2.4.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: b1482f0d2955c43c2ab479372c9cbb4a8c039723121c26b2ff75d0e26700d71e
- data.tar.gz: 1bb3e675c0c632d4940b2b661bdb75a74c44358a7e53fa9dc7051fdad62218b0
+ metadata.gz: 0bd7288d6c0c7a6f356f050af309899cad41e6fcf202c945229231d7058194e1
+ data.tar.gz: 727738b727dccf8e2d7b4eb7c3f25b1c7ed761eb958eae4ba818c7402839a6c7
  SHA512:
- metadata.gz: 8921844cc44625916974c63478562b66a7f30a833b6f95e71e8c3448109d673bd3520a8a48bdf0fe7296e5e00b988c15c3dd92cab2e11ee37914f667966a14de
- data.tar.gz: 6c6038c4f9a718d107fd885e66bf29cc3dde09c6f6e0be2ba41651f3494672ddd925e71574f53ec8ec70d4e766024018f73a43a5fadf99402117ac3a0751b0c0
+ metadata.gz: 95f69c8cd00d33e04747f1447ca9fe40de88941869925662116427b14da7f0eb0a04c7eada0e00ea6c226dd8be90484b7e97e6d34f9c7f0a11333bfede37e03c
+ data.tar.gz: fa43e25469180c9d9e65f31b4d79554b35c090e79d8812586d614a77acafe04bb873f24ed58f739c4523696282bef8dfa05e0fa0358ca808bfd2df038bf3d55b
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,5 +1,10 @@
  # WaterDrop changelog

+ ## 2.4.0 (2022-07-28)
+ - Small refactor of the DataDog/Statsd listener to align for future extraction to `karafka-common`.
+ - Replace `dry-monitor` with home-brew notification layer (API compatible) and allow for usage with `ActiveSupport::Notifications`.
+ - Remove all the common code into `karafka-core` and add it as a dependency.
+
  ## 2.3.3 (2022-07-18)
  - Replace `dry-validation` with home-brew validation layer and drop direct dependency on `dry-validation`.
  - Remove indirect dependency on dry-configurable from DataDog listener (no changes required).
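The notification-layer swap is visible in the `instrumentation/monitor.rb` hunk later in this diff: the new `Monitor` accepts any API-compatible notifications bus plus an optional namespace. A minimal sketch of what that enables; the `config.monitor` setting name and the `'waterdrop'` namespace are assumptions, not shown in this diff:

```ruby
require 'active_support/notifications'
require 'waterdrop'

# Per the new Monitor signature, ActiveSupport::Notifications can stand in for
# the internal notifications bus; namespace and config.monitor are assumed here.
monitor = WaterDrop::Instrumentation::Monitor.new(
  ActiveSupport::Notifications,
  'waterdrop'
)

producer = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
  config.monitor = monitor
end

# Events such as 'message.produced_sync' would then flow through
# ActiveSupport::Notifications under the assumed 'waterdrop' namespace.
```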
data/Gemfile.lock CHANGED
@@ -1,16 +1,15 @@
  PATH
  remote: .
  specs:
- waterdrop (2.3.3)
- concurrent-ruby (>= 1.1)
- dry-monitor (~> 0.5)
+ waterdrop (2.4.0)
+ karafka-core (~> 2.0)
  rdkafka (>= 0.10)
  zeitwerk (~> 2.3)

  GEM
  remote: https://rubygems.org/
  specs:
- activesupport (7.0.3)
+ activesupport (7.0.3.1)
  concurrent-ruby (~> 1.0, >= 1.0.2)
  i18n (>= 1.6, < 2)
  minitest (>= 5.1)
@@ -19,26 +18,15 @@ GEM
  concurrent-ruby (1.1.10)
  diff-lcs (1.5.0)
  docile (1.4.0)
- dry-configurable (0.15.0)
- concurrent-ruby (~> 1.0)
- dry-core (~> 0.6)
- dry-core (0.8.0)
- concurrent-ruby (~> 1.0)
- dry-events (0.3.0)
- concurrent-ruby (~> 1.0)
- dry-core (~> 0.5, >= 0.5)
- dry-monitor (0.6.1)
- dry-configurable (~> 0.13, >= 0.13.0)
- dry-core (~> 0.5, >= 0.5)
- dry-events (~> 0.2)
- zeitwerk (~> 2.5)
  factory_bot (6.2.1)
  activesupport (>= 5.0.0)
  ffi (1.15.5)
- i18n (1.10.0)
+ i18n (1.12.0)
  concurrent-ruby (~> 1.0)
+ karafka-core (2.0.0)
+ concurrent-ruby (>= 1.1)
  mini_portile2 (2.8.0)
- minitest (5.16.0)
+ minitest (5.16.2)
  rake (13.0.6)
  rdkafka (0.12.0)
  ffi (~> 1.15)
@@ -63,12 +51,11 @@ GEM
  simplecov_json_formatter (~> 0.1)
  simplecov-html (0.12.3)
  simplecov_json_formatter (0.1.4)
- tzinfo (2.0.4)
+ tzinfo (2.0.5)
  concurrent-ruby (~> 1.0)
  zeitwerk (2.6.0)

  PLATFORMS
- arm64-darwin-21
  x86_64-linux

  DEPENDENCIES
data/README.md CHANGED
@@ -2,7 +2,7 @@

  **Note**: Documentation presented here refers to WaterDrop `2.x`.

- WaterDrop `2.x` does **not** work with Karafka `1.*` and aims to either work as a standalone producer outside of Karafka `1.*` ecosystem or as a part of soon to be released Karafka `2.0.*`.
+ WaterDrop `2.x` works with Karafka `2.*` and aims to either work as a standalone producer or as a part of the Karafka `2.*`.

  Please refer to [this](https://github.com/karafka/waterdrop/tree/1.4) branch and its documentation for details about WaterDrop `1.*` usage.

@@ -10,17 +10,17 @@ Please refer to [this](https://github.com/karafka/waterdrop/tree/1.4) branch and
  [![Gem Version](https://badge.fury.io/rb/waterdrop.svg)](http://badge.fury.io/rb/waterdrop)
  [![Join the chat at https://slack.karafka.io](https://raw.githubusercontent.com/karafka/misc/master/slack.svg)](https://slack.karafka.io)

- Gem used to send messages to Kafka in an easy way with an extra validation layer. It is a part of the [Karafka](https://github.com/karafka/karafka) ecosystem.
+ A gem to send messages to Kafka easily with an extra validation layer. It is a part of the [Karafka](https://github.com/karafka/karafka) ecosystem.

  It:

- - Is thread safe
+ - Is thread-safe
  - Supports sync producing
  - Supports async producing
  - Supports buffering
  - Supports producing messages to multiple clusters
  - Supports multiple delivery policies
- - Works with Kafka 1.0+ and Ruby 2.6+
+ - Works with Kafka `1.0+` and Ruby `2.7+`

  ## Table of contents

@@ -30,6 +30,8 @@ It:
  * [Kafka configuration options](#kafka-configuration-options)
  - [Usage](#usage)
  * [Basic usage](#basic-usage)
+ * [Using WaterDrop across the application and with Ruby on Rails](#using-waterdrop-across-the-application-and-with-ruby-on-rails)
+ * [Using WaterDrop with a connection-pool](#using-waterdrop-with-a-connection-pool)
  * [Buffering](#buffering)
  + [Using WaterDrop to buffer messages based on the application logic](#using-waterdrop-to-buffer-messages-based-on-the-application-logic)
  + [Using WaterDrop with rdkafka buffers to achieve periodic auto-flushing](#using-waterdrop-with-rdkafka-buffers-to-achieve-periodic-auto-flushing)
@@ -42,6 +44,8 @@ It:

  ## Installation

+ **Note**: If you want to both produce and consume messages, please use [Karafka](https://github.com/karafka/karafka/). It integrates WaterDrop automatically.
+
  Add this to your Gemfile:

  ```ruby
@@ -56,7 +60,7 @@ bundle install

  ## Setup

- WaterDrop is a complex tool, that contains multiple configuration options. To keep everything organized, all the configuration options were divided into two groups:
+ WaterDrop is a complex tool that contains multiple configuration options. To keep everything organized, all the configuration options were divided into two groups:

  - WaterDrop options - options directly related to WaterDrop and its components
  - Kafka driver options - options related to `rdkafka`
@@ -103,8 +107,6 @@ You can create producers with different `kafka` settings. Documentation of the a

  ## Usage

- Please refer to the [documentation](https://www.rubydoc.info/gems/waterdrop) in case you're interested in the more advanced API.
-
  ### Basic usage

  To send Kafka messages, just create a producer and use it:
@@ -157,6 +159,41 @@ Here are all the things you can provide in the message hash:

  Keep in mind, that message you want to send should be either binary or stringified (to_s, to_json, etc).

+ ### Using WaterDrop across the application and with Ruby on Rails
+
+ If you plan to both produce and consume messages using Kafka, you should install and use [Karafka](https://github.com/karafka/karafka). It integrates automatically with Ruby on Rails applications and auto-configures WaterDrop producer to make it accessible via `Karafka#producer` method.
+
+ If you want to only produce messages from within your application, since WaterDrop is thread-safe you can create a single instance in an initializer like so:
+
+ ```ruby
+ KAFKA_PRODUCER = WaterDrop::Producer.new
+
+ KAFKA_PRODUCER.setup do |config|
+ config.kafka = { 'bootstrap.servers': 'localhost:9092' }
+ end
+
+ # And just dispatch messages
+ KAFKA_PRODUCER.produce_sync(topic: 'my-topic', payload: 'my message')
+ ```
+
+ ### Using WaterDrop with a connection-pool
+
+ While WaterDrop is thread-safe, there is no problem in using it with a connection pool inside high-intensity applications. The only thing worth keeping in mind, is that WaterDrop instances should be shutdown before the application is closed.
+
+ ```ruby
+ KAFKA_PRODUCERS_CP = ConnectionPool.new do
+ WaterDrop::Producer.new do |config|
+ config.kafka = { 'bootstrap.servers': 'localhost:9092' }
+ end
+ end
+
+ KAFKA_PRODUCERS_CP.with do |producer|
+ producer.produce_async(topic: 'my-topic', payload: 'my message')
+ end
+
+ KAFKA_PRODUCERS_CP.shutdown { |producer| producer.close }
+ ```
+
  ### Buffering

  WaterDrop producers support buffering messages in their internal buffers and on the `rdkafka` level via `queue.buffering.*` set of settings.
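For context, the application-level side of that buffering looks roughly like the sketch below; the `#buffer` and `#flush_sync` calls follow the README's buffering section, which this hunk only references, so treat the exact API as an assumption:

```ruby
# Sketch: accumulate messages in the producer's internal buffer based on
# application logic, then dispatch everything buffered so far in one go.
producer.buffer(topic: 'events', payload: 'event 1')
producer.buffer(topic: 'events', payload: 'event 2')

producer.flush_sync
```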
@@ -316,6 +353,8 @@ producer.monitor.subscribe(listener)

  You can also find [here](https://github.com/karafka/waterdrop/blob/master/lib/waterdrop/instrumentation/vendors/datadog/dashboard.json) a ready to import DataDog dashboard configuration file that you can use to monitor all of your producers.

+ ![Example WaterDrop DD dashboard](https://raw.githubusercontent.com/karafka/misc/master/printscreens/waterdrop_dd_dashboard_example.png)
+
  ### Error notifications

  WaterDrop allows you to listen to all errors that occur while producing messages and in its internal background threads. Things like reconnecting to Kafka upon network errors and others unrelated to publishing messages are all available under `error.occurred` notification key. You can subscribe to this event to ensure your setup is healthy and without any problems that would otherwise go unnoticed as long as messages are delivered.
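A minimal subscription sketch for that event; the `:error` and `:type` payload keys are assumed from the WaterDrop documentation, not shown in this hunk:

```ruby
# Sketch: watch for errors raised in WaterDrop's internals and background threads
producer.monitor.subscribe('error.occurred') do |event|
  error = event[:error] # assumed payload key
  type  = event[:type]  # assumed payload key

  warn "WaterDrop error (#{type}): #{error}"
end
```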
@@ -5,7 +5,7 @@
  module WaterDrop
  # Configuration object for setting up all options required by WaterDrop
  class Config
- include Configurable
+ include ::Karafka::Core::Configurable

  # Defaults for kafka settings, that will be overwritten only if not present already
  KAFKA_DEFAULTS = {
@@ -77,7 +77,7 @@ module WaterDrop
  # Propagates the kafka setting defaults unless they are already present
  # This makes it easier to set some values that users usually don't change but still allows them
  # to overwrite the whole hash if they want to
- # @param config [Dry::Configurable::Config] dry config of this producer
+ # @param config [WaterDrop::Configurable::Node] config of this producer
  def merge_kafka_defaults!(config)
  KAFKA_DEFAULTS.each do |key, value|
  next if config.kafka.key?(key)
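The merge semantics described in the comment above amount to "defaults fill in only the keys the user left out". A plain-Ruby sketch of that behaviour (the default key/value shown is illustrative, not taken from `KAFKA_DEFAULTS` in this hunk):

```ruby
# Illustrative defaults and user-provided kafka hash
defaults   = { 'client.id': 'waterdrop' }
user_kafka = { 'bootstrap.servers': 'localhost:9092' }

# Same rule as merge_kafka_defaults!: skip keys the user already set
defaults.each do |key, value|
  next if user_kafka.key?(key)

  user_kafka[key] = value
end

user_kafka #=> { 'bootstrap.servers': 'localhost:9092', 'client.id': 'waterdrop' }
```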
@@ -3,7 +3,7 @@
  module WaterDrop
  module Contracts
  # Contract with validation rules for WaterDrop configuration details
- class Config < Contractable::Contract
+ class Config < ::Karafka::Core::Contractable::Contract
  configure do |config|
  config.error_messages = YAML.safe_load(
  File.read(
@@ -12,13 +12,13 @@ module WaterDrop
  ).fetch('en').fetch('validations').fetch('config')
  end

- required(:id) { |id| id.is_a?(String) && !id.empty? }
- required(:logger) { |logger| !logger.nil? }
- required(:deliver) { |deliver| [true, false].include?(deliver) }
- required(:max_payload_size) { |ps| ps.is_a?(Integer) && ps >= 1 }
- required(:max_wait_timeout) { |mwt| mwt.is_a?(Numeric) && mwt >= 0 }
- required(:wait_timeout) { |wt| wt.is_a?(Numeric) && wt.positive? }
- required(:kafka) { |kafka| kafka.is_a?(Hash) && !kafka.empty? }
+ required(:id) { |val| val.is_a?(String) && !val.empty? }
+ required(:logger) { |val| !val.nil? }
+ required(:deliver) { |val| [true, false].include?(val) }
+ required(:max_payload_size) { |val| val.is_a?(Integer) && val >= 1 }
+ required(:max_wait_timeout) { |val| val.is_a?(Numeric) && val >= 0 }
+ required(:wait_timeout) { |val| val.is_a?(Numeric) && val.positive? }
+ required(:kafka) { |val| val.is_a?(Hash) && !val.empty? }

  # rdkafka allows both symbols and strings as keys for config but then casts them to strings
  # This can be confusing, so we expect all keys to be symbolized
@@ -4,7 +4,7 @@ module WaterDrop
  module Contracts
  # Contract with validation rules for validating that all the message options that
  # we provide to producer ale valid and usable
- class Message < Contractable::Contract
+ class Message < ::Karafka::Core::Contractable::Contract
  configure do |config|
  config.error_messages = YAML.safe_load(
  File.read(
@@ -26,13 +26,13 @@ module WaterDrop
  @max_payload_size = max_payload_size
  end

- required(:topic) { |topic| topic.is_a?(String) && TOPIC_REGEXP.match?(topic) }
- required(:payload) { |payload| payload.is_a?(String) }
- optional(:key) { |key| key.nil? || (key.is_a?(String) && !key.empty?) }
- optional(:partition) { |partition| partition.is_a?(Integer) && partition >= -1 }
- optional(:partition_key) { |p_key| p_key.nil? || (p_key.is_a?(String) && !p_key.empty?) }
- optional(:timestamp) { |ts| ts.nil? || (ts.is_a?(Time) || ts.is_a?(Integer)) }
- optional(:headers) { |headers| headers.nil? || headers.is_a?(Hash) }
+ required(:topic) { |val| val.is_a?(String) && TOPIC_REGEXP.match?(val) }
+ required(:payload) { |val| val.is_a?(String) }
+ optional(:key) { |val| val.nil? || (val.is_a?(String) && !val.empty?) }
+ optional(:partition) { |val| val.is_a?(Integer) && val >= -1 }
+ optional(:partition_key) { |val| val.nil? || (val.is_a?(String) && !val.empty?) }
+ optional(:timestamp) { |val| val.nil? || (val.is_a?(Time) || val.is_a?(Integer)) }
+ optional(:headers) { |val| val.nil? || val.is_a?(Hash) }

  virtual do |config, errors|
  next true unless errors.empty?
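An illustrative dispatch that satisfies the message rules above (a sketch with made-up values; `produce_sync` is the call shown in the README section earlier in this diff):

```ruby
producer.produce_sync(
  topic: 'user.events',            # String matching TOPIC_REGEXP
  payload: { id: 1 }.to_json,      # payload must already be a String
  key: 'user-1',                   # optional, non-empty String
  partition: 0,                    # optional, Integer >= -1
  headers: { 'source' => 'web' }   # optional Hash
)
```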
@@ -2,41 +2,18 @@

  module WaterDrop
  module Instrumentation
- # Monitor is used to hookup external monitoring services to monitor how WaterDrop works
- # Since it is a pub-sub based on dry-monitor, you can use as many subscribers/loggers at the
- # same time, which means that you might have for example file logging and NewRelic at the same
- # time
- # @note This class acts as a singleton because we are only permitted to have single monitor
- # per running process (just as logger)
- class Monitor < Dry::Monitor::Notifications
- # List of events that we support in the system and to which a monitor client can hook up
- # @note The non-error once support timestamp benchmarking
- EVENTS = %w[
- producer.closed
-
- message.produced_async
- message.produced_sync
- message.acknowledged
- message.buffered
-
- messages.produced_async
- messages.produced_sync
- messages.buffered
-
- buffer.flushed_async
- buffer.flushed_sync
-
- statistics.emitted
-
- error.occurred
- ].freeze
-
- private_constant :EVENTS
-
- # @return [WaterDrop::Instrumentation::Monitor] monitor instance for system instrumentation
- def initialize
- super(:waterdrop)
- EVENTS.each(&method(:register_event))
+ # WaterDrop instrumentation monitor that we use to publish events
+ # By default uses our internal notifications bus but can be used with
+ # `ActiveSupport::Notifications` as well
+ class Monitor < ::Karafka::Core::Monitoring::Monitor
+ # @param notifications_bus [Object] either our internal notifications bus or
+ # `ActiveSupport::Notifications`
+ # @param namespace [String, nil] namespace for events or nil if no namespace
+ def initialize(
+ notifications_bus = WaterDrop::Instrumentation::Notifications.new,
+ namespace = nil
+ )
+ super(notifications_bus, namespace)
  end
  end
  end
@@ -0,0 +1,38 @@
+ # frozen_string_literal: true
+
+ module WaterDrop
+ module Instrumentation
+ # Instrumented is used to hookup external monitoring services to monitor how WaterDrop works
+ class Notifications < ::Karafka::Core::Monitoring::Notifications
+ # List of events that we support in the system and to which a monitor client can hook up
+ # @note The non-error once support timestamp benchmarking
+ EVENTS = %w[
+ producer.closed
+
+ message.produced_async
+ message.produced_sync
+ message.acknowledged
+ message.buffered
+
+ messages.produced_async
+ messages.produced_sync
+ messages.buffered
+
+ buffer.flushed_async
+ buffer.flushed_sync
+
+ statistics.emitted
+
+ error.occurred
+ ].freeze
+
+ private_constant :EVENTS
+
+ # @return [WaterDrop::Instrumentation::Monitor] monitor instance for system instrumentation
+ def initialize
+ super
+ EVENTS.each { |event| register_event(event) }
+ end
+ end
+ end
+ end
@@ -11,7 +11,7 @@ module WaterDrop
  #
  # @note You need to setup the `dogstatsd-ruby` client and assign it
  class Listener
- include WaterDrop::Configurable
+ include ::Karafka::Core::Configurable
  extend Forwardable

  def_delegators :config, :client, :rd_kafka_metrics, :namespace, :default_tags
@@ -66,7 +66,7 @@
  # Hooks up to WaterDrop instrumentation for emitted statistics
  #
- # @param event [Dry::Events::Event]
+ # @param event [WaterDrop::Monitor::Event]
  def on_statistics_emitted(event)
  statistics = event[:statistics]

@@ -77,22 +77,15 @@
  # Increases the errors count by 1
  #
- # @param _event [Dry::Events::Event]
+ # @param _event [WaterDrop::Monitor::Event]
  def on_error_occurred(_event)
- client.count(
- namespaced_metric('error_occurred'),
- 1,
- tags: default_tags
- )
+ count('error_occurred', 1, tags: default_tags)
  end

  # Increases acknowledged messages counter
- # @param _event [Dry::Events::Event]
+ # @param _event [WaterDrop::Monitor::Event]
  def on_message_acknowledged(_event)
- client.increment(
- namespaced_metric('acknowledged'),
- tags: default_tags
- )
+ increment('acknowledged', tags: default_tags)
  end

  %i[
@@ -100,12 +93,12 @@
  produced_async
  ].each do |event_scope|
  class_eval <<~METHODS, __FILE__, __LINE__ + 1
- # @param event [Dry::Events::Event]
+ # @param event [WaterDrop::Monitor::Event]
  def on_message_#{event_scope}(event)
  report_message(event[:message][:topic], :#{event_scope})
  end

- # @param event [Dry::Events::Event]
+ # @param event [WaterDrop::Monitor::Event]
  def on_messages_#{event_scope}(event)
  event[:messages].each do |message|
  report_message(message[:topic], :#{event_scope})
@@ -120,10 +113,10 @@
  messages_buffered
  ].each do |event_scope|
  class_eval <<~METHODS, __FILE__, __LINE__ + 1
- # @param event [Dry::Events::Event]
+ # @param event [WaterDrop::Monitor::Event]
  def on_#{event_scope}(event)
- client.histogram(
- namespaced_metric('buffer.size'),
+ histogram(
+ 'buffer.size',
  event[:buffer].size,
  tags: default_tags
  )
@@ -138,7 +131,7 @@
  flushed_async
  ].each do |event_scope|
  class_eval <<~METHODS, __FILE__, __LINE__ + 1
- # @param event [Dry::Events::Event]
+ # @param event [WaterDrop::Monitor::Event]
  def on_buffer_#{event_scope}(event)
  event[:messages].each do |message|
  report_message(message[:topic], :#{event_scope})
@@ -149,14 +142,28 @@

  private

+ %i[
+ count
+ gauge
+ histogram
+ increment
+ decrement
+ ].each do |metric_type|
+ class_eval <<~METHODS, __FILE__, __LINE__ + 1
+ def #{metric_type}(key, *args)
+ client.#{metric_type}(
+ namespaced_metric(key),
+ *args
+ )
+ end
+ METHODS
+ end
+
  # Report that a message has been produced to a topic.
  # @param topic [String] Kafka topic
  # @param method_name [Symbol] method from which this message operation comes
  def report_message(topic, method_name)
- client.increment(
- namespaced_metric(method_name),
- tags: default_tags + ["topic:#{topic}"]
- )
+ increment(method_name, tags: default_tags + ["topic:#{topic}"])
  end

  # Wraps metric name in listener's namespace
@@ -172,9 +179,9 @@ module WaterDrop
  def report_metric(metric, statistics)
  case metric.scope
  when :root
- client.public_send(
+ public_send(
  metric.type,
- namespaced_metric(metric.name),
+ metric.name,
  statistics.fetch(*metric.key_location),
  tags: default_tags
  )
@@ -185,9 +192,9 @@
  # node ids
  next if broker_statistics['nodeid'] == -1

- client.public_send(
+ public_send(
  metric.type,
- namespaced_metric(metric.name),
+ metric.name,
  broker_statistics.dig(*metric.key_location),
  tags: default_tags + ["broker:#{broker_statistics['nodename']}"]
  )
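For reference, wiring this listener up follows the pattern from the README's monitoring section referenced earlier in this diff. A sketch; the block-based constructor and the `Datadog::Statsd` client setup are assumptions from the WaterDrop and dogstatsd-ruby docs, not shown in this hunk:

```ruby
require 'socket'
require 'datadog/statsd'
require 'waterdrop'

# Assumed constructor: the listener yields its config, exposing the settings
# delegated above (client, default_tags, ...)
listener = WaterDrop::Instrumentation::Vendors::Datadog::Listener.new do |config|
  config.client = Datadog::Statsd.new('localhost', 8125)
  config.default_tags = ["host:#{Socket.gethostname}"]
end

# Subscribing the listener object routes events to the on_* callbacks above
producer.monitor.subscribe(listener)
```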
@@ -3,5 +3,5 @@
  # WaterDrop library
  module WaterDrop
  # Current WaterDrop version
- VERSION = '2.3.3'
+ VERSION = '2.4.0'
  end
data/lib/waterdrop.rb CHANGED
@@ -3,11 +3,8 @@
  # External components
  # delegate should be removed because we don't need it, we just add it because of ruby-kafka
  %w[
+ karafka-core
  forwardable
- concurrent/array
- concurrent/hash
- yaml
- dry/monitor/notifications
  rdkafka
  json
  zeitwerk
data/waterdrop.gemspec CHANGED
@@ -16,8 +16,7 @@ Gem::Specification.new do |spec|
  spec.description = spec.summary
  spec.license = 'MIT'

- spec.add_dependency 'concurrent-ruby', '>= 1.1'
- spec.add_dependency 'dry-monitor', '~> 0.5'
+ spec.add_dependency 'karafka-core', '~> 2.0'
  spec.add_dependency 'rdkafka', '>= 0.10'
  spec.add_dependency 'zeitwerk', '~> 2.3'

data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: waterdrop
  version: !ruby/object:Gem::Version
- version: 2.3.3
+ version: 2.4.0
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -34,36 +34,22 @@ cert_chain:
  R2P11bWoCtr70BsccVrN8jEhzwXngMyI2gVt750Y+dbTu1KgRqZKp/ECe7ZzPzXj
  pIy9vHxTANKYVyI4qj8OrFdEM5BQNu8oQpL0iQ==
  -----END CERTIFICATE-----
- date: 2022-07-18 00:00:00.000000000 Z
+ date: 2022-07-28 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
- name: concurrent-ruby
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - ">="
- - !ruby/object:Gem::Version
- version: '1.1'
- type: :runtime
- prerelease: false
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - ">="
- - !ruby/object:Gem::Version
- version: '1.1'
- - !ruby/object:Gem::Dependency
- name: dry-monitor
+ name: karafka-core
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '0.5'
+ version: '2.0'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '0.5'
+ version: '2.0'
  - !ruby/object:Gem::Dependency
  name: rdkafka
  requirement: !ruby/object:Gem::Requirement
@@ -116,13 +102,6 @@ files:
  - docker-compose.yml
  - lib/waterdrop.rb
  - lib/waterdrop/config.rb
- - lib/waterdrop/configurable.rb
- - lib/waterdrop/configurable/leaf.rb
- - lib/waterdrop/configurable/node.rb
- - lib/waterdrop/contractable.rb
- - lib/waterdrop/contractable/contract.rb
- - lib/waterdrop/contractable/result.rb
- - lib/waterdrop/contractable/rule.rb
  - lib/waterdrop/contracts.rb
  - lib/waterdrop/contracts/config.rb
  - lib/waterdrop/contracts/message.rb
@@ -135,6 +114,7 @@ files:
  - lib/waterdrop/instrumentation/callbacks_manager.rb
  - lib/waterdrop/instrumentation/logger_listener.rb
  - lib/waterdrop/instrumentation/monitor.rb
+ - lib/waterdrop/instrumentation/notifications.rb
  - lib/waterdrop/instrumentation/vendors/datadog/dashboard.json
  - lib/waterdrop/instrumentation/vendors/datadog/listener.rb
  - lib/waterdrop/patches/rdkafka/bindings.rb
metadata.gz.sig CHANGED
Binary file
@@ -1,8 +0,0 @@
- # frozen_string_literal: true
-
- module WaterDrop
- module Configurable
- # Single end config value representation
- Leaf = Struct.new(:name, :default, :constructor)
- end
- end
@@ -1,100 +0,0 @@
- # frozen_string_literal: true
-
- module WaterDrop
- module Configurable
- # Single non-leaf node
- # This is a core component for the configurable settings
- #
- # The idea here is simple: we collect settings (leafs) and children (nodes) information and we
- # only compile/initialize the values prior to user running the `#configure` API. This API needs
- # to run prior to using the result stuff even if there is nothing to configure
- class Node
- attr_reader :name, :nestings
-
- # We need to be able to redefine children for deep copy
- attr_accessor :children
-
- # @param name [Symbol] node name
- # @param nestings [Proc] block for nested settings
- def initialize(name, nestings = ->(_) {})
- @name = name
- @children = []
- @nestings = nestings
- instance_eval(&nestings)
- end
-
- # Allows for a single leaf or nested node definition
- #
- # @param name [Symbol] setting or nested node name
- # @param default [Object] default value
- # @param constructor [#call, nil] callable or nil
- # @param block [Proc] block for nested settings
- def setting(name, default: nil, constructor: nil, &block)
- @children << if block
- Node.new(name, block)
- else
- Leaf.new(name, default, constructor)
- end
- end
-
- # Allows for the configuration and setup of the settings
- #
- # Compile settings, allow for overrides via yielding
- # @return [Node] returns self after configuration
- def configure
- compile
- yield(self) if block_given?
- self
- end
-
- # @return [Hash] frozen config hash representation
- def to_h
- config = {}
-
- @children.each do |value|
- config[value.name] = if value.is_a?(Leaf)
- public_send(value.name)
- else
- value.to_h
- end
- end
-
- config.freeze
- end
-
- # Deep copies all the children nodes to allow us for templates building on a class level and
- # non-side-effect usage on an instance/inherited.
- # @return [Node] duplicated node
- def deep_dup
- dupped = Node.new(name, nestings)
-
- dupped.children += children.map do |value|
- value.is_a?(Leaf) ? value.dup : value.deep_dup
- end
-
- dupped
- end
-
- # Converts the settings definitions into end children
- # @note It runs once, after things are compiled, they will not be recompiled again
- def compile
- @children.each do |value|
- # Do not redefine something that was already set during compilation
- # This will allow us to reconfigure things and skip override with defaults
- next if respond_to?(value.name)
-
- singleton_class.attr_accessor value.name
-
- initialized = if value.is_a?(Leaf)
- value.constructor ? value.constructor.call(value.default) : value.default
- else
- value.compile
- value
- end
-
- public_send("#{value.name}=", initialized)
- end
- end
- end
- end
- end
@@ -1,71 +0,0 @@
- # frozen_string_literal: true
-
- module WaterDrop
- # A simple dry-configuration API compatible module for defining settings with defaults and a
- # constructor.
- module Configurable
- # A simple settings layer that works similar to dry-configurable
- # It allows us to define settings on a class and per instance level with templating on a class
- # level. It handles inheritance and allows for nested settings.
- #
- # @note The core settings template needs to be defined on a class level
- class << self
- # Sets up all the class methods and inits the core root node.
- # Useful when only per class settings are needed as does not include instance methods
- # @param base [Class] class that we extend
- def extended(base)
- base.extend ClassMethods
- end
-
- # Sets up all the class and instance methods and inits the core root node
- #
- # @param base [Class] class to which we want to add configuration
- #
- # Needs to be used when per instance configuration is needed
- def included(base)
- base.include InstanceMethods
- base.extend self
- end
- end
-
- # Instance related methods
- module InstanceMethods
- # @return [Node] config root node
- def config
- @config ||= self.class.config.deep_dup
- end
-
- # Allows for a per instance configuration (if needed)
- # @param block [Proc] block for configuration
- def configure(&block)
- config.configure(&block)
- end
- end
-
- # Class related methods
- module ClassMethods
- # @return [Node] root node for the settings
- def config
- return @config if @config
-
- # This will handle inheritance
- @config = if superclass.respond_to?(:config)
- superclass.config.deep_dup
- else
- Node.new(:root)
- end
- end
-
- # Allows for a per class configuration (if needed)
- # @param block [Proc] block for configuration
- def configure(&block)
- config.configure(&block)
- end
-
- # Pipes the settings setup to the config root node
- def setting(...)
- config.setting(...)
- end
- end
- end
- end
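Since this settings layer now lives in `karafka-core`, here is a short usage sketch of the API being removed above; the class and setting names below are illustrative:

```ruby
# Illustrative class using the (now extracted) Configurable layer defined above
class ExampleConfig
  include WaterDrop::Configurable

  setting(:id, default: 'example')
  setting(:kafka) do
    setting(:servers, default: 'localhost:9092')
  end
end

config = ExampleConfig.new

# #configure compiles the settings tree and yields the root node for overrides
config.configure do |node|
  node.id = 'custom-id'
end

config.config.id            #=> 'custom-id'
config.config.kafka.servers #=> 'localhost:9092'
config.config.to_h          #=> { id: 'custom-id', kafka: { servers: 'localhost:9092' } }
```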
@@ -1,183 +0,0 @@
- # frozen_string_literal: true
-
- module WaterDrop
- module Contractable
- # Base contract for all the contracts that check data format
- #
- # @note This contract does NOT support rules inheritance as it was never needed in Karafka
- class Contract
- extend Configurable
-
- # Yaml based error messages data
- setting(:error_messages)
-
- # Class level API definitions
- class << self
- # @return [Array<Rule>] all the validation rules defined for a given contract
- attr_reader :rules
-
- # Allows for definition of a scope/namespace for nested validations
- #
- # @param path [Symbol] path in the hash for nesting
- # @param block [Proc] nested rule code or more nestings inside
- #
- # @example
- # nested(:key) do
- # required(:inside) { |inside| inside.is_a?(String) }
- # end
- def nested(path, &block)
- init_accu
- @nested << path
- instance_eval(&block)
- @nested.pop
- end
-
- # Defines a rule for a required field (required means, that will automatically create an
- # error if missing)
- #
- # @param keys [Array<Symbol>] single or full path
- # @param block [Proc] validation rule
- def required(*keys, &block)
- init_accu
- @rules << Rule.new(@nested + keys, :required, block).freeze
- end
-
- # @param keys [Array<Symbol>] single or full path
- # @param block [Proc] validation rule
- def optional(*keys, &block)
- init_accu
- @rules << Rule.new(@nested + keys, :optional, block).freeze
- end
-
- # @param block [Proc] validation rule
- #
- # @note Virtual rules have different result expectations. Please see contracts or specs for
- # details.
- def virtual(&block)
- init_accu
- @rules << Rule.new([], :virtual, block).freeze
- end
-
- private
-
- # Initializes nestings and rules building accumulator
- def init_accu
- @nested ||= []
- @rules ||= []
- end
- end
-
- # Runs the validation
- #
- # @param data [Hash] hash with data we want to validate
- # @return [Result] validaton result
- def call(data)
- errors = []
-
- self.class.rules.map do |rule|
- case rule.type
- when :required
- validate_required(data, rule, errors)
- when :optional
- validate_optional(data, rule, errors)
- when :virtual
- validate_virtual(data, rule, errors)
- end
- end
-
- Result.new(errors, self)
- end
-
- # @param data [Hash] data for validation
- # @param error_class [Class] error class that should be used when validation fails
- # @return [Boolean] true
- # @raise [StandardError] any error provided in the error_class that inherits from the
- # standard error
- def validate!(data, error_class)
- result = call(data)
-
- return true if result.success?
-
- raise error_class, result.errors
- end
-
- private
-
- # Runs validation for rules on fields that are required and adds errors (if any) to the
- # errors array
- #
- # @param data [Hash] input hash
- # @param rule [Rule] validation rule
- # @param errors [Array] array with errors from previous rules (if any)
- def validate_required(data, rule, errors)
- for_checking = dig(data, rule.path)
-
- if for_checking.first == :match
- result = rule.validator.call(for_checking.last, data, errors, self)
-
- return if result == true
-
- errors << [rule.path, result || :format]
- else
- errors << [rule.path, :missing]
- end
- end
-
- # Runs validation for rules on fields that are optional and adds errors (if any) to the
- # errors array
- #
- # @param data [Hash] input hash
- # @param rule [Rule] validation rule
- # @param errors [Array] array with errors from previous rules (if any)
- def validate_optional(data, rule, errors)
- for_checking = dig(data, rule.path)
-
- return unless for_checking.first == :match
-
- result = rule.validator.call(for_checking.last, data, errors, self)
-
- return if result == true
-
- errors << [rule.path, result || :format]
- end
-
- # Runs validation for rules on virtual fields (aggregates, etc) and adds errors (if any) to
- # the errors array
- #
- # @param data [Hash] input hash
- # @param rule [Rule] validation rule
- # @param errors [Array] array with errors from previous rules (if any)
- def validate_virtual(data, rule, errors)
- result = rule.validator.call(data, errors, self)
-
- return if result == true
-
- errors.push(*result)
- end
-
- # Tries to dig for a given key in a hash and returns it with indication whether or not it was
- # possible to find it (dig returns nil and we don't know if it wasn't the digged key value)
- #
- # @param data [Hash]
- # @param keys [Array<Symbol>]
- # @return [Array<Symbol, Object>] array where the first element is `:match` or `:miss` and
- # the digged value or nil if not found
- def dig(data, keys)
- current = data
- result = :match
-
- keys.each do |nesting|
- unless current.key?(nesting)
- result = :miss
-
- break
- end
-
- current = current[nesting]
- end
-
- [result, current]
- end
- end
- end
- end
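A compact sketch of how this (now extracted) contract API is used, mirroring the `Contracts::Config` and `Contracts::Message` classes earlier in this diff; the class and error messages below are illustrative:

```ruby
# Illustrative contract built on the API defined above
class ExampleContract < WaterDrop::Contractable::Contract
  configure do |config|
    config.error_messages = { 'id_format' => 'must be a non-empty string' }
  end

  required(:id)   { |val| val.is_a?(String) && !val.empty? }
  optional(:tags) { |val| val.nil? || val.is_a?(Array) }
end

result = ExampleContract.new.call(id: '')
result.success? #=> false
result.errors   #=> { id: 'must be a non-empty string' }

# validate! raises the given error class when validation fails
ExampleContract.new.validate!({ id: 'producer-1' }, ArgumentError) #=> true
```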
@@ -1,57 +0,0 @@
- # frozen_string_literal: true
-
- module WaterDrop
- module Contractable
- # Representation of a validaton result with resolved error messages
- class Result
- attr_reader :errors
-
- # Builds a result object and remaps (if needed) error keys to proper error messages
- #
- # @param errors [Array<Array>] array with sub-arrays with paths and error keys
- # @param contract [Object] contract that generated the error
- def initialize(errors, contract)
- # Short track to skip object allocation for the happy path
- if errors.empty?
- @errors = errors
- return
- end
-
- hashed = {}
-
- errors.each do |error|
- scope = error.first.map(&:to_s).join('.').to_sym
-
- # This will allow for usage of custom messages instead of yaml keys if needed
- hashed[scope] = if error.last.is_a?(String)
- error.last
- else
- build_message(contract, scope, error.last)
- end
- end
-
- @errors = hashed
- end
-
- # @return [Boolean] true if no errors
- def success?
- errors.empty?
- end
-
- private
-
- # Builds message based on the error messages
- # @param contract [Object] contract for which we build the result
- # @param scope [Symbol] path to the key that has an error
- # @param error_key [Symbol] error key for yaml errors lookup
- # @return [String] error message
- def build_message(contract, scope, error_key)
- messages = contract.class.config.error_messages
-
- messages.fetch(error_key.to_s) do
- messages.fetch("#{scope}_#{error_key}")
- end
- end
- end
- end
- end
@@ -1,8 +0,0 @@
- # frozen_string_literal: true
-
- module WaterDrop
- module Contractable
- # Representation of a single validation rule
- Rule = Struct.new(:path, :type, :validator)
- end
- end
@@ -1,13 +0,0 @@
- # frozen_string_literal: true
-
- module WaterDrop
- # Contract layer for WaterDrop and Karafka
- # It aims to be "dry-validation" like but smaller and easier to handle + without dependencies
- #
- # It allows for nested validations, etc
- #
- # @note It is thread-safe to run but validations definitions should happen before threads are
- # used.
- module Contractable
- end
- end