waterdrop 2.0.2 → 2.0.6

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 24189103d7583fac3c911f4b32244655ffa2cd1b5a95abe0e73dbd4344530c5b
- data.tar.gz: b4f5c0e95af559e0a0a929d14a3b56b39c4da5edc9b1d9ffe995e8b1852d9c14
+ metadata.gz: b2effa3419e6eb9a94cf46d6ccfde35ec3dd0579d60e679445436e792e1a30cb
+ data.tar.gz: 79aa906b30e78fa183cdb8a37df6c5fc0e4e7a599e350376eb34849d07f094df
  SHA512:
- metadata.gz: bd49152ec9f6325cf58d9d470ed1ccfdb0c5b5d9a899bcdca289500ab56008b1153db35bf90f8e5e167692be8c79cd5bf96bbf66f9f515f97922bb217a0544d5
- data.tar.gz: 32b36daf1f26227c58690fea5a6846c7599d1e8859e074ca3f2075cf0effa187cfbb4cb86a50ea22dfe169d3af54fd2e3232e14c569efcaa625f52166f227dbb
+ metadata.gz: 588b5f7ec397830a422e42b0d37cd189952f6648761c9cc93d3c50bd664d3f502e0775f731fabc1925fca1ae651c90ce3d3c96aeadc7de3edb79c5155c152211
+ data.tar.gz: b94efadeb84cc4033d5b08aeac372f92a2cb7fac8df4e58df62dca55535a152afa9d7078433841b17dd8906a6588ed90110d22cf9dcad9915b659367015be1f2
checksums.yaml.gz.sig CHANGED
Binary file
data/.github/workflows/ci.yml CHANGED
@@ -17,7 +17,7 @@ jobs:
  - '3.0'
  - '2.7'
  - '2.6'
- - 'jruby-head'
+ - 'jruby-9.3.1.0'
  include:
  - ruby: '3.0'
  coverage: 'true'
@@ -29,6 +29,12 @@ jobs:
  uses: ruby/setup-ruby@v1
  with:
  ruby-version: ${{matrix.ruby}}
+ - name: Run Kafka with docker-compose
+   # We need to give Kafka enough time to start and create all the needed topics, etc.
+   # If anyone has a better idea on how to do this smartly and easily, please contact me
+   run: |
+     docker-compose up -d
+     sleep 5
  - name: Install latest bundler
  run: |
  gem install bundler --no-document
@@ -37,8 +43,6 @@ jobs:
  run: |
  bundle config set without development
  bundle install --jobs 4 --retry 3
- - name: Run Kafka with docker-compose
-   run: docker-compose up -d
  - name: Run all tests
  env:
  GITHUB_COVERAGE: ${{matrix.coverage}}
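The `sleep 5` above is a fixed grace period so the broker can start and auto-create topics. Since the inline comment invites better ideas, here is a hedged sketch of an alternative readiness check in Ruby (a hypothetical helper script, not part of this release): poll the advertised port until the broker accepts TCP connections. An accepted connection alone does not prove topic creation has finished, so a short grace period may still be warranted.

```ruby
require 'socket'

# Poll for up to ~30s until the docker-compose Kafka listener accepts connections
30.times do
  TCPSocket.new('127.0.0.1', 9092).close
  break
rescue Errno::ECONNREFUSED, Errno::ETIMEDOUT
  sleep 1
end
```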
data/CHANGELOG.md CHANGED
@@ -1,5 +1,37 @@
  # WaterDrop changelog
 
+ ## 2.0.6 (2021-12-01)
+ - #218 - Fixes a case where dispatching callbacks at the same moment a new producer was created could cause a concurrency issue in the manager.
+ - Fix some unstable specs.
+
+ ## 2.0.5 (2021-11-28)
+
+ ### Bug fixes
+
+ - Fixes an issue where multiple producers would emit the stats of other producers, causing the same stats to be published several times (as many times as the number of producers). This could cause invalid reporting for multi-Kafka setups.
+ - Fixes a bug where emitted statistics would contain their first value as the first delta value for the first stats emitted.
+ - Fixes a bug where decorated statistics would include a delta for a root field with non-numeric values.
+
+ ### Changes and features
+ - Introduces support for error callback instrumentation notifications under the `error.emitted` monitor key for tracking background errors that occur on the producer (disconnects, etc.).
+ - Removes the `:producer` key from `statistics.emitted` and replaces it with `:producer_id` so the whole producer is not injected into the payload.
+ - Removes the `:producer` key from `message.acknowledged` and replaces it with `:producer_id` so the whole producer is not injected into the payload.
+ - Cleanup and refactoring of the callbacks support to simplify the API and make it work the way Rdkafka does things.
+ - Introduces a callbacks manager concept that will also be used in Karafka `2.0` for both statistics and error tracking per client.
+ - Sets the default Kafka `client.id` to `waterdrop` when not set.
+ - Updates specs to always emit statistics for better test coverage.
+ - Adds statistics and errors integration specs running against Kafka.
+ - Replaces the direct `RSpec.describe` reference with auto-discovery.
+ - Patches `rdkafka` to provide the functionality needed for granular callback support.
+
+ ## 2.0.4 (2021-09-19)
+ - Update `dry-*` to the recent versions and update the settings syntax to match.
+ - Update the Zeitwerk requirement.
+
+ ## 2.0.3 (2021-09-05)
+ - Remove the rdkafka patch in favour of spec topic pre-creation.
+ - Do not close a client that was never used upon closing the producer.
+
  ## 2.0.2 (2021-08-13)
  - Add support for `partition_key`
  - Switch license from `LGPL-3.0` to `MIT`
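For listeners, the 2.0.5 payload change means subscribers that previously read `event[:producer]` now receive the id only. A minimal sketch of a `message.acknowledged` subscriber under the new contract (the broker address is illustrative):

```ruby
producer = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
end

# Since 2.0.5 the payload carries :producer_id instead of the whole :producer instance
producer.monitor.subscribe('message.acknowledged') do |event|
  puts "ack from #{event[:producer_id]} (partition #{event[:partition]}, offset #{event[:offset]})"
end
```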
data/Gemfile.lock CHANGED
@@ -1,18 +1,18 @@
  PATH
  remote: .
  specs:
- waterdrop (2.0.2)
+ waterdrop (2.0.6)
  concurrent-ruby (>= 1.1)
- dry-configurable (~> 0.8)
- dry-monitor (~> 0.3)
- dry-validation (~> 1.3)
- rdkafka (>= 0.6.0)
- zeitwerk (~> 2.1)
+ dry-configurable (~> 0.13)
+ dry-monitor (~> 0.5)
+ dry-validation (~> 1.7)
+ rdkafka (>= 0.10)
+ zeitwerk (~> 2.3)
 
  GEM
  remote: https://rubygems.org/
  specs:
- activesupport (6.1.4)
+ activesupport (6.1.4.1)
  concurrent-ruby (~> 1.0, >= 1.0.2)
  i18n (>= 1.6, < 2)
  minitest (>= 5.1)
@@ -22,15 +22,14 @@ GEM
  concurrent-ruby (1.1.9)
  diff-lcs (1.4.4)
  docile (1.4.0)
- dry-configurable (0.12.1)
+ dry-configurable (0.13.0)
  concurrent-ruby (~> 1.0)
- dry-core (~> 0.5, >= 0.5.0)
- dry-container (0.8.0)
+ dry-core (~> 0.6)
+ dry-container (0.9.0)
  concurrent-ruby (~> 1.0)
- dry-configurable (~> 0.1, >= 0.1.3)
+ dry-configurable (~> 0.13, >= 0.13.0)
  dry-core (0.7.1)
  concurrent-ruby (~> 1.0)
- dry-equalizer (0.3.0)
  dry-events (0.3.0)
  concurrent-ruby (~> 1.0)
  dry-core (~> 0.5, >= 0.5)
@@ -39,13 +38,13 @@ GEM
  dry-logic (1.2.0)
  concurrent-ruby (~> 1.0)
  dry-core (~> 0.5, >= 0.5)
- dry-monitor (0.4.0)
- dry-configurable (~> 0.5)
+ dry-monitor (0.5.0)
+ dry-configurable (~> 0.13, >= 0.13.0)
  dry-core (~> 0.5, >= 0.5)
  dry-events (~> 0.2)
- dry-schema (1.7.0)
+ dry-schema (1.8.0)
  concurrent-ruby (~> 1.0)
- dry-configurable (~> 0.8, >= 0.8.3)
+ dry-configurable (~> 0.13, >= 0.13.0)
  dry-core (~> 0.5, >= 0.5)
  dry-initializer (~> 3.0)
  dry-logic (~> 1.0)
@@ -56,25 +55,24 @@ GEM
  dry-core (~> 0.5, >= 0.5)
  dry-inflector (~> 0.1, >= 0.1.2)
  dry-logic (~> 1.0, >= 1.0.2)
- dry-validation (1.6.0)
+ dry-validation (1.7.0)
  concurrent-ruby (~> 1.0)
  dry-container (~> 0.7, >= 0.7.1)
- dry-core (~> 0.4)
- dry-equalizer (~> 0.2)
+ dry-core (~> 0.5, >= 0.5)
  dry-initializer (~> 3.0)
- dry-schema (~> 1.5, >= 1.5.2)
+ dry-schema (~> 1.8, >= 1.8.0)
  factory_bot (6.2.0)
  activesupport (>= 5.0.0)
- ffi (1.15.3)
- i18n (1.8.10)
+ ffi (1.15.4)
+ i18n (1.8.11)
  concurrent-ruby (~> 1.0)
- mini_portile2 (2.6.1)
+ mini_portile2 (2.7.1)
  minitest (5.14.4)
  rake (13.0.6)
- rdkafka (0.9.0)
- ffi (~> 1.9)
- mini_portile2 (~> 2.1)
- rake (>= 12.3)
+ rdkafka (0.11.0)
+ ffi (~> 1.15)
+ mini_portile2 (~> 2.7)
+ rake (> 12)
  rspec (3.10.0)
  rspec-core (~> 3.10.0)
  rspec-expectations (~> 3.10.0)
@@ -87,7 +85,7 @@ GEM
  rspec-mocks (3.10.2)
  diff-lcs (>= 1.2.0, < 2.0)
  rspec-support (~> 3.10.0)
- rspec-support (3.10.2)
+ rspec-support (3.10.3)
  simplecov (0.21.2)
  docile (~> 1.1)
  simplecov-html (~> 0.11)
@@ -96,7 +94,7 @@ GEM
  simplecov_json_formatter (0.1.3)
  tzinfo (2.0.4)
  concurrent-ruby (~> 1.0)
- zeitwerk (2.4.2)
+ zeitwerk (2.5.1)
 
  PLATFORMS
  x86_64-darwin
@@ -111,4 +109,4 @@ DEPENDENCIES
  waterdrop!
 
  BUNDLED WITH
- 2.2.25
+ 2.2.31
data/README.md CHANGED
@@ -8,7 +8,7 @@ Please refer to [this](https://github.com/karafka/waterdrop/tree/1.4) branch and
 
  [![Build Status](https://github.com/karafka/waterdrop/workflows/ci/badge.svg)](https://github.com/karafka/waterdrop/actions?query=workflow%3Aci)
  [![Gem Version](https://badge.fury.io/rb/waterdrop.svg)](http://badge.fury.io/rb/waterdrop)
- [![Join the chat at https://gitter.im/karafka/karafka](https://badges.gitter.im/karafka/karafka.svg)](https://gitter.im/karafka/karafka)
+ [![Join the chat at https://slack.karafka.io](https://raw.githubusercontent.com/karafka/misc/master/slack.svg)](https://slack.karafka.io)
 
  Gem used to send messages to Kafka in an easy way with an extra validation layer. It is a part of the [Karafka](https://github.com/karafka/karafka) ecosystem.
 
@@ -24,22 +24,20 @@ It:
 
  ## Table of contents
 
- - [WaterDrop](#waterdrop)
- * [Table of contents](#table-of-contents)
- * [Installation](#installation)
- * [Setup](#setup)
- + [WaterDrop configuration options](#waterdrop-configuration-options)
- + [Kafka configuration options](#kafka-configuration-options)
- * [Usage](#usage)
- + [Basic usage](#basic-usage)
- + [Buffering](#buffering)
- - [Using WaterDrop to buffer messages based on the application logic](#using-waterdrop-to-buffer-messages-based-on-the-application-logic)
- - [Using WaterDrop with rdkafka buffers to achieve periodic auto-flushing](#using-waterdrop-with-rdkafka-buffers-to-achieve-periodic-auto-flushing)
- * [Instrumentation](#instrumentation)
- + [Usage statistics](#usage-statistics)
- + [Forking and potential memory problems](#forking-and-potential-memory-problems)
- * [References](#references)
- * [Note on contributions](#note-on-contributions)
+ - [Installation](#installation)
+ - [Setup](#setup)
+ * [WaterDrop configuration options](#waterdrop-configuration-options)
+ * [Kafka configuration options](#kafka-configuration-options)
+ - [Usage](#usage)
+ * [Basic usage](#basic-usage)
+ * [Buffering](#buffering)
+ + [Using WaterDrop to buffer messages based on the application logic](#using-waterdrop-to-buffer-messages-based-on-the-application-logic)
+ + [Using WaterDrop with rdkafka buffers to achieve periodic auto-flushing](#using-waterdrop-with-rdkafka-buffers-to-achieve-periodic-auto-flushing)
+ - [Instrumentation](#instrumentation)
+ * [Usage statistics](#usage-statistics)
+ * [Error notifications](#error-notifications)
+ * [Forking and potential memory problems](#forking-and-potential-memory-problems)
+ - [Note on contributions](#note-on-contributions)
 
  ## Installation
 
@@ -290,25 +288,46 @@ producer.close
 
  Note: The metrics returned may not be completely consistent between brokers, toppars and totals, due to the internal asynchronous nature of librdkafka. E.g., the top-level tx total may be less than the sum of the broker tx values which it represents.
 
+ ### Error notifications
+
+ Aside from errors related to publishing messages, like `buffer.flushed_async.error`, WaterDrop allows you to listen to errors that occur in its internal background threads. Things like reconnecting to Kafka upon network errors and others unrelated to publishing messages are all available under the `error.emitted` notification key. You can subscribe to this event to ensure your setup is healthy and without any problems that would otherwise go unnoticed as long as messages are delivered.
+
+ ```ruby
+ producer = WaterDrop::Producer.new do |config|
+   # Note the invalid connection port...
+   config.kafka = { 'bootstrap.servers': 'localhost:9090' }
+ end
+
+ producer.monitor.subscribe('error.emitted') do |event|
+   error = event[:error]
+
+   p "Internal error occurred: #{error}"
+ end
+
+ # Run this code without a Kafka cluster
+ loop do
+   producer.produce_async(topic: 'events', payload: 'data')
+
+   sleep(1)
+ end
+
+ # After you stop your Kafka cluster, you will see a lot of these:
+ #
+ # Internal error occurred: Local: Broker transport failure (transport)
+ #
+ # Internal error occurred: Local: Broker transport failure (transport)
+ ```
+
  ### Forking and potential memory problems
 
  If you work with forked processes, make sure you **don't** use the producer before the fork. You can easily configure the producer and then fork and use it.
 
  To tackle this [obstacle](https://github.com/appsignal/rdkafka-ruby/issues/15) related to rdkafka, WaterDrop adds a finalizer to each of the producers to close the rdkafka client before the Ruby process is shut down. Due to the [nature of the finalizers](https://www.mikeperham.com/2010/02/24/the-trouble-with-ruby-finalizers/), this implementation prevents producers from being GCed (except upon VM shutdown) and can cause memory leaks if you don't use persistent/long-lived producers in a long-running process or if you don't use the `#close` method of a producer when it is no longer needed. Creating a producer instance for each message is in any case a rather bad idea, so we recommend against it.
 
- ## References
-
- * [WaterDrop code documentation](https://www.rubydoc.info/github/karafka/waterdrop)
- * [Karafka framework](https://github.com/karafka/karafka)
- * [WaterDrop Actions CI](https://github.com/karafka/waterdrop/actions?query=workflow%3Ac)
- * [WaterDrop Coditsu](https://app.coditsu.io/karafka/repositories/waterdrop)
-
  ## Note on contributions
 
- First, thank you for considering contributing to WaterDrop! It's people like you that make the open source community such a great community!
-
- Each pull request must pass all the RSpec specs and meet our quality requirements.
+ First, thank you for considering contributing to the Karafka ecosystem! It's people like you that make the open source community such a great place!
 
- To check if everything is as it should be, we use [Coditsu](https://coditsu.io), which combines multiple linters and code analyzers for both code and documentation. Once you're done with your changes, submit a pull request.
+ Each pull request must pass all the RSpec specs, integration tests and meet our quality requirements.
 
- Coditsu will automatically check your work against our quality standards. You can find your commit check results on the [builds page](https://app.coditsu.io/karafka/repositories/waterdrop/builds/commit_builds) of the WaterDrop repository.
+ Fork it, update it, and wait for the GitHub Actions results.
data/docker-compose.yml CHANGED
@@ -13,5 +13,6 @@ services:
  KAFKA_ADVERTISED_PORT: 9092
  KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
+ KAFKA_CREATE_TOPICS: 'example_topic:1:1'
  volumes:
  - /var/run/docker.sock:/var/run/docker.sock
data/lib/water_drop/config.rb CHANGED
@@ -7,33 +7,52 @@ module WaterDrop
    class Config
      include Dry::Configurable
 
+     # Defaults for kafka settings that will be overwritten only if not present already
+     KAFKA_DEFAULTS = {
+       'client.id' => 'waterdrop'
+     }.freeze
+
+     private_constant :KAFKA_DEFAULTS
+
      # WaterDrop options
      #
      # option [String] id of the producer. This can be helpful when building producer specific
      #   instrumentation or loggers. It is not the kafka producer id
-     setting(:id, false) { |id| id || SecureRandom.uuid }
+     setting(
+       :id,
+       default: false,
+       constructor: ->(id) { id || SecureRandom.uuid }
+     )
      # option [Instance] logger that we want to use
      # @note Due to how rdkafka works, this setting is global for all the producers
-     setting(:logger, false) { |logger| logger || Logger.new($stdout, level: Logger::WARN) }
+     setting(
+       :logger,
+       default: false,
+       constructor: ->(logger) { logger || Logger.new($stdout, level: Logger::WARN) }
+     )
      # option [Instance] monitor that we want to use. See the instrumentation part of the README
      #   for more details
-     setting(:monitor, false) { |monitor| monitor || WaterDrop::Instrumentation::Monitor.new }
+     setting(
+       :monitor,
+       default: false,
+       constructor: ->(monitor) { monitor || WaterDrop::Instrumentation::Monitor.new }
+     )
      # option [Integer] max payload size allowed for delivery to Kafka
-     setting :max_payload_size, 1_000_012
+     setting :max_payload_size, default: 1_000_012
      # option [Integer] Wait that long for the delivery report or raise an error if this takes
      #   longer than the timeout.
-     setting :max_wait_timeout, 5
+     setting :max_wait_timeout, default: 5
      # option [Numeric] how long should we wait between re-checks on the availability of the
      #   delivery report. In really robust systems, this describes the min-delivery time
      #   for a single sync message when produced in isolation
-     setting :wait_timeout, 0.005 # 5 milliseconds
+     setting :wait_timeout, default: 0.005 # 5 milliseconds
      # option [Boolean] should we send messages. Setting this to false can be really useful
      #   when testing and/or developing, because when set to false it won't actually ping Kafka
      #   but will run all the validations, etc.
-     setting :deliver, true
+     setting :deliver, default: true
      # rdkafka options
      # @see https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
-     setting :kafka, {}
+     setting :kafka, default: {}
 
      # Configuration method
      # @yield Runs a block of code providing a config singleton instance to it
@@ -41,12 +60,28 @@ module WaterDrop
      def setup
        configure do |config|
          yield(config)
+
+         merge_kafka_defaults!(config)
          validate!(config.to_h)
+
+         ::Rdkafka::Config.logger = config.logger
        end
      end
 
      private
 
+     # Propagates the kafka setting defaults unless they are already present
+     # This makes it easier to set some values that users usually don't change but still allows
+     # them to overwrite the whole hash if they want to
+     # @param config [Dry::Configurable::Config] dry config of this producer
+     def merge_kafka_defaults!(config)
+       KAFKA_DEFAULTS.each do |key, value|
+         next if config.kafka.key?(key)
+
+         config.kafka[key] = value
+       end
+     end
+
      # Validates the configuration and if anything is wrong, will raise an exception
      # @param config_hash [Hash] config hash with setup details
      # @raise [WaterDrop::Errors::ConfigurationInvalidError] raised when something is wrong with
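A quick sketch of what the defaults merge does in practice (values are illustrative; the `Producer#config` reader is assumed here):

```ruby
producer = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
end

# 'client.id' was not set above, so the default gets merged in
producer.config.kafka['client.id'] # => "waterdrop"

# An explicitly provided value is left untouched
custom = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092', 'client.id' => 'my-app' }
end

custom.config.kafka['client.id'] # => "my-app"
```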
data/lib/water_drop/instrumentation/callbacks/delivery.rb ADDED
@@ -0,0 +1,30 @@
+ # frozen_string_literal: true
+
+ module WaterDrop
+   module Instrumentation
+     module Callbacks
+       # Creates a callable that we want to run upon each message delivery or failure
+       #
+       # @note We don't have to provide client_name here as this callback is per client instance
+       class Delivery
+         # @param producer_id [String] id of the current producer
+         # @param monitor [WaterDrop::Instrumentation::Monitor] monitor we are using
+         def initialize(producer_id, monitor)
+           @producer_id = producer_id
+           @monitor = monitor
+         end
+
+         # Emits delivery details to the monitor
+         # @param delivery_report [Rdkafka::Producer::DeliveryReport] delivery report
+         def call(delivery_report)
+           @monitor.instrument(
+             'message.acknowledged',
+             producer_id: @producer_id,
+             offset: delivery_report.offset,
+             partition: delivery_report.partition
+           )
+         end
+       end
+     end
+   end
+ end
data/lib/water_drop/instrumentation/callbacks/error.rb ADDED
@@ -0,0 +1,35 @@
+ # frozen_string_literal: true
+
+ module WaterDrop
+   module Instrumentation
+     module Callbacks
+       # Callback that kicks in when an error occurs and is published in a background thread
+       class Error
+         # @param producer_id [String] id of the current producer
+         # @param client_name [String] rdkafka client name
+         # @param monitor [WaterDrop::Instrumentation::Monitor] monitor we are using
+         def initialize(producer_id, client_name, monitor)
+           @producer_id = producer_id
+           @client_name = client_name
+           @monitor = monitor
+         end
+
+         # Runs the instrumentation monitor with the error
+         # @param client_name [String] rdkafka client name
+         # @param error [Rdkafka::Error] error that occurred
+         # @note It will only instrument errors of our own producer's client
+         def call(client_name, error)
+           # Emit only errors related to our client
+           # Same as with statistics (more explanation there)
+           return unless @client_name == client_name
+
+           @monitor.instrument(
+             'error.emitted',
+             producer_id: @producer_id,
+             error: error
+           )
+         end
+       end
+     end
+   end
+ end
data/lib/water_drop/instrumentation/callbacks/statistics.rb ADDED
@@ -0,0 +1,41 @@
+ # frozen_string_literal: true
+
+ module WaterDrop
+   module Instrumentation
+     # Namespace for handlers of callbacks emitted by the kafka client lib
+     module Callbacks
+       # Statistics callback handler
+       # @note We decorate the statistics with our own decorator because some of the metrics from
+       #   rdkafka are absolute. For example, the number of sent messages increases not in
+       #   reference to the previous statistics emit but from the beginning of the process. We
+       #   decorate it with a diff of all the numeric values against the data from the previous
+       #   callback emit
+       class Statistics
+         # @param producer_id [String] id of the current producer
+         # @param client_name [String] rdkafka client name
+         # @param monitor [WaterDrop::Instrumentation::Monitor] monitor we are using
+         def initialize(producer_id, client_name, monitor)
+           @producer_id = producer_id
+           @client_name = client_name
+           @monitor = monitor
+           @statistics_decorator = StatisticsDecorator.new
+         end
+
+         # Emits decorated statistics to the monitor
+         # @param statistics [Hash] rdkafka statistics
+         def call(statistics)
+           # Emit only statistics related to our client
+           # rdkafka does not have a per-instance statistics hook, thus we need to make sure that
+           # we emit only stats that are related to the current producer. Otherwise we would emit
+           # all of them all the time.
+           return unless @client_name == statistics['name']
+
+           @monitor.instrument(
+             'statistics.emitted',
+             producer_id: @producer_id,
+             statistics: @statistics_decorator.call(statistics)
+           )
+         end
+       end
+     end
+   end
+ end
data/lib/water_drop/instrumentation/callbacks/statistics_decorator.rb ADDED
@@ -0,0 +1,77 @@
+ # frozen_string_literal: true
+
+ module WaterDrop
+   module Instrumentation
+     module Callbacks
+       # Many of the librdkafka statistics are absolute values instead of a gauge.
+       # This means that, for example, the number of messages sent is an absolute growing value
+       # instead of being the number of messages sent since the last statistics report.
+       # This decorator calculates the diff against the previously emitted stats, so we also get
+       # the diff together with the original values
+       class StatisticsDecorator
+         def initialize
+           @previous = {}.freeze
+         end
+
+         # @param emited_stats [Hash] original emitted statistics
+         # @return [Hash] emitted statistics extended with the diff data
+         # @note We modify the emitted statistics instead of creating new ones. Since we don't
+         #   expose any API to get the raw data, users can just assume that the result of this
+         #   decoration is the proper raw stats that they can use
+         def call(emited_stats)
+           diff(
+             @previous,
+             emited_stats
+           )
+
+           @previous = emited_stats
+
+           emited_stats.freeze
+         end
+
+         private
+
+         # Calculates the diff of the provided values and modifies the emitted statistics in place
+         #
+         # @param previous [Object] previous value from the given scope in which we are
+         # @param current [Object] current scope from the emitted statistics
+         # @return [Object] the diff if the values were numeric or the current scope
+         def diff(previous, current)
+           if current.is_a?(Hash)
+             # @note We cannot use #each_key as we modify the content of the current scope
+             #   in place (in case it's a hash)
+             current.keys.each do |key|
+               append(
+                 current,
+                 key,
+                 diff((previous || {})[key], (current || {})[key])
+               )
+             end
+           end
+
+           # Diff can be computed only for numerics
+           return current unless current.is_a?(Numeric)
+           # If there was no previous value, the delta is always zero
+           return 0 unless previous
+           # Should never happen but just in case, a type changed in between stats
+           return current unless previous.is_a?(Numeric)
+
+           current - previous
+         end
+
+         # Appends the result of the diff to a given key as long as the result is numeric
+         #
+         # @param current [Hash] current scope
+         # @param key [Symbol] key based on which we were diffing
+         # @param result [Object] diff result
+         def append(current, key, result)
+           return unless result.is_a?(Numeric)
+           return if current.frozen?
+
+           current["#{key}_d"] = result
+         end
+       end
+     end
+   end
+ end
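To make the decoration concrete, here is a small illustrative run (the field name is made up; real librdkafka stats are nested hashes, which the recursive `diff` handles the same way):

```ruby
decorator = WaterDrop::Instrumentation::Callbacks::StatisticsDecorator.new

# First emit: there is no previous value, so every delta starts at zero
decorator.call('txmsgs' => 10) # => { 'txmsgs' => 10, 'txmsgs_d' => 0 }

# Second emit: the delta is computed against the previous absolute value
decorator.call('txmsgs' => 25) # => { 'txmsgs' => 25, 'txmsgs_d' => 15 }
```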
data/lib/water_drop/instrumentation/callbacks_manager.rb ADDED
@@ -0,0 +1,39 @@
+ # frozen_string_literal: true
+
+ module WaterDrop
+   module Instrumentation
+     # This manager allows us to register multiple callbacks into a hook that is supposed to
+     # support a single callback
+     class CallbacksManager
+       # @return [::WaterDrop::Instrumentation::CallbacksManager]
+       def initialize
+         @callbacks = Concurrent::Hash.new
+       end
+
+       # Invokes all the registered callbacks one after another
+       #
+       # @param args [Object] any args that should go to the callbacks
+       # @note We do not use `#each_value` here on purpose. With it in use, we could not dispatch
+       #   callbacks and add new ones at the same time. Since we don't know when and in what
+       #   thread things are going to be added to the manager, we need to extract the values into
+       #   an array and run that. This way we can add new things at the same time.
+       def call(*args)
+         @callbacks.values.each { |callback| callback.call(*args) }
+       end
+
+       # Adds a callback to the manager
+       #
+       # @param id [String] id of the callback (used when deleting it)
+       # @param callable [#call] object that responds to a `#call` method
+       def add(id, callable)
+         @callbacks[id] = callable
+       end
+
+       # Removes the callback from the manager
+       # @param id [String] id of the callback we want to remove
+       def delete(id)
+         @callbacks.delete(id)
+       end
+     end
+   end
+ end
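A minimal sketch of the fan-out behavior (the ids and payload are illustrative):

```ruby
manager = WaterDrop::Instrumentation::CallbacksManager.new

manager.add('listener-a', ->(stats) { puts "A: #{stats['name']}" })
manager.add('listener-b', ->(stats) { puts "B: #{stats['name']}" })

# Both registered callbacks run, one after another
manager.call('name' => 'waterdrop-producer-1')

# Removing by id stops further dispatches to that callback
manager.delete('listener-a')
```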
data/lib/water_drop/instrumentation/monitor.rb CHANGED
@@ -13,18 +13,24 @@ module WaterDrop
      # @note The non-error ones support timestamp benchmarking
      EVENTS = %w[
        producer.closed
+
        message.produced_async
        message.produced_sync
+       message.acknowledged
+       message.buffered
+
        messages.produced_async
        messages.produced_sync
-       message.buffered
        messages.buffered
-       message.acknowledged
+
        buffer.flushed_async
        buffer.flushed_async.error
        buffer.flushed_sync
        buffer.flushed_sync.error
+
        statistics.emitted
+
+       error.emitted
      ].freeze
 
      private_constant :EVENTS
data/lib/water_drop/instrumentation.rb CHANGED
@@ -3,5 +3,18 @@
  module WaterDrop
    # Namespace for all the things related to the WaterDrop instrumentation process
    module Instrumentation
+     class << self
+       # Builds a manager for statistics callbacks
+       # @return [WaterDrop::CallbacksManager]
+       def statistics_callbacks
+         @statistics_callbacks ||= CallbacksManager.new
+       end
+
+       # Builds a manager for error callbacks
+       # @return [WaterDrop::CallbacksManager]
+       def error_callbacks
+         @error_callbacks ||= CallbacksManager.new
+       end
+     end
    end
  end
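Because these managers are module-level singletons, extra application-wide listeners can be attached next to the ones each producer registers for itself. A hedged sketch (the id and handler are hypothetical; `msg_cnt` is a top-level librdkafka statistics field):

```ruby
WaterDrop::Instrumentation.statistics_callbacks.add(
  'app-wide-stats-logger',
  ->(stats) { puts "librdkafka msg_cnt: #{stats['msg_cnt']}" }
)
```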
data/lib/water_drop/patches/rdkafka/bindings.rb ADDED
@@ -0,0 +1,42 @@
+ # frozen_string_literal: true
+
+ module WaterDrop
+   module Patches
+     module Rdkafka
+       # Extends `Rdkafka::Bindings` with some extra methods and updates callbacks that we intend
+       # to work with in a bit different way than rdkafka itself
+       module Bindings
+         class << self
+           # Adds the extra methods that we need
+           # @param mod [::Rdkafka::Bindings] rdkafka bindings module
+           def included(mod)
+             mod.attach_function :rd_kafka_name, [:pointer], :string
+
+             # The default rdkafka setup for errors does not propagate client details, thus it
+             # always publishes everything for all rdkafka instances. We change that by providing
+             # a function that fetches the instance name, allowing us to have better notifications
+             mod.send(:remove_const, :ErrorCallback)
+             mod.const_set(:ErrorCallback, build_error_callback)
+           end
+
+           # @return [FFI::Function] overwritten callback function
+           def build_error_callback
+             FFI::Function.new(
+               :void, %i[pointer int string pointer]
+             ) do |client_ptr, err_code, reason, _opaque|
+               return nil unless ::Rdkafka::Config.error_callback
+
+               name = ::Rdkafka::Bindings.rd_kafka_name(client_ptr)
+
+               error = ::Rdkafka::RdkafkaError.new(err_code, broker_message: reason)
+
+               ::Rdkafka::Config.error_callback.call(name, error)
+             end
+           end
+         end
+       end
+     end
+   end
+ end
+
+ ::Rdkafka::Bindings.include(::WaterDrop::Patches::Rdkafka::Bindings)
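With this patch in place, the global rdkafka error callback receives the originating client's name, which is what lets the callbacks manager route errors per producer. An illustrative direct assignment (in practice WaterDrop itself occupies this hook with its manager, so applications should add to `WaterDrop::Instrumentation.error_callbacks` instead):

```ruby
Rdkafka::Config.error_callback = lambda do |client_name, error|
  puts "#{client_name} reported: #{error}"
end
```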
data/lib/water_drop/patches/rdkafka/producer.rb ADDED
@@ -0,0 +1,20 @@
+ # frozen_string_literal: true
+
+ module WaterDrop
+   # Patches to external components
+   module Patches
+     # Rdkafka related patches
+     module Rdkafka
+       # Rdkafka::Producer patches
+       module Producer
+         # Adds a method that allows us to get the native kafka producer name
+         # @return [String] producer instance name
+         def name
+           ::Rdkafka::Bindings.rd_kafka_name(@native_kafka)
+         end
+       end
+     end
+   end
+ end
+
+ ::Rdkafka::Producer.include ::WaterDrop::Patches::Rdkafka::Producer
data/lib/water_drop/producer/builder.rb CHANGED
@@ -12,51 +12,16 @@ module WaterDrop
      def call(producer, config)
        return DummyClient.new unless config.deliver
 
-       Rdkafka::Config.logger = config.logger
-       Rdkafka::Config.statistics_callback = build_statistics_callback(producer, config.monitor)
-
        client = Rdkafka::Config.new(config.kafka.to_h).producer
-       client.delivery_callback = build_delivery_callback(producer, config.monitor)
-       client
-     end
 
-     private
+       # This callback is not global and is per client, thus we do not have to wrap it with a
+       # callbacks manager to make it work
+       client.delivery_callback = Instrumentation::Callbacks::Delivery.new(
+         producer.id,
+         config.monitor
+       )
 
-     # Creates a proc that we want to run upon each successful message delivery
-     #
-     # @param producer [Producer]
-     # @param monitor [Object] monitor we want to use
-     # @return [Proc] delivery callback
-     def build_delivery_callback(producer, monitor)
-       lambda do |delivery_report|
-         monitor.instrument(
-           'message.acknowledged',
-           producer: producer,
-           offset: delivery_report.offset,
-           partition: delivery_report.partition
-         )
-       end
-     end
-
-     # Creates a proc that we want to run upon each statistics callback execution
-     #
-     # @param producer [Producer]
-     # @param monitor [Object] monitor we want to use
-     # @return [Proc] statistics callback
-     # @note We decorate the statistics with our own decorator because some of the metrics from
-     #   rdkafka are absolute. For example, the number of sent messages increases not in
-     #   reference to the previous statistics emit but from the beginning of the process. We
-     #   decorate it with a diff of all the numeric values against the data from the previous
-     #   callback emit
-     def build_statistics_callback(producer, monitor)
-       statistics_decorator = StatisticsDecorator.new
-
-       lambda do |statistics|
-         monitor.instrument(
-           'statistics.emitted',
-           producer: producer,
-           statistics: statistics_decorator.call(statistics)
-         )
-       end
+       client
      end
    end
  end
data/lib/water_drop/producer.rb CHANGED
@@ -80,6 +80,19 @@ module WaterDrop
 
        @pid = Process.pid
        @client = Builder.new.call(self, @config)
+
+       # Register the statistics runner for this particular type of callbacks
+       ::WaterDrop::Instrumentation.statistics_callbacks.add(
+         @id,
+         Instrumentation::Callbacks::Statistics.new(@id, @client.name, @config.monitor)
+       )
+
+       # Register the error tracking callback
+       ::WaterDrop::Instrumentation.error_callbacks.add(
+         @id,
+         Instrumentation::Callbacks::Error.new(@id, @client.name, @config.monitor)
+       )
+
        @status.connected!
      end
 
@@ -107,7 +120,13 @@
 
        # We should not close the client in several threads at the same time
        # It is safe to run it several times but not at exactly the same moment
-       client.close
+       # We also close it only if it was connected; if not, closing would trigger a new
+       # connection that would anyhow be immediately closed
+       client.close if @client
+
+       # Remove the callback runners that were registered
+       ::WaterDrop::Instrumentation.statistics_callbacks.delete(@id)
+       ::WaterDrop::Instrumentation.error_callbacks.delete(@id)
 
        @status.closed!
      end
data/lib/water_drop/version.rb CHANGED
@@ -3,5 +3,5 @@
  # WaterDrop library
  module WaterDrop
    # Current WaterDrop version
-   VERSION = '2.0.2'
+   VERSION = '2.0.6'
  end
data/lib/water_drop.rb CHANGED
@@ -28,3 +28,9 @@ Zeitwerk::Loader
  .tap { |loader| loader.ignore("#{__dir__}/waterdrop.rb") }
  .tap(&:setup)
  .tap(&:eager_load)
+
+ # Rdkafka uses a single global callback per hook. We bypass that by injecting a manager for
+ # each callback type. The callbacks manager allows us to register more than one callback
+ # @note Those managers are also used by Karafka for consumer related statistics
+ Rdkafka::Config.statistics_callback = WaterDrop::Instrumentation.statistics_callbacks
+ Rdkafka::Config.error_callback = WaterDrop::Instrumentation.error_callbacks
data/waterdrop.gemspec CHANGED
@@ -11,17 +11,17 @@ Gem::Specification.new do |spec|
  spec.platform = Gem::Platform::RUBY
  spec.authors = ['Maciej Mensfeld']
  spec.email = %w[maciej@mensfeld.pl]
- spec.homepage = 'https://github.com/karafka/waterdrop'
+ spec.homepage = 'https://karafka.io'
  spec.summary = 'Kafka messaging made easy!'
  spec.description = spec.summary
  spec.license = 'MIT'
 
  spec.add_dependency 'concurrent-ruby', '>= 1.1'
- spec.add_dependency 'dry-configurable', '~> 0.8'
- spec.add_dependency 'dry-monitor', '~> 0.3'
- spec.add_dependency 'dry-validation', '~> 1.3'
- spec.add_dependency 'rdkafka', '>= 0.6.0'
- spec.add_dependency 'zeitwerk', '~> 2.1'
+ spec.add_dependency 'dry-configurable', '~> 0.13'
+ spec.add_dependency 'dry-monitor', '~> 0.5'
+ spec.add_dependency 'dry-validation', '~> 1.7'
+ spec.add_dependency 'rdkafka', '>= 0.10'
+ spec.add_dependency 'zeitwerk', '~> 2.3'
 
  spec.required_ruby_version = '>= 2.6.0'
 
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: waterdrop
  version: !ruby/object:Gem::Version
- version: 2.0.2
+ version: 2.0.6
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -34,7 +34,7 @@ cert_chain:
  R2P11bWoCtr70BsccVrN8jEhzwXngMyI2gVt750Y+dbTu1KgRqZKp/ECe7ZzPzXj
  pIy9vHxTANKYVyI4qj8OrFdEM5BQNu8oQpL0iQ==
  -----END CERTIFICATE-----
- date: 2021-08-13 00:00:00.000000000 Z
+ date: 2021-12-01 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: concurrent-ruby
@@ -56,70 +56,70 @@ dependencies:
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '0.8'
+ version: '0.13'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '0.8'
+ version: '0.13'
  - !ruby/object:Gem::Dependency
  name: dry-monitor
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '0.3'
+ version: '0.5'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '0.3'
+ version: '0.5'
  - !ruby/object:Gem::Dependency
  name: dry-validation
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '1.3'
+ version: '1.7'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '1.3'
+ version: '1.7'
  - !ruby/object:Gem::Dependency
  name: rdkafka
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 0.6.0
+ version: '0.10'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 0.6.0
+ version: '0.10'
  - !ruby/object:Gem::Dependency
  name: zeitwerk
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '2.1'
+ version: '2.3'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '2.1'
+ version: '2.3'
  description: Kafka messaging made easy!
  email:
  - maciej@mensfeld.pl
@@ -129,7 +129,6 @@ extra_rdoc_files: []
  files:
  - ".coditsu/ci.yml"
  - ".diffend.yml"
- - ".github/FUNDING.yml"
  - ".github/workflows/ci.yml"
  - ".gitignore"
  - ".rspec"
@@ -150,22 +149,27 @@ files:
  - lib/water_drop/contracts/message.rb
  - lib/water_drop/errors.rb
  - lib/water_drop/instrumentation.rb
+ - lib/water_drop/instrumentation/callbacks/delivery.rb
+ - lib/water_drop/instrumentation/callbacks/error.rb
+ - lib/water_drop/instrumentation/callbacks/statistics.rb
+ - lib/water_drop/instrumentation/callbacks/statistics_decorator.rb
+ - lib/water_drop/instrumentation/callbacks_manager.rb
  - lib/water_drop/instrumentation/monitor.rb
  - lib/water_drop/instrumentation/stdout_listener.rb
- - lib/water_drop/patches/rdkafka_producer.rb
+ - lib/water_drop/patches/rdkafka/bindings.rb
+ - lib/water_drop/patches/rdkafka/producer.rb
  - lib/water_drop/producer.rb
  - lib/water_drop/producer/async.rb
  - lib/water_drop/producer/buffer.rb
  - lib/water_drop/producer/builder.rb
  - lib/water_drop/producer/dummy_client.rb
- - lib/water_drop/producer/statistics_decorator.rb
  - lib/water_drop/producer/status.rb
  - lib/water_drop/producer/sync.rb
  - lib/water_drop/version.rb
  - lib/waterdrop.rb
  - log/.gitkeep
  - waterdrop.gemspec
- homepage: https://github.com/karafka/waterdrop
+ homepage: https://karafka.io
  licenses:
  - MIT
  metadata: {}
@@ -184,7 +188,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.2.25
+ rubygems_version: 3.2.31
  signing_key:
  specification_version: 4
  summary: Kafka messaging made easy!
metadata.gz.sig CHANGED
Binary file
data/.github/FUNDING.yml DELETED
@@ -1 +0,0 @@
- open_collective: karafka
data/lib/water_drop/patches/rdkafka_producer.rb DELETED
@@ -1,49 +0,0 @@
- # frozen_string_literal: true
-
- module WaterDrop
-   # Patches to external components
-   module Patches
-     # `Rdkafka::Producer` patches
-     module RdkafkaProducer
-       # Errors upon which we want to retry message production
-       # @note Since production happens async, those errors should only occur when using
-       #   partition_key, thus only then we handle them
-       RETRYABLES = %w[
-         leader_not_available
-         err_not_leader_for_partition
-         invalid_replication_factor
-         transport
-         timed_out
-       ].freeze
-
-       # How many attempts do we want to make before re-raising the error
-       MAX_ATTEMPTS = 5
-
-       private_constant :RETRYABLES, :MAX_ATTEMPTS
-
-       # @param args [Object] anything `Rdkafka::Producer#produce` accepts
-       #
-       # @note This can be removed once this: https://github.com/appsignal/rdkafka-ruby/issues/163
-       #   is resolved.
-       def produce(**args)
-         attempt ||= 0
-         attempt += 1
-
-         super
-       rescue Rdkafka::RdkafkaError => e
-         raise unless args.key?(:partition_key)
-         # We care only about specific errors
-         # https://docs.confluent.io/platform/current/clients/librdkafka/html/md_INTRODUCTION.html
-         raise unless RETRYABLES.any? { |message| e.message.to_s.include?(message) }
-         raise if attempt > MAX_ATTEMPTS
-
-         max_sleep = 2**attempt / 10.0
-         sleep rand(0.01..max_sleep)
-
-         retry
-       end
-     end
-   end
- end
-
- Rdkafka::Producer.prepend(WaterDrop::Patches::RdkafkaProducer)
data/lib/water_drop/producer/statistics_decorator.rb DELETED
@@ -1,71 +0,0 @@
- # frozen_string_literal: true
-
- module WaterDrop
-   class Producer
-     # Many of the librdkafka statistics are absolute values instead of a gauge.
-     # This means, that for example number of messages sent is an absolute growing value
-     # instead of being a value of messages sent from the last statistics report.
-     # This decorator calculates the diff against previously emited stats, so we get also
-     # the diff together with the original values
-     class StatisticsDecorator
-       def initialize
-         @previous = {}.freeze
-       end
-
-       # @param emited_stats [Hash] original emited statistics
-       # @return [Hash] emited statistics extended with the diff data
-       # @note We modify the emited statistics, instead of creating new. Since we don't expose
-       #   any API to get raw data, users can just assume that the result of this decoration is
-       #   the proper raw stats that they can use
-       def call(emited_stats)
-         diff(
-           @previous,
-           emited_stats
-         )
-
-         @previous = emited_stats
-
-         emited_stats.freeze
-       end
-
-       private
-
-       # Calculates the diff of the provided values and modifies in place the emited statistics
-       #
-       # @param previous [Object] previous value from the given scope in which
-       #   we are
-       # @param current [Object] current scope from emitted statistics
-       # @return [Object] the diff if the values were numerics or the current scope
-       def diff(previous, current)
-         if current.is_a?(Hash)
-           # @note We cannot use #each_key as we modify the content of the current scope
-           #   in place (in case it's a hash)
-           current.keys.each do |key|
-             append(
-               current,
-               key,
-               diff((previous || {})[key], (current || {})[key])
-             )
-           end
-         end
-
-         if current.is_a?(Numeric) && previous.is_a?(Numeric)
-           current - previous
-         else
-           current
-         end
-       end
-
-       # Appends the result of the diff to a given key as long as the result is numeric
-       #
-       # @param current [Hash] current scope
-       # @param key [Symbol] key based on which we were diffing
-       # @param result [Object] diff result
-       def append(current, key, result)
-         return unless result.is_a?(Numeric)
-
-         current["#{key}_d"] = result
-       end
-     end
-   end
- end