waterdrop 2.6.13 → 2.7.0.alpha1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: '06984e8584d8b1ed3e6f854be6ca1cd1957acab63d6640c41087a42572da670a'
- data.tar.gz: 9b0c94f047b5e21a554ecad5f32639a761aa32fbab9f5776a2470fdfe4adb675
+ metadata.gz: 938018b39bca57a68925d61e00a1347c34d3ad536fb73c910589065fc8e6cbca
+ data.tar.gz: cdad580ebf6e91d51464fb2336f9c78896f3d1b03e96919463c99f141b5dc5ff
  SHA512:
- metadata.gz: 610ec69f42c6d209e3d024825d62ecd25e7d50ad212d14b4b2db7534463f63e5cee4662b0412db559d1537fa28ef76acd3b22418f686fcea6c6406a1cb6cec54
- data.tar.gz: af35df96c8566d1fef3d16898553277c3d15a572ab8fc563858048beae70915a761109a32cd101b22123174f16dc601f7f28c2699fae20126d5dd9dc519fcd23
+ metadata.gz: 15e328bfe3135a14b3f61019185bcbb88c56ed2ed01e2a6523439229ee709940c0f38e4d2259e208c785fb06c2326fb290423c88d5213f730120b0a11ea8a152
+ data.tar.gz: f6642bb5dbc1cb84e923bc5bdc43417bf3f88215eab4a537e3fa14290726e935a45aec0cf640962f3e48615310d639ed1f592cf324b9963e1ff1dc0dabb8cbe2
checksums.yaml.gz.sig CHANGED
Binary file
data/.github/workflows/ci.yml CHANGED
@@ -22,7 +22,6 @@ jobs:
  - '3.2'
  - '3.1'
  - '3.0'
- - '2.7'
  include:
  - ruby: '3.3'
  coverage: 'true'
@@ -49,25 +48,15 @@ jobs:
 
  - name: Install latest bundler
  run: |
- if [[ "$(ruby -v | awk '{print $2}')" == 2.7.8* ]]; then
- gem install bundler -v 2.4.22 --no-document
- gem update --system 3.4.22 --no-document
- else
- gem install bundler --no-document
- gem update --system --no-document
- fi
+ gem install bundler --no-document
+ gem update --system --no-document
 
  bundle config set without 'tools benchmarks docs'
 
  - name: Bundle install
  run: |
  bundle config set without development
-
- if [[ "$(ruby -v | awk '{print $2}')" == 2.7.8* ]]; then
- BUNDLER_VERSION=2.4.22 bundle install --jobs 4 --retry 3
- else
- bundle install --jobs 4 --retry 3
- fi
+ bundle install --jobs 4 --retry 3
 
  - name: Run all tests
  env:
data/CHANGELOG.md CHANGED
@@ -1,5 +1,87 @@
  # WaterDrop changelog
 
+ ## 2.7.0 (Unreleased)
+
+ This release contains **BREAKING** changes. Make sure to read and apply the upgrade notes.
+
+ - **[Breaking]** Drop Ruby `2.7` support.
+ - **[Breaking]** Change the default timeouts so that the final delivery `message.timeout.ms` is less than `max_wait_time`, so we do not end up without a final verdict.
+ - **[Breaking]** Update all the time-related configuration settings to be in `ms` and not mixed.
+ - [Enhancement] Introduce an `instrument_on_wait_queue_full` flag (defaults to `true`) to configure whether non-critical (retryable) queue full errors should be instrumented in the error pipeline. Useful when building high-performance pipes that use the WaterDrop queue retry backoff as a throttler (see the sketch below).
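+
+ A minimal sketch of turning this flag off (assuming you deliberately rely on queue-full retries as backpressure):
+
+ ```ruby
+ producer = WaterDrop::Producer.new
+
+ producer.setup do |config|
+   # Skip `error.occurred` notifications for retryable queue-full backoffs
+   config.instrument_on_wait_queue_full = false
+ end
+ ```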
11
+
12
+ ### Upgrade Notes
13
+
14
+ **PLEASE MAKE SURE TO READ AND APPLY THEM!**
15
+
16
+ #### Time Settings Format Alignment
17
+
18
+ **All** time-related values are now configured in milliseconds instead of some being in seconds and some in milliseconds.
19
+
20
+ The values that were changed from seconds to milliseconds are:
21
+
22
+ - `max_wait_timeout`
23
+ - `wait_timeout`
24
+ - `wait_backoff_on_queue_full`
25
+ - `wait_timeout_on_queue_full`
26
+ - `wait_backoff_on_transaction_command, default`
27
+
28
+ If you have configured any of those yourself, please replace the seconds representation with milliseconds:
29
+
30
+ ```ruby
31
+ producer = WaterDrop::Producer.new
32
+
33
+ producer.setup do |config|
34
+ config.deliver = true
35
+
36
+ # Replace this:
37
+ config.wait_timeout = 30
38
+
39
+ # With
40
+ config.wait_timeout = 30_000
41
+ # ...
42
+ end
43
+ ```
44
+
45
+ #### Defaults Alignment
46
+
47
+ In this release, we've updated our default settings to address a crucial issue: previous defaults could lead to inconclusive outcomes in synchronous operations due to wait timeout errors. Users often mistakenly believed that a message dispatch was halted because of these errors when, in fact, the timeout was related to awaiting the final dispatch verdict, not the dispatch action itself.
48
+
49
+ The new defaults in WaterDrop 2.7.0 eliminate this confusion by ensuring synchronous operation results are always transparent and conclusive. This change aims to provide a straightforward understanding of wait timeout errors, reinforcing that they reflect the wait state, not the dispatch success.
50
+
51
+ Below, you can find a table with what has changed, the new defaults, and the current ones in case you want to retain the previous behavior:
52
+
53
+ <table>
54
+ <thead>
55
+ <tr>
56
+ <th>Config</th>
57
+ <th>Previous Default</th>
58
+ <th>New Default</th>
59
+ </tr>
60
+ </thead>
61
+ <tbody>
62
+ <tr>
63
+ <td>root <code>max_wait_timeout</code></td>
64
+ <td>5000 ms (5 seconds)</td>
65
+ <td>60000 ms (60 seconds)</td>
66
+ </tr>
67
+ <tr>
68
+ <td>kafka <code>message.timeout.ms</code></td>
69
+ <td>300000 ms (5 minutes)</td>
70
+ <td>55000 ms (55 seconds)</td>
71
+ </tr>
72
+ <tr>
73
+ <td>kafka <code>transaction.timeout.ms</code></td>
74
+ <td>60000 ms (1 minute)</td>
75
+ <td>45000 ms (45 seconds)</td>
76
+ </tr>
77
+ </tbody>
78
+ </table>
79
+
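+ If you want to retain the previous behavior, a minimal sketch (merge the `kafka` values into your existing settings; the broker address below is only an illustrative assumption):
+
+ ```ruby
+ producer = WaterDrop::Producer.new
+
+ producer.setup do |config|
+   # Previous root default (5 seconds), now expressed in ms
+   config.max_wait_timeout = 5_000
+
+   config.kafka = {
+     # Assumption: replace with your own cluster address
+     'bootstrap.servers': 'localhost:9092',
+     # Previous librdkafka-level defaults
+     'message.timeout.ms': 300_000,
+     'transaction.timeout.ms': 60_000
+   }
+ end
+ ```
+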
+ This alignment ensures that when using sync operations or invoking `#wait`, any exception you get should give you a conclusive and final delivery verdict.
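+
+ For example (a minimal sketch; the topic name and the `handle_delivery_failure` helper are illustrative assumptions):
+
+ ```ruby
+ begin
+   producer.produce_sync(topic: 'events', payload: 'data')
+ rescue WaterDrop::Errors::ProduceError => e
+   # With the new defaults this is a final delivery verdict,
+   # not an inconclusive wait timeout
+   handle_delivery_failure(e)
+ end
+ ```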
+
+ ## 2.6.14 (2024-02-06)
+ - [Enhancement] Instrument `producer.connected` and `producer.closing` lifecycle events.
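+
+ A minimal sketch of subscribing to these new events (the payload carries `producer_id`):
+
+ ```ruby
+ producer.monitor.subscribe('producer.connected') do |event|
+   puts "Producer #{event[:producer_id]} connected"
+ end
+
+ producer.monitor.subscribe('producer.closing') do |event|
+   puts "Producer #{event[:producer_id]} is closing"
+ end
+ ```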
+
  ## 2.6.13 (2024-01-29)
  - [Enhancement] Expose `#partition_count` for building custom partitioners that need to be aware of the number of partitions on a given topic.
 
data/Gemfile.lock CHANGED
@@ -1,14 +1,14 @@
  PATH
  remote: .
  specs:
- waterdrop (2.6.13)
- karafka-core (>= 2.2.3, < 3.0.0)
+ waterdrop (2.7.0.alpha1)
+ karafka-core (>= 2.4.0.alpha1, < 3.0.0)
  zeitwerk (~> 2.3)
 
  GEM
  remote: https://rubygems.org/
  specs:
- activesupport (7.1.3)
+ activesupport (7.1.3.2)
  base64
  bigdecimal
  concurrent-ruby (~> 1.0, >= 1.0.2)
@@ -23,40 +23,37 @@ GEM
  byebug (11.1.3)
  concurrent-ruby (1.2.3)
  connection_pool (2.4.1)
- diff-lcs (1.5.0)
+ diff-lcs (1.5.1)
  docile (1.4.0)
- drb (2.2.0)
- ruby2_keywords
- factory_bot (6.4.5)
+ drb (2.2.1)
+ factory_bot (6.4.6)
  activesupport (>= 5.0.0)
  ffi (1.16.3)
- i18n (1.14.1)
+ i18n (1.14.4)
  concurrent-ruby (~> 1.0)
- karafka-core (2.2.7)
- concurrent-ruby (>= 1.1)
- karafka-rdkafka (>= 0.13.9, < 0.15.0)
- karafka-rdkafka (0.14.7)
+ karafka-core (2.4.0.alpha1)
+ karafka-rdkafka (>= 0.15.0.alpha1, < 0.16.0)
+ karafka-rdkafka (0.15.0.alpha1)
  ffi (~> 1.15)
  mini_portile2 (~> 2.6)
  rake (> 12)
  mini_portile2 (2.8.5)
- minitest (5.21.2)
+ minitest (5.22.2)
  mutex_m (0.2.0)
  rake (13.1.0)
- rspec (3.12.0)
- rspec-core (~> 3.12.0)
- rspec-expectations (~> 3.12.0)
- rspec-mocks (~> 3.12.0)
- rspec-core (3.12.2)
- rspec-support (~> 3.12.0)
- rspec-expectations (3.12.3)
+ rspec (3.13.0)
+ rspec-core (~> 3.13.0)
+ rspec-expectations (~> 3.13.0)
+ rspec-mocks (~> 3.13.0)
+ rspec-core (3.13.0)
+ rspec-support (~> 3.13.0)
+ rspec-expectations (3.13.0)
  diff-lcs (>= 1.2.0, < 2.0)
- rspec-support (~> 3.12.0)
- rspec-mocks (3.12.6)
+ rspec-support (~> 3.13.0)
+ rspec-mocks (3.13.0)
  diff-lcs (>= 1.2.0, < 2.0)
- rspec-support (~> 3.12.0)
- rspec-support (3.12.1)
- ruby2_keywords (0.0.5)
+ rspec-support (~> 3.13.0)
+ rspec-support (3.13.1)
  simplecov (0.22.0)
  docile (~> 1.1)
  simplecov-html (~> 0.11)
@@ -65,10 +62,10 @@ GEM
  simplecov_json_formatter (0.1.4)
  tzinfo (2.0.6)
  concurrent-ruby (~> 1.0)
- zeitwerk (2.6.12)
+ zeitwerk (2.6.13)
 
  PLATFORMS
- ruby
+ arm64-darwin-22
  x86_64-linux
 
  DEPENDENCIES
data/docker-compose.yml CHANGED
@@ -3,7 +3,7 @@ version: '2'
  services:
  kafka:
  container_name: kafka
- image: confluentinc/cp-kafka:7.5.3
+ image: confluentinc/cp-kafka:7.6.0
 
  ports:
  - 9092:9092
data/lib/waterdrop/config.rb CHANGED
@@ -12,7 +12,12 @@ module WaterDrop
  'client.id': 'waterdrop',
  # emit librdkafka statistics every five seconds. This is used in instrumentation.
  # When disabled, part of the metrics will not be published and available.
- 'statistics.interval.ms': 5_000
+ 'statistics.interval.ms': 5_000,
+ # We set it to a value lower than `max_wait_time` to have a final verdict upon sync
+ # delivery
+ 'message.timeout.ms': 55_000,
+ # Must be less than or equal to the `message.timeout.ms` default
+ 'transaction.timeout.ms': 45_000
  }.freeze
 
  private_constant :KAFKA_DEFAULTS
@@ -44,12 +49,12 @@ module WaterDrop
  # option [Integer] max payload size allowed for delivery to Kafka
  setting :max_payload_size, default: 1_000_012
  # option [Integer] Wait that long for the delivery report or raise an error if this takes
- # longer than the timeout.
- setting :max_wait_timeout, default: 5
+ # longer than the timeout ms.
+ setting :max_wait_timeout, default: 60_000
  # option [Numeric] how long should we wait between re-checks on the availability of the
  # delivery report. In really robust systems, this describes the min-delivery time
  # for a single sync message when produced in isolation
- setting :wait_timeout, default: 0.005 # 5 milliseconds
+ setting :wait_timeout, default: 5 # 5 milliseconds
  # option [Boolean] should we upon detecting full librdkafka queue backoff and retry or should
  # we raise an exception.
  # When this is set to `true`, upon full queue, we won't raise an error. There will be error
@@ -60,12 +65,14 @@ module WaterDrop
  # option [Integer] how long (in ms) should we backoff before a retry when the queue is full
  # The retry will happen with the same message and backoff should give us some time to
  # dispatch previously buffered messages.
- setting :wait_backoff_on_queue_full, default: 0.1
- # option [Numeric] how many seconds should we wait with the backoff on queue having space for
+ setting :wait_backoff_on_queue_full, default: 100
+ # option [Numeric] how many ms should we wait with the backoff on queue having space for
  # more messages before re-raising the error.
- setting :wait_timeout_on_queue_full, default: 10
+ setting :wait_timeout_on_queue_full, default: 10_000
+ # option [Boolean] should we instrument non-critical, retryable queue full errors
+ setting :instrument_on_wait_queue_full, default: true
  # option [Numeric] How long to wait before retrying a retryable transaction related error
- setting :wait_backoff_on_transaction_command, default: 0.5
+ setting :wait_backoff_on_transaction_command, default: 500
  # option [Numeric] How many times to retry a retryable transaction related error before
  # giving up
  setting :max_attempts_on_transaction_command, default: 5
data/lib/waterdrop/instrumentation/logger_listener.rb CHANGED
@@ -118,6 +118,13 @@ module WaterDrop
  end
 
  # @param event [Dry::Events::Event] event that happened with the details
+ def on_producer_closing(event)
+ info(event, 'Closing producer')
+ end
+
+ # @param event [Dry::Events::Event] event that happened with the details
+ # @note While this says "Closing producer", it produces a nice message with the time taken:
+ # "Closing producer took 12 ms", indicating it happened in the past.
  def on_producer_closed(event)
  info(event, 'Closing producer')
  end
@@ -180,7 +187,11 @@ module WaterDrop
  # @param event [Dry::Events::Event] event that happened with the details
  # @param log_message [String] message we want to publish
  def info(event, log_message)
- @logger.info("[#{event[:producer_id]}] #{log_message} took #{event[:time]} ms")
+ if event.payload.key?(:time)
+ @logger.info("[#{event[:producer_id]}] #{log_message} took #{event[:time]} ms")
+ else
+ @logger.info("[#{event[:producer_id]}] #{log_message}")
+ end
  end
 
  # @param event [Dry::Events::Event] event that happened with the details
data/lib/waterdrop/instrumentation/notifications.rb CHANGED
@@ -7,6 +7,8 @@ module WaterDrop
  # List of events that we support in the system and to which a monitor client can hook up
  # @note The non-error ones support timestamp benchmarking
  EVENTS = %w[
+ producer.connected
+ producer.closing
  producer.closed
 
  message.produced_async
data/lib/waterdrop/producer/sync.rb CHANGED
@@ -52,8 +52,8 @@ module WaterDrop
  # @return [Array<Rdkafka::Producer::DeliveryReport>] delivery reports
  #
  # @raise [Rdkafka::RdkafkaError] When adding the messages to rdkafka's queue failed
- # @raise [Rdkafka::Producer::WaitTimeoutError] When the timeout has been reached and the
- # some handles are still pending
+ # @raise [Rdkafka::Producer::WaitTimeoutError] When the timeout has been reached and some
+ # handles are still pending
  # @raise [Errors::MessageInvalidError] When any of the provided message details are invalid
  # and the message could not be sent to Kafka
  def produce_many_sync(messages)
data/lib/waterdrop/producer/transactions.rb CHANGED
@@ -132,8 +132,7 @@ module WaterDrop
  client.send_offsets_to_transaction(
  consumer,
  tpl,
- # This setting is at the moment in seconds and we require ms
- @config.max_wait_timeout * 1_000
+ @config.max_wait_timeout
  )
  end
  end
@@ -197,7 +196,7 @@ module WaterDrop
 
  if do_retry
  # Backoff more and more before retries
- sleep(config.wait_backoff_on_transaction_command * attempt)
+ sleep((config.wait_backoff_on_transaction_command / 1_000.0) * attempt)
 
  retry
  end
data/lib/waterdrop/producer.rb CHANGED
@@ -117,6 +117,7 @@ module WaterDrop
  )
 
  @status.connected!
+ @monitor.instrument('producer.connected', producer_id: id)
  end
 
  @client
@@ -125,9 +126,10 @@ module WaterDrop
  # Fetches and caches the partition count of a topic
  #
  # @param topic [String] topic for which we want to get the number of partitions
- # @return [Integer] number of partitions of the requested topic
+ # @return [Integer] number of partitions of the requested topic or -1 if the number could
+ # not be retrieved.
  #
- # @note It uses the underlying `rdkafka-ruby` partition count cache.
+ # @note It uses the underlying `rdkafka-ruby` partition count fetch and cache.
  def partition_count(topic)
  client.partition_count(topic.to_s)
  end
@@ -159,6 +161,7 @@ module WaterDrop
  producer_id: id
  ) do
  @status.closing!
+ @monitor.instrument('producer.closing', producer_id: id)
 
  # No need for auto-gc if everything got closed by us
  # This should be used only in case a producer was not closed properly and forgotten
@@ -185,8 +188,7 @@ module WaterDrop
  # The linger.ms time will be ignored for the duration of the call,
  # queued messages will be sent to the broker as soon as possible.
  begin
- # `max_wait_timeout` is in seconds at the moment
- @client.flush(@config.max_wait_timeout * 1_000) unless @client.closed?
+ @client.flush(@config.max_wait_timeout) unless @client.closed?
  # We can safely ignore timeouts here because any left outstanding requests
  # will anyhow force wait on close if not forced.
  # If forced, we will purge the queue and just close
@@ -247,8 +249,9 @@ module WaterDrop
  # @param handler [Rdkafka::Producer::DeliveryHandle]
  def wait(handler)
  handler.wait(
- max_wait_timeout: @config.max_wait_timeout,
- wait_timeout: @config.wait_timeout
+ # rdkafka max_wait_timeout is in seconds and we use ms
+ max_wait_timeout: @config.max_wait_timeout / 1_000.0,
+ wait_timeout: @config.wait_timeout / 1_000.0
  )
  end
 
@@ -283,7 +286,7 @@ module WaterDrop
  # If we're running for longer than the timeout, we need to re-raise the queue full.
  # This will prevent a situation where the cluster is down forever and we just retry and retry
  # in an infinite loop, effectively hanging the processing
- raise unless monotonic_now - produce_time < @config.wait_timeout_on_queue_full * 1_000
+ raise unless monotonic_now - produce_time < @config.wait_timeout_on_queue_full
 
  label = caller_locations(2, 1)[0].label.split(' ').last
 
@@ -294,22 +297,28 @@ module WaterDrop
  begin
  raise Errors::ProduceError, e.inspect
  rescue Errors::ProduceError => e
- # We want to instrument on this event even when we restart it.
- # The reason is simple: instrumentation and visibility.
- # We can recover from this, but despite that we should be able to instrument this.
- # If this type of event happens too often, it may indicate that the buffer settings are not
- # well configured.
- @monitor.instrument(
- 'error.occurred',
- producer_id: id,
- message: message,
- error: e,
- type: "message.#{label}"
- )
+ # Users can configure this because in pipe-like flows with high throughput, queue full with
+ # retry may be used as a throttling system that will backoff and wait.
+ # In such scenarios this error notification can be removed and, as long as queue full is
+ # retryable, it will not be raised as an error.
+ if @config.instrument_on_wait_queue_full
+ # We want to instrument on this event even when we restart it.
+ # The reason is simple: instrumentation and visibility.
+ # We can recover from this, but despite that we should be able to instrument this.
+ # If this type of event happens too often, it may indicate that the buffer settings are
+ # not well configured.
+ @monitor.instrument(
+ 'error.occurred',
+ producer_id: id,
+ message: message,
+ error: e,
+ type: "message.#{label}"
+ )
+ end
 
  # We do not poll the producer because polling happens in a background thread
  # It also should not be a frequent case (queue full), hence it's ok to just throttle.
- sleep @config.wait_backoff_on_queue_full
+ sleep @config.wait_backoff_on_queue_full / 1_000.0
 
  @operations_in_progress.decrement
data/lib/waterdrop/version.rb CHANGED
@@ -3,5 +3,5 @@
 
  # WaterDrop library
  module WaterDrop
  # Current WaterDrop version
- VERSION = '2.6.13'
+ VERSION = '2.7.0.alpha1'
  end
data/waterdrop.gemspec CHANGED
@@ -16,9 +16,11 @@ Gem::Specification.new do |spec|
  spec.description = spec.summary
  spec.license = 'MIT'
 
- spec.add_dependency 'karafka-core', '>= 2.2.3', '< 3.0.0'
+ spec.add_dependency 'karafka-core', '>= 2.4.0.alpha1', '< 3.0.0'
  spec.add_dependency 'zeitwerk', '~> 2.3'
 
+ spec.required_ruby_version = '>= 3.0.0'
+
  if $PROGRAM_NAME.end_with?('gem')
  spec.signing_key = File.expand_path('~/.ssh/gem-private_key.pem')
  end
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: waterdrop
  version: !ruby/object:Gem::Version
- version: 2.6.13
+ version: 2.7.0.alpha1
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -35,7 +35,7 @@ cert_chain:
  AnG1dJU+yL2BK7vaVytLTstJME5mepSZ46qqIJXMuWob/YPDmVaBF39TDSG9e34s
  msG3BiCqgOgHAnL23+CN3Rt8MsuRfEtoTKpJVcCfoEoNHOkc
  -----END CERTIFICATE-----
- date: 2024-01-29 00:00:00.000000000 Z
+ date: 2024-03-17 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: karafka-core
40
40
  - !ruby/object:Gem::Dependency
41
41
  name: karafka-core
@@ -43,7 +43,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.2.3
+ version: 2.4.0.alpha1
  - - "<"
  - !ruby/object:Gem::Version
  version: 3.0.0
@@ -53,7 +53,7 @@ dependencies:
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.2.3
+ version: 2.4.0.alpha1
  - - "<"
  - !ruby/object:Gem::Version
  version: 3.0.0
@@ -144,7 +144,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: '0'
+ version: 3.0.0
  required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
metadata.gz.sig CHANGED
Binary file