karafka 2.0.31 → 2.0.33

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: a4c598dcc6414c2a24f50452d5379c2a0aa26d7d1c7fd8d1b07f2af19a6e8125
- data.tar.gz: 184a8b1e7bb1f62672f90904ce6c3fd5e2da5f1868bcb2a971a1e7872e8c3252
+ metadata.gz: 511cc39ce92e82706fcd6ac9c54b3416c7908cf5e9773ee15b972219046778ca
+ data.tar.gz: 737059dc4f9de33577f4883590f607da1ee4ee330ff9f5c18bd42dfe261c567d
  SHA512:
- metadata.gz: a9d382efa4846f4419d86d1fa2742f663e2c255b203612b6810ab5bc44aafc8c9a1c6c299330889c0cc54cb952235ddbb8d45a3c2ed30c7132b8522fc46ef8d1
- data.tar.gz: 5a5907e0e7b958a6277785ef4254ac5d8cf7f563e371fdf3fe7705719164362858b46efae62dda6c7a1f02646ffd57d05437fde496ebf2b7e36bdcb8ac57865d
+ metadata.gz: d91e5de5bddd7b9ff2e423c986eda7f8e795528dc5ffb7a3694c82a2a096f8b3ee31679ac221bb0da4cd6efc7efa55ab195f081fbd2cf393df7d21549d950cd9
+ data.tar.gz: dbea8026eb3385b6ee14a749b3bc0187f4e91e9e66b60f2e8ac74fcdd409ad238b8e6f7dd5e5d5fbb5f0cc36299795ed63b794f1dd4a2a2b53e26776a7523876
checksums.yaml.gz.sig CHANGED
Binary file
data/CHANGELOG.md CHANGED
@@ -1,6 +1,47 @@
  # Karafka framework changelog

- ## 2.0.31 (2022-02-12)
+ ## 2.0.33 (2023-02-24)
+ - **[Feature]** Support `perform_all_later` in the ActiveJob adapter for Rails `7.1+`
+ - **[Feature]** Introduce the ability to assign and re-assign tags in consumer instances. This can be used for extra instrumentation that is context aware.
+ - **[Feature]** Introduce the ability to assign and re-assign tags to the `Karafka::Process`.
+ - [Improvement] When using the `ActiveJob` adapter, automatically tag jobs with the name of the `ActiveJob` class that is running inside of the `ActiveJob` consumer.
+ - [Improvement] Make the `::Karafka::Instrumentation::Notifications::EVENTS` list public for anyone wanting to re-bind those into a different notification bus.
+ - [Improvement] Set `fetch.message.max.bytes` for `Karafka::Admin` to `5MB` to make sure that all data is fetched correctly for the Web UI under heavy load (many consumers).
+ - [Improvement] Introduce a `strict_topics_namespacing` config option to enable/disable the strict topic naming validations. This can be useful when working with pre-existing topics which we cannot or do not want to rename.
+ - [Fix] Karafka monitor is prematurely cached (#1314)
+
+ ### Upgrade notes
+
+ Since `#tags` were introduced on consumers, the `#tags` method is now part of the consumers API.
+
+ This means that, in case you were using a method called `#tags` in your consumers, you will have to rename it:
+
+ ```ruby
+ class EventsConsumer < ApplicationConsumer
+   def consume
+     messages.each do |message|
+       tags << message.payload.tag
+     end
+
+     tags.each { |tag| puts tag }
+   end
+
+   private
+
+   # This will collide with the tagging API
+   # This NEEDS to be renamed not to collide with the `#tags` method provided by the consumers API.
+   def tags
+     @tags ||= Set.new
+   end
+ end
+ ```
+
+ ## 2.0.32 (2023-02-13)
+ - [Fix] Many non-existing topic subscriptions propagate poll errors beyond the client
+ - [Improvement] Ignore `unknown_topic_or_part` errors in dev when `allow.auto.create.topics` is on
+ - [Improvement] Optimize temporary errors handling in polling for a better backoff policy
+
+ ## 2.0.31 (2023-02-12)
  - [Feature] Allow for adding partitions via the `Admin#create_partitions` API.
  - [Fix] Do not ignore admin errors upon invalid configuration (#1254)
  - [Fix] Topic name validation (#1300) - CandyFet
@@ -8,7 +49,7 @@
  - [Maintenance] Require `karafka-core` >= `2.0.11` and switch to the shared RSpec locator.
  - [Maintenance] Require `karafka-rdkafka` >= `0.12.1`

- ## 2.0.30 (2022-01-31)
+ ## 2.0.30 (2023-01-31)
  - [Improvement] Alias `--consumer-groups` with `--include-consumer-groups`
  - [Improvement] Alias `--subscription-groups` with `--include-subscription-groups`
  - [Improvement] Alias `--topics` with `--include-topics`
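
The consumer tagging feature from `2.0.33` can be used for context-aware instrumentation; a minimal sketch assuming the `tags.add(label, value)` API visible in the consumer diffs below (the consumer class name is hypothetical):

```ruby
# Hypothetical consumer using the 2.0.33 tagging API
class OrdersConsumer < ApplicationConsumer
  def consume
    # Adding a tag under an already used label replaces its previous value,
    # which is how tags can be re-assigned at runtime
    tags.add(:processing_mode, messages.size > 100 ? 'batch' : 'trickle')

    messages.each do |message|
      puts message.payload
    end
  end
end
```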
data/Gemfile.lock CHANGED
@@ -1,8 +1,8 @@
  PATH
    remote: .
    specs:
-     karafka (2.0.31)
-       karafka-core (>= 2.0.11, < 3.0.0)
+     karafka (2.0.33)
+       karafka-core (>= 2.0.12, < 3.0.0)
        thor (>= 0.20)
        waterdrop (>= 2.4.10, < 3.0.0)
        zeitwerk (~> 2.3)
@@ -29,7 +29,7 @@ GEM
      activesupport (>= 5.0)
    i18n (1.12.0)
      concurrent-ruby (~> 1.0)
-   karafka-core (2.0.11)
+   karafka-core (2.0.12)
      concurrent-ruby (>= 1.1)
      karafka-rdkafka (>= 0.12.1)
    karafka-rdkafka (0.12.1)
@@ -64,10 +64,9 @@ GEM
    waterdrop (2.4.10)
      karafka-core (>= 2.0.9, < 3.0.0)
      zeitwerk (~> 2.3)
-   zeitwerk (2.6.6)
+   zeitwerk (2.6.7)

  PLATFORMS
-   arm64-darwin-21
    x86_64-linux

  DEPENDENCIES
@@ -45,18 +45,23 @@ en:
    dead_letter_queue.topic_format: 'needs to be a string with a Kafka accepted format'
    dead_letter_queue.active_format: needs to be either true or false
    active_format: needs to be either true or false
-   inconsistent_namespacing: needs to be consistent namespacing style
+   inconsistent_namespacing: |
+     needs to be consistent namespacing style
+     disable this validation by setting config.strict_topics_namespacing to false

  consumer_group:
    missing: needs to be present
    topics_names_not_unique: all topic names within a single consumer group must be unique
-   topics_namespaced_names_not_unique: all topic names within a single consumer group must be unique considering namespacing styles
    id_format: 'needs to be a string with a Kafka accepted format'
    topics_format: needs to be a non-empty array
+   topics_namespaced_names_not_unique: |
+     all topic names within a single consumer group must be unique considering namespacing styles
+     disable this validation by setting config.strict_topics_namespacing to false

  job_options:
    missing: needs to be present
    dispatch_method_format: needs to be either :produce_async or :produce_sync
+   dispatch_many_method_format: needs to be either :produce_many_async or :produce_many_sync
    partitioner_format: 'needs to respond to #call'
    partition_key_type_format: 'needs to be either :key or :partition_key'

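Both new messages point at the `strict_topics_namespacing` setting introduced in this release. A minimal sketch of opting out, assuming a standard `karafka.rb` boot file (the app class name is hypothetical):

```ruby
# frozen_string_literal: true

class ExampleApp < Karafka::App
  setup do |config|
    config.client_id = 'example_app'
    # Skip the namespacing consistency validations, e.g. when consuming
    # pre-existing topics that mix the `.` and `_` naming styles
    config.strict_topics_namespacing = false
  end
end
```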
data/karafka.gemspec CHANGED
@@ -21,7 +21,7 @@ Gem::Specification.new do |spec|
    without having to focus on things that are not your business domain.
  DESC

- spec.add_dependency 'karafka-core', '>= 2.0.11', '< 3.0.0'
+ spec.add_dependency 'karafka-core', '>= 2.0.12', '< 3.0.0'
  spec.add_dependency 'thor', '>= 0.20'
  spec.add_dependency 'waterdrop', '>= 2.4.10', '< 3.0.0'
  spec.add_dependency 'zeitwerk', '~> 2.3'
@@ -11,7 +11,13 @@ module ActiveJob
  #
  # @param job [Object] job that should be enqueued
  def enqueue(job)
-   ::Karafka::App.config.internal.active_job.dispatcher.call(job)
+   ::Karafka::App.config.internal.active_job.dispatcher.dispatch(job)
+ end
+
+ # Enqueues multiple jobs in one go
+ # @param jobs [Array<Object>] jobs that we want to enqueue
+ def enqueue_all(jobs)
+   ::Karafka::App.config.internal.active_job.dispatcher.dispatch_many(jobs)
  end

  # Raises info that the Karafka backend does not support scheduling jobs
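
`#enqueue_all` is the hook that Rails `7.1+` invokes for `ActiveJob.perform_all_later`. A sketch of how this surfaces on the application side (the job class is hypothetical):

```ruby
# Hypothetical job routed through the karafka ActiveJob adapter
class WelcomeEmailJob < ActiveJob::Base
  self.queue_adapter = :karafka
  queue_as :emails

  def perform(user_id)
    # deliver the email here
  end
end

# Rails 7.1+ bulk API; the jobs end up in the adapter's #enqueue_all,
# which dispatches them to Kafka in one go via the dispatcher's #dispatch_many
ActiveJob.perform_all_later(WelcomeEmailJob.new(1), WelcomeEmailJob.new(2))
```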
@@ -12,12 +12,14 @@ module Karafka
  messages.each do |message|
    break if Karafka::App.stopping?

-   ::ActiveJob::Base.execute(
-     # Technically speaking, we could set this as a deserializer and reference it from the
-     # message instead of using `#raw_payload`. This is not done on purpose, to simplify
-     # the ActiveJob setup here
-     ::ActiveSupport::JSON.decode(message.raw_payload)
-   )
+   # Technically speaking, we could set this as a deserializer and reference it from the
+   # message instead of using `#raw_payload`. This is not done on purpose, to simplify
+   # the ActiveJob setup here
+   job = ::ActiveSupport::JSON.decode(message.raw_payload)
+
+   tags.add(:job_class, job['job_class'])
+
+   ::ActiveJob::Base.execute(job)

    mark_as_consumed(message)
  end
@@ -7,13 +7,14 @@ module Karafka
  # Defaults for dispatching
  # They can be updated by using `#karafka_options` on the job
  DEFAULTS = {
-   dispatch_method: :produce_async
+   dispatch_method: :produce_async,
+   dispatch_many_method: :produce_many_async
  }.freeze

  private_constant :DEFAULTS

  # @param job [ActiveJob::Base] job
- def call(job)
+ def dispatch(job)
    ::Karafka.producer.public_send(
      fetch_option(job, :dispatch_method, DEFAULTS),
      topic: job.queue_name,
@@ -21,6 +22,30 @@ module Karafka
    )
  end

+ # Bulk dispatches multiple jobs using the Rails 7.1+ API
+ # @param jobs [Array<ActiveJob::Base>] jobs we want to dispatch
+ def dispatch_many(jobs)
+   # Group jobs by their desired dispatch method
+   # It can be configured per job class, so we need to make sure we divide them
+   dispatches = Hash.new { |hash, key| hash[key] = [] }
+
+   jobs.each do |job|
+     d_method = fetch_option(job, :dispatch_many_method, DEFAULTS)
+
+     dispatches[d_method] << {
+       topic: job.queue_name,
+       payload: ::ActiveSupport::JSON.encode(job.serialize)
+     }
+   end
+
+   dispatches.each do |type, messages|
+     ::Karafka.producer.public_send(
+       type,
+       messages
+     )
+   end
+ end
+
  private

  # @param job [ActiveJob::Base] job
@@ -15,7 +15,18 @@ module Karafka
  ).fetch('en').fetch('validations').fetch('job_options')
  end

- optional(:dispatch_method) { |val| %i[produce_async produce_sync].include?(val) }
+ optional(:dispatch_method) do |val|
+   %i[
+     produce_async
+     produce_sync
+   ].include?(val)
+ end
+ optional(:dispatch_many_method) do |val|
+   %i[
+     produce_many_async
+     produce_many_sync
+   ].include?(val)
+ end
  end
  end
  end
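
The new `dispatch_many_method` option is validated alongside `dispatch_method` and can be tuned per job class via `karafka_options`; a sketch (the job class name is hypothetical):

```ruby
# Hypothetical job opting into synchronous delivery for both single and bulk dispatch
class AuditJob < ActiveJob::Base
  self.queue_adapter = :karafka
  queue_as :audits

  karafka_options(
    dispatch_method: :produce_sync,
    dispatch_many_method: :produce_many_sync
  )

  def perform(event)
    # persist the audit event here
  end
end
```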
data/lib/karafka/admin.rb CHANGED
@@ -26,7 +26,9 @@ module Karafka
  'group.id': 'karafka_admin',
  # We want to know when there is no more data, not to end up with an endless loop
  'enable.partition.eof': true,
- 'statistics.interval.ms': 0
+ 'statistics.interval.ms': 0,
+ # Fetch at most 5 MB when using admin
+ 'fetch.message.max.bytes': 5 * 1_048_576
  }.freeze

  private_constant :Topic, :CONFIG_DEFAULTS, :MAX_WAIT_TIMEOUT, :MAX_ATTEMPTS
@@ -4,6 +4,9 @@
  module Karafka
    # Base consumer from which all Karafka consumers should inherit
    class BaseConsumer
+     # Allow for consumer instance tagging for instrumentation
+     include ::Karafka::Core::Taggable
+
      # @return [String] id of the current consumer
      attr_reader :id
      # @return [Karafka::Routing::Topic] topic to which a given consumer is subscribed
@@ -380,6 +380,14 @@ module Karafka
  when :network_exception # 13
    early_report = true
  when :transport # -195
+   early_report = true
+ # @see
+ # https://github.com/confluentinc/confluent-kafka-dotnet/issues/1366#issuecomment-821842990
+ # This will be raised each time poll detects a non-existing topic. When auto creation is
+ # on, we can safely ignore it
+ when :unknown_topic_or_part # 3
+   return nil if @subscription_group.kafka[:'allow.auto.create.topics']
+
    early_report = true
  end

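The early return only triggers when `allow.auto.create.topics` is enabled in the kafka scope; a sketch of a development setup that relies on it (the app class name is hypothetical):

```ruby
class ExampleApp < Karafka::App
  setup do |config|
    config.kafka = {
      'bootstrap.servers': '127.0.0.1:9092',
      # With auto topic creation on, :unknown_topic_or_part poll errors raised
      # before a topic exists are silently ignored, as in the branch above
      'allow.auto.create.topics': true
    }
  end
end
```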
@@ -423,8 +431,7 @@ module Karafka
  Instrumentation::Callbacks::Statistics.new(
    @subscription_group.id,
    @subscription_group.consumer_group_id,
-   @name,
-   ::Karafka::App.config.monitor
+   @name
    )
  )

@@ -434,8 +441,7 @@ module Karafka
  Instrumentation::Callbacks::Error.new(
    @subscription_group.id,
    @subscription_group.consumer_group_id,
-   @name,
-   ::Karafka::App.config.monitor
+   @name
    )
  )

@@ -27,6 +27,7 @@ module Karafka

  virtual do |data, errors|
    next unless errors.empty?
+   next unless ::Karafka::App.config.strict_topics_namespacing

    names = data.fetch(:topics).map { |topic| topic[:name] }
    names_hash = names.each_with_object({}) { |n, h| h[n] = true }
@@ -51,6 +51,7 @@ module Karafka

  virtual do |data, errors|
    next unless errors.empty?
+   next unless ::Karafka::App.config.strict_topics_namespacing

    value = data.fetch(:name)
    namespacing_chars_count = value.chars.find_all { |c| ['.', '_'].include?(c) }.uniq.count
@@ -9,12 +9,10 @@ module Karafka
  # @param subscription_group_id [String] id of the current subscription group instance
  # @param consumer_group_id [String] id of the current consumer group
  # @param client_name [String] rdkafka client name
- # @param monitor [WaterDrop::Instrumentation::Monitor] monitor we are using
- def initialize(subscription_group_id, consumer_group_id, client_name, monitor)
+ def initialize(subscription_group_id, consumer_group_id, client_name)
    @subscription_group_id = subscription_group_id
    @consumer_group_id = consumer_group_id
    @client_name = client_name
-   @monitor = monitor
  end

  # Runs the instrumentation monitor with error
@@ -26,7 +24,7 @@ module Karafka
  # Same as with statistics (more explanation there)
  return unless @client_name == client_name

- @monitor.instrument(
+ ::Karafka.monitor.instrument(
    'error.occurred',
    caller: self,
    subscription_group_id: @subscription_group_id,
@@ -10,12 +10,10 @@ module Karafka
  # @param subscription_group_id [String] id of the current subscription group
  # @param consumer_group_id [String] id of the current consumer group
  # @param client_name [String] rdkafka client name
- # @param monitor [WaterDrop::Instrumentation::Monitor] monitor we are using
- def initialize(subscription_group_id, consumer_group_id, client_name, monitor)
+ def initialize(subscription_group_id, consumer_group_id, client_name)
    @subscription_group_id = subscription_group_id
    @consumer_group_id = consumer_group_id
    @client_name = client_name
-   @monitor = monitor
    @statistics_decorator = ::Karafka::Core::Monitoring::StatisticsDecorator.new
  end

@@ -28,7 +26,7 @@ module Karafka
  # all the time.
  return unless @client_name == statistics['name']

- @monitor.instrument(
+ ::Karafka.monitor.instrument(
    'statistics.emitted',
    subscription_group_id: @subscription_group_id,
    consumer_group_id: @consumer_group_id,
@@ -54,8 +54,6 @@ module Karafka
    error.occurred
  ].freeze

- private_constant :EVENTS
-
  # @return [Karafka::Instrumentation::Monitor] monitor instance for system instrumentation
  def initialize
    super
@@ -31,9 +31,11 @@ module Karafka
  break if revoked?
  break if Karafka::App.stopping?

- ::ActiveJob::Base.execute(
-   ::ActiveSupport::JSON.decode(message.raw_payload)
- )
+ job = ::ActiveSupport::JSON.decode(message.raw_payload)
+
+ tags.add(:job_class, job['job_class'])
+
+ ::ActiveJob::Base.execute(job)

  # We cannot mark jobs as done after each job if there are virtual partitions. Otherwise
  # this could create random markings.
@@ -23,6 +23,7 @@ module Karafka
  # They can be updated by using `#karafka_options` on the job
  DEFAULTS = {
    dispatch_method: :produce_async,
+   dispatch_many_method: :produce_many_async,
    # We don't create a dummy proc based partitioner as we would have to evaluate it with
    # each job.
    partitioner: nil,
@@ -33,7 +34,7 @@ module Karafka
  private_constant :DEFAULTS

  # @param job [ActiveJob::Base] job
- def call(job)
+ def dispatch(job)
    ::Karafka.producer.public_send(
      fetch_option(job, :dispatch_method, DEFAULTS),
      dispatch_details(job).merge!(
@@ -43,6 +44,28 @@ module Karafka
    )
  end

+ # Bulk dispatches multiple jobs using the Rails 7.1+ API
+ # @param jobs [Array<ActiveJob::Base>] jobs we want to dispatch
+ def dispatch_many(jobs)
+   dispatches = Hash.new { |hash, key| hash[key] = [] }
+
+   jobs.each do |job|
+     d_method = fetch_option(job, :dispatch_many_method, DEFAULTS)
+
+     dispatches[d_method] << dispatch_details(job).merge!(
+       topic: job.queue_name,
+       payload: ::ActiveSupport::JSON.encode(job.serialize)
+     )
+   end
+
+   dispatches.each do |type, messages|
+     ::Karafka.producer.public_send(
+       type,
+       messages
+     )
+   end
+ end
+
  private

  # @param job [ActiveJob::Base] job instance
@@ -25,9 +25,20 @@ module Karafka
  ).fetch('en').fetch('validations').fetch('job_options')
  end

- optional(:dispatch_method) { |val| %i[produce_async produce_sync].include?(val) }
  optional(:partitioner) { |val| val.respond_to?(:call) }
  optional(:partition_key_type) { |val| %i[key partition_key].include?(val) }
+ optional(:dispatch_method) do |val|
+   %i[
+     produce_async
+     produce_sync
+   ].include?(val)
+ end
+ optional(:dispatch_many_method) do |val|
+   %i[
+     produce_many_async
+     produce_many_sync
+   ].include?(val)
+ end
  end
  end
  end
@@ -4,6 +4,9 @@ module Karafka
  # Class used to catch signals from the Ruby Signal class in order to manage Karafka stop
  # @note There might be only one process - this class is a singleton
  class Process
+   # Allow for process tagging for instrumentation
+   extend ::Karafka::Core::Taggable
+
    # Signal types that we handle
    HANDLED_SIGNALS = %i[
      SIGINT
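
Because `Taggable` is `extend`ed here, the tags hang off the `Karafka::Process` class itself; a sketch assuming the same `tags.add(label, value)` API used by the consumers above:

```ruby
# Tag the whole process, e.g. from an initializer; the values become
# available to instrumentation listeners
Karafka::Process.tags.add(:environment, ENV.fetch('KARAFKA_ENV', 'development'))

# Re-adding under the same label replaces the previous value,
# which is how process tags can be re-assigned
Karafka::Process.tags.add(:environment, 'production')
```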
@@ -89,6 +89,11 @@ module Karafka
  # option [::WaterDrop::Producer, nil]
  # Unless configured, will be created once Karafka is configured based on user Karafka setup
  setting :producer, default: nil
+ # option [Boolean] when set to true, Karafka will ensure that the routing topic naming
+ # convention is strict
+ # Disabling this may be needed in scenarios where we do not have control over topic names
+ # and/or we work with existing systems where we cannot change topic names.
+ setting :strict_topics_namespacing, default: true

  # rdkafka default options
  # @see https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
@@ -168,6 +173,9 @@ module Karafka

  configure_components

+ # Refreshes the cached references that might have been changed by the config
+ ::Karafka.refresh!
+
  # Runs things that need to be executed after the config is defined and all the components
  # are also configured
  Pro::Loader.post_setup(config) if Karafka.pro?
@@ -51,6 +51,9 @@ module Karafka
  # Sleeps for an amount of time matching the attempt, so we sleep more with each attempt
  # in case of a retry.
  def backoff
+   # The backoff time should not be counted against the remaining time computation;
+   # otherwise polling would finish quickly, never backing off beyond a small value
+   # because of the sleep
+   @remaining += backoff_interval
    # Sleep requires seconds, not ms
    sleep(backoff_interval / 1_000.0)
  end
@@ -3,5 +3,5 @@
  # Main module namespace
  module Karafka
    # Current Karafka version
-   VERSION = '2.0.31'
+   VERSION = '2.0.33'
  end
data/lib/karafka.rb CHANGED
@@ -95,6 +95,19 @@ module Karafka
  def boot_file
    Pathname.new(ENV['KARAFKA_BOOT_FILE'] || File.join(Karafka.root, 'karafka.rb'))
  end
+
+ # We need to be able to overwrite both the monitor and the logger after the configuration,
+ # in case they were changed, because those two (with defaults) can be used prior to the
+ # setup and their state change should be reflected in the updated setup
+ #
+ # This method refreshes the things that might have been altered by the configuration
+ def refresh!
+   config = ::Karafka::App.config
+
+   @logger = config.logger
+   @producer = config.producer
+   @monitor = config.monitor
+ end
  end
end

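This is the fix for the prematurely cached monitor (#1314): `Karafka.monitor` memoizes a default instance that may be used before setup, so `refresh!` re-reads it after configuration. A sketch of the behavior it restores (the monitor class name is hypothetical):

```ruby
# Hypothetical custom monitor
class AppMonitor < Karafka::Instrumentation::Monitor
end

class ExampleApp < Karafka::App
  setup do |config|
    config.monitor = AppMonitor.new
  end
end

# Since setup now calls ::Karafka.refresh!, the memoized reference reflects
# the configured monitor instead of the default one cached before setup ran
Karafka.monitor # => #<AppMonitor ...>
```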
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: karafka
  version: !ruby/object:Gem::Version
-   version: 2.0.31
+   version: 2.0.33
  platform: ruby
  authors:
  - Maciej Mensfeld
@@ -35,7 +35,7 @@ cert_chain:
  Qf04B9ceLUaC4fPVEz10FyobjaFoY4i32xRto3XnrzeAgfEe4swLq8bQsR3w/EF3
  MGU0FeSV2Yj7Xc2x/7BzLK8xQn5l7Yy75iPF+KP3vVmDHnNl
  -----END CERTIFICATE-----
- date: 2023-02-13 00:00:00.000000000 Z
+ date: 2023-02-24 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: karafka-core
@@ -43,7 +43,7 @@ dependencies:
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
-     version: 2.0.11
+     version: 2.0.12
  - - "<"
    - !ruby/object:Gem::Version
      version: 3.0.0
@@ -53,7 +53,7 @@ dependencies:
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
-     version: 2.0.11
+     version: 2.0.12
  - - "<"
    - !ruby/object:Gem::Version
      version: 3.0.0
metadata.gz.sig CHANGED
Binary file