karafka-rdkafka 0.13.6 → 0.13.8

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 4da2961103e92a94c36f605359cc5380ffa6d740cdc667329797ce9345caa663
-   data.tar.gz: 419aff211c5bfe35c244549fa14e042e92a39ad8583af2b8844283b7f1dca5b0
+   metadata.gz: 4082c381a9d131273cc005b61b06f0e4c73b27044213d730fa5c7faeec606e07
+   data.tar.gz: 06754bdba16fc3feaf648670e766cdbbc60342b1a43902ef08e42a5acfd4b2ac
  SHA512:
-   metadata.gz: c0b0d1069163ece1dcf5734ebc80e952a86c3eccc96885f856dc3f4569a0aaddc12db28fda7927d4c360b6f2ba0078a782f7ee6582554159c0e0b80c975e03bf
-   data.tar.gz: 2e3ba6a6ef850bafa305f2d9f2a9d3b4268fedbafc50f2b5ce2fedffcb4bb018b8a1ed5b1ce1b3a1efbad51621a2ad74d448641774d93879da628c06d4a1260c
+   metadata.gz: 89c0b5078f97c31d0e209ebbf13279a2d079aef35f7b4c2a4bf92cac17a69ccfed1769f84d1d06ca734c9a95174b0b5fa4fa9b6933f63e30fe71c8493bda4ed1
+   data.tar.gz: 490a160689be7c4e261b2b8ff1ffdbcb5aec53cfe1dd0acabd0da33eff708cb7a1d09afe3c3a6752fc9a6345c005f1f0919604035a97240c5f9ed120870e802c
checksums.yaml.gz.sig CHANGED
Binary file
data/.github/workflows/ci.yml CHANGED
@@ -1,6 +1,8 @@
  name: ci
 
- concurrency: ci-${{ github.ref }}
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.ref }}
+   cancel-in-progress: true
 
  on:
    pull_request:
@@ -28,12 +30,11 @@ jobs:
          - '3.0.0'
          - '2.7'
          - '2.7.0'
-         - '2.6.8'
        include:
          - ruby: '3.2'
            coverage: 'true'
      steps:
-       - uses: actions/checkout@v3
+       - uses: actions/checkout@v4
        - name: Install package dependencies
          run: "[ -e $APT_DEPS ] || sudo apt-get install -y --no-install-recommends $APT_DEPS"
 
data/CHANGELOG.md CHANGED
@@ -1,3 +1,10 @@
+ # 0.13.8 (2023-10-31)
+ - [Enhancement] Get consumer position (thijsc & mensfeld)
+
+ # 0.13.7 (2023-10-31)
+ - [Change] Drop support for Ruby 2.6 due to incompatibilities in usage of `ObjectSpace::WeakMap`
+ - [Fix] Fix dangling Opaque references.
+
  # 0.13.6 (2023-10-17)
  * **[Feature]** Support transactions API in the producer
  * [Enhancement] Add `raise_response_error` flag to the `Rdkafka::AbstractHandle`.
data/README.md CHANGED
@@ -1,8 +1,13 @@
  # Rdkafka
 
- [![Build Status](https://appsignal.semaphoreci.com/badges/rdkafka-ruby/branches/master.svg?style=shields)](https://appsignal.semaphoreci.com/projects/rdkafka-ruby)
+ [![Build Status](https://github.com/karafka/rdkafka-ruby/actions/workflows/ci.yml/badge.svg)](https://github.com/karafka/rdkafka-ruby/actions/workflows/ci.yml)
  [![Gem Version](https://badge.fury.io/rb/rdkafka.svg)](https://badge.fury.io/rb/rdkafka)
- [![Maintainability](https://api.codeclimate.com/v1/badges/ecb1765f81571cccdb0e/maintainability)](https://codeclimate.com/github/appsignal/rdkafka-ruby/maintainability)
+ [![Join the chat at https://slack.karafka.io](https://raw.githubusercontent.com/karafka/misc/master/slack.svg)](https://slack.karafka.io)
+
+ > [!NOTE]
+ > The `rdkafka-ruby` gem was created and developed by [AppSignal](https://www.appsignal.com/). Their impactful contributions have significantly shaped the Ruby Kafka and Karafka ecosystems. For robust monitoring, we highly recommend AppSignal.
+
+ ---
 
  The `rdkafka` gem is a modern Kafka client library for Ruby based on
  [librdkafka](https://github.com/edenhill/librdkafka/).
@@ -11,13 +16,7 @@ gem and targets Kafka 1.0+ and Ruby versions that are under security or
  active maintenance. We remove Ruby version from our CI builds if they
  become EOL.
 
- `rdkafka` was written because we needed a reliable Ruby client for
- Kafka that supports modern Kafka at [AppSignal](https://appsignal.com).
- We run it in production on very high traffic systems.
-
- This gem only provides a high-level Kafka consumer. If you are running
- an older version of Kafka and/or need the legacy simple consumer we
- suggest using the [Hermann](https://github.com/reiseburo/hermann) gem.
+ `rdkafka` was written because of the need for a reliable Ruby client for Kafka that supports modern Kafka at [AppSignal](https://appsignal.com). AppSignal runs it in production on very high-traffic systems.
 
  The most important pieces of a Kafka client are implemented. We're
  working towards feature completeness, you can track that here:
data/docker-compose.yml CHANGED
@@ -3,7 +3,7 @@ version: '2'
  services:
    kafka:
      container_name: kafka
-     image: confluentinc/cp-kafka:7.5.0
+     image: confluentinc/cp-kafka:7.5.1
 
      ports:
        - 9092:9092
data/karafka-rdkafka.gemspec CHANGED
@@ -4,7 +4,7 @@ require File.expand_path('lib/rdkafka/version', __dir__)
 
  Gem::Specification.new do |gem|
    gem.authors = ['Thijs Cadier']
-   gem.email = ["thijs@appsignal.com"]
+   gem.email = ["contact@karafka.io"]
    gem.description = "Modern Kafka client library for Ruby based on librdkafka"
    gem.summary = "The rdkafka gem is a modern Kafka client library for Ruby based on librdkafka. It wraps the production-ready C client using the ffi gem and targets Kafka 1.0+ and Ruby 2.4+."
    gem.license = 'MIT'
@@ -15,7 +15,7 @@ Gem::Specification.new do |gem|
    gem.name = 'karafka-rdkafka'
    gem.require_paths = ['lib']
    gem.version = Rdkafka::VERSION
-   gem.required_ruby_version = '>= 2.6'
+   gem.required_ruby_version = '>= 2.7'
    gem.extensions = %w(ext/Rakefile)
    gem.cert_chain = %w[certs/cert_chain.pem]
 
data/lib/rdkafka/bindings.rb CHANGED
@@ -208,6 +208,7 @@ module Rdkafka
    attach_function :rd_kafka_resume_partitions, [:pointer, :pointer], :int, blocking: true
    attach_function :rd_kafka_seek, [:pointer, :int32, :int64, :int], :int, blocking: true
    attach_function :rd_kafka_offsets_for_times, [:pointer, :pointer, :int], :int, blocking: true
+   attach_function :rd_kafka_position, [:pointer, :pointer], :int, blocking: true
 
    # Headers
    attach_function :rd_kafka_header_get_all, [:pointer, :size_t, :pointer, :pointer, SizePtr], :int
data/lib/rdkafka/config.rb CHANGED
@@ -14,7 +14,7 @@ module Rdkafka
    # @private
    @@error_callback = nil
    # @private
-   @@opaques = {}
+   @@opaques = ObjectSpace::WeakMap.new
    # @private
    @@log_queue = Queue.new
 
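The switch of `@@opaques` from a plain Hash to `ObjectSpace::WeakMap` is what resolves the "dangling Opaque references" noted in the 0.13.7 changelog: the registry no longer keeps Opaque objects alive by itself. A minimal standalone sketch of the semantics (illustrative only, not the gem's code; the Ruby 2.7 note is our reading of why 2.6 support was dropped):

```ruby
# ObjectSpace::WeakMap references keys and values weakly: entries vanish
# once the key is garbage collected, instead of pinning it forever.
opaques = ObjectSpace::WeakMap.new

key   = Object.new
value = "opaque-payload"
opaques[key] = value

# While the key is still referenced, lookup behaves like a Hash:
opaques[key]      # => "opaque-payload"
opaques.key?(key) # => true

# Ruby 2.7 relaxed WeakMap to also accept immediate objects (such as
# Integers) as keys or values, which Ruby 2.6 rejected -- consistent
# with the gem raising its floor to Ruby 2.7 in this release.
by_id = "by-id"
opaques[42] = by_id
opaques[42] # => "by-id"
```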
@@ -164,7 +164,13 @@ module Rdkafka
      Rdkafka::Bindings.rd_kafka_poll_set_consumer(kafka)
 
      # Return consumer with Kafka client
-     Rdkafka::Consumer.new(Rdkafka::NativeKafka.new(kafka, run_polling_thread: false))
+     Rdkafka::Consumer.new(
+       Rdkafka::NativeKafka.new(
+         kafka,
+         run_polling_thread: false,
+         opaque: opaque
+       )
+     )
    end
 
    # Create a producer with this configuration.
@@ -182,7 +188,14 @@ module Rdkafka
      Rdkafka::Bindings.rd_kafka_conf_set_dr_msg_cb(config, Rdkafka::Callbacks::DeliveryCallbackFunction)
      # Return producer with Kafka client
      partitioner_name = self[:partitioner] || self["partitioner"]
-     Rdkafka::Producer.new(Rdkafka::NativeKafka.new(native_kafka(config, :rd_kafka_producer), run_polling_thread: true), partitioner_name).tap do |producer|
+     Rdkafka::Producer.new(
+       Rdkafka::NativeKafka.new(
+         native_kafka(config, :rd_kafka_producer),
+         run_polling_thread: true,
+         opaque: opaque
+       ),
+       partitioner_name
+     ).tap do |producer|
        opaque.producer = producer
      end
    end
@@ -197,7 +210,13 @@ module Rdkafka
      opaque = Opaque.new
      config = native_config(opaque)
      Rdkafka::Bindings.rd_kafka_conf_set_background_event_cb(config, Rdkafka::Callbacks::BackgroundEventCallbackFunction)
-     Rdkafka::Admin.new(Rdkafka::NativeKafka.new(native_kafka(config, :rd_kafka_producer), run_polling_thread: true))
+     Rdkafka::Admin.new(
+       Rdkafka::NativeKafka.new(
+         native_kafka(config, :rd_kafka_producer),
+         run_polling_thread: true,
+         opaque: opaque
+       )
+     )
    end
 
    # Error that is returned by the underlying rdkafka error if an invalid configuration option is present.
data/lib/rdkafka/consumer.rb CHANGED
@@ -29,6 +29,13 @@ module Rdkafka
      ->(_) { close }
    end
 
+   # @return [String] consumer name
+   def name
+     @name ||= @native_kafka.with_inner do |inner|
+       ::Rdkafka::Bindings.rd_kafka_name(inner)
+     end
+   end
+
    # Close this consumer
    # @return [nil]
    def close
@@ -269,6 +276,32 @@ module Rdkafka
      end
    end
 
+   # Return the current positions (offsets) for topics and partitions.
+   # The offset field of each requested partition will be set to the offset of the last consumed message + 1, or nil in case there was no previous message.
+   #
+   # @param list [TopicPartitionList, nil] The topic with partitions to get the offsets for or nil to use the current subscription.
+   #
+   # @raise [RdkafkaError] When getting the positions fails.
+   #
+   # @return [TopicPartitionList]
+   def position(list=nil)
+     if list.nil?
+       list = assignment
+     elsif !list.is_a?(TopicPartitionList)
+       raise TypeError.new("list has to be nil or a TopicPartitionList")
+     end
+
+     tpl = list.to_native_tpl
+
+     response = @native_kafka.with_inner do |inner|
+       Rdkafka::Bindings.rd_kafka_position(inner, tpl)
+     end
+
+     Rdkafka::RdkafkaError.validate!(response)
+
+     TopicPartitionList.from_native_tpl(tpl)
+   end
+
    # Query broker for low (oldest/beginning) and high (newest/end) offsets for a partition.
    #
    # @param topic [String] The topic to query
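The new `position` mirrors the argument contract of `committed`: `nil` falls back to the current assignment, and anything that is not a `TopicPartitionList` raises `TypeError`. A broker-free sketch of just that guard, with `Array` standing in for `TopicPartitionList` and the method name invented for illustration:

```ruby
# Hypothetical stand-in for Consumer#position's argument handling.
# Array plays the role of Rdkafka::Consumer::TopicPartitionList here.
def resolve_position_list(list, current_assignment)
  if list.nil?
    current_assignment            # nil -> use the current assignment
  elsif !list.is_a?(Array)
    raise TypeError, "list has to be nil or a TopicPartitionList"
  else
    list                          # explicit list wins
  end
end

resolve_position_list(nil, [:assigned])  # => [:assigned]
resolve_position_list([:explicit], [])   # => [:explicit]
# resolve_position_list("list", [])      # raises TypeError
```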
data/lib/rdkafka/native_kafka.rb CHANGED
@@ -4,8 +4,9 @@ module Rdkafka
    # @private
    # A wrapper around a native kafka that polls and cleanly exits
    class NativeKafka
-     def initialize(inner, run_polling_thread:)
+     def initialize(inner, run_polling_thread:, opaque:)
        @inner = inner
+       @opaque = opaque
        # Lock around external access
        @access_mutex = Mutex.new
        # Lock around internal polling
@@ -112,6 +113,7 @@ module Rdkafka
 
        Rdkafka::Bindings.rd_kafka_destroy(@inner)
        @inner = nil
+       @opaque = nil
      end
    end
  end
data/lib/rdkafka/version.rb CHANGED
@@ -1,7 +1,7 @@
  # frozen_string_literal: true
 
  module Rdkafka
-   VERSION = "0.13.6"
+   VERSION = "0.13.8"
    LIBRDKAFKA_VERSION = "2.2.0"
    LIBRDKAFKA_SOURCE_SHA256 = "af9a820cbecbc64115629471df7c7cecd40403b6c34bfdbb9223152677a47226"
  end
data/renovate.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "$schema": "https://docs.renovatebot.com/renovate-schema.json",
+   "extends": [
+     "config:base"
+   ]
+ }
data/spec/rdkafka/consumer_spec.rb CHANGED
@@ -337,8 +337,9 @@ describe Rdkafka::Consumer do
      end
    end
 
-   describe "#commit, #committed and #store_offset" do
-     # Make sure there's a stored offset
+
+   describe "#position, #commit, #committed and #store_offset" do
+     # Make sure there are messages to work with
      let!(:report) do
        producer.produce(
          topic: "consume_test_topic",
@@ -356,29 +357,33 @@ describe Rdkafka::Consumer do
        )
      end
 
-     it "should only accept a topic partition list in committed" do
-       expect {
-         consumer.committed("list")
-       }.to raise_error TypeError
+     describe "#position" do
+       it "should only accept a topic partition list in position if not nil" do
+         expect {
+           consumer.position("list")
+         }.to raise_error TypeError
+       end
      end
 
-     it "should commit in sync mode" do
-       expect {
-         consumer.commit(nil, true)
-       }.not_to raise_error
-     end
+     describe "#committed" do
+       it "should only accept a topic partition list in commit if not nil" do
+         expect {
+           consumer.commit("list")
+         }.to raise_error TypeError
+       end
 
-     it "should only accept a topic partition list in commit if not nil" do
-       expect {
-         consumer.commit("list")
-       }.to raise_error TypeError
+       it "should commit in sync mode" do
+         expect {
+           consumer.commit(nil, true)
+         }.not_to raise_error
+       end
      end
 
      context "with a committed consumer" do
        before :all do
          # Make sure there are some messages.
          handles = []
-         producer = rdkafka_producer_config.producer
+         producer = rdkafka_config.producer
          10.times do
            (0..2).each do |i|
              handles << producer.produce(
@@ -422,31 +427,33 @@ describe Rdkafka::Consumer do
          }.to raise_error(Rdkafka::RdkafkaError)
        end
 
-       it "should fetch the committed offsets for the current assignment" do
-         partitions = consumer.committed.to_h["consume_test_topic"]
-         expect(partitions).not_to be_nil
-         expect(partitions[0].offset).to eq 1
-       end
+       describe "#committed" do
+         it "should fetch the committed offsets for the current assignment" do
+           partitions = consumer.committed.to_h["consume_test_topic"]
+           expect(partitions).not_to be_nil
+           expect(partitions[0].offset).to eq 1
+         end
 
-       it "should fetch the committed offsets for a specified topic partition list" do
-         list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list|
-           list.add_topic("consume_test_topic", [0, 1, 2])
+         it "should fetch the committed offsets for a specified topic partition list" do
+           list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list|
+             list.add_topic("consume_test_topic", [0, 1, 2])
+           end
+           partitions = consumer.committed(list).to_h["consume_test_topic"]
+           expect(partitions).not_to be_nil
+           expect(partitions[0].offset).to eq 1
+           expect(partitions[1].offset).to eq 1
+           expect(partitions[2].offset).to eq 1
          end
-         partitions = consumer.committed(list).to_h["consume_test_topic"]
-         expect(partitions).not_to be_nil
-         expect(partitions[0].offset).to eq 1
-         expect(partitions[1].offset).to eq 1
-         expect(partitions[2].offset).to eq 1
-       end
 
-       it "should raise an error when getting committed fails" do
-         expect(Rdkafka::Bindings).to receive(:rd_kafka_committed).and_return(20)
-         list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list|
-           list.add_topic("consume_test_topic", [0, 1, 2])
+         it "should raise an error when getting committed fails" do
+           expect(Rdkafka::Bindings).to receive(:rd_kafka_committed).and_return(20)
+           list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list|
+             list.add_topic("consume_test_topic", [0, 1, 2])
+           end
+           expect {
+             consumer.committed(list)
+           }.to raise_error Rdkafka::RdkafkaError
         end
-         expect {
-           consumer.committed(list)
-         }.to raise_error Rdkafka::RdkafkaError
       end
 
       describe "#store_offset" do
@@ -467,6 +474,8 @@ describe Rdkafka::Consumer do
          @new_consumer.store_offset(message)
          @new_consumer.commit
 
+         # TODO use position here, should be at offset
+
          list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list|
            list.add_topic("consume_test_topic", [0, 1, 2])
          end
@@ -481,6 +490,35 @@ describe Rdkafka::Consumer do
            @new_consumer.store_offset(message)
          }.to raise_error Rdkafka::RdkafkaError
        end
+
+       describe "#position" do
+         it "should fetch the positions for the current assignment" do
+           consumer.store_offset(message)
+
+           partitions = consumer.position.to_h["consume_test_topic"]
+           expect(partitions).not_to be_nil
+           expect(partitions[0].offset).to eq message.offset + 1
+         end
+
+         it "should fetch the positions for a specified assignment" do
+           consumer.store_offset(message)
+
+           list = Rdkafka::Consumer::TopicPartitionList.new.tap do |list|
+             list.add_topic_and_partitions_with_offsets("consume_test_topic", 0 => nil, 1 => nil, 2 => nil)
+           end
+           partitions = consumer.position(list).to_h["consume_test_topic"]
+           expect(partitions).not_to be_nil
+           expect(partitions[0].offset).to eq message.offset + 1
+         end
+
+         it "should raise an error when getting the position fails" do
+           expect(Rdkafka::Bindings).to receive(:rd_kafka_position).and_return(20)
+
+           expect {
+             consumer.position
+           }.to raise_error(Rdkafka::RdkafkaError)
+         end
+       end
      end
    end
  end
data/spec/rdkafka/native_kafka_spec.rb CHANGED
@@ -7,8 +7,9 @@ describe Rdkafka::NativeKafka do
    let(:native) { config.send(:native_kafka, config.send(:native_config), :rd_kafka_producer) }
    let(:closing) { false }
    let(:thread) { double(Thread) }
+   let(:opaque) { Rdkafka::Opaque.new }
 
-   subject(:client) { described_class.new(native, run_polling_thread: true) }
+   subject(:client) { described_class.new(native, run_polling_thread: true, opaque: opaque) }
 
    before do
      allow(Thread).to receive(:new).and_return(thread)
data/spec/rdkafka/producer_spec.rb CHANGED
@@ -844,19 +844,25 @@ describe Rdkafka::Producer do
      ).producer
    end
 
-   it 'expect to allow to produce within a transaction, finalize and ship data' do
+   it 'expect to allow to produce within a transaction, finalize and not ship data' do
      producer.init_transactions
      producer.begin_transaction
 
+     sleep(5)
+
      handle1 = producer.produce(topic: 'produce_test_topic', payload: 'data1', partition: 1)
      handle2 = producer.produce(topic: 'example_topic', payload: 'data2', partition: 0)
 
      begin
-       producer.commit_transaction
+       producer.commit_transaction(15_000)
      rescue Rdkafka::RdkafkaError => e
        next unless e.abortable?
 
-       producer.abort_transaction
+       begin
+         producer.abort_transaction(15_000)
+       rescue Rdkafka::RdkafkaError => e
+         nil
+       end
 
        expect { handle1.wait(max_wait_timeout: 15) }.to raise_error(Rdkafka::RdkafkaError)
        expect { handle2.wait(max_wait_timeout: 15) }.to raise_error(Rdkafka::RdkafkaError)
data/spec/spec_helper.rb CHANGED
@@ -36,7 +36,7 @@ def rdkafka_consumer_config(config_overrides={})
    # Add consumer specific fields to it
    config[:"auto.offset.reset"] = "earliest"
    config[:"enable.partition.eof"] = false
-   config[:"group.id"] = "ruby-test-#{Random.new.rand(0..1_000_000)}"
+   config[:"group.id"] = "ruby-test-#{SecureRandom.uuid}"
    # Enable debug mode if required
    if ENV["DEBUG_CONSUMER"]
      config[:debug] = "cgrp,topic,fetch"
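The `group.id` change trades a random integer in `0..1_000_000` (roughly 20 bits of entropy) for a UUID (122 random bits), making collisions between concurrent test runs negligible. A quick illustration of the new expression, assuming `securerandom` from the stdlib is loaded:

```ruby
require "securerandom"

# Each call yields a fresh, effectively unique consumer group id,
# e.g. "ruby-test-9b2f6f0a-1c3d-4e5f-8a9b-0c1d2e3f4a5b".
group_id = "ruby-test-#{SecureRandom.uuid}"
```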
@@ -107,6 +107,10 @@ def wait_for_unassignment(consumer)
    end
  end
 
+ def objects_of_type_count(type)
+   ObjectSpace.each_object(type).count
+ end
+
  def notify_listener(listener, &block)
    # 1. subscribe and poll
    consumer.subscribe("consume_test_topic")
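The new `objects_of_type_count` helper counts live instances of a class via `ObjectSpace`, presumably to let specs check that wrapper objects are actually collected after the `WeakMap` change. A standalone sketch (the probe class is ours, invented for the demo; works on MRI, where `ObjectSpace.each_object` is available):

```ruby
# Count live instances of a class -- same idea as the spec helper above.
def objects_of_type_count(type)
  ObjectSpace.each_object(type).count
end

# A throwaway class so the count starts at zero.
class LeakProbe; end

probe = LeakProbe.new
objects_of_type_count(LeakProbe) # at least 1 while `probe` is alive
```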
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: karafka-rdkafka
  version: !ruby/object:Gem::Version
-   version: 0.13.6
+   version: 0.13.8
  platform: ruby
  authors:
  - Thijs Cadier
@@ -35,7 +35,7 @@ cert_chain:
    AnG1dJU+yL2BK7vaVytLTstJME5mepSZ46qqIJXMuWob/YPDmVaBF39TDSG9e34s
    msG3BiCqgOgHAnL23+CN3Rt8MsuRfEtoTKpJVcCfoEoNHOkc
    -----END CERTIFICATE-----
- date: 2023-10-17 00:00:00.000000000 Z
+ date: 2023-10-31 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: ffi
@@ -165,7 +165,7 @@ dependencies:
      version: '0'
  description: Modern Kafka client library for Ruby based on librdkafka
  email:
- - thijs@appsignal.com
+ - contact@karafka.io
  executables: []
  extensions:
  - ext/Rakefile
@@ -182,7 +182,6 @@ files:
  - README.md
  - Rakefile
  - certs/cert_chain.pem
- - certs/karafka-pro.pem
  - docker-compose.yml
  - ext/README.md
  - ext/Rakefile
@@ -211,6 +210,7 @@ files:
  - lib/rdkafka/producer/delivery_handle.rb
  - lib/rdkafka/producer/delivery_report.rb
  - lib/rdkafka/version.rb
+ - renovate.json
  - spec/rdkafka/abstract_handle_spec.rb
  - spec/rdkafka/admin/create_topic_handle_spec.rb
  - spec/rdkafka/admin/create_topic_report_spec.rb
@@ -251,7 +251,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
-   version: '2.6'
+   version: '2.7'
  required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
metadata.gz.sig CHANGED
Binary file
data/certs/karafka-pro.pem DELETED
@@ -1,11 +0,0 @@
- -----BEGIN RSA PUBLIC KEY-----
- MIIBigKCAYEApcd6ybskiNs9WUvBGVUE8GdWDehjZ9TyjSj/fDl/UcMYqY0R5YX9
- tnYxEwZZRMdVltKWxr88Qmshh1IQz6CpJVbcfYjt/158pSGPm+AUua6tkLqIvZDM
- ocFOMafmroI+BMuL+Zu5QH7HC2tkT16jclGYfMQkJjXVUQTk2UZr+94+8RlUz/CH
- Y6hPA7xPgIyPfyPCxz1VWzAwXwT++NCJQPBr5MqT84LNSEzUSlR9pFNShf3UCUT+
- 8LWOvjFSNGmMMSsbo2T7/+dz9/FM02YG00EO0x04qteggwcaEYLFrigDN6/fM0ih
- BXZILnMUqC/qrfW2YFg4ZqKZJuxaALqqkPxrkBDYqoqcAloqn36jBSke6tc/2I/J
- 2Afq3r53UoAbUH7h5I/L8YeaiA4MYjAuq724lHlrOmIr4D6yjYC0a1LGlPjLk869
- 2nsVXNgomhVb071E6amR+rJJnfvkdZgCmEBFnqnBV5A1u4qgNsa2rVcD+gJRvb2T
- aQtjlQWKPx5xAgMBAAE=
- -----END RSA PUBLIC KEY-----