rdkafka 0.16.1 → 0.17.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 2c95cc0f676ffb0921be329ffa7af3954d9d5111a2312906e06266bb80f19323
-   data.tar.gz: 712b70533b65a862c3783cf6cc9179d5d700ea1a657cf5c44d143d52027fc176
+   metadata.gz: af536d53de701980f7ca5f5f426c3ea2450b4a1e46568c84edef9f5223544206
+   data.tar.gz: d7bfb39e6992112b84254b07cccced37730613c06ff58e9831990403c05c012c
  SHA512:
-   metadata.gz: f78b1b2b11a1d51fb93f9f533e92937aa82731cdf76b24a8940a2dc667e7e6216d639b6983a8de54ba00f3271c1076066caf393cf68028d82cb6297772f4de86
-   data.tar.gz: 2bc40080e7aed367bb7c6305ac10054b8e6569e645ddbec29390149124e2f11006661f0b470f8cb864edac18e26281e90649c62cc5b659626960c4a88dfc69f0
+   metadata.gz: 8801ec759340b9643a3d4971f21c0ab0563fdb7a416d2efeffa2e77de8525ebb69c92dcb394ceaae94513f0fd120acf41ff3f4a0efd405eae1438d8bb9835ee2
+   data.tar.gz: 12c937def332a678270d914645e2a0af6c06ffabdfc54284974cf386a68647411482df792e2289fea29473b904ef242309e574704426d355ce57a578aabd25a0
checksums.yaml.gz.sig CHANGED
Binary file
data/.ruby-version CHANGED
@@ -1 +1 @@
- 3.3.3
+ 3.3.4
data/CHANGELOG.md CHANGED
@@ -1,5 +1,12 @@
  # Rdkafka Changelog
 
+ ## 0.17.0 (2024-08-03)
+ - [Feature] Add `#seek_by` to be able to seek for a message by topic, partition and offset (zinahia)
+ - [Enhancement] Update `librdkafka` to `2.4.0`
+ - [Enhancement] Support ability to release patches to librdkafka.
+ - [Change] Remove old producer timeout API warnings.
+ - [Fix] Switch to local release of librdkafka to mitigate its unavailability.
+
  ## 0.16.1 (2024-07-10)
  - [Fix] Switch to local release of librdkafka to mitigate its unavailability.
 
@@ -21,6 +28,9 @@
  - [Fix] Background logger stops working after forking causing memory leaks (mensfeld)
  - [Fix] Fix bogus case/when syntax. Levels 1, 2, and 6 previously defaulted to UNKNOWN (jjowdy)
 
+ ## 0.15.2 (2024-07-10)
+ - [Fix] Switch to local release of librdkafka to mitigate its unavailability.
+
  ## 0.15.1 (2024-01-30)
  - [Enhancement] Provide support for Nix OS (alexandriainfantino)
  - [Enhancement] Replace `rd_kafka_offset_store` with `rd_kafka_offsets_store` (mensfeld)
@@ -46,6 +56,9 @@
  - [Enhancement] Increase the `#lag` and `#query_watermark_offsets` default timeouts from 100ms to 1000ms. This will compensate for network glitches and remote clusters operations (mensfeld)
  - [Change] Use `SecureRandom.uuid` instead of `random` for test consumer groups (mensfeld)
 
+ ## 0.14.1 (2024-07-10)
+ - [Fix] Switch to local release of librdkafka to mitigate its unavailability.
+
  ## 0.14.0 (2023-11-21)
  - [Enhancement] Add `raise_response_error` flag to the `Rdkafka::AbstractHandle`.
  - [Enhancement] Allow for setting `statistics_callback` as nil to reset predefined settings configured by a different gem (mensfeld)
@@ -63,6 +76,9 @@
  - [Change] Update Kafka Docker with Confluent KRaft (mensfeld)
  - [Change] Update librdkafka repo reference from edenhill to confluentinc (mensfeld)
 
+ ## 0.13.1 (2024-07-10)
+ - [Fix] Switch to local release of librdkafka to mitigate its unavailability.
+
  ## 0.13.0 (2023-07-24)
  - Support cooperative sticky partition assignment in the rebalance callback (methodmissing)
  - Support both string and symbol header keys (ColinDKelley)
@@ -79,6 +95,9 @@
  - Make metadata request timeout configurable (mensfeld)
  - call_on_partitions_assigned and call_on_partitions_revoked only get a tpl passed in (thijsc)
 
+ ## 0.12.1 (2024-07-11)
+ - [Fix] Switch to local release of librdkafka to mitigate its unavailability.
+
  ## 0.12.0 (2022-06-17)
  - Bumps librdkafka to 1.9.0
  - Fix crash on empty partition key (mensfeld)
data/README.md CHANGED
@@ -161,12 +161,13 @@ bundle exec rake produce_messages
 
  ## Versions
 
- | rdkafka-ruby | librdkafka |
- |-|-|
- | 0.16.0 (2024-06-13) | 2.3.0 (2023-10-25) |
- | 0.15.0 (2023-12-03) | 2.3.0 (2023-10-25) |
- | 0.14.0 (2023-11-21) | 2.2.0 (2023-07-12) |
- | 0.13.0 (2023-07-24) | 2.0.2 (2023-01-20) |
- | 0.12.0 (2022-06-17) | 1.9.0 (2022-06-16) |
- | 0.11.0 (2021-11-17) | 1.8.2 (2021-10-18) |
- | 0.10.0 (2021-09-07) | 1.5.0 (2020-07-20) |
+ | rdkafka-ruby | librdkafka | patches |
+ |-|-|-|
+ | 0.17.0 (2024-08-03) | 2.4.0 (2024-05-07) | no |
+ | 0.16.0 (2024-06-13) | 2.3.0 (2023-10-25) | no |
+ | 0.15.0 (2023-12-03) | 2.3.0 (2023-10-25) | no |
+ | 0.14.0 (2023-11-21) | 2.2.0 (2023-07-12) | no |
+ | 0.13.0 (2023-07-24) | 2.0.2 (2023-01-20) | no |
+ | 0.12.0 (2022-06-17) | 1.9.0 (2022-06-16) | no |
+ | 0.11.0 (2021-11-17) | 1.8.2 (2021-10-18) | no |
+ | 0.10.0 (2021-09-07) | 1.5.0 (2020-07-20) | no |
data/docker-compose.yml CHANGED
@@ -3,7 +3,7 @@ version: '2'
  services:
    kafka:
      container_name: kafka
-     image: confluentinc/cp-kafka:7.6.1
+     image: confluentinc/cp-kafka:7.7.0
 
      ports:
        - 9092:9092
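
Only the Confluent Kafka image tag changes; clients still reach the broker on port 9092. As a minimal sketch (the group id below is a hypothetical name, not from this diff), pointing rdkafka at the compose broker looks like:

```ruby
require "rdkafka"

# Connect to the single-node broker exposed by docker-compose on localhost:9092
config = Rdkafka::Config.new(
  :"bootstrap.servers" => "localhost:9092",
  :"group.id" => "example-consumer-group" # hypothetical group id
)

consumer = config.consumer
consumer.subscribe("consume_test_topic")
```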
data/ext/README.md CHANGED
@@ -1,7 +1,7 @@
1
1
  # Ext
2
2
 
3
- This gem depends on the `librdkafka` C library. It is downloaded when
4
- this gem is installed.
3
+ This gem depends on the `librdkafka` C library. It is downloaded, stored in
4
+ `dist/` directory, and checked into source control.
5
5
 
6
6
  To update the `librdkafka` version follow the following steps:
7
7
 
@@ -9,8 +9,9 @@ To update the `librdkafka` version follow the following steps:
9
9
  version number and asset checksum for `tar.gz`.
10
10
  * Change the version in `lib/rdkafka/version.rb`
11
11
  * Change the `sha256` in `lib/rdkafka/version.rb`
12
- * Run `bundle exec rake` in the `ext` directory to download and build
13
- the new version
12
+ * Run `bundle exec rake dist:download` in the `ext` directory to download the
13
+ new release and place it in the `dist/` for you
14
+ * Run `bundle exec rake` in the `ext` directory to build the new version
14
15
  * Run `docker-compose pull` in the main gem directory to ensure the docker
15
16
  images used by the tests and run `docker-compose up`
16
17
  * Finally, run `bundle exec rspec` in the main gem directory to execute
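
For orientation, the constants those first steps edit live in `lib/rdkafka/version.rb`; after this release they read as follows (values taken from the `version.rb` diff further down):

```ruby
module Rdkafka
  VERSION = "0.17.0"
  LIBRDKAFKA_VERSION = "2.4.0"
  # Checksum that dist:download verifies against the downloaded tarball
  LIBRDKAFKA_SOURCE_SHA256 = "d645e47d961db47f1ead29652606a502bdd2a880c85c1e060e94eea040f1a19a"
end
```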
data/ext/Rakefile CHANGED
@@ -1,6 +1,7 @@
1
1
  # frozen_string_literal: true
2
2
 
3
3
  require File.expand_path('../../lib/rdkafka/version', __FILE__)
4
+ require "digest"
4
5
  require "fileutils"
5
6
  require "open-uri"
6
7
 
@@ -30,6 +31,8 @@ task :default => :clean do
30
31
  }
31
32
  recipe.configure_options = ["--host=#{recipe.host}"]
32
33
 
34
+ recipe.patch_files = Dir[File.join(releases, 'patches', "*.patch")].sort
35
+
33
36
  # Disable using libc regex engine in favor of the embedded one
34
37
  # The default regex engine of librdkafka does not always work exactly as most of the users
35
38
  # would expect, hence this flag allows for changing it to the other one
@@ -71,6 +74,42 @@ task :clean do
71
74
  FileUtils.rm_rf File.join(File.dirname(__FILE__), "tmp")
72
75
  end
73
76
 
77
+ namespace :dist do
78
+ task :dir do
79
+ ENV["RDKAFKA_DIST_PATH"] ||= File.expand_path(File.join(File.dirname(__FILE__), '..', 'dist'))
80
+ end
81
+
82
+ task :file => "dist:dir" do
83
+ ENV["RDKAFKA_DIST_FILE"] ||= File.join(ENV["RDKAFKA_DIST_PATH"], "librdkafka_#{Rdkafka::LIBRDKAFKA_VERSION}.tar.gz")
84
+ end
85
+
86
+ task :clean => "dist:file" do
87
+ Dir.glob(File.join("#{ENV['RDKAFKA_DIST_PATH']}", "*")).each do |filename|
88
+ next if filename.include? ENV["RDKAFKA_DIST_FILE"]
89
+
90
+ FileUtils.rm_rf filename
91
+ end
92
+ end
93
+
94
+ task :download => "dist:file" do
95
+ version = Rdkafka::LIBRDKAFKA_VERSION
96
+ librdkafka_download = "https://codeload.github.com/confluentinc/librdkafka/tar.gz/v#{version}"
97
+
98
+ URI.open(librdkafka_download) do |file|
99
+ filename = ENV["RDKAFKA_DIST_FILE"]
100
+ data = file.read
101
+
102
+ if Digest::SHA256.hexdigest(data) != Rdkafka::LIBRDKAFKA_SOURCE_SHA256
103
+ raise "SHA256 does not match downloaded file"
104
+ end
105
+
106
+ File.write(filename, data)
107
+ end
108
+ end
109
+
110
+ task :update => %w[dist:download dist:clean]
111
+ end
112
+
74
113
  namespace :build do
75
114
  desc "Build librdkafka at the given git sha or tag"
76
115
  task :git, [:ref] do |task, args|
@@ -78,8 +117,9 @@ namespace :build do
78
117
  version = "git-#{ref}"
79
118
 
80
119
  recipe = MiniPortile.new("librdkafka", version)
81
- recipe.files << "https://github.com/edenhill/librdkafka/archive/#{ref}.tar.gz"
120
+ recipe.files << "https://github.com/confluentinc/librdkafka/archive/#{ref}.tar.gz"
82
121
  recipe.configure_options = ["--host=#{recipe.host}","--enable-static", "--enable-zstd"]
122
+ recipe.patch_files = Dir[File.join(releases, 'patches', "*.patch")].sort
83
123
  recipe.cook
84
124
 
85
125
  ext = recipe.host.include?("darwin") ? "dylib" : "so"
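
Read together, `dist:download` boils down to fetch-and-verify against the pinned checksum. A condensed standalone sketch of that flow (paths hardcoded here for illustration; the real task derives them from `RDKAFKA_DIST_PATH`/`RDKAFKA_DIST_FILE`):

```ruby
require "digest"
require "open-uri"

version  = "2.4.0" # Rdkafka::LIBRDKAFKA_VERSION
expected = "d645e47d961db47f1ead29652606a502bdd2a880c85c1e060e94eea040f1a19a"
url      = "https://codeload.github.com/confluentinc/librdkafka/tar.gz/v#{version}"

URI.open(url) do |remote|
  data = remote.read

  # Refuse to store a tarball whose checksum does not match the pinned value
  raise "SHA256 does not match downloaded file" unless Digest::SHA256.hexdigest(data) == expected

  File.write("dist/librdkafka_#{version}.tar.gz", data)
end
```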
data/lib/rdkafka/abstract_handle.rb CHANGED
@@ -16,9 +16,6 @@ module Rdkafka
  REGISTRY = {}
  # Default wait timeout is 31 years
  MAX_WAIT_TIMEOUT_FOREVER = 10_000_000_000
- # Deprecation message for wait_timeout argument in wait method
- WAIT_TIMEOUT_DEPRECATION_MESSAGE = "The 'wait_timeout' argument is deprecated and will be removed in future versions without replacement. " \
-   "We don't rely on it's value anymore. Please refactor your code to remove references to it."
 
  private_constant :MAX_WAIT_TIMEOUT_FOREVER
 
@@ -59,16 +56,13 @@ module Rdkafka
  #
  # @param max_wait_timeout [Numeric, nil] Amount of time to wait before timing out.
  #   If this is nil we will wait forever
- # @param wait_timeout [nil] deprecated
  # @param raise_response_error [Boolean] should we raise error when waiting finishes
  #
  # @return [Object] Operation-specific result
  #
  # @raise [RdkafkaError] When the operation failed
  # @raise [WaitTimeoutError] When the timeout has been reached and the handle is still pending
- def wait(max_wait_timeout: 60, wait_timeout: nil, raise_response_error: true)
-   Kernel.warn(WAIT_TIMEOUT_DEPRECATION_MESSAGE) unless wait_timeout.nil?
-
+ def wait(max_wait_timeout: 60, raise_response_error: true)
    timeout = max_wait_timeout ? monotonic_now + max_wait_timeout : MAX_WAIT_TIMEOUT_FOREVER
 
    @mutex.synchronize do
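
Since the keyword is now removed rather than merely ignored, callers that still pass `wait_timeout:` will fail with an `ArgumentError` (unknown keyword) instead of seeing a warning. A minimal sketch of the surviving call shape, using a delivery handle as the concrete `AbstractHandle`:

```ruby
handle = producer.produce(topic: "example_topic", payload: "hello")

# Block up to 60 seconds (the default); max_wait_timeout: nil waits forever
delivery_report = handle.wait(max_wait_timeout: 60)
```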
data/lib/rdkafka/consumer.rb CHANGED
@@ -423,6 +423,19 @@ module Rdkafka
  # @return [nil]
  # @raise [RdkafkaError] When seeking fails
  def seek(message)
+   seek_by(message.topic, message.partition, message.offset)
+ end
+
+ # Seek to a particular message by providing the topic, partition and offset.
+ # The next poll on the topic/partition will return the
+ # message at the given offset.
+ #
+ # @param topic [String] The topic in which to seek
+ # @param partition [Integer] The partition number to seek
+ # @param offset [Integer] The partition offset to seek
+ # @return [nil]
+ # @raise [RdkafkaError] When seeking fails
+ def seek_by(topic, partition, offset)
    closed_consumer_check(__method__)
 
@@ -430,14 +443,14 @@ module Rdkafka
    # rd_kafka_offset_store is one of the few calls that does not support
    native_topic = @native_kafka.with_inner do |inner|
      Rdkafka::Bindings.rd_kafka_topic_new(
        inner,
-       message.topic,
+       topic,
        nil
      )
    end
    response = Rdkafka::Bindings.rd_kafka_seek(
      native_topic,
-     message.partition,
-     message.offset,
+     partition,
+     offset,
      0 # timeout
    )
    if response != 0
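
A short usage sketch contrasting the two entry points (topic name and offset are illustrative):

```ruby
# Seek using a message object previously returned by #poll
message = consumer.poll(1_000)
consumer.seek(message) # delegates to seek_by under the hood

# Seek without holding a message object, e.g. to replay a partition from offset 42
consumer.seek_by("consume_test_topic", 0, 42)
```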
data/lib/rdkafka/version.rb CHANGED
@@ -1,7 +1,7 @@
  # frozen_string_literal: true
 
  module Rdkafka
-   VERSION = "0.16.1"
-   LIBRDKAFKA_VERSION = "2.3.0"
-   LIBRDKAFKA_SOURCE_SHA256 = "2d49c35c77eeb3d42fa61c43757fcbb6a206daa560247154e60642bcdcc14d12"
+   VERSION = "0.17.0"
+   LIBRDKAFKA_VERSION = "2.4.0"
+   LIBRDKAFKA_SOURCE_SHA256 = "d645e47d961db47f1ead29652606a502bdd2a880c85c1e060e94eea040f1a19a"
  end
data/spec/rdkafka/abstract_handle_spec.rb CHANGED
@@ -80,7 +80,6 @@ describe Rdkafka::AbstractHandle do
    let(:pending_handle) { true }
 
    it "should wait until the timeout and then raise an error" do
-     expect(Kernel).not_to receive(:warn)
      expect {
        subject.wait(max_wait_timeout: 0.1)
      }.to raise_error Rdkafka::AbstractHandle::WaitTimeoutError, /test_operation/
@@ -90,22 +89,15 @@ describe Rdkafka::AbstractHandle do
  context 'when pending_handle false' do
    let(:pending_handle) { false }
 
-   it 'should show a deprecation warning when wait_timeout is set' do
-     expect(Kernel).to receive(:warn).with(Rdkafka::AbstractHandle::WAIT_TIMEOUT_DEPRECATION_MESSAGE)
-     subject.wait(wait_timeout: 0.1)
-   end
-
    context "without error" do
      let(:result) { 1 }
 
      it "should return a result" do
-       expect(Kernel).not_to receive(:warn)
        wait_result = subject.wait
        expect(wait_result).to eq(result)
      end
 
      it "should wait without a timeout" do
-       expect(Kernel).not_to receive(:warn)
        wait_result = subject.wait(max_wait_timeout: nil)
        expect(wait_result).to eq(result)
      end
@@ -115,7 +107,6 @@ describe Rdkafka::AbstractHandle do
    let(:response) { 20 }
 
    it "should raise an rdkafka error" do
-     expect(Kernel).not_to receive(:warn)
      expect {
        subject.wait
      }.to raise_error Rdkafka::RdkafkaError
data/spec/rdkafka/admin_spec.rb CHANGED
@@ -34,7 +34,7 @@ describe Rdkafka::Admin do
  describe '#describe_errors' do
    let(:errors) { admin.class.describe_errors }
 
-   it { expect(errors.size).to eq(162) }
+   it { expect(errors.size).to eq(168) }
    it { expect(errors[-184]).to eq(code: -184, description: 'Local: Queue full', name: '_QUEUE_FULL') }
    it { expect(errors[21]).to eq(code: 21, description: 'Broker: Invalid required acks value', name: 'INVALID_REQUIRED_ACKS') }
  end
data/spec/rdkafka/consumer_spec.rb CHANGED
@@ -258,6 +258,95 @@ describe Rdkafka::Consumer do
    end
  end
 
+ describe "#seek_by" do
+   let(:topic) { "consume_test_topic" }
+   let(:partition) { 0 }
+   let(:offset) { 0 }
+
+   it "should raise an error when seeking fails" do
+     expect(Rdkafka::Bindings).to receive(:rd_kafka_seek).and_return(20)
+     expect {
+       consumer.seek_by(topic, partition, offset)
+     }.to raise_error Rdkafka::RdkafkaError
+   end
+
+   context "subscription" do
+     let(:timeout) { 1000 }
+
+     before do
+       consumer.subscribe(topic)
+
+       # 1. partitions are assigned
+       wait_for_assignment(consumer)
+       expect(consumer.assignment).not_to be_empty
+
+       # 2. eat unrelated messages
+       while(consumer.poll(timeout)) do; end
+     end
+     after { consumer.unsubscribe }
+
+     def send_one_message(val)
+       producer.produce(
+         topic: topic,
+         payload: "payload #{val}",
+         key: "key 1",
+         partition: 0
+       ).wait
+     end
+
+     it "works when a partition is paused" do
+       # 3. get reference message
+       send_one_message(:a)
+       message1 = consumer.poll(timeout)
+       expect(message1&.payload).to eq "payload a"
+
+       # 4. pause the subscription
+       tpl = Rdkafka::Consumer::TopicPartitionList.new
+       tpl.add_topic(topic, 1)
+       consumer.pause(tpl)
+
+       # 5. seek by the previous message fields
+       consumer.seek_by(message1.topic, message1.partition, message1.offset)
+
+       # 6. resume the subscription
+       tpl = Rdkafka::Consumer::TopicPartitionList.new
+       tpl.add_topic(topic, 1)
+       consumer.resume(tpl)
+
+       # 7. ensure same message is read again
+       message2 = consumer.poll(timeout)
+
+       # This is needed because `enable.auto.offset.store` is true but when running in CI that
+       # is overloaded, offset store lags
+       sleep(2)
+
+       consumer.commit
+       expect(message1.offset).to eq message2.offset
+       expect(message1.payload).to eq message2.payload
+     end
+
+     it "allows skipping messages" do
+       # 3. send messages
+       send_one_message(:a)
+       send_one_message(:b)
+       send_one_message(:c)
+
+       # 4. get reference message
+       message = consumer.poll(timeout)
+       expect(message&.payload).to eq "payload a"
+
+       # 5. seek over one message
+       consumer.seek_by(message.topic, message.partition, message.offset + 2)
+
+       # 6. ensure that only one message is available
+       records = consumer.poll(timeout)
+       expect(records&.payload).to eq "payload c"
+       records = consumer.poll(timeout)
+       expect(records).to be_nil
+     end
+   end
+ end
+
  describe "#assign and #assignment" do
    it "should return an empty assignment if nothing is assigned" do
      expect(consumer.assignment).to be_empty
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: rdkafka
  version: !ruby/object:Gem::Version
-   version: 0.16.1
+   version: 0.17.0
  platform: ruby
  authors:
  - Thijs Cadier
@@ -36,7 +36,7 @@ cert_chain:
    AnG1dJU+yL2BK7vaVytLTstJME5mepSZ46qqIJXMuWob/YPDmVaBF39TDSG9e34s
    msG3BiCqgOgHAnL23+CN3Rt8MsuRfEtoTKpJVcCfoEoNHOkc
    -----END CERTIFICATE-----
- date: 2024-07-10 00:00:00.000000000 Z
+ date: 2024-08-03 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: ffi
@@ -186,7 +186,7 @@ files:
  - README.md
  - Rakefile
  - certs/cert_chain.pem
- - dist/librdkafka_2.3.0.tar.gz
+ - dist/librdkafka_2.4.0.tar.gz
  - docker-compose.yml
  - ext/README.md
  - ext/Rakefile
metadata.gz.sig CHANGED
Binary file