rdkafka 0.10.0 → 0.12.0

This diff compares the contents of two publicly released versions of the package, as published to a supported registry. It is provided for informational purposes only and reflects the packages exactly as they appear in their public registry.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 9f954490d858591f12e5a7fc1fed82e1b5c6b42049c5ed66d2d76bb352e03a7f
- data.tar.gz: 0ae8cbac09963d52a3075cece9e0bf6c5895e31f87995d7997ee7f04e52a10c8
+ metadata.gz: 821523c304fc7a1fbb2c7be2b58d98d56600b645b89fdb4093f976418650035d
+ data.tar.gz: '039b8e345fd8be5f295a293d64466071dbefd77d81b01460abb0fcf343a6bed3'
  SHA512:
- metadata.gz: 33d43862b6b7cee1a3284221f75fae0c6bdfa6df4faa4396378c96e34a8f1a2395202dd01d6a5f2baac5557010505f548e7862bdd14202de6b3f85c77ca95309
- data.tar.gz: 22d6243a853bfd630497eab13a8b6823f1a1d4a358aee53a98e68e5a0255f5ca68932db6d261cbad34dadeca5854522d8fceb15b43004ce8719a1d5d9aa3e9bd
+ metadata.gz: 2c7ac2199a63aacd3b1420890981ed5d953ae5cdadb874886cc4e396fa1fd8f69333633319beef35a05a002d75d22335a526a126e518cc3fbbb877a1c11ef2f7
+ data.tar.gz: 5d23c6beec3759877013b040018111453e05c41238014b07a27c1a9d8b96e8af3bc037aacd1ebe89f856435cf0afb8e34a9f89443f87cf1a3682736efb79b4bd
data/.rspec ADDED
@@ -0,0 +1 @@
+ --format documentation
@@ -13,7 +13,7 @@ blocks:
    - name: bundle exec rspec
      matrix:
        - env_var: RUBY_VERSION
-         values: [ "2.5.8", "2.6.6", "2.7.2", "3.0.0", "jruby-9.2.13.0" ]
+         values: [ "2.6.8", "2.7.4", "3.0.2", "jruby-9.3.1.0"]
      commands:
        - sem-version ruby $RUBY_VERSION
        - checkout
data/CHANGELOG.md CHANGED
@@ -1,3 +1,11 @@
+ # 0.12.0
+ * Bumps librdkafka to 1.9.0
+
+ # 0.11.0
+ * Upgrade librdkafka to 1.8.2
+ * Bump supported minimum Ruby version to 2.6
+ * Better homebrew path detection
+
  # 0.10.0
  * Upgrade librdkafka to 1.5.0
  * Add error callback config
@@ -45,7 +53,7 @@
  * Use default Homebrew openssl location if present
  * Consumer lag handles empty topics
  * End iteration in consumer when it is closed
- * Add suport for storing message offsets
+ * Add support for storing message offsets
  * Add missing runtime dependency to rake

  # 0.4.1
data/Guardfile ADDED
@@ -0,0 +1,19 @@
+ # frozen_string_literal: true
+
+ logger level: :error
+
+ guard :rspec, cmd: "bundle exec rspec --format #{ENV.fetch("FORMAT", "documentation")}" do
+   require "guard/rspec/dsl"
+   dsl = Guard::RSpec::Dsl.new(self)
+
+   # Ruby files
+   ruby = dsl.ruby
+   dsl.watch_spec_files_for(ruby.lib_files)
+   watch(%r{^lib/(.+)\.rb}) { |m| "spec/#{m[1]}_spec.rb" }
+
+   # RSpec files
+   rspec = dsl.rspec
+   watch(rspec.spec_helper) { rspec.spec_dir }
+   watch(rspec.spec_support) { rspec.spec_dir }
+   watch(rspec.spec_files)
+ end
data/README.md CHANGED
@@ -7,7 +7,9 @@
  The `rdkafka` gem is a modern Kafka client library for Ruby based on
  [librdkafka](https://github.com/edenhill/librdkafka/).
  It wraps the production-ready C client using the [ffi](https://github.com/ffi/ffi)
- gem and targets Kafka 1.0+ and Ruby 2.4+.
+ gem and targets Kafka 1.0+ and Ruby versions that are under security or
+ active maintenance. We remove Ruby version from our CI builds if they
+ become EOL.

  `rdkafka` was written because we needed a reliable Ruby client for
  Kafka that supports modern Kafka at [AppSignal](https://appsignal.com).
data/bin/console ADDED
@@ -0,0 +1,11 @@
+ #!/usr/bin/env ruby
+
+ # frozen_string_literal: true
+
+ ENV["IRBRC"] = File.join(File.dirname(__FILE__), ".irbrc")
+
+ require "bundler/setup"
+ require "rdkafka"
+
+ require "irb"
+ IRB.start(__FILE__)
data/docker-compose.yml CHANGED
@@ -4,13 +4,13 @@ version: '2'

  services:
    zookeeper:
-     image: confluentinc/cp-zookeeper:latest
+     image: confluentinc/cp-zookeeper:5.2.6
      environment:
        ZOOKEEPER_CLIENT_PORT: 2181
        ZOOKEEPER_TICK_TIME: 2000

    kafka:
-     image: confluentinc/cp-kafka:latest
+     image: confluentinc/cp-kafka:5.2.5-10
      depends_on:
        - zookeeper
      ports:
@@ -18,7 +18,7 @@ services:
      environment:
        KAFKA_BROKER_ID: 1
        KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
-       KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
+       KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092,PLAINTEXT_HOST://localhost:9092
        KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
        KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
        KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
data/ext/README.md CHANGED
@@ -1,6 +1,6 @@
  # Ext

- This gem dependes on the `librdkafka` C library. It is downloaded when
+ This gem depends on the `librdkafka` C library. It is downloaded when
  this gem is installed.

  To update the `librdkafka` version follow the following steps:
data/ext/Rakefile CHANGED
@@ -4,30 +4,14 @@ require "fileutils"
  require "open-uri"

  task :default => :clean do
-   # MiniPortile#download_file_http is a monkey patch that removes the download
-   # progress indicator. This indicator relies on the 'Content Length' response
-   # headers, which is not set by GitHub
-   class MiniPortile
-     def download_file_http(url, full_path, _count)
-       filename = File.basename(full_path)
-       with_tempfile(filename, full_path) do |temp_file|
-         params = { 'Accept-Encoding' => 'identity' }
-         OpenURI.open_uri(url, 'rb', params) do |io|
-           temp_file.write(io.read)
-         end
-         output
-       end
-     end
-   end
-
    # Download and compile librdkafka
    recipe = MiniPortile.new("librdkafka", Rdkafka::LIBRDKAFKA_VERSION)

    # Use default homebrew openssl if we're on mac and the directory exists
    # and each of flags is not empty
-   if recipe.host&.include?("darwin") && Dir.exist?("/usr/local/opt/openssl")
-     ENV["CPPFLAGS"] = "-I/usr/local/opt/openssl/include" unless ENV["CPPFLAGS"]
-     ENV["LDFLAGS"] = "-L/usr/local/opt/openssl/lib" unless ENV["LDFLAGS"]
+   if recipe.host&.include?("darwin") && system("which brew &> /dev/null") && Dir.exist?("#{homebrew_prefix = %x(brew --prefix openssl).strip}")
+     ENV["CPPFLAGS"] = "-I#{homebrew_prefix}/include" unless ENV["CPPFLAGS"]
+     ENV["LDFLAGS"] = "-L#{homebrew_prefix}/lib" unless ENV["LDFLAGS"]
    end

    recipe.files << {
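The new detection asks Homebrew for the OpenSSL prefix instead of hardcoding /usr/local/opt/openssl, which also covers Apple Silicon installs under /opt/homebrew. A quick way to see what the build would pick up on a given machine (illustrative only; output varies per install):

    # Prints the prefix the Rakefile's detection would use, if brew is present.
    prefix = %x(brew --prefix openssl).strip
    puts prefix                 # e.g. /usr/local/opt/openssl or /opt/homebrew/opt/openssl
    puts Dir.exist?(prefix)     # flags are only set when this is true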
data/lib/rdkafka/admin.rb CHANGED
@@ -90,7 +90,7 @@ module Rdkafka
          admin_options_ptr,
          queue_ptr
        )
-     rescue Exception => err
+     rescue Exception
        CreateTopicHandle.remove(create_topic_handle.to_ptr.address)
        raise
      ensure
@@ -140,7 +140,7 @@ module Rdkafka
          admin_options_ptr,
          queue_ptr
        )
-     rescue Exception => err
+     rescue Exception
        DeleteTopicHandle.remove(delete_topic_handle.to_ptr.address)
        raise
      ensure
@@ -246,14 +246,21 @@ module Rdkafka
    attach_function :rd_kafka_conf_set_dr_msg_cb, [:pointer, :delivery_cb], :void

    # Partitioner
-   attach_function :rd_kafka_msg_partitioner_consistent_random, [:pointer, :pointer, :size_t, :int32, :pointer, :pointer], :int32
+   PARTITIONERS = %w(random consistent consistent_random murmur2 murmur2_random fnv1a fnv1a_random).each_with_object({}) do |name, hsh|
+     method_name = "rd_kafka_msg_partitioner_#{name}".to_sym
+     attach_function method_name, [:pointer, :pointer, :size_t, :int32, :pointer, :pointer], :int32
+     hsh[name] = method_name
+   end

-   def self.partitioner(str, partition_count)
+   def self.partitioner(str, partition_count, partitioner_name = "consistent_random")
      # Return RD_KAFKA_PARTITION_UA(unassigned partition) when partition count is nil/zero.
      return -1 unless partition_count&.nonzero?

      str_ptr = FFI::MemoryPointer.from_string(str)
-     rd_kafka_msg_partitioner_consistent_random(nil, str_ptr, str.size, partition_count, nil, nil)
+     method_name = PARTITIONERS.fetch(partitioner_name) do
+       raise Rdkafka::Config::ConfigError.new("Unknown partitioner: #{partitioner_name}")
+     end
+     public_send(method_name, nil, str_ptr, str.size, partition_count, nil, nil)
    end

    # Create Topics
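With this change every librdkafka partitioner is attached and selectable by name. A minimal sketch of picking a partition directly through the binding (key and partition count are placeholders; assumes the gem loads):

    require "rdkafka"

    # Maps the key through librdkafka's murmur2 partitioner for a 16-partition topic.
    partition = Rdkafka::Bindings.partitioner("user-42", 16, "murmur2")
    puts partition  # an Integer in 0...16

    # Unknown names raise Rdkafka::Config::ConfigError, per the fetch fallback above.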
@@ -97,7 +97,7 @@ module Rdkafka
        delivery_handle[:pending] = false
        # Call delivery callback on opaque
        if opaque = Rdkafka::Config.opaques[opaque_ptr.to_i]
-         opaque.call_delivery_callback(Rdkafka::Producer::DeliveryReport.new(message[:partition], message[:offset], message[:err]))
+         opaque.call_delivery_callback(Rdkafka::Producer::DeliveryReport.new(message[:partition], message[:offset], message[:err]), delivery_handle)
        end
      end
    end
@@ -179,7 +179,7 @@ module Rdkafka
      # Set callback to receive delivery reports on config
      Rdkafka::Bindings.rd_kafka_conf_set_dr_msg_cb(config, Rdkafka::Callbacks::DeliveryCallbackFunction)
      # Return producer with Kafka client
-     Rdkafka::Producer.new(native_kafka(config, :rd_kafka_producer)).tap do |producer|
+     Rdkafka::Producer.new(Rdkafka::Producer::Client.new(native_kafka(config, :rd_kafka_producer)), self[:partitioner]).tap do |producer|
        opaque.producer = producer
      end
    end
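Config#producer now reads a partitioner name from the config hash (`self[:partitioner]`) and hands it to the producer. A sketch of opting into murmur2 at configuration time, assuming a broker at the placeholder address localhost:9092 and that the key is looked up exactly as the hunk above reads it:

    require "rdkafka"

    config = Rdkafka::Config.new(
      "bootstrap.servers" => "localhost:9092",
      :partitioner        => "murmur2"   # consumed by Config#producer above
    )
    producer = config.producer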
@@ -278,8 +278,8 @@ module Rdkafka
      attr_accessor :producer
      attr_accessor :consumer_rebalance_listener

-     def call_delivery_callback(delivery_handle)
-       producer.call_delivery_callback(delivery_handle) if producer
+     def call_delivery_callback(delivery_report, delivery_handle)
+       producer.call_delivery_callback(delivery_report, delivery_handle) if producer
      end

      def call_on_partitions_assigned(consumer, list)
@@ -498,11 +498,11 @@ module Rdkafka
    # Exception behavior is more complicated than with `each`, in that if
    # :yield_on_error is true, and an exception is raised during the
    # poll, and messages have already been received, they will be yielded to
-   # the caller before the exception is allowed to propogate.
+   # the caller before the exception is allowed to propagate.
    #
    # If you are setting either auto.commit or auto.offset.store to false in
    # the consumer configuration, then you should let yield_on_error keep its
-   # default value of false because you are gauranteed to see these messages
+   # default value of false because you are guaranteed to see these messages
    # again. However, if both auto.commit and auto.offset.store are set to
    # true, you should set yield_on_error to true so you can process messages
    # that you may or may not see again.
@@ -518,7 +518,7 @@ module Rdkafka
    # @yield [messages, pending_exception]
    # @yieldparam messages [Array] An array of received Message
    # @yieldparam pending_exception [Exception] normally nil, or an exception
-   # which will be propogated after processing of the partial batch is complete.
+   # which will be propagated after processing of the partial batch is complete.
    #
    # @return [nil]
    def each_batch(max_items: 100, bytes_threshold: Float::INFINITY, timeout_ms: 250, yield_on_error: false, &block)
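A short consumption sketch against this interface (assumes `consumer` is already subscribed to a topic):

    consumer.each_batch(max_items: 200, timeout_ms: 500, yield_on_error: true) do |messages, pending_exception|
      messages.each { |message| puts message.payload }
      # pending_exception is normally nil; when set, it is re-raised after
      # this block finishes processing the partial batch.
    end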
@@ -0,0 +1,47 @@
+ module Rdkafka
+   class Producer
+     class Client
+       def initialize(native)
+         @native = native
+
+         # Start thread to poll client for delivery callbacks
+         @polling_thread = Thread.new do
+           loop do
+             Rdkafka::Bindings.rd_kafka_poll(native, 250)
+             # Exit thread if closing and the poll queue is empty
+             if Thread.current[:closing] && Rdkafka::Bindings.rd_kafka_outq_len(native) == 0
+               break
+             end
+           end
+         end
+         @polling_thread.abort_on_exception = true
+         @polling_thread[:closing] = false
+       end
+
+       def native
+         @native
+       end
+
+       def finalizer
+         ->(_) { close }
+       end
+
+       def closed?
+         @native.nil?
+       end
+
+       def close(object_id=nil)
+         return unless @native
+
+         # Indicate to polling thread that we're closing
+         @polling_thread[:closing] = true
+         # Wait for the polling thread to finish up
+         @polling_thread.join
+
+         Rdkafka::Bindings.rd_kafka_destroy(@native)
+
+         @native = nil
+       end
+     end
+   end
+ end
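The polling thread and native handle now live in this Client object rather than on the producer itself, so the lambda returned by `finalizer` closes over the client only. Explicit shutdown still looks the same from the outside (sketch; "events" is a placeholder topic and `producer` comes from a Config as above):

    producer.produce(topic: "events", payload: "bye").wait
    producer.close  # joins the polling thread once the queue drains, then destroys the handle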
@@ -11,7 +11,7 @@ module Rdkafka
      attr_reader :offset

      # Error in case happen during produce.
-     # @return [string]
+     # @return [String]
      attr_reader :error

      private
@@ -1,4 +1,4 @@
- require "securerandom"
+ require "objspace"

  module Rdkafka
    # A producer for Kafka messages. To create a producer set up a {Config} and call {Config#producer producer} on that.
@@ -10,29 +10,22 @@ module Rdkafka
      attr_reader :delivery_callback

      # @private
-     def initialize(native_kafka)
-       @id = SecureRandom.uuid
-       @closing = false
-       @native_kafka = native_kafka
+     # Returns the number of arguments accepted by the callback, by default this is nil.
+     #
+     # @return [Integer, nil]
+     attr_reader :delivery_callback_arity
+
+     # @private
+     def initialize(client, partitioner_name)
+       @client = client
+       @partitioner_name = partitioner_name || "consistent_random"

        # Makes sure, that the producer gets closed before it gets GCed by Ruby
-       ObjectSpace.define_finalizer(@id, proc { close })
-
-       # Start thread to poll client for delivery callbacks
-       @polling_thread = Thread.new do
-         loop do
-           Rdkafka::Bindings.rd_kafka_poll(@native_kafka, 250)
-           # Exit thread if closing and the poll queue is empty
-           if @closing && Rdkafka::Bindings.rd_kafka_outq_len(@native_kafka) == 0
-             break
-           end
-         end
-       end
-       @polling_thread.abort_on_exception = true
+       ObjectSpace.define_finalizer(self, client.finalizer)
      end

      # Set a callback that will be called every time a message is successfully produced.
-     # The callback is called with a {DeliveryReport}
+     # The callback is called with a {DeliveryReport} and {DeliveryHandle}
      #
      # @param callback [Proc, #call] The callback
      #
@@ -40,20 +33,14 @@ module Rdkafka
      def delivery_callback=(callback)
        raise TypeError.new("Callback has to be callable") unless callback.respond_to?(:call)
        @delivery_callback = callback
+       @delivery_callback_arity = arity(callback)
      end

      # Close this producer and wait for the internal poll queue to empty.
      def close
-       ObjectSpace.undefine_finalizer(@id)
+       ObjectSpace.undefine_finalizer(self)

-       return unless @native_kafka
-
-       # Indicate to polling thread that we're closing
-       @closing = true
-       # Wait for the polling thread to finish up
-       @polling_thread.join
-       Rdkafka::Bindings.rd_kafka_destroy(@native_kafka)
-       @native_kafka = nil
+       @client.close
      end

      # Partition count for a given topic.
@@ -65,7 +52,7 @@ module Rdkafka
      #
      def partition_count(topic)
        closed_producer_check(__method__)
-       Rdkafka::Metadata.new(@native_kafka, topic).topics&.first[:partition_count]
+       Rdkafka::Metadata.new(@client.native, topic).topics&.first[:partition_count]
      end

      # Produces a message to a Kafka topic. The message is added to rdkafka's queue, call {DeliveryHandle#wait wait} on the returned delivery handle to make sure it is delivered.
@@ -75,8 +62,9 @@ module Rdkafka
      #
      # @param topic [String] The topic to produce to
      # @param payload [String,nil] The message's payload
-     # @param key [String] The message's key
+     # @param key [String, nil] The message's key
      # @param partition [Integer,nil] Optional partition to produce to
+     # @param partition_key [String, nil] Optional partition key based on which partition assignment can happen
      # @param timestamp [Time,Integer,nil] Optional timestamp of this message. Integer timestamp is in milliseconds since Jan 1 1970.
      # @param headers [Hash<String,String>] Optional message headers
      #
@@ -105,7 +93,7 @@ module Rdkafka
        if partition_key
          partition_count = partition_count(topic)
          # If the topic is not present, set to -1
-         partition = Rdkafka::Bindings.partitioner(partition_key, partition_count) if partition_count
+         partition = Rdkafka::Bindings.partitioner(partition_key, partition_count, @partitioner_name) if partition_count
        end

        # If partition is nil, use -1 to let librdafka set the partition randomly or
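End to end, a `partition_key` now flows through the configured partitioner. Sketch (topic and key are placeholders; `producer` as in the config sketch above):

    handle = producer.produce(
      topic:         "events",
      payload:       "hello",
      partition_key: "user-42"   # routed via @partitioner_name
    )
    handle.wait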
@@ -156,7 +144,7 @@ module Rdkafka

        # Produce the message
        response = Rdkafka::Bindings.rd_kafka_producev(
-         @native_kafka,
+         @client.native,
          *args
        )

@@ -170,12 +158,21 @@ module Rdkafka
      end

      # @private
-     def call_delivery_callback(delivery_handle)
-       @delivery_callback.call(delivery_handle) if @delivery_callback
+     def call_delivery_callback(delivery_report, delivery_handle)
+       return unless @delivery_callback
+
+       args = [delivery_report, delivery_handle].take(@delivery_callback_arity)
+       @delivery_callback.call(*args)
+     end
+
+     def arity(callback)
+       return callback.arity if callback.respond_to?(:arity)
+
+       callback.method(:call).arity
      end

      def closed_producer_check(method)
-       raise Rdkafka::ClosedProducerError.new(method) if @native_kafka.nil?
+       raise Rdkafka::ClosedProducerError.new(method) if @client.closed?
      end
    end
  end
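Because the producer now inspects the callback's arity, both one- and two-argument callbacks work. A sketch:

    # Old style: report only.
    producer.delivery_callback = ->(report) { puts "delivered at offset #{report.offset}" }

    # New style: report plus the originating delivery handle.
    producer.delivery_callback = lambda do |report, handle|
      puts "partition #{report.partition}, offset #{report.offset}"
    end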
@@ -1,5 +1,5 @@
  module Rdkafka
-   VERSION = "0.10.0"
-   LIBRDKAFKA_VERSION = "1.5.0"
-   LIBRDKAFKA_SOURCE_SHA256 = "f7fee59fdbf1286ec23ef0b35b2dfb41031c8727c90ced6435b8cf576f23a656"
+   VERSION = "0.12.0"
+   LIBRDKAFKA_VERSION = "1.9.0"
+   LIBRDKAFKA_SOURCE_SHA256 = "59b6088b69ca6cf278c3f9de5cd6b7f3fd604212cd1c59870bc531c54147e889"
  end
data/lib/rdkafka.rb CHANGED
@@ -17,5 +17,6 @@ require "rdkafka/consumer/topic_partition_list"
  require "rdkafka/error"
  require "rdkafka/metadata"
  require "rdkafka/producer"
+ require "rdkafka/producer/client"
  require "rdkafka/producer/delivery_handle"
  require "rdkafka/producer/delivery_report"
data/rdkafka.gemspec CHANGED
@@ -14,15 +14,17 @@ Gem::Specification.new do |gem|
    gem.name = 'rdkafka'
    gem.require_paths = ['lib']
    gem.version = Rdkafka::VERSION
-   gem.required_ruby_version = '>= 2.4'
+   gem.required_ruby_version = '>= 2.6'
    gem.extensions = %w(ext/Rakefile)

-   gem.add_dependency 'ffi', '~> 1.9'
-   gem.add_dependency 'mini_portile2', '~> 2.1'
-   gem.add_dependency 'rake', '>= 12.3'
+   gem.add_dependency 'ffi', '~> 1.15'
+   gem.add_dependency 'mini_portile2', '~> 2.6'
+   gem.add_dependency 'rake', '> 12'

-   gem.add_development_dependency 'pry', '~> 0.10'
+   gem.add_development_dependency 'pry'
    gem.add_development_dependency 'rspec', '~> 3.5'
-   gem.add_development_dependency 'rake', '~> 12.0'
-   gem.add_development_dependency 'simplecov', '~> 0.15'
+   gem.add_development_dependency 'rake'
+   gem.add_development_dependency 'simplecov'
+   gem.add_development_dependency 'guard'
+   gem.add_development_dependency 'guard-rspec'
  end
@@ -111,4 +111,3 @@ describe Rdkafka::AbstractHandle do
      end
    end
  end
-
@@ -52,7 +52,7 @@ describe Rdkafka::Admin do
    end

    describe "with an invalid partition count" do
-     let(:topic_partition_count) { -1 }
+     let(:topic_partition_count) { -999 }

      it "raises an exception" do
        expect {
@@ -76,6 +76,13 @@ describe Rdkafka::Bindings do
        result_2 = (Zlib.crc32(partition_key) % partition_count)
        expect(result_1).to eq(result_2)
      end
+
+     it "should return the partition calculated by the specified partitioner" do
+       result_1 = Rdkafka::Bindings.partitioner(partition_key, partition_count, "murmur2")
+       ptr = FFI::MemoryPointer.from_string(partition_key)
+       result_2 = Rdkafka::Bindings.rd_kafka_msg_partitioner_murmur2(nil, ptr, partition_key.size, partition_count, nil, nil)
+       expect(result_1).to eq(result_2)
+     end
    end

    describe "stats callback" do
@@ -108,7 +108,7 @@ describe Rdkafka::Config do
    end

    it "should create a consumer with valid config" do
-     consumer = rdkafka_config.consumer
+     consumer = rdkafka_consumer_config.consumer
      expect(consumer).to be_a Rdkafka::Consumer
      consumer.close
    end
@@ -136,7 +136,7 @@ describe Rdkafka::Config do
    end

    it "should create a producer with valid config" do
-     producer = rdkafka_config.producer
+     producer = rdkafka_consumer_config.producer
      expect(producer).to be_a Rdkafka::Producer
      producer.close
    end