rdkafka 0.4.2 → 0.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 277ff64f82622eddd6c31265d1b4b91b6b60bed802590bf9b4d015ea7a32ce0a
-  data.tar.gz: 027fc22c22349729bb04288f9010eda156a9d04cf9a616bb237b1fc56eb9aed5
+  metadata.gz: 8bf97d10412b4f3c0801f657796fd05c94ceede3caafe6a45719c840b534752a
+  data.tar.gz: 8cc93bffb8119cf9c97aa60cf1f6d634e66a99686e0785150ae51283a6514e8c
 SHA512:
-  metadata.gz: 0cb6c97fa7ac62ead42bcd8da1d22f56b02f56e54a7671e7e2722d029839d24c7390c0fe10890530afb4e9a002e9f5fcc7e213777cf956e762d443edf466715e
-  data.tar.gz: 54865cb4b0772cffa1c91fd4e971a3e23f9f9b24959c83402460bca8b2d268e15bef8df8ea011848a57d75c4088198d02f77713ff9123647f62b7b5ece38a787
+  metadata.gz: e8e085b3dd9d80d77003d2c7f0fbda69371c233b121eac2594556d7d62da99a4bed4aace93e9489783c789e64bafd732faaad3de2ffb5d3a5af8568c5b43676b
+  data.tar.gz: 5a267a9814e87e876544e04e6657829c298ff01b03cb5ff41a0205795ea2fb4f6a2147ee1a0949e52f7a069f5dc73d5ed2072867194032e8ea913ac5d6871e23
data/.travis.yml CHANGED
@@ -11,14 +11,27 @@ env:
 - KAFKA_HEAP_OPTS="-Xmx512m -Xms512m"
 
 rvm:
-- 2.1
-- 2.2
-- 2.3
 - 2.4
 - 2.5
+- 2.6
+- 2.7.1
+- jruby-9.2.9.0
+- jruby-9.2.10.0
+- jruby-9.2.11.1
 
 before_install:
 - gem update --system
+- |
+  r_eng="$(ruby -e 'STDOUT.write RUBY_ENGINE')";
+  if [ "$r_eng" == "jruby" ]; then
+    sudo apt-get update && \
+    sudo apt-get install -y git && \
+    sudo apt-get install -y libpthread-stubs0-dev && \
+    sudo apt-get install -y build-essential && \
+    sudo apt-get install -y zlib1g-dev && \
+    sudo apt-get install -y libssl-dev && \
+    sudo apt-get install -y libsasl2-dev
+  fi
 
 before_script:
 - docker-compose up -d
@@ -28,7 +41,7 @@ before_script:
 - ./cc-test-reporter before-build
 
 script:
-- bundle exec rspec
+- bundle exec rspec --format documentation
 
 after_script:
 - docker-compose stop
data/CHANGELOG.md CHANGED
@@ -1,3 +1,26 @@
+# 0.8.0
+* Upgrade librdkafka to 1.4.0
+* Integrate librdkafka metadata API and add partition_key (by Adithya-copart)
+* Ruby 2.7 compatibility fix (by Geoff Thé)
+* Add error to delivery report (by Alex Stanovsky)
+* Don't override CPPFLAGS and LDFLAGS if already set on Mac (by Hiroshi Hatake)
+* Allow use of Rake 13.x and up (by Tomasz Pajor)
+
+# 0.7.0
+* Bump librdkafka to 1.2.0 (by rob-as)
+* Allow customizing the wait time for delivery report availability (by mensfeld)
+
+# 0.6.0
+* Bump librdkafka to 1.1.0 (by Chris Gaffney)
+* Implement seek (by breunigs)
+
+# 0.5.0
+* Bump librdkafka to 1.0.0 (by breunigs)
+* Add cluster and member information (by dmexe)
+* Support message headers for consumer & producer (by dmexe)
+* Add consumer rebalance listener (by dmexe)
+* Implement pause/resume partitions (by dmexe)
+
 # 0.4.2
 * Delivery callback for producer
 * Document list param of commit method
data/README.md CHANGED
@@ -8,7 +8,11 @@
 The `rdkafka` gem is a modern Kafka client library for Ruby based on
 [librdkafka](https://github.com/edenhill/librdkafka/).
 It wraps the production-ready C client using the [ffi](https://github.com/ffi/ffi)
-gem and targets Kafka 1.0+ and Ruby 2.1+.
+gem and targets Kafka 1.0+ and Ruby 2.4+.
+
+`rdkafka` was written because we needed a reliable Ruby client for
+Kafka that supports modern Kafka at [AppSignal](https://appsignal.com).
+We run it in production on very high traffic systems.
 
 This gem only provides a high-level Kafka consumer. If you are running
 an older version of Kafka and/or need the legacy simple consumer we
@@ -68,13 +72,6 @@ end
 delivery_handles.each(&:wait)
 ```
 
-## Known issues
-
-When using forked process such as when using Unicorn you currently need
-to make sure that you create rdkafka instances after forking. Otherwise
-they will not work and crash your Ruby process when they are garbage
-collected. See https://github.com/appsignal/rdkafka-ruby/issues/19
-
 ## Development
 
 A Docker Compose file is included to run Kafka and Zookeeper. To run
data/docker-compose.yml CHANGED
@@ -1,18 +1,22 @@
+
 version: '2'
 services:
   zookeeper:
-    image: wurstmeister/zookeeper
-    ports:
-      - "2181:2181"
+    image: confluentinc/cp-zookeeper:latest
+    environment:
+      ZOOKEEPER_CLIENT_PORT: 2181
+      ZOOKEEPER_TICK_TIME: 2000
+
   kafka:
-    image: wurstmeister/kafka:1.0.1
+    image: confluentinc/cp-kafka:latest
+    depends_on:
+      - zookeeper
     ports:
-      - "9092:9092"
+      - 9092:9092
     environment:
-      KAFKA_ADVERTISED_HOST_NAME: localhost
-      KAFKA_ADVERTISED_PORT: 9092
+      KAFKA_BROKER_ID: 1
       KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
-      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
-      KAFKA_CREATE_TOPICS: "consume_test_topic:3:1,empty_test_topic:3:1,load_test_topic:3:1,produce_test_topic:3:1,rake_test_topic:3:1,empty_test_topic:3:1"
-    volumes:
-      - /var/run/docker.sock:/var/run/docker.sock
+      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
+      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
+      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
+      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
data/ext/README.md CHANGED
@@ -5,19 +5,7 @@ this gem is installed.
 
 To update the `librdkafka` version follow the following steps:
 
-* Download the new version `tar.gz` from https://github.com/edenhill/librdkafka/
-* Generate a `sha256` with (if using MacOS) `shasum -a 256 <file>`
-* Change the `sha256` in `lib/rdkafka/version.rb`
+* Go to https://github.com/edenhill/librdkafka/releases to get the new
+  version number and asset checksum for `tar.gz`.
 * Change the version in `lib/rdkafka/version.rb`
-
-## Disclaimer
-
-Currently the `librdkafka` project does not provide
-checksums of releases. The checksum provided here is generated on a best
-effort basis. If the CDN would be compromised at the time of download the
-checksum could be incorrect.
-
-Do your own verification if you rely on this behaviour.
-
-Once https://github.com/appsignal/rdkafka-ruby/issues/44 is implemented
-we will change this process.
+* Change the `sha256` in `lib/rdkafka/version.rb`
data/ext/Rakefile CHANGED
@@ -23,9 +23,10 @@ task :default => :clean do
   recipe = MiniPortile.new("librdkafka", Rdkafka::LIBRDKAFKA_VERSION)
 
   # Use default homebrew openssl if we're on mac and the directory exists
-  if recipe.host.include?("darwin") && Dir.exists?("/usr/local/opt/openssl")
-    ENV["CPPFLAGS"] = "-I/usr/local/opt/openssl/include"
-    ENV["LDFLAGS"] = "-L/usr/local/opt/openssl/lib"
+  # and each of the flags is not already set
+  if recipe.host&.include?("darwin") && Dir.exist?("/usr/local/opt/openssl")
+    ENV["CPPFLAGS"] = "-I/usr/local/opt/openssl/include" unless ENV["CPPFLAGS"]
+    ENV["LDFLAGS"] = "-L/usr/local/opt/openssl/lib" unless ENV["LDFLAGS"]
   end
 
   recipe.files << {
@@ -55,3 +56,22 @@ task :clean do
   FileUtils.rm_rf File.join(File.dirname(__FILE__), "ports")
   FileUtils.rm_rf File.join(File.dirname(__FILE__), "tmp")
 end
+
+namespace :build do
+  desc "Build librdkafka at the given git sha or tag"
+  task :git, [:ref] do |task, args|
+    ref = args[:ref]
+    version = "git-#{ref}"
+
+    recipe = MiniPortile.new("librdkafka", version)
+    recipe.files << "https://github.com/edenhill/librdkafka/archive/#{ref}.tar.gz"
+    recipe.configure_options = ["--host=#{recipe.host}"]
+    recipe.cook
+
+    ext = recipe.host.include?("darwin") ? "dylib" : "so"
+    lib = File.expand_path("ports/#{recipe.host}/librdkafka/#{version}/lib/librdkafka.#{ext}", __dir__)
+
+    # Copy will copy the content, following any symlinks
+    FileUtils.cp(lib, __dir__)
+  end
+end
data/lib/rdkafka.rb CHANGED
@@ -3,10 +3,12 @@ require "rdkafka/version"
 require "rdkafka/bindings"
 require "rdkafka/config"
 require "rdkafka/consumer"
+require "rdkafka/consumer/headers"
 require "rdkafka/consumer/message"
 require "rdkafka/consumer/partition"
 require "rdkafka/consumer/topic_partition_list"
 require "rdkafka/error"
+require "rdkafka/metadata"
 require "rdkafka/producer"
 require "rdkafka/producer/delivery_handle"
 require "rdkafka/producer/delivery_report"
data/lib/rdkafka/bindings.rb CHANGED
@@ -8,7 +8,7 @@ module Rdkafka
     extend FFI::Library
 
     def self.lib_extension
-      if Gem::Platform.local.os.include?("darwin")
+      if RbConfig::CONFIG['host_os'] =~ /darwin/
         'dylib'
       else
         'so'
@@ -17,11 +17,32 @@ module Rdkafka
 
     ffi_lib File.join(File.dirname(__FILE__), "../../ext/librdkafka.#{lib_extension}")
 
+    RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS = -175
+    RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS = -174
+    RD_KAFKA_RESP_ERR__NOENT = -156
+    RD_KAFKA_RESP_ERR_NO_ERROR = 0
+
+    RD_KAFKA_OFFSET_END = -1
+    RD_KAFKA_OFFSET_BEGINNING = -2
+    RD_KAFKA_OFFSET_STORED = -1000
+    RD_KAFKA_OFFSET_INVALID = -1001
+
+    class SizePtr < FFI::Struct
+      layout :value, :size_t
+    end
+
     # Polling
 
     attach_function :rd_kafka_poll, [:pointer, :int], :void, blocking: true
     attach_function :rd_kafka_outq_len, [:pointer], :int, blocking: true
 
+    # Metadata
+
+    attach_function :rd_kafka_memberid, [:pointer], :string
+    attach_function :rd_kafka_clusterid, [:pointer], :string
+    attach_function :rd_kafka_metadata, [:pointer, :int, :pointer, :pointer, :int], :int
+    attach_function :rd_kafka_metadata_destroy, [:pointer], :void
+
     # Message struct
 
     class Message < FFI::Struct
@@ -148,6 +169,46 @@ module Rdkafka
     attach_function :rd_kafka_consumer_poll, [:pointer, :int], :pointer, blocking: true
     attach_function :rd_kafka_consumer_close, [:pointer], :void, blocking: true
     attach_function :rd_kafka_offset_store, [:pointer, :int32, :int64], :int
+    attach_function :rd_kafka_pause_partitions, [:pointer, :pointer], :int
+    attach_function :rd_kafka_resume_partitions, [:pointer, :pointer], :int
+    attach_function :rd_kafka_seek, [:pointer, :int32, :int64, :int], :int
+
+    # Headers
+    attach_function :rd_kafka_header_get_all, [:pointer, :size_t, :pointer, :pointer, SizePtr], :int
+    attach_function :rd_kafka_message_headers, [:pointer, :pointer], :int
+
+    # Rebalance
+
+    callback :rebalance_cb_function, [:pointer, :int, :pointer, :pointer], :void
+    attach_function :rd_kafka_conf_set_rebalance_cb, [:pointer, :rebalance_cb_function], :void
+
+    RebalanceCallback = FFI::Function.new(
+      :void, [:pointer, :int, :pointer, :pointer]
+    ) do |client_ptr, code, partitions_ptr, opaque_ptr|
+      case code
+      when RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS
+        Rdkafka::Bindings.rd_kafka_assign(client_ptr, partitions_ptr)
+      else # RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS or errors
+        Rdkafka::Bindings.rd_kafka_assign(client_ptr, FFI::Pointer::NULL)
+      end
+
+      opaque = Rdkafka::Config.opaques[opaque_ptr.to_i]
+      return unless opaque
+
+      tpl = Rdkafka::Consumer::TopicPartitionList.from_native_tpl(partitions_ptr).freeze
+      consumer = Rdkafka::Consumer.new(client_ptr)
+
+      begin
+        case code
+        when RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS
+          opaque.call_on_partitions_assigned(consumer, tpl)
+        when RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS
+          opaque.call_on_partitions_revoked(consumer, tpl)
+        end
+      rescue Exception => err
+        Rdkafka::Config.logger.error("Unhandled exception: #{err.class} - #{err.message}")
+      end
+    end
 
     # Stats
 
@@ -164,6 +225,8 @@ module Rdkafka
     RD_KAFKA_VTYPE_OPAQUE = 6
     RD_KAFKA_VTYPE_MSGFLAGS = 7
     RD_KAFKA_VTYPE_TIMESTAMP = 8
+    RD_KAFKA_VTYPE_HEADER = 9
+    RD_KAFKA_VTYPE_HEADERS = 10
 
     RD_KAFKA_MSG_F_COPY = 0x2
 
@@ -171,6 +234,17 @@ module Rdkafka
     callback :delivery_cb, [:pointer, :pointer, :pointer], :void
     attach_function :rd_kafka_conf_set_dr_msg_cb, [:pointer, :delivery_cb], :void
 
+    # Partitioner
+    attach_function :rd_kafka_msg_partitioner_consistent_random, [:pointer, :pointer, :size_t, :int32, :pointer, :pointer], :int32
+
+    def self.partitioner(str, partition_count)
+      # Return RD_KAFKA_PARTITION_UA (unassigned partition) when partition count is nil/zero.
+      return -1 unless partition_count&.nonzero?
+
+      str_ptr = FFI::MemoryPointer.from_string(str)
+      rd_kafka_msg_partitioner_consistent_random(nil, str_ptr, str.size, partition_count, nil, nil)
+    end
+
     DeliveryCallback = FFI::Function.new(
       :void, [:pointer, :pointer, :pointer]
     ) do |client_ptr, message_ptr, opaque_ptr|
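The `partition_count&.nonzero?` guard in `Bindings.partitioner` above is easy to misread. A pure-Ruby sketch of just that guard (the helper name and the `:consistent_random` stand-in are illustrative; the real method delegates to librdkafka via FFI):

```ruby
# Sketch of the nil/zero guard used by Bindings.partitioner above.
# -1 is librdkafka's RD_KAFKA_PARTITION_UA ("unassigned"), which lets
# librdkafka pick a partition later instead of hashing the key now.
def partition_or_unassigned(partition_count)
  # `&.nonzero?` is nil-safe: both nil and 0 fall through to -1
  return -1 unless partition_count&.nonzero?

  :consistent_random # stand-in for the FFI partitioner call
end

partition_or_unassigned(nil) # => -1
partition_or_unassigned(0)   # => -1
partition_or_unassigned(12)  # => :consistent_random
```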
@@ -184,7 +258,7 @@ module Rdkafka
         delivery_handle[:offset] = message[:offset]
         # Call delivery callback on opaque
         if opaque = Rdkafka::Config.opaques[opaque_ptr.to_i]
-          opaque.call_delivery_callback(Rdkafka::Producer::DeliveryReport.new(message[:partition], message[:offset]))
+          opaque.call_delivery_callback(Rdkafka::Producer::DeliveryReport.new(message[:partition], message[:offset], message[:err]))
        end
      end
    end
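The extra `message[:err]` argument above is the 0.8.0 "add error to delivery report" change: delivery callbacks can now distinguish failed deliveries. A sketch with a stand-in struct (the real `DeliveryReport` is built inside the FFI callback; field names follow the call above):

```ruby
# Stand-in for Rdkafka::Producer::DeliveryReport, which now carries the
# librdkafka error code as a third field (0 means no error).
Report = Struct.new(:partition, :offset, :error)

on_delivery = lambda do |report|
  if report.error && report.error != 0
    "delivery failed with error #{report.error}"
  else
    "delivered to partition #{report.partition} at offset #{report.offset}"
  end
end

on_delivery.call(Report.new(0, 42, 0)) # => "delivered to partition 0 at offset 42"
on_delivery.call(Report.new(0, -1, 5)) # => "delivery failed with error 5"
```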
data/lib/rdkafka/config.rb CHANGED
@@ -67,11 +67,12 @@ module Rdkafka
 
     # Returns a new config with the provided options which are merged with {DEFAULT_CONFIG}.
     #
-    # @param config_hash [Hash<String,Symbol => String>] The config options for rdkafka
+    # @param config_hash [Hash{String,Symbol => String}] The config options for rdkafka
     #
     # @return [Config]
     def initialize(config_hash = {})
       @config_hash = DEFAULT_CONFIG.merge(config_hash)
+      @consumer_rebalance_listener = nil
     end
 
     # Set a config option.
@@ -93,6 +94,13 @@ module Rdkafka
       @config_hash[key]
     end
 
+    # Get notifications on partition assignment/revocation for the subscribed topics
+    #
+    # @param listener [Object, #on_partitions_assigned, #on_partitions_revoked] listener instance
+    def consumer_rebalance_listener=(listener)
+      @consumer_rebalance_listener = listener
+    end
+
     # Create a consumer with this configuration.
     #
     # @raise [ConfigError] When the configuration contains invalid options
@@ -100,9 +108,19 @@ module Rdkafka
     #
     # @return [Consumer] The created consumer
     def consumer
-      kafka = native_kafka(native_config, :rd_kafka_consumer)
+      opaque = Opaque.new
+      config = native_config(opaque)
+
+      if @consumer_rebalance_listener
+        opaque.consumer_rebalance_listener = @consumer_rebalance_listener
+        Rdkafka::Bindings.rd_kafka_conf_set_rebalance_cb(config, Rdkafka::Bindings::RebalanceCallback)
+      end
+
+      kafka = native_kafka(config, :rd_kafka_consumer)
+
       # Redirect the main queue to the consumer
       Rdkafka::Bindings.rd_kafka_poll_set_consumer(kafka)
+
       # Return consumer with Kafka client
       Rdkafka::Consumer.new(kafka)
     end
@@ -137,7 +155,7 @@ module Rdkafka
 
     private
 
-    # This method is only intented to be used to create a client,
+    # This method is only intended to be used to create a client,
     # using it in another way will leak memory.
     def native_config(opaque=nil)
       Rdkafka::Bindings.rd_kafka_conf_new.tap do |config|
@@ -194,19 +212,32 @@ module Rdkafka
         Rdkafka::Bindings.rd_kafka_queue_get_main(handle)
       )
 
-      FFI::AutoPointer.new(
-        handle,
-        Rdkafka::Bindings.method(:rd_kafka_destroy)
-      )
+      # Return handle which should be closed using rd_kafka_destroy after usage.
+      handle
     end
   end
 
   # @private
   class Opaque
     attr_accessor :producer
+    attr_accessor :consumer_rebalance_listener
 
     def call_delivery_callback(delivery_handle)
       producer.call_delivery_callback(delivery_handle) if producer
     end
+
+    def call_on_partitions_assigned(consumer, list)
+      return unless consumer_rebalance_listener
+      return unless consumer_rebalance_listener.respond_to?(:on_partitions_assigned)
+
+      consumer_rebalance_listener.on_partitions_assigned(consumer, list)
+    end
+
+    def call_on_partitions_revoked(consumer, list)
+      return unless consumer_rebalance_listener
+      return unless consumer_rebalance_listener.respond_to?(:on_partitions_revoked)
+
+      consumer_rebalance_listener.on_partitions_revoked(consumer, list)
+    end
   end
 end
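As the `Opaque` hunk above shows, the rebalance listener is duck-typed: only the hooks the object `respond_to?` are invoked, so a listener may implement either hook or both. A minimal sketch (class name and event recording are illustrative, not part of rdkafka; the commented lines assume the `Config` API from this diff):

```ruby
# Minimal rebalance listener; rdkafka calls only the hooks you define.
class RecordingRebalanceListener
  attr_reader :events

  def initialize
    @events = []
  end

  def on_partitions_assigned(consumer, list)
    @events << [:assigned, list]
  end

  def on_partitions_revoked(consumer, list)
    @events << [:revoked, list]
  end
end

listener = RecordingRebalanceListener.new
# config = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092", :"group.id" => "example")
# config.consumer_rebalance_listener = listener
# consumer = config.consumer

# The duck-typing check Opaque performs before each call:
listener.respond_to?(:on_partitions_assigned) # => true
```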
data/lib/rdkafka/consumer.rb CHANGED
@@ -5,6 +5,9 @@ module Rdkafka
   #
   # To create a consumer set up a {Config} and call {Config#consumer consumer} on that. It is
   # mandatory to set `:"group.id"` in the configuration.
+  #
+  # Consumer implements `Enumerable`, so you can use `each` to consume messages, or for example
+  # `each_slice` to consume batches of messages.
   class Consumer
     include Enumerable
 
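The `Enumerable` note added above can be demonstrated without a broker: any class that includes `Enumerable` and defines `each` gets `each_slice` (and the rest of the module) for free. `FakeConsumer` is a stand-in for illustration, not part of rdkafka:

```ruby
# Pure-Ruby stand-in showing why Consumer gets batch iteration for free.
class FakeConsumer
  include Enumerable

  def initialize(messages)
    @messages = messages
  end

  # The real Consumer#each polls in a loop; here we yield canned messages.
  def each(&block)
    @messages.each(&block)
  end
end

batches = FakeConsumer.new((1..5).to_a).each_slice(2).to_a
# batches => [[1, 2], [3, 4], [5]]
```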
@@ -17,8 +20,12 @@ module Rdkafka
     # Close this consumer
     # @return [nil]
     def close
+      return unless @native_kafka
+
       @closing = true
       Rdkafka::Bindings.rd_kafka_consumer_close(@native_kafka)
+      Rdkafka::Bindings.rd_kafka_destroy(@native_kafka)
+      @native_kafka = nil
     end
 
     # Subscribe to one or more topics letting Kafka handle partition assignments.
@@ -31,20 +38,17 @@ module Rdkafka
     def subscribe(*topics)
       # Create topic partition list with topics and no partition set
       tpl = Rdkafka::Bindings.rd_kafka_topic_partition_list_new(topics.length)
+
       topics.each do |topic|
-        Rdkafka::Bindings.rd_kafka_topic_partition_list_add(
-          tpl,
-          topic,
-          -1
-        )
+        Rdkafka::Bindings.rd_kafka_topic_partition_list_add(tpl, topic, -1)
       end
+
       # Subscribe to topic partition list and check this was successful
       response = Rdkafka::Bindings.rd_kafka_subscribe(@native_kafka, tpl)
       if response != 0
         raise Rdkafka::RdkafkaError.new(response, "Error subscribing to '#{topics.join(', ')}'")
       end
     ensure
-      # Clean up the topic partition list
       Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl)
     end
 
@@ -60,19 +64,76 @@ module Rdkafka
       end
     end
 
+    # Pause producing or consumption for the provided list of partitions
+    #
+    # @param list [TopicPartitionList] The topic with partitions to pause
+    #
+    # @raise [RdkafkaTopicPartitionListError] When pausing subscription fails.
+    #
+    # @return [nil]
+    def pause(list)
+      unless list.is_a?(TopicPartitionList)
+        raise TypeError.new("list has to be a TopicPartitionList")
+      end
+
+      tpl = list.to_native_tpl
+
+      begin
+        response = Rdkafka::Bindings.rd_kafka_pause_partitions(@native_kafka, tpl)
+
+        if response != 0
+          list = TopicPartitionList.from_native_tpl(tpl)
+          raise Rdkafka::RdkafkaTopicPartitionListError.new(response, list, "Error pausing '#{list.to_h}'")
+        end
+      ensure
+        Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl)
+      end
+    end
+
+    # Resume producing or consumption for the provided list of partitions
+    #
+    # @param list [TopicPartitionList] The topic with partitions to resume
+    #
+    # @raise [RdkafkaError] When resuming subscription fails.
+    #
+    # @return [nil]
+    def resume(list)
+      unless list.is_a?(TopicPartitionList)
+        raise TypeError.new("list has to be a TopicPartitionList")
+      end
+
+      tpl = list.to_native_tpl
+
+      begin
+        response = Rdkafka::Bindings.rd_kafka_resume_partitions(@native_kafka, tpl)
+        if response != 0
+          raise Rdkafka::RdkafkaError.new(response, "Error resuming '#{list.to_h}'")
+        end
+      ensure
+        Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl)
+      end
+    end
+
     # Return the current subscription to topics and partitions
     #
     # @raise [RdkafkaError] When getting the subscription fails.
     #
     # @return [TopicPartitionList]
     def subscription
-      tpl = FFI::MemoryPointer.new(:pointer)
-      tpl.autorelease = false
-      response = Rdkafka::Bindings.rd_kafka_subscription(@native_kafka, tpl)
+      ptr = FFI::MemoryPointer.new(:pointer)
+      response = Rdkafka::Bindings.rd_kafka_subscription(@native_kafka, ptr)
+
       if response != 0
         raise Rdkafka::RdkafkaError.new(response)
       end
-      Rdkafka::Consumer::TopicPartitionList.from_native_tpl(tpl.get_pointer(0))
+
+      native = ptr.read_pointer
+
+      begin
+        Rdkafka::Consumer::TopicPartitionList.from_native_tpl(native)
+      ensure
+        Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(native)
+      end
     end
 
     # Atomic assignment of partitions to consume
@@ -84,13 +145,17 @@ module Rdkafka
       unless list.is_a?(TopicPartitionList)
         raise TypeError.new("list has to be a TopicPartitionList")
       end
+
       tpl = list.to_native_tpl
-      response = Rdkafka::Bindings.rd_kafka_assign(@native_kafka, tpl)
-      if response != 0
-        raise Rdkafka::RdkafkaError.new(response, "Error assigning '#{list.to_h}'")
+
+      begin
+        response = Rdkafka::Bindings.rd_kafka_assign(@native_kafka, tpl)
+        if response != 0
+          raise Rdkafka::RdkafkaError.new(response, "Error assigning '#{list.to_h}'")
+        end
+      ensure
+        Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl)
       end
-    ensure
-      Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl) if tpl
     end
 
     # Returns the current partition assignment.
@@ -99,13 +164,23 @@ module Rdkafka
     #
     # @return [TopicPartitionList]
     def assignment
-      tpl = FFI::MemoryPointer.new(:pointer)
-      tpl.autorelease = false
-      response = Rdkafka::Bindings.rd_kafka_assignment(@native_kafka, tpl)
+      ptr = FFI::MemoryPointer.new(:pointer)
+      response = Rdkafka::Bindings.rd_kafka_assignment(@native_kafka, ptr)
       if response != 0
         raise Rdkafka::RdkafkaError.new(response)
       end
-      Rdkafka::Consumer::TopicPartitionList.from_native_tpl(tpl.get_pointer(0))
+
+      tpl = ptr.read_pointer
+
+      if !tpl.null?
+        begin
+          Rdkafka::Consumer::TopicPartitionList.from_native_tpl(tpl)
+        ensure
+          Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy tpl
+        end
+      end
+    ensure
+      ptr.free
     end
 
     # Return the current committed offset per partition for this consumer group.
@@ -117,18 +192,24 @@ module Rdkafka
     # @raise [RdkafkaError] When getting the committed positions fails.
     #
     # @return [TopicPartitionList]
-    def committed(list=nil, timeout_ms=200)
+    def committed(list=nil, timeout_ms=1200)
       if list.nil?
         list = assignment
       elsif !list.is_a?(TopicPartitionList)
         raise TypeError.new("list has to be nil or a TopicPartitionList")
       end
+
       tpl = list.to_native_tpl
-      response = Rdkafka::Bindings.rd_kafka_committed(@native_kafka, tpl, timeout_ms)
-      if response != 0
-        raise Rdkafka::RdkafkaError.new(response)
+
+      begin
+        response = Rdkafka::Bindings.rd_kafka_committed(@native_kafka, tpl, timeout_ms)
+        if response != 0
+          raise Rdkafka::RdkafkaError.new(response)
+        end
+        TopicPartitionList.from_native_tpl(tpl)
+      ensure
+        Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl)
       end
-      TopicPartitionList.from_native_tpl(tpl)
     end
 
     # Query broker for low (oldest/beginning) and high (newest/end) offsets for a partition.
@@ -150,13 +231,16 @@ module Rdkafka
         partition,
         low,
         high,
-        timeout_ms
+        timeout_ms,
       )
       if response != 0
         raise Rdkafka::RdkafkaError.new(response, "Error querying watermark offsets for partition #{partition} of #{topic}")
       end
 
-      return low.read_int64, high.read_int64
+      return low.read_array_of_int64(1).first, high.read_array_of_int64(1).first
+    ensure
+      low.free
+      high.free
     end
 
     # Calculate the consumer lag per partition for the provided topic partition list.
@@ -172,6 +256,7 @@ module Rdkafka
     # @return [Hash<String, Hash<Integer, Integer>>] A hash containing all topics with the lag per partition
     def lag(topic_partition_list, watermark_timeout_ms=100)
       out = {}
+
       topic_partition_list.to_h.each do |topic, partitions|
         # Query high watermarks for this topic's partitions
         # and compare to the offset in the list.
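The lag computed in the method above is the high watermark minus the committed offset, per partition. A pure-Ruby sketch of that arithmetic with illustrative data (no broker involved; the real method queries watermarks via `query_watermark_offsets`):

```ruby
# Illustrative committed offsets and high watermarks for one topic.
committed_offsets = { "events" => { 0 => 95, 1 => 100 } }
high_watermarks   = { "events" => { 0 => 100, 1 => 100 } }

lag = committed_offsets.map do |topic, partitions|
  per_partition = partitions.map do |partition, offset|
    # Remaining messages = newest offset minus last committed offset
    [partition, high_watermarks[topic][partition] - offset]
  end.to_h
  [topic, per_partition]
end.to_h
# lag => {"events"=>{0=>5, 1=>0}}
```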
@@ -190,6 +275,22 @@ module Rdkafka
       out
     end
 
+    # Returns the ClusterId as reported in broker metadata.
+    #
+    # @return [String, nil]
+    def cluster_id
+      Rdkafka::Bindings.rd_kafka_clusterid(@native_kafka)
+    end
+
+    # Returns this client's broker-assigned group member id.
+    #
+    # This currently requires the high-level KafkaConsumer.
+    #
+    # @return [String, nil]
+    def member_id
+      Rdkafka::Bindings.rd_kafka_memberid(@native_kafka)
+    end
+
     # Store offset of a message to be used in the next commit of this consumer
     #
     # When using this `enable.auto.offset.store` should be set to `false` in the config.
@@ -221,29 +322,67 @@ module Rdkafka
       end
     end
 
-    # Commit the current offsets of this consumer
+    # Seek to a particular message. The next poll on the topic/partition will return the
+    # message at the given offset.
+    #
+    # @param message [Rdkafka::Consumer::Message] The message to which to seek
+    #
+    # @raise [RdkafkaError] When seeking fails
+    #
+    # @return [nil]
+    def seek(message)
+      # rd_kafka_seek is one of the few calls that does not support
+      # a string as the topic, so create a native topic for it.
+      native_topic = Rdkafka::Bindings.rd_kafka_topic_new(
+        @native_kafka,
+        message.topic,
+        nil
+      )
+      response = Rdkafka::Bindings.rd_kafka_seek(
+        native_topic,
+        message.partition,
+        message.offset,
+        0 # timeout
+      )
+      if response != 0
+        raise Rdkafka::RdkafkaError.new(response)
+      end
+    ensure
+      if native_topic && !native_topic.null?
+        Rdkafka::Bindings.rd_kafka_topic_destroy(native_topic)
+      end
+    end
+
+    # Manually commit the current offsets of this consumer.
+    #
+    # To use this set `enable.auto.commit` to `false` to disable automatic triggering
+    # of commits.
+    #
+    # If `enable.auto.offset.store` is set to `true` the offset of the last consumed
+    # message for every partition is used. If set to `false` you can use {store_offset} to
+    # indicate when a message has been fully processed.
     #
     # @param list [TopicPartitionList,nil] The topic with partitions to commit
     # @param async [Boolean] Whether to commit async or wait for the commit to finish
     #
-    # @raise [RdkafkaError] When comitting fails
+    # @raise [RdkafkaError] When committing fails
     #
     # @return [nil]
     def commit(list=nil, async=false)
       if !list.nil? && !list.is_a?(TopicPartitionList)
         raise TypeError.new("list has to be nil or a TopicPartitionList")
       end
-      tpl = if list
-        list.to_native_tpl
-      else
-        nil
-      end
-      response = Rdkafka::Bindings.rd_kafka_commit(@native_kafka, tpl, async)
-      if response != 0
-        raise Rdkafka::RdkafkaError.new(response)
+
+      tpl = list ? list.to_native_tpl : nil
+
+      begin
+        response = Rdkafka::Bindings.rd_kafka_commit(@native_kafka, tpl, async)
+        if response != 0
+          raise Rdkafka::RdkafkaError.new(response)
+        end
+      ensure
+        Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl) if tpl
       end
-    ensure
-      Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl) if tpl
     end
 
     # Poll for the next message on one of the subscribed topics
@@ -254,6 +393,8 @@ module Rdkafka
     #
     # @return [Message, nil] A message or nil if there was no new message within the timeout
     def poll(timeout_ms)
+      return unless @native_kafka
+
       message_ptr = Rdkafka::Bindings.rd_kafka_consumer_poll(@native_kafka, timeout_ms)
       if message_ptr.null?
         nil
@@ -277,16 +418,20 @@ module Rdkafka
     # Poll for new messages and yield for each received one. Iteration
     # will end when the consumer is closed.
     #
+    # If `enable.partition.eof` is turned on in the config this will raise an
+    # error when an eof is reached, so you probably want to disable that when
+    # using this method of iteration.
+    #
     # @raise [RdkafkaError] When polling fails
    #
    # @yieldparam message [Message] Received message
    #
    # @return [nil]
-    def each(&block)
+    def each
       loop do
         message = poll(250)
         if message
-          block.call(message)
+          yield(message)
         else
           if @closing
             break