rdkafka 0.5.0 → 0.8.1

Files changed (47)
  1. checksums.yaml +4 -4
  2. data/.semaphore/semaphore.yml +23 -0
  3. data/CHANGELOG.md +23 -0
  4. data/README.md +9 -9
  5. data/docker-compose.yml +17 -11
  6. data/ext/README.md +3 -15
  7. data/ext/Rakefile +23 -3
  8. data/lib/rdkafka.rb +8 -0
  9. data/lib/rdkafka/abstract_handle.rb +82 -0
  10. data/lib/rdkafka/admin.rb +144 -0
  11. data/lib/rdkafka/admin/create_topic_handle.rb +27 -0
  12. data/lib/rdkafka/admin/create_topic_report.rb +22 -0
  13. data/lib/rdkafka/admin/delete_topic_handle.rb +27 -0
  14. data/lib/rdkafka/admin/delete_topic_report.rb +22 -0
  15. data/lib/rdkafka/bindings.rb +63 -17
  16. data/lib/rdkafka/callbacks.rb +106 -0
  17. data/lib/rdkafka/config.rb +18 -7
  18. data/lib/rdkafka/consumer.rb +162 -46
  19. data/lib/rdkafka/consumer/headers.rb +7 -5
  20. data/lib/rdkafka/consumer/partition.rb +1 -1
  21. data/lib/rdkafka/consumer/topic_partition_list.rb +6 -16
  22. data/lib/rdkafka/error.rb +35 -4
  23. data/lib/rdkafka/metadata.rb +92 -0
  24. data/lib/rdkafka/producer.rb +43 -15
  25. data/lib/rdkafka/producer/delivery_handle.rb +7 -49
  26. data/lib/rdkafka/producer/delivery_report.rb +7 -2
  27. data/lib/rdkafka/version.rb +3 -3
  28. data/rdkafka.gemspec +3 -3
  29. data/spec/rdkafka/abstract_handle_spec.rb +114 -0
  30. data/spec/rdkafka/admin/create_topic_handle_spec.rb +52 -0
  31. data/spec/rdkafka/admin/create_topic_report_spec.rb +16 -0
  32. data/spec/rdkafka/admin/delete_topic_handle_spec.rb +52 -0
  33. data/spec/rdkafka/admin/delete_topic_report_spec.rb +16 -0
  34. data/spec/rdkafka/admin_spec.rb +192 -0
  35. data/spec/rdkafka/bindings_spec.rb +20 -2
  36. data/spec/rdkafka/callbacks_spec.rb +20 -0
  37. data/spec/rdkafka/config_spec.rb +17 -2
  38. data/spec/rdkafka/consumer/message_spec.rb +6 -1
  39. data/spec/rdkafka/consumer_spec.rb +145 -19
  40. data/spec/rdkafka/error_spec.rb +7 -3
  41. data/spec/rdkafka/metadata_spec.rb +78 -0
  42. data/spec/rdkafka/producer/delivery_handle_spec.rb +3 -43
  43. data/spec/rdkafka/producer/delivery_report_spec.rb +5 -1
  44. data/spec/rdkafka/producer_spec.rb +147 -72
  45. data/spec/spec_helper.rb +34 -6
  46. metadata +34 -10
  47. data/.travis.yml +0 -34
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: c01eae203c0d0a12be82edacae9fb3ecde7c73dc96e6a95d456434a2d0a2e5d9
-   data.tar.gz: 50bced86691e02ca0f1af3df2140751f530b667aa5ddf6f6822110991e3f4b43
+   metadata.gz: d09f2751b883400c550fee28fca85ffdd3e9f8090a9ba8e7bb22eadcba5c05c6
+   data.tar.gz: 107a5b723b40ec43b10f02b1993f42ff4b30a4ba16efc21611efb860de798a3a
  SHA512:
-   metadata.gz: af1930db8fea0a9ebec2ab63c404be88a805349badbc575e351e90d694d989a962dcfd4db32a5e344fddc83e8f97ca7600188ccc6d8e46ccd9a73afa99affbed
-   data.tar.gz: c8a629fe6ea2a9e1a0d6c38e6d2eb46f1ea8d6791e7d2bdd2751c83ab5a6ed6fc90d7783eaeef3e2f324f01d50e3bb3d308c12048001519dea42c3810c399240
+   metadata.gz: 373bd71c9e1f3c1fadd4deb9e67bec7aa5821e0a7835c851f1bb64fb1ad142692145e01efa62ac357d333eebb76cef7d676ddfa53ef3a9cd08720060b4bc0937
+   data.tar.gz: ec6d8932a2987e419f6fd731ff038cb88d9d898eab75d7eff82a6e606fcb71c249dd20f6d200bda44629f4bb81550c06ef4e8083ac1b4736c57932ab2486fdbc
data/.semaphore/semaphore.yml ADDED
@@ -0,0 +1,23 @@
+ version: v1.0
+ name: Rdkafka Ruby
+
+ agent:
+   machine:
+     type: e1-standard-2
+     os_image: ubuntu1804
+
+ blocks:
+   - name: Run specs
+     task:
+       jobs:
+         - name: bundle exec rspec
+           matrix:
+             - env_var: RUBY_VERSION
+               values: [ "2.5.8", "2.6.6", "2.7.2", "jruby-9.2.13.0" ]
+           commands:
+             - sem-version ruby $RUBY_VERSION
+             - checkout
+             - bundle install --path vendor/bundle
+             - cd ext && bundle exec rake && cd ..
+             - docker-compose up -d --no-recreate
+             - bundle exec rspec
data/CHANGELOG.md CHANGED
@@ -1,8 +1,31 @@
+ # 0.8.1
+ * Fix topic_flag behaviour and add tests for Metadata (geoff2k)
+ * Add topic admin interface (geoff2k)
+ * Raise an exception if @native_kafka is nil (geoff2k)
+ * Option to use zstd compression (jasonmartens)
+
+ # 0.8.0
+ * Upgrade librdkafka to 1.4.0
+ * Integrate librdkafka metadata API and add partition_key (by Adithya-copart)
+ * Ruby 2.7 compatibility fix (by Geoff Thé)
+ * Add error to delivery report (by Alex Stanovsky)
+ * Don't override CPPFLAGS and LDFLAGS if already set on Mac (by Hiroshi Hatake)
+ * Allow use of Rake 13.x and up (by Tomasz Pajor)
+
+ # 0.7.0
+ * Bump librdkafka to 1.2.0 (by rob-as)
+ * Allow customizing the wait time for delivery report availability (by mensfeld)
+
+ # 0.6.0
+ * Bump librdkafka to 1.1.0 (by Chris Gaffney)
+ * Implement seek (by breunigs)
+
  # 0.5.0
  * Bump librdkafka to 1.0.0 (by breunigs)
  * Add cluster and member information (by dmexe)
  * Support message headers for consumer & producer (by dmexe)
  * Add consumer rebalance listener (by dmexe)
+ * Implement pause/resume partitions (by dmexe)
 
  # 0.4.2
  * Delivery callback for producer
data/README.md CHANGED
@@ -1,14 +1,17 @@
  # Rdkafka
 
- [![Build Status](https://travis-ci.org/appsignal/rdkafka-ruby.svg?branch=master)](https://travis-ci.org/appsignal/rdkafka-ruby)
+ [![Build Status](https://appsignal.semaphoreci.com/badges/rdkafka-ruby/branches/master.svg?style=shields)](https://appsignal.semaphoreci.com/projects/rdkafka-ruby)
  [![Gem Version](https://badge.fury.io/rb/rdkafka.svg)](https://badge.fury.io/rb/rdkafka)
  [![Maintainability](https://api.codeclimate.com/v1/badges/ecb1765f81571cccdb0e/maintainability)](https://codeclimate.com/github/appsignal/rdkafka-ruby/maintainability)
- [![Test Coverage](https://api.codeclimate.com/v1/badges/ecb1765f81571cccdb0e/test_coverage)](https://codeclimate.com/github/appsignal/rdkafka-ruby/test_coverage)
 
  The `rdkafka` gem is a modern Kafka client library for Ruby based on
  [librdkafka](https://github.com/edenhill/librdkafka/).
  It wraps the production-ready C client using the [ffi](https://github.com/ffi/ffi)
- gem and targets Kafka 1.0+ and Ruby 2.3+.
+ gem and targets Kafka 1.0+ and Ruby 2.4+.
+
+ `rdkafka` was written because we needed a reliable Ruby client for
+ Kafka that supports modern Kafka at [AppSignal](https://appsignal.com).
+ We run it in production on very high traffic systems.
 
  This gem only provides a high-level Kafka consumer. If you are running
  an older version of Kafka and/or need the legacy simple consumer we
@@ -68,12 +71,9 @@ end
  delivery_handles.each(&:wait)
  ```
 
- ## Known issues
-
- When using forked process such as when using Unicorn you currently need
- to make sure that you create rdkafka instances after forking. Otherwise
- they will not work and crash your Ruby process when they are garbage
- collected. See https://github.com/appsignal/rdkafka-ruby/issues/19
+ Note that creating a producer consumes some resources that will not be
+ released until `#close` is explicitly called, so be sure to call
+ `Config#producer` only as necessary.
 
  ## Development
 
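To make the new README note concrete, here is a minimal sketch of the intended lifecycle (the broker address and topic name are illustrative, not part of this changeset): create one long-lived producer, reuse it, and close it explicitly.

```ruby
require "rdkafka"

config = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092")
producer = config.producer # allocates a native librdkafka client

begin
  # Reuse the same producer for all messages instead of creating one per call.
  handle = producer.produce(topic: "example_topic", payload: "hello")
  handle.wait
ensure
  producer.close # releases the native resources the README warns about
end
```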
data/docker-compose.yml CHANGED
@@ -1,18 +1,24 @@
+ ---
+
  version: '2'
+
  services:
    zookeeper:
-     image: wurstmeister/zookeeper
-     ports:
-       - "2181:2181"
+     image: confluentinc/cp-zookeeper:latest
+     environment:
+       ZOOKEEPER_CLIENT_PORT: 2181
+       ZOOKEEPER_TICK_TIME: 2000
+
    kafka:
-     image: wurstmeister/kafka:1.0.1
+     image: confluentinc/cp-kafka:latest
+     depends_on:
+       - zookeeper
      ports:
-       - "9092:9092"
+       - 9092:9092
      environment:
-       KAFKA_ADVERTISED_HOST_NAME: localhost
-       KAFKA_ADVERTISED_PORT: 9092
+       KAFKA_BROKER_ID: 1
        KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
-       KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
-       KAFKA_CREATE_TOPICS: "consume_test_topic:3:1,empty_test_topic:3:1,load_test_topic:3:1,produce_test_topic:3:1,rake_test_topic:3:1,watermarks_test_topic:3:1"
-     volumes:
-       - /var/run/docker.sock:/var/run/docker.sock
+       KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
+       KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
+       KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
+       KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
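The listener split above is the detail worth noting: other containers on the compose network reach the broker at `kafka:29092`, while processes on the host (such as the spec suite) use the `PLAINTEXT_HOST` listener advertised as `localhost:9092`. A hedged sketch of a client config matching this file (the group id is illustrative):

```ruby
require "rdkafka"

# From the host, use the PLAINTEXT_HOST listener; a process inside the
# compose network would use "kafka:29092" instead.
consumer = Rdkafka::Config.new(
  "bootstrap.servers" => "localhost:9092",
  "group.id"          => "example-group"
).consumer
consumer.subscribe("consume_test_topic")
```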
data/ext/README.md CHANGED
@@ -5,19 +5,7 @@ this gem is installed.
 
  To update the `librdkafka` version follow the following steps:
 
- * Download the new version `tar.gz` from https://github.com/edenhill/librdkafka/
- * Generate a `sha256` with (if using MacOS) `shasum -a 256 <file>`
- * Change the `sha256` in `lib/rdkafka/version.rb`
+ * Go to https://github.com/edenhill/librdkafka/releases to get the new
+   version number and asset checksum for `tar.gz`.
  * Change the version in `lib/rdkafka/version.rb`
-
- ## Disclaimer
-
- Currently the `librdkafka` project does not provide
- checksums of releases. The checksum provided here is generated on a best
- effort basis. If the CDN would be compromised at the time of download the
- checksum could be incorrect.
-
- Do your own verification if you rely on this behaviour.
-
- Once https://github.com/appsignal/rdkafka-ruby/issues/44 is implemented
- we will change this process.
+ * Change the `sha256` in `lib/rdkafka/version.rb`
data/ext/Rakefile CHANGED
@@ -23,9 +23,10 @@ task :default => :clean do
    recipe = MiniPortile.new("librdkafka", Rdkafka::LIBRDKAFKA_VERSION)
 
    # Use default homebrew openssl if we're on mac and the directory exists
-   if recipe.host.include?("darwin") && Dir.exists?("/usr/local/opt/openssl")
-     ENV["CPPFLAGS"] = "-I/usr/local/opt/openssl/include"
-     ENV["LDFLAGS"] = "-L/usr/local/opt/openssl/lib"
+   # and each of the flags is not already set
+   if recipe.host&.include?("darwin") && Dir.exist?("/usr/local/opt/openssl")
+     ENV["CPPFLAGS"] = "-I/usr/local/opt/openssl/include" unless ENV["CPPFLAGS"]
+     ENV["LDFLAGS"] = "-L/usr/local/opt/openssl/lib" unless ENV["LDFLAGS"]
    end
 
    recipe.files << {
@@ -55,3 +56,22 @@ task :clean do
    FileUtils.rm_rf File.join(File.dirname(__FILE__), "ports")
    FileUtils.rm_rf File.join(File.dirname(__FILE__), "tmp")
  end
+
+ namespace :build do
+   desc "Build librdkafka at the given git sha or tag"
+   task :git, [:ref] do |task, args|
+     ref = args[:ref]
+     version = "git-#{ref}"
+
+     recipe = MiniPortile.new("librdkafka", version)
+     recipe.files << "https://github.com/edenhill/librdkafka/archive/#{ref}.tar.gz"
+     recipe.configure_options = ["--host=#{recipe.host}", "--enable-static", "--enable-zstd"]
+     recipe.cook
+
+     ext = recipe.host.include?("darwin") ? "dylib" : "so"
+     lib = File.expand_path("ports/#{recipe.host}/librdkafka/#{version}/lib/librdkafka.#{ext}", __dir__)
+
+     # Copy will copy the content, following any symlinks
+     FileUtils.cp(lib, __dir__)
+   end
+ end
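For reference, a hedged usage sketch: the task takes the git ref as a Rake task argument, so from the `ext` directory it would be run as `bundle exec rake "build:git[v1.4.0]"` (the tag is illustrative). Programmatically that is roughly equivalent to:

```ruby
require "rake"

# Assumes the Rakefile above has already been loaded into this Rake
# application; "v1.4.0" is an illustrative librdkafka tag.
Rake::Task["build:git"].invoke("v1.4.0")
```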
data/lib/rdkafka.rb CHANGED
@@ -1,6 +1,13 @@
  require "rdkafka/version"
 
+ require "rdkafka/abstract_handle"
+ require "rdkafka/admin"
+ require "rdkafka/admin/create_topic_handle"
+ require "rdkafka/admin/create_topic_report"
+ require "rdkafka/admin/delete_topic_handle"
+ require "rdkafka/admin/delete_topic_report"
  require "rdkafka/bindings"
+ require "rdkafka/callbacks"
  require "rdkafka/config"
  require "rdkafka/consumer"
  require "rdkafka/consumer/headers"
@@ -8,6 +15,7 @@ require "rdkafka/consumer/message"
  require "rdkafka/consumer/partition"
  require "rdkafka/consumer/topic_partition_list"
  require "rdkafka/error"
+ require "rdkafka/metadata"
  require "rdkafka/producer"
  require "rdkafka/producer/delivery_handle"
  require "rdkafka/producer/delivery_report"
data/lib/rdkafka/abstract_handle.rb ADDED
@@ -0,0 +1,82 @@
+ require "ffi"
+
+ module Rdkafka
+   class AbstractHandle < FFI::Struct
+     # Subclasses must define their own layout, and the layout must start with:
+     #
+     # layout :pending, :bool,
+     #        :response, :int
+
+     REGISTRY = {}
+
+     CURRENT_TIME = -> { Process.clock_gettime(Process::CLOCK_MONOTONIC) }.freeze
+
+     private_constant :CURRENT_TIME
+
+     def self.register(handle)
+       address = handle.to_ptr.address
+       REGISTRY[address] = handle
+     end
+
+     def self.remove(address)
+       REGISTRY.delete(address)
+     end
+
+     # Whether the handle is still pending.
+     #
+     # @return [Boolean]
+     def pending?
+       self[:pending]
+     end
+
+     # Wait for the operation to complete or raise an error if this takes longer than the timeout.
+     # If there is a timeout this does not mean the operation failed, rdkafka might still be working on the operation.
+     # In this case it is possible to call wait again.
+     #
+     # @param max_wait_timeout [Numeric, nil] Amount of time to wait before timing out. If this is nil it does not time out.
+     # @param wait_timeout [Numeric] Amount of time we should wait before we recheck if the operation has completed
+     #
+     # @raise [RdkafkaError] When the operation failed
+     # @raise [WaitTimeoutError] When the timeout has been reached and the handle is still pending
+     #
+     # @return [Object] Operation-specific result
+     def wait(max_wait_timeout: 60, wait_timeout: 0.1)
+       timeout = if max_wait_timeout
+                   CURRENT_TIME.call + max_wait_timeout
+                 else
+                   nil
+                 end
+       loop do
+         if pending?
+           if timeout && timeout <= CURRENT_TIME.call
+             raise WaitTimeoutError.new("Waiting for #{operation_name} timed out after #{max_wait_timeout} seconds")
+           end
+           sleep wait_timeout
+         elsif self[:response] != 0
+           raise_error
+         else
+           return create_result
+         end
+       end
+     end
+
+     # @return [String] the name of the operation (e.g. "delivery")
+     def operation_name
+       raise "Must be implemented by subclass!"
+     end
+
+     # @return [Object] operation-specific result
+     def create_result
+       raise "Must be implemented by subclass!"
+     end
+
+     # Allow subclasses to override
+     def raise_error
+       raise RdkafkaError.new(self[:response])
+     end
+
+     # Error that is raised when waiting for the handle to complete
+     # takes longer than the specified timeout.
+     class WaitTimeoutError < RuntimeError; end
+   end
+ end
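To make the wait semantics concrete, a hedged sketch using the producer's `DeliveryHandle`, one of the subclasses (broker address and topic are illustrative). A `WaitTimeoutError` does not mean the operation failed; calling `wait` again on the same handle is valid:

```ruby
require "rdkafka"

producer = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").producer
handle = producer.produce(topic: "example_topic", payload: "ping")

begin
  # Rechecks roughly every 0.1s (wait_timeout) until the broker responds,
  # giving up after 5 seconds (max_wait_timeout).
  report = handle.wait(max_wait_timeout: 5, wait_timeout: 0.1)
  puts "delivered to partition #{report.partition} at offset #{report.offset}"
rescue Rdkafka::AbstractHandle::WaitTimeoutError
  warn "still pending; rdkafka may yet complete the delivery"
ensure
  producer.close
end
```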
data/lib/rdkafka/admin.rb ADDED
@@ -0,0 +1,144 @@
+ module Rdkafka
+   class Admin
+     # @private
+     def initialize(native_kafka)
+       @native_kafka = native_kafka
+       @closing = false
+
+       # Start thread to poll client for callbacks
+       @polling_thread = Thread.new do
+         loop do
+           Rdkafka::Bindings.rd_kafka_poll(@native_kafka, 250)
+           # Exit thread if closing and the poll queue is empty
+           if @closing && Rdkafka::Bindings.rd_kafka_outq_len(@native_kafka) == 0
+             break
+           end
+         end
+       end
+       @polling_thread.abort_on_exception = true
+     end
+
+     # Close this admin instance
+     def close
+       return unless @native_kafka
+
+       # Indicate to polling thread that we're closing
+       @closing = true
+       # Wait for the polling thread to finish up
+       @polling_thread.join
+       Rdkafka::Bindings.rd_kafka_destroy(@native_kafka)
+       @native_kafka = nil
+     end
+
+     # Create a topic with the given partition count and replication factor
+     #
+     # @raise [ConfigError] When the partition count or replication factor are out of valid range
+     # @raise [RdkafkaError] When the topic name is invalid or the topic already exists
+     #
+     # @return [CreateTopicHandle] Create topic handle that can be used to wait for the result of creating the topic
+     def create_topic(topic_name, partition_count, replication_factor)
+
+       # Create a rd_kafka_NewTopic_t representing the new topic
+       error_buffer = FFI::MemoryPointer.from_string(" " * 256)
+       new_topic_ptr = Rdkafka::Bindings.rd_kafka_NewTopic_new(
+         FFI::MemoryPointer.from_string(topic_name),
+         partition_count,
+         replication_factor,
+         error_buffer,
+         256
+       )
+       if new_topic_ptr.null?
+         raise Rdkafka::Config::ConfigError.new(error_buffer.read_string)
+       end
+
+       # Note that rd_kafka_CreateTopics can create more than one topic at a time
+       pointer_array = [new_topic_ptr]
+       topics_array_ptr = FFI::MemoryPointer.new(:pointer)
+       topics_array_ptr.write_array_of_pointer(pointer_array)
+
+       # Get a pointer to the queue that our request will be enqueued on
+       queue_ptr = Rdkafka::Bindings.rd_kafka_queue_get_background(@native_kafka)
+       if queue_ptr.null?
+         Rdkafka::Bindings.rd_kafka_NewTopic_destroy(new_topic_ptr)
+         raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL")
+       end
+
+       # Create and register the handle we will return to the caller
+       create_topic_handle = CreateTopicHandle.new
+       create_topic_handle[:pending] = true
+       create_topic_handle[:response] = -1
+       CreateTopicHandle.register(create_topic_handle)
+       admin_options_ptr = Rdkafka::Bindings.rd_kafka_AdminOptions_new(@native_kafka, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_CREATETOPICS)
+       Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, create_topic_handle.to_ptr)
+
+       begin
+         Rdkafka::Bindings.rd_kafka_CreateTopics(
+           @native_kafka,
+           topics_array_ptr,
+           1,
+           admin_options_ptr,
+           queue_ptr
+         )
+       rescue Exception => err
+         CreateTopicHandle.remove(create_topic_handle.to_ptr.address)
+         raise
+       ensure
+         Rdkafka::Bindings.rd_kafka_AdminOptions_destroy(admin_options_ptr)
+         Rdkafka::Bindings.rd_kafka_queue_destroy(queue_ptr)
+         Rdkafka::Bindings.rd_kafka_NewTopic_destroy(new_topic_ptr)
+       end
+
+       create_topic_handle
+     end
+
+     # Delete the named topic
+     #
+     # @raise [RdkafkaError] When the topic name is invalid or the topic does not exist
+     #
+     # @return [DeleteTopicHandle] Delete topic handle that can be used to wait for the result of deleting the topic
+     def delete_topic(topic_name)
+
+       # Create a rd_kafka_DeleteTopic_t representing the topic to be deleted
+       delete_topic_ptr = Rdkafka::Bindings.rd_kafka_DeleteTopic_new(FFI::MemoryPointer.from_string(topic_name))
+
+       # Note that rd_kafka_DeleteTopics can delete more than one topic at a time
+       pointer_array = [delete_topic_ptr]
+       topics_array_ptr = FFI::MemoryPointer.new(:pointer)
+       topics_array_ptr.write_array_of_pointer(pointer_array)
+
+       # Get a pointer to the queue that our request will be enqueued on
+       queue_ptr = Rdkafka::Bindings.rd_kafka_queue_get_background(@native_kafka)
+       if queue_ptr.null?
+         Rdkafka::Bindings.rd_kafka_DeleteTopic_destroy(delete_topic_ptr)
+         raise Rdkafka::Config::ConfigError.new("rd_kafka_queue_get_background was NULL")
+       end
+
+       # Create and register the handle we will return to the caller
+       delete_topic_handle = DeleteTopicHandle.new
+       delete_topic_handle[:pending] = true
+       delete_topic_handle[:response] = -1
+       DeleteTopicHandle.register(delete_topic_handle)
+       admin_options_ptr = Rdkafka::Bindings.rd_kafka_AdminOptions_new(@native_kafka, Rdkafka::Bindings::RD_KAFKA_ADMIN_OP_DELETETOPICS)
+       Rdkafka::Bindings.rd_kafka_AdminOptions_set_opaque(admin_options_ptr, delete_topic_handle.to_ptr)
+
+       begin
+         Rdkafka::Bindings.rd_kafka_DeleteTopics(
+           @native_kafka,
+           topics_array_ptr,
+           1,
+           admin_options_ptr,
+           queue_ptr
+         )
+       rescue Exception => err
+         DeleteTopicHandle.remove(delete_topic_handle.to_ptr.address)
+         raise
+       ensure
+         Rdkafka::Bindings.rd_kafka_AdminOptions_destroy(admin_options_ptr)
+         Rdkafka::Bindings.rd_kafka_queue_destroy(queue_ptr)
+         Rdkafka::Bindings.rd_kafka_DeleteTopic_destroy(delete_topic_ptr)
+       end
+
+       delete_topic_handle
+     end
+   end
+ end
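Putting the two operations together, a hedged end-to-end sketch (topic name and sizing are illustrative, and it assumes the `Config#admin` constructor added elsewhere in this changeset). Both calls return immediately with a handle; librdkafka's background queue and the polling thread started in `initialize` surface the result:

```ruby
require "rdkafka"

admin = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092").admin

# create_topic returns a CreateTopicHandle right away; wait blocks until
# librdkafka reports the outcome through the background queue.
admin.create_topic("example_topic", 3, 1).wait(max_wait_timeout: 15)

admin.delete_topic("example_topic").wait(max_wait_timeout: 15)
admin.close
```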